Publications

In Press
Thomas, M. P., Turkay, S., & Parker, M. (In Press). Explanations and interactives improve subjective experiences in online courseware. The International Review of Research in Open and Distributed Learning.
As online courses become more common, practitioners are in need of clear guidance on how to translate best educational practices into web-based instruction. Moreover, student engagement is a pressing concern in online courses, which often have high levels of dropout. Our goals in this work were to experimentally study routine instructional design choices and to measure the effects of these choices on students’ subjective experiences (engagement, mind wandering, and interest) in addition to objective learning outcomes. Using randomized controlled trials, we studied the effect of varying instructional activities (namely, assessment and a step-through interactive) on participants’ learning and subjective experiences in a lesson drawn from an online immunology course. Participants were recruited from Amazon Mechanical Turk. Results showed that participants were more likely to drop out when they were in conditions that included assessment. Moreover, assessment with minimal feedback (correct answers only) led to the lowest subjective ratings of any experimental condition. Some of the negative effects of assessment were mitigated by the addition of assessment explanations or a summary interactive. We were surprised to find no difference between the experimental conditions in learning outcomes, but we did find differences between groups in the accuracy of score predictions. Finally, prior knowledge and self-rated confusion were predictors of post-test scores. Using student behavior data from the same online immunology course, we corroborated the importance of assessment explanations. Our results have a clear implication for course developers: the addition of explanations to assessment questions is a simple way to improve online courses. 
2017
Milano, B. (2017). Adaptive learning featured in HarvardX course. Harvard Gazette. Publisher's Version
Milano, B. (2017). Harvard boosts on-campus reuse of online course content. Harvard Gazette. Publisher's Version
Ciregna, E. M., & Perez, E. (2017). Emerging challenges in digital higher ed. Harvard Gazette. Publisher's Version
Lopez, G., Seaton, D. T., Ang, A., Tingley, D., & Chuang, I. (2017). Google BigQuery for Education: Framework for Parsing and Analyzing edX MOOC Data. Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale. ACM. Publisher's Version
Whitehill, J., Mohan, K., Seaton, D., Rosen, Y., & Tingley, D. (2017). MOOC Dropout Prediction: How to Measure Accuracy? Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale. ACM. Publisher's Version
Turkay, S., & Wong, T. (2017). What engages MOOC learners: An interview study with ChinaX learners. Poster presented at the annual meeting of the American Educational Research Association. turkay_wong_2017_aera.pdf
Turkay, S., Eidelman, H., Rosen, Y., Seaton, D., Lopez, G., & Whitehill, J. (2017). Getting to know English language learners in MOOCs: Their motivations, behaviors and outcomes. In Proceedings of the Fourth Annual ACM Conference on Learning at Scale (pp. 209-212). ACM. turkay_et_al_2017_ells_in_moocs.pdf
Rosen, Y. (2017). Enabling adaptive assessments [and learning] in HarvardX. In Microsoft Assessment Deep Dive Workshop, Redmond, WA. rosen-adaptive-assessment_3-2-17.pdf
Rosen, Y., Rushkin, I., & Ang, A. (2017). Enabling adaptive and principled assessment design in MOOCs. In National Council on Measurement in Education, San Antonio, TX. adaptivity_and_assessment_design_in_moocs_.pdf
Rosen, Y., Rushkin, I., Ang, A., Fredericks, C., Tingley, D., & Blink, M.-J. (2017). Designing adaptive assessments in MOOCs. Proceedings of the Fourth ACM Conference on Learning @ Scale. Publisher's Version rosen_et_al_ls_2017.pdf
Williams, J. J., Rafferty, A. N., Maldonado, S., Ang, A., Tingley, D., & Kim, J. (2017). MOOClets: A Framework for Dynamic Experimentation and Personalization. In Fourth (2017) ACM Conference on Learning @ Scale (pp. 287-290). ACM. designing-tools-dynamic.pdf
Williams, J. J., Rafferty, A. N., Ang, A., Tingley, D., Lasecki, W. S., & Kim, J. (2017). Connecting Instructors and Learning Scientists via Collaborative Dynamic Experimentation. In CHI '17: CHI Conference on Human Factors in Computing Systems (pp. 3012-3018). ACM. connecting_instructors_and_learning.pdf
Rosen, Y. (2017). Assessing Students in Human-to-Agent Settings to Inform Collaborative Problem-Solving Learning. Journal of Educational Measurement, 54(1), 36-53. Publisher's Version

In order to understand potential applications of collaborative problem-solving (CPS) assessment tasks, it is necessary to examine empirically the multifaceted student performance that may be distributed across collaboration methods and purposes of the assessment. Ideally, each student should be matched with various types of group members and must apply the skills in varied contexts and tasks. One solution to these assessment demands is to use computer-based (virtual) agents to serve as the collaborators in the interactions with students. This article proposes a human-to-agent (H-A) approach for formative CPS assessment and describes an international pilot study aimed to provide preliminary empirical findings on the use of H-A CPS assessment to inform collaborative learning. Overall, the findings showed promise in terms of using an H-A CPS assessment task as a formative tool for structuring effective groups in the context of CPS online learning.

2016
Simon, C. (2016). MOOCs ahead. Harvard Gazette. Publisher's Version
Turkay, S., & Mouton, S. (2016). The educational impact of whiteboard animations: An experiment using popular social science lessons. In Proceedings of the MIT Learning International Networks Consortium.
Whiteboard animations are increasingly used in education despite little evidence of their efficacy. In this study, we measured the impact of whiteboard animations and other common instructional formats on learning outcomes, experience, and motivation. We recruited participants from Amazon’s Mechanical Turk (N=568; 326 females). Participants were randomly assigned to view online lessons about popular topics in social science from well-established scholars in one of five common instructional formats: whiteboard animation, electronic slideshow, stage lecture, audio, and text. Results showed a benefit of whiteboard animations in terms of learning and subjective experiences of enjoyment and engagement.
Reich, J., Stewart, B., Mavon, K., & Tingley, D. (2016). The Civic Mission of MOOCs: Measuring Engagement across Political Differences in Forums. Proceedings of the Third Annual ACM Conference on Learning at Scale.
Williams, J. J., Kim, J., Glassman, E., Rafferty, A., & Lasecki, W. (2016). Making Static Lessons Adaptive through Crowdsourcing & Machine Learning. In Design Recommendations for Intelligent Tutoring Systems: Domain Modeling (Vol. 4).

Text components of digital lessons and problems are often static: they are written once and too often not improved over time. This is true for both large text components like webpages and documents as well as the small components that form the building blocks of courses: explanations, hints, examples, discussion questions/answers, emails, study tips, motivational messages. This represents a missed opportunity, since it should be technologically straightforward to enhance learning by improving text, as instructors get new ideas and data is collected about what helps learning. We describe how instructors can use recent work (Williams, Kim, Rafferty, Maldonado, Gajos, Lasecki, & Heffernan, 2016a) to make text components into adaptive resources that semi-automatically improve over time, by combining crowdsourcing methods from human-computer interaction (HCI) with algorithms from statistical machine learning that use data for optimization.

Making Static Lessons Adaptive through Crowdsourcing and Machine Learning.pdf
Williams, J. J., Kim, J., Rafferty, A., Maldonado, S., Gajos, K., Lasecki, W. S., & Heffernan, N. (2016). AXIS: Generating Explanations at Scale with Learnersourcing and Machine Learning. Proceedings of the Third Annual ACM Conference on Learning at Scale. AXIS - Generating Explanations at Scale with Learnersourcing and Machine Learning.pdf
Williams, J. J., Lombrozo, T., Hsu, A., Huber, B., & Kim, J. (2016). Revising Learner Misconceptions Without Feedback: Prompting for Reflection on Anomalous Facts. Proceedings of CHI (2016), 34th Annual ACM Conference on Human Factors in Computing Systems. Revising Learner Misconceptions Without Feedback - Prompting for Reflection on Anomalous Facts.pdf
