%0 Conference Paper
%B Proceedings of the 10th International Conference on Educational Data Mining
%D 2017
%T Adaptive Assessment Experiment in a HarvardX MOOC
%A Ilia Rushkin
%A Yigal Rosen
%A Andrew Ang
%A Colin Fredericks
%A Tingley, Dustin
%A Mary Jean Blink
%A Lopez, Glenn
%X We report an experimental implementation of adaptive learning functionality in a self-paced HarvardX MOOC (massive open online course). In MOOCs there is a need for evidence-based instructional designs that create the optimal conditions for learners, who come to the course with widely differing prior knowledge, skills and motivations. But users in such a course are free to explore the course materials in any order they see fit and may drop out at any time, which makes it hard to predict the practical challenges of implementing adaptivity, or its effects, without experimentation. This study explored the technological feasibility of adaptive functionality in the edX platform and its implications for course (re)design. Additionally, it aimed to establish the foundation for future study of the effects of adaptive functionality in MOOCs on learning outcomes, engagement and drop-out rates. Our preliminary findings suggest that adaptivity of the kind we used leads to more efficient learning: without an adverse effect on learning outcomes, learners go through the course faster and attempt fewer problems, since the problems are served to them in a targeted way. Further research is needed to confirm these findings and explore additional possible effects.
%C International Conference on Educational Data Mining
%P 466-471
%G eng
%U http://educationaldatamining.org/EDM2017/proc_files/papers/paper_167.pdf

%0 Newspaper Article
%B Harvard Gazette
%D 2017
%T Adaptive learning featured in HarvardX course
%A Brett Milano
%G eng
%U http://news.harvard.edu/gazette/story/2017/02/adaptive-learning-featured-in-harvardx-course/

%0 Newspaper Article
%B Harvard Gazette
%D 2017
%T Harvard boosts on-campus reuse of online course content
%A Brett Milano
%G eng
%U http://news.harvard.edu/gazette/story/2017/04/harvardx-boosts-on-campus-reuse-of-online-course-content/

%0 Newspaper Article
%B Harvard Gazette
%D 2017
%T Emerging challenges in digital higher ed
%A Elise M. Ciregna
%A Esten Perez
%G eng
%U http://news.harvard.edu/gazette/story/2017/05/emerging-challenges-in-digital-higher-education/

%0 Conference Proceedings
%B Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale
%D 2017
%T Google BigQuery for Education: Framework for Parsing and Analyzing edX MOOC Data
%A Lopez, G.
%A Seaton, D. T.
%A Ang, A.
%A D. Tingley
%A Chuang, I.
%I ACM
%P 181-184
%G eng
%U http://dl.acm.org/citation.cfm?id=3053980

%0 Conference Proceedings
%B Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale
%D 2017
%T MOOC Dropout Prediction: How to Measure Accuracy?
%A Whitehill, J.
%A Mohan, K.
%A Seaton, D.
%A Rosen, Y.
%A D. Tingley
%I ACM
%P 161-164
%G eng
%U http://dl.acm.org/citation.cfm?id=3053974

%0 Conference Proceedings
%B Poster presented at the Annual meeting of the American Educational Research Association
%D 2017
%T What engages MOOC learners: An interview study with ChinaX learners
%A S. Türkay
%A Wong, T.
%G eng

%0 Journal Article
%J The International Review of Research in Open and Distributed Learning
%D 2017
%T Explanations and interactives improve subjective experiences in online courseware
%A Thomas, M.P.
%A S. Türkay
%A Parker, M.
%X As online courses become more common, practitioners are in need of clear guidance on how to translate best educational practices into web-based instruction. Moreover, student engagement is a pressing concern in online courses, which often have high levels of dropout. Our goals in this work were to experimentally study routine instructional design choices and to measure the effects of these choices on students’ subjective experiences (engagement, mind wandering, and interest) in addition to objective learning outcomes. Using randomized controlled trials, we studied the effect of varying instructional activities (namely, assessment and a step-through interactive) on participants’ learning and subjective experiences in a lesson drawn from an online immunology course. Participants were recruited from Amazon Mechanical Turk. Results showed that participants were more likely to drop out when they were in conditions that included assessment. Moreover, assessment with minimal feedback (correct answers only) led to the lowest subjective ratings of any experimental condition. Some of the negative effects of assessment were mitigated by the addition of assessment explanations or a summary interactive. We found no differences between the experimental conditions in learning outcomes, but we did find differences between groups in the accuracy of score predictions. Finally, prior knowledge and self-rated confusion were predictors of post-test scores. Using student behavior data from the same online immunology course, we corroborated the importance of assessment explanations. Our results have a clear implication for course developers: the addition of explanations to assessment questions is a simple way to improve online courses.
%G eng
%U http://www.irrodl.org/index.php/irrodl/article/view/3076

%0 Conference Paper
%B Proceedings of the Fourth Annual ACM Conference on Learning at Scale
%D 2017
%T Getting to know English language learners in MOOCs: Their motivations, behaviors and outcomes
%A Selen Turkay
%A Hadas Eidelman
%A Yigal Rosen
%A Daniel Seaton
%A Lopez, Glenn
%A Whitehill, Jacob
%I ACM
%P 209-212
%G eng

%0 Conference Paper
%D 2017
%T Enabling adaptive assessments [and learning] in HarvardX
%A Yigal Rosen
%C Microsoft Assessment Deep Dive Workshop, Redmond, WA
%G eng

%0 Conference Paper
%B National Council on Measurement in Education, San Antonio, TX
%D 2017
%T Enabling adaptive and principled assessment design in MOOCs
%A Yigal Rosen
%A Ilia Rushkin
%A Andrew Ang
%G eng

%0 Conference Proceedings
%B Proceedings of the Fourth ACM Conference on Learning @ Scale
%D 2017
%T Designing adaptive assessments in MOOCs
%A Yigal Rosen
%A Ilia Rushkin
%A Andrew Ang
%A Colin Fredericks
%A Tingley, Dustin
%A Mary Jean Blink
%P 233-236
%G eng
%U http://dl.acm.org/citation.cfm?id=3053993

%0 Conference Paper
%B Fourth (2017) ACM Conference on Learning @ Scale
%D 2017
%T MOOClets: A Framework for Dynamic Experimentation and Personalization
%A Williams, Joseph Jay
%A Anna N. Rafferty
%A Samuel Maldonado
%A Andrew Ang
%A Tingley, Dustin
%A Juho Kim
%I ACM
%P 287-290
%G eng

%0 Conference Paper
%B CHI'17 CHI Conference on Human Factors in Computing Systems
%D 2017
%T Connecting Instructors and Learning Scientists via Collaborative Dynamic Experimentation
%A Williams, Joseph Jay
%A Anna N. Rafferty
%A Andrew Ang
%A Tingley, Dustin
%A Walter S. Lasecki
%A Juho Kim
%I ACM
%P 3012-3018
%G eng

%0 Journal Article
%J Journal of Educational Measurement
%D 2017
%T Assessing Students in Human-to-Agent Settings to Inform Collaborative Problem-Solving Learning
%A Yigal Rosen
%X In order to understand potential applications of collaborative problem-solving (CPS) assessment tasks, it is necessary to examine empirically the multifaceted student performance that may be distributed across collaboration methods and purposes of the assessment. Ideally, each student should be matched with various types of group members and must apply the skills in varied contexts and tasks. One solution to these assessment demands is to use computer-based (virtual) agents to serve as the collaborators in interactions with students. This article proposes a human-to-agent (H-A) approach for formative CPS assessment and describes an international pilot study aimed at providing preliminary empirical findings on the use of H-A CPS assessment to inform collaborative learning. Overall, the findings showed promise in using an H-A CPS assessment task as a formative tool for structuring effective groups in the context of CPS online learning.
%V 54
%P 36-53
%G eng
%U http://onlinelibrary.wiley.com/doi/10.1111/jedm.12131/abstract
%N 1