Ibrahim Dahlstrom-Hakki
Senior Academic Researcher, Landmark College
Revealing the Invisible: Data-Intensive Research Using Cognitive, Psychological, and Physiological Measures to Optimize STEM Learning
https://www.landmark.edu/research-training/grants-research

Cyrus Shaoul
Senior Academic Researcher, Landmark College
https://www.landmark.edu/about/directory/cyrus-shaoul
Public Discussion
  • Brian Drayton

    Researcher
    May 15, 2017 | 08:58 a.m.

    Very interesting, and what a challenge to take on!

    I'd love to understand a bit more about how you are recognizing learning when you see it. I am assuming that the data streams should (perhaps eventually) help you recognize learning that is starting to emerge, maybe along the lines of microgenesis, in Vygotsky-speak.

  • Cyrus Shaoul

    Co-Presenter
    May 15, 2017 | 03:17 p.m.

    Thanks for stopping by! As you surmised, we are very interested in the development of implicit processing that occurs during the course of a learning session and leads to rapid change in the processing of information, which is very similar to the idea of microgenesis.

    Some of the analyses that we intend to perform on our dataset include looking at the amount of time that the student is fixating on the particles and seeing if there is a relationship between particle density and fixation time. We will also be looking at the development of strategies: some students may start looking for particles that are on a collision course, which is a great strategy and also demonstrates implicit understanding of Newton's laws. We have many other ideas that we are working on, and we will soon see how these gaze analyses align with the existing work on mouse-click analysis in Impulse.
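    As a rough illustration of the density-by-fixation analysis, here is a minimal Python sketch; the fixation records below are invented, and the real dataset's format will surely differ:

```python
from statistics import mean

# Hypothetical fixation records: (particle_density, fixation_duration_ms).
# These values are illustrative, not taken from the actual Impulse logs.
fixations = [
    (5, 180), (5, 210), (10, 260), (10, 240),
    (15, 320), (15, 300), (20, 390), (20, 370),
]

def pearson_r(pairs):
    """Pearson correlation between particle density and fixation time."""
    xs, ys = zip(*pairs)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(fixations)
print(round(r, 3))  # 0.985 for this toy data: fixations lengthen with density
```

    A strong positive correlation here would suggest that denser particle fields demand longer fixations; the real analysis would of course control for level and learner.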

    Thanks for your question, and please feel free to ask a follow-up question.

  • Michael Stone

    Facilitator
    May 15, 2017 | 10:04 a.m.

    What an amazing challenge to tackle! Once the baseline student learning trajectories are identified, what are some of the anticipated benefits of the adaptive prototype of the game?

    Additionally, does your team expect to eventually bridge the eye-tracking technology with non-digital interactions (e.g. a more traditional classroom setting)? Please pardon my naivety if this is a ridiculous projection.

  • Cyrus Shaoul

    Co-Presenter
    May 15, 2017 | 03:45 p.m.

    Thanks for your excellent questions.

     

    First, let me describe some of the anticipated benefits of a future adaptive version:

     

    1) In the current version of the software, there is no "hint" feature to help a student who is stuck. In the adaptive version, the software would analyze the eye movements or other in-game data and detect ineffective deployment of attentional resources. It would then start providing hints (perhaps a glowing ring around a particle) that direct the student's attention to particles on a collision course. This would reduce student frustration and enable more efficient learning.

     

    2) To help students assess themselves during play, an adaptive version of this game could award "brain points" each time their behavior and eye movements show that they are attending to the critical items in the game. Even if a student was not yet jumping to the next level, they would see that their implicit learning was progressing.
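    Both of these adaptations hinge on the same trigger: noticing when too few recent fixations land on the particles that matter. Here is a minimal sketch in Python, where the fixation log and the precomputed set of collision-course particles are assumed data structures, not anything from the actual Impulse code:

```python
# Hedged sketch of a hint trigger: fixation targets are particle ids,
# and collision_course_ids is assumed to be precomputed by the physics
# engine. Both are illustrative assumptions.

def should_show_hint(recent_fixation_targets, collision_course_ids,
                     threshold=0.2):
    """Trigger a hint when too few recent fixations land on the
    particles that actually matter (those on a collision course)."""
    if not recent_fixation_targets:
        return False
    on_target = sum(1 for pid in recent_fixation_targets
                    if pid in collision_course_ids)
    return on_target / len(recent_fixation_targets) < threshold

# Example: the student's last ten fixations all miss the two particles
# headed for a collision, so a glowing-ring hint would fire.
fixes = [3, 7, 7, 9, 4, 4, 8, 3, 9, 1]
print(should_show_hint(fixes, {2, 5}))  # True
```

    The same proportion could just as easily feed the "brain points" idea: award a point whenever the ratio stays above the threshold for a stretch of play.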

     

    To answer your second question, we are hoping that the data we are collecting and analyzing will help other experimenters and learning theorists to better understand the power of implicit learning. We see great potential for non-digital experiences that are similar to our software in their avoidance of textual content and explicit instruction. Mixing digital and non-digital implicit learning may be the most effective method of all, but first more research needs to be done.

    I hope I was able to answer your questions. If you have a follow-up question, please ask away!

  • Jodi Asbell-Clarke

    Researcher
    May 15, 2017 | 04:48 p.m.

    great job you guys....the best partners ever!!! :)

  • Cyrus Shaoul

    Co-Presenter
    May 15, 2017 | 04:51 p.m.

    Thanks, Jodi! We could not have done it without you and the other folks at TERC and MIT.

  • Chris Thorn

    Facilitator
    May 16, 2017 | 12:18 a.m.

    So, if I understand where this is heading, one could imagine this work helping students who struggle with online homework by detecting the use of good strategies (students are looking at the relevant supporting information on the page, etc.). I suppose more adaptive versions of homework could actually deploy additional supports in response to a student's perceived struggle, based on eye gaze and cursor movement. Thinking about the "bridging" to non-digital environments: this learning could be leveraged to improve the design of offline representations as well.

  • Ibrahim Dahlstrom-Hakki

    Presenter
    May 16, 2017 | 10:03 a.m.

    Thanks for your comment, Chris. Yes, we see this work leading to a number of potential research directions. On the one hand, we are interested in bringing additional neurocognitive tools online, given that we now have the infrastructure to easily do so. On the other, we are hoping to match the highly instrumented patterns with more easily gathered data that can be collected out in the field (e.g. mouse stream, webcam), to allow for the detection of non-productive play patterns and the introduction of non-obtrusive prompts that put students back on track without explicitly guiding their play behavior.

  • Janet Kolodner

    Facilitator
    May 16, 2017 | 12:16 p.m.

    Very nice project -- important. I have some clarification questions:

     

    1. What does it mean to track "implicit learning"? What do you mean by "implicit learning"? (I was thinking it means the same as tacit learning: what they are grasping in an intuitive way.)

    2. How does eye tracking help you track it? I can see how eye tracking can help you track their attention and figure out what help they need to direct their attention in productive ways.

    3. What do you mean by "student learning trajectories"?

    My naive understanding of what you are aiming to do is to help game players learn to direct their attention to the important stuff, whatever it might be for the domain (in this case, particle behavior). I don't entirely understand the details of what you are studying in that context and how you are studying it.

     

    Janet

  • Ibrahim Dahlstrom-Hakki

    Presenter
    May 16, 2017 | 02:13 p.m.

    Thanks for the questions, Janet, and for your interest in our project. As you know, it is hard to provide details in a short three-minute clip, but I will try to give a bit more detail here. Please let me know if you'd like me to expand on any of my answers below:

    1. Implicit learning is similar to tacit learning, and the terms are often used interchangeably. However, there is a distinction between the two: tacit knowledge is, by its very nature, almost impossible to verbalize, whereas implicit knowledge is knowledge that a given individual may not yet be able to verbalize but could, with instruction, learn to verbalize.

    2. By looking at a learner's game behaviors and eye movements, you can deduce the type of information guiding those actions. For example, if a student implicitly understands that a particle will continue along its trajectory unless acted upon by a force, we would expect to see that student looking along that trajectory for potential collisions. If a student does not understand this at an implicit level, we would not expect them to be able to predict the behavior of particles in the game.

    3. The game our students play progresses through multiple levels of increasing complexity that help students slowly develop their understanding of particle behavior in a particle simulator. As they interact with particle types of different masses, the particles' behavior changes. We are interested in seeing whether there is a predictable progression in the game behaviors and eye-movement patterns of successful students, and whether the patterns of students who do not develop behaviors consistent with this implicit knowledge differ in some uniform way.

    So, at the end of the day, we are interested in a paradigm that (1) allows us to measure whether learners are developing knowledge that guides their eyes and game behaviors, and (2) when they are not, intervenes non-intrusively to guide their attention and help them develop that knowledge.

  • Janet Kolodner

    Facilitator
    May 16, 2017 | 02:39 p.m.

    Thanks, Ibrahim. Excellent answers -- because they are succinct, because I now understand better what you are doing and why, and because it seems you are on a really good track. Good work! I look forward to hearing more as the project progresses.

  • Henry Minsky

    Researcher
    May 18, 2017 | 12:45 p.m.

    I wonder if you could make a system that helped with learning to read a foreign language. If the eye tracker could tell when the student was lingering on a phrase, it could display the translation, and adapt to the amount of time the student tended to look at the same place. It would want to avoid training the student to just wait until a translation appeared, so it might need to wait longer and longer to display the translation if the student tended to pause too often.
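    That adaptive delay could be sketched roughly as follows; the gaze-event interface and the back-off rule here are invented for illustration, and a real system would read linger times from an eye-tracker SDK:

```python
# Toy sketch of an adaptive translation gate. All thresholds and the
# back-off factor are illustrative guesses, not tuned values.

class TranslationGater:
    """Show a translation only after the reader lingers on a phrase,
    and back off the delay so the reader is not trained to simply
    wait for the answer."""

    def __init__(self, base_delay_ms=800, backoff=1.5, max_delay_ms=5000):
        self.delay_ms = base_delay_ms
        self.backoff = backoff
        self.max_delay_ms = max_delay_ms

    def on_linger(self, linger_ms):
        """Return True if the translation should appear for this linger."""
        if linger_ms < self.delay_ms:
            return False
        # Each shown translation raises the bar for the next one.
        self.delay_ms = min(self.delay_ms * self.backoff, self.max_delay_ms)
        return True

gater = TranslationGater()
print(gater.on_linger(500))   # False: too brief to count as being stuck
print(gater.on_linger(900))   # True: shown, and the threshold rises to 1200 ms
print(gater.on_linger(1000))  # False: below the new, higher threshold
```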

  • Cyrus Shaoul

    Co-Presenter
    May 18, 2017 | 04:22 p.m.

    Henry, 

    Thanks so much for your question. This type of adaptive learning system is where we think the research should be headed. Learning foreign languages is definitely an area where the interaction between implicit and explicit learning is not yet well understood. There has been a deep and broad body of research using eye-tracking to look at how language is learned, and there are some theories that would predict a benefit of providing a "just-in-time" translation. 

    From other theoretical perspectives, having the native version of the word appear while reading the foreign language might inhibit learning, possibly due to "blocking" caused by the extremely well-learned native word appearing and preventing learning of the unfamiliar foreign word. See this paper on which I am a co-author for more thoughts on this.

    In any case, it would be an idea worth trying out! Thanks for the comment.

  • Ibrahim Dahlstrom-Hakki

    Presenter
    May 18, 2017 | 02:19 p.m.

    Interesting idea, Henry. It is certainly possible, and there is in fact a large body of research on reading that includes fixation-contingent display changes. As to whether it could be designed in such a way as to improve the efficiency of learning a foreign language, I'm not sure, but it would certainly make an interesting pilot study.

  • Janet Kolodner

    Facilitator
    May 18, 2017 | 03:17 p.m.

    I was also thinking about learning to read (for those who are having trouble). Jack Mostow (at CMU) used eye tracking and other means to figure out how to help young kids learn to read better; I wonder if your technology would add to what they have been able to do.

  • Ibrahim Dahlstrom-Hakki

    Presenter
    May 18, 2017 | 03:28 p.m.

    I would actually say that the vast majority of eye-tracking research, particularly in cognitive psychology, is on reading and understanding the reading process. Unfortunately, most of that work is on neurotypical readers, and there is relatively little on readers with dyslexia. I certainly believe that more eye-tracking work on reading and reading interventions for students with dyslexia is needed.

  • Janet Kolodner

    Facilitator
    May 18, 2017 | 04:11 p.m.

    I'm sure at least some of Mostow's kids were dyslexic, and others, I'm sure, had other learning and reading difficulties. My guess is that a good percentage of them are not neurotypical, but maybe I am wrong.


    Good work!!

  • Jenna Welsh

    Undergraduate Student
    May 18, 2017 | 05:40 p.m.

    Wow, what an amazing project. I personally don't know much about using eye tracking to track and help with learning, so this is very interesting to me.

  • Ibrahim Dahlstrom-Hakki

    Presenter
    May 19, 2017 | 09:18 a.m.

    Thanks for posting, Jenna. I'd be happy to share more information if you are interested in learning more about our work.

  • Further posting is closed as the showcase has ended.