CYNTHIA D'ANGELO

SRI International
Public Discussion


  • Meg Bates

    Researcher
    May 15, 2017 | 11:41 a.m.

    Super interesting!  I am curious whether you also looked at performance on the problems in relation to the Q & I codes or the automatic measures of speech.

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 15, 2017 | 12:23 p.m.

    Hi! We did collect data on problem performance. The study was set up so that students had to get the problem correct before moving on, so one thing we looked at was how many incorrect answers they had before getting it right. There wasn't a clear relationship with Q or I codes, but right now we are looking at whether some affect/arousal features in the speech are related to correct/incorrect problem solving.
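
    To make that concrete, here is a rough Python sketch of what relating arousal-style speech features to correctness could look like; the feature choices (mean pitch and energy), file names, and model are placeholders for illustration, not our actual pipeline (librosa and scikit-learn assumed).

      # Hypothetical sketch: relate simple prosodic features (often used as
      # arousal proxies) to whether a problem attempt was answered correctly.
      # File names, features, and the model are placeholders, not the real pipeline.
      import numpy as np
      import librosa
      from sklearn.linear_model import LogisticRegression

      def prosodic_features(wav_path):
          """Mean pitch and mean RMS energy for one attempt's audio clip."""
          audio, sr = librosa.load(wav_path, sr=16000)
          f0, voiced_flag, voiced_prob = librosa.pyin(audio, fmin=80, fmax=400, sr=sr)
          rms = librosa.feature.rms(y=audio)[0]
          return [np.nanmean(f0), float(rms.mean())]

      # Placeholder data: (audio clip for one attempt, 1 = answered correctly).
      attempts = [("attempt_01.wav", 1), ("attempt_02.wav", 0), ("attempt_03.wav", 1)]
      X = np.array([prosodic_features(path) for path, _ in attempts])
      y = np.array([label for _, label in attempts])

      model = LogisticRegression().fit(X, y)
      print(model.coef_)  # sign/magnitude hint at how each feature tracks correctness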

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 15, 2017 | 12:26 p.m.

    Hi everyone!

    I hope you enjoy the video and find the project interesting. A couple of things it would be interesting to hear your comments on:

    • For researchers, can you imagine other classroom learning uses for this kind of predictive ability using student speech?
    • For teachers, what would you like in terms of feedback to you from this kind of system? 

    Thanks!

  • Sherry Hsi

    Researcher
    May 17, 2017 | 03:22 p.m.

    I am thinking of a learning scenario with a group design task, of the kind commonly found in engineering, maker-oriented projects, or robotics team projects, where both hands are engaged with tools and materials. Learning is gestural and embodied. One could imagine the computer acting as a collaboration detector in the environment, helping to identify and assess levels of collaboration and engagement. In an assessment situation where you need to figure out individual performance and contribution to a group task, one could identify who was really contributing to the problem solving.

    Can you say more about what you have planned next? Those Q codes are fascinating.

     

  • Jeremy Roschelle

    Researcher
    May 16, 2017 | 11:28 p.m.

    Hi Cynthia,

    Great video! Say more about the big picture.... For example, someday, on a future PISA international test, what might become possible because of your advances?

    best,

    jeremy

     

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 22, 2017 | 06:12 p.m.

    Thanks! Although this is an early-stage research project, we believe that it could lead to a wide range of applications for different learning contexts. PISA has recently added a collaboration task to its assessment, and perhaps in the future one of the data streams collected will be audio to help better understand how students are working together. 

    This technology has the potential to reach all sorts of learners in many different contexts and to support teachers in a) understanding how and when their students are learning and b) providing just-in-time support when it's needed. We might even be able to translate this into informal learning spaces (such as museums) or distance education. There are a lot of possibilities, which makes this work so exciting!

  • Martin Storksdieck

    Facilitator
    Director and Professor
    May 17, 2017 | 08:43 a.m.

    Very interesting project! I might have gotten it wrong, but you are trying to assess collaboration in face-to-face interactions? I am wondering how such a system would help us understand collaboration in online situations, where instructors might not have a chance to "see"?

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 18, 2017 | 08:02 p.m.

    Yes, we are trying to assess/measure face-to-face collaboration. I think the eventual system could definitely be used in online learning systems, as long as there was high quality audio coming from the participants. This would be a really interesting extension of the work. 

    Thanks!

  • Andrea Gomoll

    Researcher
    May 17, 2017 | 08:30 p.m.

    Great video! I'm interested in hearing more about the group collaboration codes and individual contribution codes. How did you define high-quality group collaboration? How might the video data you've collected be used to add nuance to your analysis of group collaboration, incorporating embodied action?

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 18, 2017 | 08:08 p.m.

    The group collaboration codes were trying to capture how much the individuals in the group were intellectually contributing to the problem solving. So, in a high-quality group, all three members would be actively intellectually engaged (which could involve mostly listening and asking the right questions of the other members at the right time). The individual codes were specific behaviors or actions, like asking a question, agreeing with someone, or verbalizing their thinking out loud to the group.

    The issue of embodied action is really interesting and I'm glad you asked about it. One thing we were really concerned with early on was how much of a role non-verbal communication would play in the collaboration. The video data helps in two ways: 1) it makes it a lot easier for the researchers to apply the codes to the data, especially the individual codes, and 2) it helps us see how much non-verbal communication is going on and whether or not it is having an impact on our coding. There was a good amount of non-verbal communication, but in general, that added nuance would not have changed our group-level codes. However, it did make it easier for the researchers to understand the group dynamics and apply the individual codes.

    Thanks!

  • Andrea Gomoll

    Researcher
    May 19, 2017 | 01:40 p.m.

    Thanks for this detailed response! I look forward to seeing more of this work.

  • Jackie DeLisi

    Facilitator
    Research Scientist
    May 17, 2017 | 10:09 p.m.

    This is really interesting. I thought it made a lot of sense that students would have periods of time without any collaboration, but I wonder if the type of activity might also influence student collaboration. Have you coded student collaboration across different types of activities? Also, I wonder what implications this might have for teaching?

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 18, 2017 | 08:13 p.m.

    I'm sure the type of activity had an influence on this. We picked this type of activity specifically so that we could capture a good data set (one with a lot of communication). We needed lots of opportunities for students to talk to each other to solve the problem, and we needed a range of outcomes (from good collaboration to no collaboration) in order for our machine learning to work well; a toy sketch of that labeling point appears at the end of this reply. These activities were relatively short, closed tasks with a known answer. The nature of the collaboration would likely look different with longer, open-ended tasks.

    I think there are a lot of implications for teaching. One in particular: when going around to monitor groups, teachers typically see only a small snapshot of the group's collaboration (and one that is usually colored by the students knowing that the teacher is watching them). One take-away would be not to rely too much on that little snapshot, since we know that there is a lot of variation within groups and within tasks.
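
    A toy sketch of the labeling point above: a classifier mapping group speech features to collaboration-quality codes can only learn to separate levels that actually appear in the training data. The features, label scale, and model here are placeholders for illustration (scikit-learn assumed), not our actual setup.

      # Illustrative only: random placeholder features and labels.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Placeholder features per short window of group audio, e.g. total talk
      # time, amount of speaker overlap, number of turn switches.
      X = rng.normal(size=(200, 3))

      # Placeholder collaboration-quality codes, 0 = no collaboration ... 3 = good.
      # Training data has to span this range for the classifier to learn it.
      labels = rng.integers(0, 4, size=200)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      print(cross_val_score(clf, X, labels, cv=5).mean())  # near chance on random data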

  • Bridget Dalton

    Researcher
    May 18, 2017 | 07:06 p.m.

    Very interesting work! I may have missed this... have you found that certain words and phrases signal productive collaborative work? Are you looking at relative contribution from group members? I can imagine that some students might talk a lot and move the group forward, while others might speak only occasionally but also move the group forward. Do these children have relationships with one another outside of your experiment? Thanks!

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 18, 2017 | 08:17 p.m.

    As of right now, we are not looking at the words and phrases. This may be something that we have time to investigate in more detail later this year, but it's not part of our current analysis. 

    We are looking at the relative contribution from group members. A lot of the speech metrics are normalized by individual and/or by group so as not to conflate more talkative groups with less talkative groups (a rough sketch of this kind of normalization appears at the end of this reply).

    The Q codes do capture timely student contributions to the group effort, even from a student who hasn't been talking a lot. If they are intellectually contributing, even in a small way, then they are helping the group.

    In Phase 1, many of the children knew each other from class, but not all of them. In Phase 2, we collected data in intact classes, so they all knew each other.

    Thanks!
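
    Here is a minimal sketch of the within-group normalization mentioned above; the column names, metric, and z-scoring choice are assumptions for illustration (pandas assumed), not our exact metrics.

      # Placeholder talk-time metric for two groups of three students.
      import pandas as pd

      df = pd.DataFrame({
          "group": ["A", "A", "A", "B", "B", "B"],
          "student": [1, 2, 3, 4, 5, 6],
          "talk_seconds": [40.0, 10.0, 25.0, 90.0, 70.0, 95.0],
      })

      # Within-group z-score: each student's talk time relative to their own
      # group, so quiet and chatty groups can be compared on the same scale.
      grp = df.groupby("group")["talk_seconds"]
      df["talk_z"] = (df["talk_seconds"] - grp.transform("mean")) / grp.transform("std")
      print(df)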

  • Bridget Dalton

    Researcher
    May 19, 2017 | 02:26 p.m.

    Thank you, Cynthia, this information is quite helpful.

     

  • Kathy Perkins

    Director
    May 18, 2017 | 09:41 p.m.

    Such interesting work! I'd be interested in seeing if you could use this to identify the types of group activities, cues, and pedagogy that best encourage collaboration. E.g., does it work better when you assign roles or not? Or when you have these types of math problems or sim-based challenges? Seems like a lot of potential to expand this research thread with this tool!

  • Cynthia D'Angelo

    Lead Presenter
    Senior Researcher
    May 22, 2017 | 06:14 p.m.

    Thanks, Kathy! I think these are really good questions for the next stage of our work. We want to look at longer, more open-ended tasks where things like assigned versus unassigned roles could make a difference, or where task milestones could affect how well groups are able to self-manage their problem solving. There are so many possibilities, and it's really exciting to see where this might go. I think incorporating interactive simulations (like PhET) could be really interesting down the road!

  • Further posting is closed as the showcase has ended.