Presenters

  • Bruce Walker, Professor (Georgia Institute of Technology, University of Colorado Boulder)
  • Carrie Bruce, Senior Research Scientist (Georgia Institute of Technology, University of Colorado Boulder)
  • Emily Moore, Director of Research and Accessibility (University of Colorado Boulder)
  • Brianna Tomlinson, PhD Student Researcher (Georgia Institute of Technology, University of Colorado Boulder; LinkedIn: https://www.linkedin.com/in/brianna-tomlinson-697300107/)

Project: Sonified interactive simulations for accessible middle school STEM
https://phet.colorado.edu
Public Discussion


  • Emily Moore

    Co-Presenter
    Director of Research and Accessibility
    May 15, 2017 | 10:09 a.m.

    Thank you for viewing our video! We would love to hear your thoughts and questions on the use of sonification and auditory display to create more engaging - and more accessible - interactive simulations. This project is just getting started, and we’d love to hear your ideas! In particular:

    • What sounds do you think would help convey the most important concepts in your favorite PhET simulation?
    • How do you think we can best support teachers in making use of sound in simulations?
    Looking forward to the conversation!
  • Brian Drayton

    Researcher
    May 16, 2017 | 10:24 a.m.

    This is fascinating work. I am trying to wrap my head around the strategy a bit more. There are sounds (splat, bang, splash, purr, rustle) which are suggestive of kinds of real-world events... These seem to bring a fair amount of "meaning" possibilities, as long as they relate to something usually associated with the sounds (sort of an iconic relationship). Then there are musical cues (tempo, pitch, interval, sequence) which are more abstract, and some association needs to be built between a phenomenon (data trend, change over time, etc.) and a chosen cue--a more symbolic relationship, since the same musical cue might be associated with very different "meanings" in context.

    So: Are you seeking to develop conventions within your PhET universe, so that some of the symbolic relationships become conventional (e.g., rising tone means increase along the y-axis), or will you need to create new symbolic relations for each simulation?

    It's possible I'm over-thinking this, but your challenge seems so broad, I am curious how you're narrowing things down a bit.

  • Bruce Walker

    Lead Presenter
    Professor
    May 17, 2017 | 12:25 a.m.

    Brian, excellent questions! We will be applying all of those types of sounds, and more! In some cases, we can rely on sounds that are clearly, and perhaps universally, associated with a particular concept. The kinds of sounds used to represent, for example, a successful completion of an exercise, or successfully dropping a proton into the center of an atom... we can be confident that there is common agreement on those kinds of sounds. We can leverage sound design, Foley artistry, and shared experiences for much of that.

    When it comes to more symbolic relationships, such as when we use sound to represent the temperature or velocity or mass of something in a simulation, we start with a foundation of years of research in the auditory display community to determine "population stereotypes" for sound-to-concept mappings. For instance, we know that increasing pitch is a good way to represent increasing speed, urgency, and vertical position on the screen. On the other hand, to represent increasing mass, we use *decreasing* pitch.
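    As a minimal sketch of how such polarity conventions can be encoded (TypeScript; the concept names and frequency ranges are illustrative assumptions, not our actual mapping tables):

        // Map a normalized value [0, 1] to a pitch, honoring the
        // "population stereotype" polarity for each concept.
        type Polarity = 'positive' | 'negative'; // positive: value up => pitch up

        interface PitchMapping {
          minFreq: number;   // Hz at the low end of the range
          maxFreq: number;   // Hz at the high end of the range
          polarity: Polarity;
        }

        // Example stereotypes: speed and vertical position map positively;
        // mass maps negatively (heavier => lower pitch).
        const mappings: Record<string, PitchMapping> = {
          speed:     { minFreq: 220, maxFreq: 880, polarity: 'positive' },
          yPosition: { minFreq: 220, maxFreq: 880, polarity: 'positive' },
          mass:      { minFreq: 220, maxFreq: 880, polarity: 'negative' },
        };

        function toFrequency(concept: string, value: number): number {
          const m = mappings[concept];
          const v = m.polarity === 'positive' ? value : 1 - value;
          return m.minFreq + v * (m.maxFreq - m.minFreq);
        }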

    We also know that in some cases increasing pitch best represents increasing temperature, but not always. The context, and the listeners, matter a great deal in those cases.

    So, we involve the listeners directly in our design process. To the extent possible, we will be employing population stereotypes; where those don't necessarily exist, we will attempt to determine the best designs in systematic ways (i.e., by interacting with plenty of students!). And in some cases, we will simply have to make a design decision about how to represent a concept, based on the pedagogical goals of the sim and our experience. Then we will do our best to make the "design language" we use consistent within and across the many PhET simulations, and congruent with the visual design language already in use.

    These common design languages, or design conventions, will be clearly documented and consistently applied. This will allow us to train teachers on what to expect, and on how to share this with their students.

  • Michael Stone

    Facilitator
    Director of Innovative Learning
    May 16, 2017 | 01:00 p.m.

    I had basically the exact same question as Brian. Are you basing the symbolic sounds on the "weight" of the sounds? It is interesting to consider the possibility of using sound cues to give visually impaired students a more intuitive experience with digital simulations, but I am definitely curious as to how you are identifying what intuitive symbolic sounds are for the various processes in the simulation.

  • Bruce Walker

    Lead Presenter
    Professor
    May 17, 2017 | 12:40 a.m.

    Michael, thanks for your interest and your questions. We know that in some cases there is a good way to convey a certain concept using sounds. For instance, in the "Build An Atom" simulation, the student picks protons, neutrons, or electrons up from "baskets" and drops them, one at a time, into the center of a target, which represents a nucleus and orbits. As the first proton is dropped, creating Hydrogen, we can play a sound representing the mass of the atom (one proton). As a second proton is added to the nucleus, the mass of the atom doubles, creating Helium. The sound of Helium should seem "heavier" than the sound for Hydrogen. We represent this using a lower pitch. As protons are added, the sound of the resulting element continues to drop in pitch.

    But what do we do when you have one proton (Hydrogen), and you add one neutron? The element stays the same, but the mass increases. Do we change the pitch, or do something else? We look at the pedagogical goals, and decide that element number is going to be represented by pitch, as described above, but adding neutrons will be represented by reverb. So adding a neutron makes the sound seem "fuller", representing more mass, but not lower pitch.
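    To make that concrete, here is a rough sketch of how such a mapping could be wired up with the Web Audio API (TypeScript; the frequencies, the decay values, and the feedback-delay stand-in for reverb are illustrative assumptions, not the sim's actual implementation):

        // Hypothetical sketch: pitch falls as protons are added, and a
        // feedback delay (standing in for reverb) thickens the sound per neutron.
        const ctx = new AudioContext();

        function playAtomSound(protons: number, neutrons: number): void {
          const baseFreq = 880; // assumed pitch for Hydrogen (one proton)
          const freq = baseFreq * Math.pow(0.9, protons - 1); // heavier => lower

          const osc = ctx.createOscillator();
          osc.frequency.value = freq;

          const dry = ctx.createGain();
          dry.gain.value = 0.8;
          osc.connect(dry).connect(ctx.destination);

          // "Fuller" sound per neutron: more neutrons => longer, denser tail.
          const delay = ctx.createDelay();
          delay.delayTime.value = 0.08;
          const feedback = ctx.createGain();
          feedback.gain.value = Math.min(0.15 * neutrons, 0.7);
          osc.connect(delay);
          delay.connect(feedback).connect(delay); // feedback loop
          delay.connect(ctx.destination);

          osc.start();
          osc.stop(ctx.currentTime + 0.5);
        }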

    So it is a design challenge, and we tackle it with plenty of expertise; but there is also plenty of need to come up with *new* design decisions, since oftentimes we are sonifying things that no one has likely sonified before.

    Once we come up with a design framework, we prototype the sounds, plug them into the simulations, and engage sim designers, teachers, and students in extensive iterative design sessions to see how our design decisions hold up. We evaluate comprehension, performance, and preference, and factor it all in as we move forward.

    When we find sound design approaches that work, be it by good initial design, or iterative refinement, or even good luck, we document that, and re-use it as much as is appropriate.


  • Janet Kolodner

    Facilitator
    Regents' Professor Emerita
    May 16, 2017 | 01:17 p.m.

    Same question here, Bruce. I'd love to know more about the sound vocabulary and conventions you are developing. Even if you haven't gotten far yet, tell us what you are thinking about the vocabulary and conventions. Thanks.


    Janet

  • Bruce Walker

    Lead Presenter
    Professor
    May 17, 2017 | 12:47 a.m.

    Hi Janet! I hope my responses to earlier questions shed some light.

    I'll also add here that we not only have to consider the "message" we are conveying (e.g., "you successfully added a proton to the nucleus"), but we also have to consider the tone and the "feel" of the sound. The PhET simulations carefully and consistently use a visual look and feel that is playful yet not childish; clean but not cold; kind of cartoon-realistic but not photo-realistic... And our sounds have to fit into that ecosystem. Thus, beyond just sorting out whether to change pitch to represent mass or not, we also have to think about the aesthetics of the sounds, and whether they match what the visual interface is portraying... even if the users happen to be blind and cannot see the visual interface. After all, they may very well be in a class with a mixture of sighted, low-vision, and totally blind students, and they all need to have a coordinated, consistent, and shared multimodal learning experience.

  • Janet Kolodner

    Facilitator
    Regents' Professor Emerita
    May 17, 2017 | 12:31 p.m.

    Glad you are thinking about all this. My one suggestion: it would be great to think about the sounds you are creating as laying the groundwork for others to also sonify phenomena. That is, try for consistency across simulations in the sounds that are used, so that learners come to predict what sounds mean, and so that other sonifiers will implement consistently with what you guys are doing.

    Janet

  • Bruce Walker

    Lead Presenter
    Professor
    May 17, 2017 | 02:38 p.m.

    Absolutely! We are definitely working towards consistency across the many PhET sims, but we are also developing sonification strategies that make sense in other, non-PhET domains. We already do that in my lab, but this project is a great opportunity to have broader impact in the larger field of multimodal interfaces!

  • Carrie Bruce

    Co-Presenter
    Senior Research Scientist
    May 16, 2017 | 06:34 p.m.

    Hi all, thank you for viewing the project video and getting involved in the discussion.    


    Your thoughts and questions point to elements of the challenge... an effective auditory display, whether it is in combination with another form of display (e.g., visual, tactile) or experienced as a standalone interaction, should be based on informed design decisions. Brian and Michael, you both suggested strategies that are reasonable in designing an auditory display: assigning auditory icons or earcons, and mapping characteristics of sounds (e.g., "weight", tempo, pitch) to attributes of or data arising from the sim. We will likely use both of these strategies, and others, as we work on this project.

    We are drawing on what is known about human perception, auditory display and sonification design, and pedagogical strategy for conveying STEM concepts.  We are also empirically investigating the design of sounds in the PhET sims through co-design and testing activities with students, teachers, and designers.  


    Much like building a visual display or visualization, we can expect to design an auditory display or sonification using: 1) sounds that are understood by most people because they are more conventional across contexts (like a "ding" to indicate that something has happened); 2) sounds that many people tend to understand more intuitively or find easier to learn based on prior experiences, mental models, visual/tactile associations, etc. (like a sound with rising pitch to indicate an increase in temperature or moving from a lower point on the screen to a higher point); and, perhaps, 3) sounds that are learned and understood by people through experimentation or by instruction.
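    As a hypothetical illustration of how such a taxonomy might be recorded for designers (TypeScript; the entries are invented examples, not our actual sound inventory):

        // Tag each sim sound with the kind of understanding it relies on, so
        // designers can see how much learning support a given sound will need.
        enum SoundCategory {
          Conventional = 'conventional', // e.g., a "ding" for task completion
          Stereotyped = 'stereotyped',   // e.g., rising pitch for rising temperature
          Learned = 'learned',           // understood via instruction or exploration
        }

        interface SimSound {
          id: string;
          concept: string;            // what the sound conveys in the sim
          category: SoundCategory;
          needsIntroduction: boolean; // learned sounds likely need explicit teaching
        }

        const catalog: SimSound[] = [
          { id: 'success-ding', concept: 'exercise completed',
            category: SoundCategory.Conventional, needsIntroduction: false },
          { id: 'temp-rise', concept: 'temperature increasing',
            category: SoundCategory.Stereotyped, needsIntroduction: false },
          { id: 'neutron-reverb', concept: 'neutron added (mass up, element unchanged)',
            category: SoundCategory.Learned, needsIntroduction: true },
        ];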


    We hope to develop effective auditory displays for the PhET sims, while also generating evidence for design guidelines beyond PhET and contributing knowledge about perceptual abilities for these types of displays.  


    Cheers!           

  • Chris Thorn

    Facilitator
    Director of Knowledge Management
    May 16, 2017 | 11:54 p.m.

    Thanks for that explanation. I'm thinking about how best one would communicate what you are learning to other designers. A sound vocabulary with tested conventions seems to be a basic requirement for uptake at any sort of scale. I was just looking at a piece on haptic feedback, and wondering about the use of haptic feedback in addition to auditory or visual feedback.

  • Bruce Walker

    Lead Presenter
    Professor
    May 17, 2017 | 12:54 a.m.

    Chris, we are producing design documents for the sounds, which complement the design documents that exist or are being developed for the visuals, and for the (text) descriptions that are being put in place to support users of screen readers. Thus, a whole library of multimodal design documents will be an important outcome. Annotated examples will also be a big part of the dissemination. After all, you can interact with the simulation, move some protons around, and immediately get a sense of how the sounds support the experience.

    Finally, since the technology used to play the sounds is embedded in the same HTML5 code that is used to change the visual interface, and which also supports the various input modalities, the logic of how events trigger sounds is captured in code. This makes it very easy to understand what sounds will be played, when, why, and how.
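    A simplified sketch of that event-to-sound wiring, using a generic observable pattern (TypeScript; an illustration of the idea, not PhET's actual codebase):

        // One model property drives both the view and the sonification, so the
        // trigger logic lives in one place in the sim code.
        type Listener<T> = (value: T) => void;

        class ObservableProperty<T> {
          private listeners: Listener<T>[] = [];
          constructor(private value: T) {}
          get(): T { return this.value; }
          set(next: T): void {
            this.value = next;
            this.listeners.forEach(l => l(next));
          }
          link(listener: Listener<T>): void { this.listeners.push(listener); }
        }

        declare function updateNucleusView(count: number): void;    // placeholder view code
        declare function playAtomSound(p: number, n: number): void; // see the earlier sketch

        const protonCount = new ObservableProperty<number>(0);
        protonCount.link(count => updateNucleusView(count)); // visual update
        protonCount.link(count => playAtomSound(count, 0));  // auditory update
        protonCount.set(1); // dropping a proton triggers both the view and the sound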

  • Carrie Bruce

    Co-Presenter
    Senior Research Scientist
    May 22, 2017 | 01:59 p.m.

    Again, thanks to all of our video viewers!  Please keep in touch with our project as we move forward.  If you have suggestions for classrooms, events, or other opportunities through which we can share our developments and get feedback, please let us know.
