Computational Methods for Music, Media, and Minds REU

National Science Foundation Research Experience for Undergraduates (NSF REU)


Computational Methods for Understanding Music, Media, and Minds

How can a computer learn to read an ancient musical score? What can methods from signal processing and natural language analysis tell us about the history of popular music? Can a computer system teach a person to better use prosody (the musical pattern of speech) in order to become a more effective public speaker?

These are some of the questions that students will investigate in our REU: Computational Methods for Understanding Music, Media, and Minds. They will explore an exciting, interdisciplinary research area that combines machine learning, music theory, and cognitive science. Students will be mentored by faculty members drawn from Computer Science, Electrical and Computer Engineering, Biomedical Engineering, and Public Health.


PI Ajay Anand, PhD

Ajay Anand, PhD
Deputy Director
Associate Professor
Goergen Institute of Data Science


Co-PI Zhiyao Duan

Zhiyao Duan
Associate Professor
Department of Electrical and Computer Engineering, Department of Computer Science

Applications are now closed for Summer 2022.

How to Apply/Eligibility

You are eligible to apply if:

  • You are a 1st, 2nd, or 3rd year full-time student at a college or university.
  • You are a U.S. citizen or hold a green card as a permanent resident.
  • You will have completed two computer science courses or have equivalent programming experience by the start of the summer program.

We are unable to support international students via this federally funded NSF REU program. If you are looking for self-funded research opportunities, you can reach out to one of our affiliated faculty members directly to discuss your research interests.

Neither being a computer science major nor having prior research experience is required. We seek to recruit a diverse group of students with varying backgrounds and levels of experience. We encourage applications from students attending non-research institutions and from students from communities underrepresented in computer science.

Before starting the application, you should prepare:

  • An unofficial college transcript (a list of your college courses and grades) as a PDF, Word document, or text file. Please include the courses you are currently taking.
  • Your CV or resume, as a PDF, Word document, or text file.
  • A 300-word personal statement, as a PDF, Word document, or text file, explaining why you wish to participate in this REU, including how it engages your interests, how the experience would support or help you define your career goals, and special skills and interests you would bring to the program.
  • The name and email address of a teacher or supervisor who can recommend you for the REU.

To apply, students should fill out our Google application form OR fill out our NSF ETAP application form. Either application method is acceptable, and students do not need to fill out both application forms.

Applying via Google Form:

The application website does not allow you to save and resume your application before submitting, so we recommend starting your application when you have the time to complete it in one sitting.

  1. Apply online no later than February 6, 2022.
  2. Once you have submitted your application, please have the person recommending you for the REU upload a letter of recommendation (PDF or DOC).
  3. Notification of acceptance will be communicated between March 15 and April 15, 2022.

Apply via Google Form

Applying via NSF ETAP:

  1. Create an ETAP account and fill out all portions of the registration.
  2. Select our REU program and follow the prompts to apply. The personal statement should explain why you wish to participate in this REU, including how it engages your interests, how the experience would support or help you define your career goals, and special skills and interests you would bring to the program.
  3. Apply online no later than February 6, 2022.
  4. Notification of acceptance will be communicated between March 15 and April 15, 2022.

Apply via NSF ETAP

The REU Experience

The tentative 2022 REU dates are Tuesday, May 24, 2022 to Friday, July 29, 2022.

Students accepted into the REU will receive:

  • On-campus housing
  • Meal stipend
  • A stipend of $6000
  • Up to $600 to help pay for travel to and from Rochester

Your experience will include:

  • A programming bootcamp to help you learn and/or improve your programming skills in the Python language.
  • Performing research with a team of students and faculty on one of the REU projects.
  • Professional development activities, including graduate school preparation and career planning.
  • Social events and experiences, including opportunities to meet and network with students from other REU programs and members of the University community.

The David T. Kearns Center coordinates summer activities for all REU programs across the University of Rochester campus, including the REU orientation and undergraduate research symposium.

Visit our REU summer activities page for more detailed information on programs and events.

Projects, Participants, and Presentations

On the application form, you can specify your top project preferences. We will do our best to assign you to a project that fits your preferences and interests, taking your background and skills into account.

Visit our past REU sessions page for more information on previous projects, participants, and presentations.

2022 Projects
Project #1

Title: Decoding the representation of music from the human brain

Mentor: Edmund Lalor (Biomedical Engineering and Neuroscience)

Music is one of the most emotive, universal, social, and powerful things in human culture. But despite being part of human life for hundreds of thousands of years, even defining what constitutes music continues to be debated. One way to move forward on this issue would be to examine what constitutes music from the perspective of the human brain. However, how the brain creates coherent perceptions of music from complex combinations of sounds is itself poorly understood. One thing is clear: this process involves recognizing structure, and detecting meaning and associations from the sounds that impinge upon our ears. In many ways, this is a challenge similar to processing speech. In this project, we aim to adapt recent progress in speech neuroscience to obtain a better understanding of how musical “meaning” and structure are computed by the human brain. In particular, we will use a combination of machine learning and the analysis of brainwave signals recorded from human subjects to identify neural correlates of musical structure and predictability. The student will analyze the structure and content of musical pieces and will analyze EEG data recorded from human subjects. They will also have the opportunity to learn how to collect this type of neural data.
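To give a flavor of the kind of analysis this project involves, one common technique for relating a stimulus to EEG is to fit a regularized linear model (a "temporal response function") mapping time-lagged stimulus features to the recorded signal. The sketch below illustrates the idea on purely synthetic data; the stimulus feature, response shape, and parameters are all invented for illustration and are not the lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: a stimulus feature (e.g., note onsets or "surprisal")
# and one simulated EEG channel that responds to it over a short time window.
n_samples, max_lag = 2000, 20
stimulus = rng.standard_normal(n_samples)
true_trf = np.exp(-np.arange(max_lag) / 5.0)   # made-up neural response shape
eeg = np.convolve(stimulus, true_trf)[:n_samples] \
      + 0.5 * rng.standard_normal(n_samples)

# Build a design matrix of time-lagged copies of the stimulus:
# X[t, lag] = stimulus[t - lag].
X = np.zeros((n_samples, max_lag))
for lag in range(max_lag):
    X[lag:, lag] = stimulus[:n_samples - lag]

# Ridge regression: w = (X^T X + lambda * I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(max_lag), X.T @ eeg)

# The estimated weights should closely track the true response shape.
r = np.corrcoef(w, true_trf)[0, 1]
print(f"correlation between estimated and true response: {r:.3f}")
```

With real data, the same regression is run on multichannel EEG and on musical features extracted from the score or audio, and the predictive accuracy of the model serves as the neural correlate of interest.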

Project #2

Title: Collecting and analyzing ego-centric audio-visual video data

Mentor: Chenliang Xu (Computer Science)

Multimodal perception research aims to build intelligent machines that can see and hear. The field has made strong progress in developing computer systems that understand audio-visual associations from a third-person “spectator” view. However, little attention has been paid to capturing fine-grained audio-visual associations from the ego-centric point of view, which is closer to how humans perceive the world. To bridge the gap between computer systems and human perception, we plan to explore two directions in this project. First, since first-person video data with spatial audio (binaural recordings) is scarce, the REU students will survey the relevant datasets and benchmarks and develop a strategy for recording diverse ego-centric audio-visual data. Students will then apply this strategy to a data collection process during the summer. The other component of the project will involve students developing computational methods to solve challenging audio-visual tasks, such as visually guided sound separation and visual localization of sound sources in the ego-centric domain. Through the project, the REU students will gain experience collecting ego-centric videos with sound and developing computational methods to analyze such ego-centric audio-visual data.

Project #3

Title: Machine Learning and Conceptual Complexity

Mentor: Jens Kipper (Philosophy)

This project uses language models and other machine learning models to investigate the complexity of concepts. Conceptual complexity is relevant to several issues both inside and outside of philosophy—for instance, to issues regarding the difficulty of reading comprehension and other cognitive tasks. It is difficult to investigate conceptual complexity in humans. However, progress in natural language processing has made it possible to approach this issue by using machine learning models as models of human cognition. Among other tasks, students will contribute to this research by probing and training language models. This research is part of a larger project on “Automated conceptual analysis,” which aims to use computational methods to study concepts. Students will have the opportunity to become familiar with this larger project, too.

Project #4

Title: Marketing, Advertisement and Users’ Perceptions of IQOS on Twitter

Mentor: Dongmei Li (Public Health Sciences, UR Medical Center)

IQOS is a new heated tobacco product from the tobacco giant Philip Morris International (PMI), which heats real tobacco leaves that contain toxins and harmful substances. On July 7, 2020, the US Food and Drug Administration (FDA) authorized the marketing of IQOS as a “reduced exposure” tobacco product. Social media platforms such as Twitter are commonly used by companies, factories, and stores to promote IQOS, as well as by users to share their opinions and perceptions. The purpose of the proposed study is to examine the strategies and techniques used in IQOS marketing and advertisements, as well as the public perception of IQOS, by mining Twitter data using natural language processing techniques and statistical methods. Our group has been continuously collecting IQOS-related tweets since November 2019 using a Twitter streaming application programming interface (API) and has rich experience in social media research on tobacco products. By applying content analysis (such as topic modeling) to IQOS-related commercial tweets, we will explore how IQOS is being marketed and advertised on Twitter. Using sentiment analysis and topic modeling, as well as time series analyses, we will explore Twitter users’ perceptions of IQOS before and after the FDA authorization. Results from the proposed project will inform the FDA about the marketing influence of IQOS and public perceptions of it, so that further action can be taken to reduce overall nicotine dependence and tobacco product use and protect public health.
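As a rough illustration of the kind of text analysis this project involves, the sketch below scores a few invented example tweets against a tiny hand-written sentiment lexicon and tallies frequent content words. The tweets, word lists, and scoring rule here are all hypothetical simplifications; the actual study uses trained sentiment models and full topic-modeling toolkits on real collected data.

```python
from collections import Counter

# Toy lexicon and invented tweets, for illustration only.
POSITIVE = {"love", "great", "smooth", "clean"}
NEGATIVE = {"harsh", "expensive", "broken", "worse"}

tweets = [
    "love the clean taste, so smooth",
    "device broke after a week, worse than expected",
    "great design but so expensive",
]

def sentiment_score(text):
    """Return (# positive words) - (# negative words) in the text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

scores = [sentiment_score(t) for t in tweets]
print(scores)  # [3, -1, 0]

# Crude "topic" signal: the most frequent non-stopword tokens in the corpus.
stopwords = {"the", "so", "a", "but", "than", "after"}
counts = Counter(w for t in tweets for w in t.lower().split()
                 if w not in stopwords)
print(counts.most_common(3))
```

Aggregating such per-tweet scores over time (e.g., weekly averages before and after the FDA authorization date) is the basic shape of the time series analysis the project describes.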

Project #5

Title: Accessible Learning for Artificial Intelligence in PreK-12 Classrooms

Mentor: Zhen Bai (Computer Science)

There is an emerging presence of AI technologies in our everyday lives, from voice assistants such as Echo and Google Home to smart life systems such as Fitbit and Spotify music suggestions. It is increasingly important for people with little CS and math background to understand the fundamentals of how a machine thinks and behaves, in order to better interact and collaborate with our increasingly intelligent work and life environments. This project aims to design and develop playful and exploratory learning environments that support accessible AI literacy for PreK-12 learners, centered on data visualization and AR/VR technologies. We are looking for students with interest and/or experience in one or more of the following areas: education technologies, data science/visualization, web development, AR/VR. The students will take part in the research ideation, interface prototyping and evaluation, and learner behavior analysis of this iterative design research project.

Project #6

Title: Augmenting Inclusive Communication and Social Interaction

Mentor: Zhen Bai (Computer Science) 

Face-to-face interaction is a central part of human nature. Unfortunately, people with social-communicative difficulties, for example people with autism and people who communicate via American Sign Language (ASL), face immense barriers to engaging in inclusive social activities. In this project, we seek design and technology innovations to create AI-powered Augmented Reality (AR) technologies that facilitate social-communicative behaviors without disrupting the social norms of face-to-face interaction. We are looking for students with an interest and/or experience in one or more of the following areas: assistive technology, Augmented and Virtual Reality, natural language processing, and machine vision, to take part in the design, interface prototyping, and evaluation of socially aware AR environments that help people with special needs navigate their everyday social lives.

2022 Participants
Maxwell Barlow, Florida Southern College
Merritt Cahoon, Samford University
Kalsey Colotl, New York University
Caitlin Fitzpatrick, University of Rochester
Sara Jo Jeiter-Johnson, University of Rochester
Rachel Ostrowski, Vassar College
Miranda Rublaitus, Valencia College/Yale University
Samantha Ryan, Simmons University
Mimi Truong, East Los Angeles College
Caroline Useda, Amherst College