Music and speech are complex signals containing regularities in how they unfold in time. Similarities between music and speech/language in terms of their auditory features, rhythmic structure, and hierarchical structure have led to a large body of literature suggesting connections between the two domains. However, the precise mechanisms underlying this connection remain to be elucidated. Based on structural similarities between rhythmic signals in music and speech, prominent theories of music and speech rhythm, and previously reported timing impairments in developmental speech and language disorders, we propose the Processing Rhythm In Speech and Music (PRISM) framework. PRISM outlines three mechanisms that appear to be shared across music and speech/language processing: precise auditory processing, synchronization/entrainment of neural oscillations to external stimuli, and sensorimotor coupling. The framework can serve as a basis for investigating potential links between observed timing deficits in developmental disorders (focusing here on dyslexia, developmental language disorder, and stuttering), impairments in the proposed mechanisms, and pathology-specific deficits that can be targeted in treatment and training to support speech therapy outcomes. In this talk, I will outline the PRISM framework and its links with developmental speech and language disorders, and discuss future research directions and implications of the framework.
Everywhere in the world, people enjoy listening to and making music together. Over the past 30 years, research on the neurocognition of music has yielded many insights into how the brain perceives music, yet our knowledge of the neural mechanisms of music production remains sparse. How does a musical idea turn into action? And how do musicians coordinate sounds and actions when they perform in groups? The present line of research isolated distinct levels of action planning in solo pianists and identified dynamically balanced mechanisms of interaction in duetting pianists, using 3T fMRI and dual EEG. The data converge on three main findings: (A) distinct neural networks for abstract harmonic and concrete motor planning converge in left lateral prefrontal cortex, which acts as a hub for solo music production; (B) internal models of other-produced musical parts in cortico-cerebellar audio-motor networks shift the balance between self-other integration and segregation of duet partners; and (C) interbrain synchrony during joint musical action is not merely an epiphenomenon of shared sensorimotor information but is modulated by the alignment of cognitive processes. Altogether, it will become clear that solo and joint music performance relies on general principles of human cognition tuned to achieve the musical perfection required on stage.
Interpersonal coordination is a core part of human interaction, and its underlying mechanisms have been extensively studied using social paradigms such as joint finger-tapping. Here, individual and dyadic differences have been found to yield a range of dyadic synchronization strategies, such as mutual adaptation, leading–leading, and leading–following behaviour, but the brain mechanisms that underlie these strategies remain a topic of active research. In this talk I will present results from an EEG study in which we identified an action-perception network linked to synchronization strategies in rhythmic joint action. I propose that this network may also serve as an indicator of self-other integration, and I present a model that captures some key features of rhythmic joint action.
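To make the notion of synchronization strategies concrete, the following is a minimal illustrative sketch (my own illustration, not the model presented in the talk) based on a standard linear phase-correction account of dyadic tapping: each tapper corrects its next onset by a fraction (alpha) of the current asynchrony with the partner, and asymmetries in that correction gain give rough stand-ins for mutual adaptation, leading–following, and leading–leading behaviour. All parameter values are assumptions chosen only for illustration.

# Illustrative sketch: linear phase-correction model of dyadic tapping (assumed
# parameters; not the talk's actual model). Each tapper aims for its own period
# and corrects the next tap by a fraction (alpha) of the current asynchrony.
import numpy as np

rng = np.random.default_rng(1)

def simulate_dyad(alpha_a: float, alpha_b: float, n_taps: int = 200,
                  period_a: float = 500.0, period_b: float = 510.0,
                  noise_sd: float = 10.0) -> np.ndarray:
    """Return signed asynchronies (tapper A minus tapper B onsets, in ms)."""
    t_a, t_b = 0.0, 0.0
    asynchronies = np.zeros(n_taps)
    for n in range(n_taps):
        asyn = t_a - t_b
        asynchronies[n] = asyn
        # Each tapper schedules its next onset, correcting toward the partner:
        # if A lags (asyn > 0), A shortens its interval and B lengthens its own.
        t_a = t_a + period_a - alpha_a * asyn + rng.normal(0, noise_sd)
        t_b = t_b + period_b + alpha_b * asyn + rng.normal(0, noise_sd)
    return asynchronies

# Symmetric high gains ~ mutual adaptation; one-sided gain ~ leading-following;
# two near-zero gains ~ leading-leading (neither partner adapts, so synchrony drifts).
for label, (aa, ab) in {"mutual adaptation": (0.4, 0.4),
                        "leading-following": (0.05, 0.7),
                        "leading-leading": (0.02, 0.02)}.items():
    sd = simulate_dyad(aa, ab)[50:].std()  # asynchrony variability after settling
    print(f"{label:>18}: asynchrony SD = {sd:6.1f} ms")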
Social bonds have long been associated with enhanced mental and physical health. How well we connect with others depends, among other factors, on our cultural background, individual preferences, and the context of a given situation. By providing temporal and affective frameworks, music creates a unique social context that can increase behavioral synchrony and emotional harmony. In a series of studies, we investigated how these temporal and affective aspects of social interactions with music are connected and how they are modulated by individual musical preferences, cultural familiarity, and trait empathy. Our findings suggest that the influence of movement synchrony on social bonding during musical activities depends less on how familiar the music is than on how much we enjoy it. In general, we found that higher trait empathy was associated with stronger social bonding. However, empathy also interacted with movement synchrony and type of music, suggesting that empathy plays a multifaceted role in how we enjoy, interpret, and use music in social situations.
Despite music’s omnipresence in cultures across time and space, it remains a mystery why humans invest valuable time and vast resources in crafting and listening to organized sounds. During the coronavirus lockdowns of 2020, musical engagement became potentially the most frequent leisure activity, beating exercise, sleep, and the consumption of other media as the most effective strategy for enhancing mental health for at least half of the general population. In this talk, I draw on recent results from the global MUSICOVID research network and a brand-new special issue on the topic to demonstrate how corona-themed music was created and consumed to cultivate collective connections and seek solitary solace. Our international survey study (n = 5113), for example, showed that interest in coronamusic was the strongest predictor of successful coping via music. People experiencing negative emotions used music for solitary emotion regulation, whereas those experiencing positive emotions used it as a proxy for social interaction. Follow-up studies of coronamusic videos from our crowdsourced database, and of social-media data from Twitter, Spotify, Reddit, and YouTube, largely support this bifurcation in adaptive musical use during pandemic isolation. Throughout human prehistory, topical musical innovations such as coronamusic may thus have served to build psychological resilience in the face of societal crisis.
Music is one of life’s greatest pleasures. While abundant evidence points to the role of predictability (i.e. knowing what comes next) in the experience of pleasure, little is known about how predictable musical features (e.g. melody, harmony, rhythm) come to be rewarding. I will present new work from my lab on behavioral and neuroimaging studies of the relationship between musical predictions and their reward value. Our behavioral studies test whether and how reward value can be acquired solely from newly formed predictions, by exposing participants to novel, acoustically controlled musical stimuli with different statistical properties and no extrinsic paired rewards. Our neuroimaging studies capitalize on activity in the dopaminergic reward system, and its connectivity with the auditory system, to test for individual differences in reward sensitivity to music. Results show that this reward sensitivity depends on age and on both short-term and long-term experience with specific musical predictions encoded throughout the lifespan.
Using individual differences approaches, a growing body of literature finds positive associations between musical and language-related abilities, complementing prior findings linking musical training with language skills. Despite these associations, musicality is often overlooked as a factor in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of both musicality and language, and how they are intertwined, we have recently proposed the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework, which posits shared genetic architecture, overlapping neural endophenotypes, and shared genetic influences on musically and linguistically enriched environments. In this talk, I will share findings from a synthetic review of over 70 studies demonstrating that individual differences in musical abilities are robustly correlated with language-related skills, and discuss these findings in terms of their potential underlying biology. I will also outline ongoing and future studies aimed at unraveling the shared genetic architecture of musicality and language, based on testable predictions put forth by the MAPLE framework. These efforts can allow us to leverage our understanding of the biological basis of music toward a better understanding of individual differences in language abilities across development.
Individual differences in musicality arise from processes involving both genes (G) and the environment (E). Understanding the genetic architecture and GE interplay of complex traits and behaviors is one of the major challenges at today’s research frontier, and it is essential if we wish to better understand the processes underlying musicality and to identify genuinely causal environmental influences on music acquisition. In this talk, we will provide an overview of the state of research on the genetics of musicality and give examples from our own work, highlighting how well-established and novel methods applied to large-scale twin and genetically informative data can enhance our understanding of the etiology of musicality.
Evolutionary approaches to music have tended to focus on universal aspects of music and their potential biological bases and functions. But music also changes through cultural evolution, which can give rise to the diversity of musical forms found throughout the world. Biological and cultural evolution can also interact, creating gene-culture evolutionary feedback loops and shaping regularities across diverse musical systems. I will discuss studies that unite these biological and cultural evolutionary approaches to cross-cultural musical diversity and highlight potential areas of future work (including music-language and human-animal song comparisons).
Music is present in every known society, yet varies from place to place. What, if anything, is universal to the perception of music? This question has remained unanswered because previous cross-cultural experiments have compared only small numbers of cultures. We measured mental representations of rhythm in 39 participant groups in 15 countries across 5 continents, spanning urban societies, indigenous populations, and online participants. Listeners reproduced random seed rhythms; their reproductions were fed back as the stimulus (as in the game of “telephone”), such that their biases (the prior) could be estimated from the distribution of reproductions. Every tested group showed a sparse prior with peaks at integer ratio rhythms. However, the occurrence and relative importance of individual integer ratio categories varied across groups, often reflecting local musical practices. By contrast, university students and online participants in non-Western countries tended to resemble Western participants, underrepresenting the variability otherwise evident across cultures. Our results provide evidence for a universal feature of music perception – discrete rhythm “categories” at small integer ratios. These discrete representations likely help to stabilize musical systems in the face of cultural transmission, but interact with culture-specific traditions to yield diversity that is evident when perception is probed at a global scale.
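The logic of the iterated-reproduction (“telephone”) procedure can be sketched in a few lines of code. The following is a minimal illustration under assumed parameters, not the study’s actual analysis pipeline: a hypothetical listener whose reproductions drift toward simple integer-ratio categories is iterated along a chain, and the distribution of final reproductions across many chains approximates that listener’s prior. The peak locations, noise level, and pull strength are assumptions made only for the sketch.

# Illustrative sketch of iterated reproduction (assumed parameters; not the
# authors' analysis code). A two-interval rhythm is summarized by the fraction
# of total duration taken by the first interval.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior peaks at simple integer ratios: 1:1 -> 1/2, 2:1 -> 2/3, 3:1 -> 3/4.
peaks = np.array([1 / 2, 2 / 3, 3 / 4])
motor_noise = 0.03   # reproduction noise (standard deviation)
pull = 0.3           # strength of the bias toward the nearest prior peak

def reproduce(stimulus_ratio: float) -> float:
    """One reproduction: drift toward the nearest prior peak, plus motor noise."""
    nearest = peaks[np.argmin(np.abs(peaks - stimulus_ratio))]
    biased = stimulus_ratio + pull * (nearest - stimulus_ratio)
    return float(np.clip(biased + rng.normal(0, motor_noise), 0.05, 0.95))

def telephone_chain(n_iterations: int = 10) -> float:
    """Start from a random seed ratio and feed each reproduction back as the next stimulus."""
    ratio = rng.uniform(0.2, 0.8)  # random seed rhythm
    for _ in range(n_iterations):
        ratio = reproduce(ratio)
    return ratio

# The distribution of final reproductions across many chains approximates the prior,
# with modes near the integer-ratio peaks.
finals = np.array([telephone_chain() for _ in range(2000)])
hist, edges = np.histogram(finals, bins=30, range=(0.0, 1.0), density=True)
print("Largest modes near:", edges[np.argsort(hist)[-3:]])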