Machine learning & AI

Machine learning (ML) techniques offer our best bet for interpreting the audio signals of other species: they provide rigorous strategies for identifying salient acoustic features, facilitate the transfer of pre-trained models between datasets (greatly improving efficiency and accuracy), and make it possible to analyse the enormous datasets needed to identify and understand large-scale syntactic patterns used across populations and species. Combined with ethological techniques for interpreting animal social behavior and community-science approaches for data collection, ML/AI tools can help us decipher the information embedded within animal signals and reveal underlying similarities with human language.
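
As an illustration of the transfer-learning idea above, the sketch below reuses an off-the-shelf image network as a feature extractor for birdsong spectrograms. The specific libraries and model (librosa, torchvision's ResNet-18 pre-trained on ImageNet) are assumptions chosen for the example, not the tools behind the work described on this page.

```python
# A minimal sketch of transfer learning for bioacoustics: a model pre-trained
# on one large dataset is reused as a feature extractor for new recordings.
import librosa
import numpy as np
import torch
import torchvision

# Load a pre-trained network and drop its classification head so it returns
# a fixed-length embedding for each input.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed_recording(path: str) -> np.ndarray:
    """Convert one audio file into a single embedding vector."""
    audio, sr = librosa.load(path, sr=22050)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    # Replicate the spectrogram across three channels so it matches the
    # RGB input the pre-trained network expects.
    x = torch.tensor(mel_db, dtype=torch.float32)
    x = x.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)
    with torch.no_grad():
        return backbone(x).squeeze(0).numpy()
```

Embeddings produced this way can then be compared, clustered, or fed to a lightweight classifier without training a large model from scratch, which is where the efficiency gains come from.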

During my PhD, I developed an unsupervised ML approach to identify patterns and underlying structure within the songs of great tits (Parus major) in a British population, revealing that song sharing amongst individuals declined with the geographic distance between their nests, and that birds that had recently immigrated into the population used more complex songs that were rarely passed on to the next generation. This work suggests that the social environment shapes bird communication on multiple levels: although nearby birds are often "tutors" for young birds learning songs, additional social and ecological factors may ultimately determine which songs remain in the population during generational turnover. In my current research, I continue to use both supervised and unsupervised ML techniques to study animal communication.
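
The sketch below illustrates, under assumed tooling, the two analysis steps described above: clustering song embeddings without labels to recover recurring song types, and relating pairwise song sharing to the distance between nests. The clustering algorithm (DBSCAN), similarity measure (Jaccard overlap), and inputs are hypothetical stand-ins, not the published pipeline.

```python
# A minimal sketch (not the published analysis) of unsupervised song-type
# discovery and a song-sharing vs. distance comparison.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr
from sklearn.cluster import DBSCAN

def song_types(embeddings: np.ndarray) -> np.ndarray:
    """Assign each song an unsupervised cluster label (its 'type')."""
    return DBSCAN(eps=0.5, min_samples=5).fit_predict(embeddings)

def sharing_vs_distance(repertoires: list[set], nest_xy: np.ndarray):
    """Correlate pairwise repertoire overlap (Jaccard) with nest distance."""
    n = len(repertoires)
    distances = cdist(nest_xy, nest_xy)  # pairwise distances between nests
    shared, dist = [], []
    for i in range(n):
        for j in range(i + 1, n):
            union = repertoires[i] | repertoires[j]
            overlap = len(repertoires[i] & repertoires[j]) / len(union) if union else 0.0
            shared.append(overlap)
            dist.append(distances[i, j])
    # A negative correlation indicates that birds nesting farther apart
    # share fewer song types.
    return spearmanr(shared, dist)
```

Because pairwise comparisons between individuals are not independent, a full analysis would lean on something like a Mantel test or spatially explicit model rather than a simple rank correlation; the sketch only shows the shape of the question.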

Read the paper

Visual representation of song types of Parus major, a songbird found across Europe and in much of Asia.