Explored the audiovisual storytelling of the show "Off the Air" using a variety of multimodal processing and
machine learning tools, including PCA clustering, K-means color extraction, frame-level audio classification, and
real-time visualization of audio features.
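The color-extraction step can be illustrated with a minimal sketch: K-means run on the pixels of a single frame to recover a dominant-color palette. This is a plain NumPy implementation of Lloyd's algorithm for the sake of self-containment; the function name and parameters are illustrative, not the project's actual code.

```python
# Minimal sketch of K-means color extraction from one video frame,
# using only NumPy (a plain Lloyd's-algorithm loop).
import numpy as np

def dominant_colors(frame, k=5, iters=20, seed=0):
    """Return the k dominant RGB colors of an (H, W, 3) frame, largest cluster first."""
    rng = np.random.default_rng(seed)
    pixels = frame.reshape(-1, 3).astype(float)               # (H*W, 3)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        dists = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers[np.argsort(counts)[::-1]]
```

Running this per frame and plotting the palettes over time gives one view of a video's color narrative.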
Trained a machine learning model to extract posteriorgrams and word alignments from a song, then tested whether these
features could be used to build more nuanced vocal source separation models. Described in a research paper with code on
GitHub.
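For context, a posteriorgram is a time-by-class matrix of per-frame posterior probabilities from an acoustic model. The sketch below only shows that final softmax step over stand-in logits; it is not the trained model from the paper.

```python
# Hedged sketch: a posteriorgram is per-frame posterior probabilities
# over phoneme classes, stacked over time. The logits here would come
# from a trained acoustic model; any (n_frames, n_phonemes) array works.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def posteriorgram(logits):
    """Convert (n_frames, n_phonemes) logits into a posteriorgram:
    each row is a probability distribution over phoneme classes."""
    return softmax(logits, axis=1)
```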
Created a high-performance script that issues async calls within a multiprocessing framework to simulate an active
music listener, allowing Spotify's API rate limiting to be bypassed. This made it possible to extract almost all
artists on Spotify (over 13 million) in a very small amount of time.
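The core pattern can be sketched as follows: each worker process runs its own asyncio event loop and keeps many requests in flight at once. The `fetch_artist` coroutine here is a stand-in that just sleeps; the real script's Spotify endpoints, auth, and listener simulation are not shown.

```python
# Sketch of the async-within-multiprocessing pattern, stdlib only.
import asyncio
from concurrent.futures import ProcessPoolExecutor

async def fetch_artist(artist_id: str) -> str:
    await asyncio.sleep(0.01)          # placeholder for a real HTTP request
    return f"data:{artist_id}"

async def fetch_batch(ids):
    # Concurrency within one process: all requests in flight together.
    return await asyncio.gather(*(fetch_artist(i) for i in ids))

def worker(ids):
    # Each process spins up its own event loop.
    return asyncio.run(fetch_batch(ids))

def crawl(all_ids, n_procs=4):
    # Parallelism across processes: split the ID space into chunks.
    chunks = [all_ids[i::n_procs] for i in range(n_procs)]
    with ProcessPoolExecutor(max_workers=n_procs) as pool:
        results = []
        for part in pool.map(worker, chunks):
            results.extend(part)
    return results
```

Combining process-level parallelism with per-process async concurrency is what lets a crawl of this size finish quickly.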
Example songs made by my auto-remixing program, which decomposes every song in a music library into stems using a
source separation model. It then repeatedly picks four stems at random and warps them to match in tempo, pitch, and
onset position, producing a cohesive output song.
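The stem-matching step can be sketched under assumed metadata: each stem carries a tempo (BPM) and a key offset in semitones, four stems are drawn at random, and the time-stretch ratios and pitch shifts needed to align them to the first pick are computed. The actual audio warping (e.g. via a phase vocoder) is not shown.

```python
# Sketch of planning a four-stem remix; field names are illustrative.
import random

def plan_remix(stems, seed=None):
    """stems: list of dicts with 'name', 'bpm', 'key' (semitone offset).
    Returns the chosen stems with stretch ratios and pitch shifts
    relative to the first pick."""
    rng = random.Random(seed)
    picks = rng.sample(stems, 4)
    target = picks[0]
    plan = []
    for s in picks:
        plan.append({
            "name": s["name"],
            "stretch": target["bpm"] / s["bpm"],   # time-stretch to match tempo
            "shift": target["key"] - s["key"],      # semitones to match pitch
        })
    return plan
```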
Music Library Database & Web Interface
Created a PostgreSQL database and a corresponding Flask web interface to store music library data dynamically, with
multiple linked versions of each song.
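A minimal sketch of the linked-versions idea, using sqlite3 as a stand-in for PostgreSQL so the example is self-contained. Table and column names are assumptions, not the production schema.

```python
# One songs table, many versions rows linked back via a foreign key.
import sqlite3

SCHEMA = """
CREATE TABLE songs (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    artist TEXT NOT NULL
);
CREATE TABLE versions (
    id INTEGER PRIMARY KEY,
    song_id INTEGER NOT NULL REFERENCES songs(id),
    label TEXT NOT NULL,          -- e.g. 'original', 'remix', 'live'
    path TEXT NOT NULL            -- file location of this version
);
"""

def open_library(path=":memory:"):
    con = sqlite3.connect(path)
    con.executescript(SCHEMA)
    return con
```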
Created dynamic visualizations of the behavior and efficacy of various methods for finding all pairs and triples in an
array that sum to a target value, along with a graphic comparing running time and operation counts for each algorithm.
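One of the methods typically compared in this problem is a hash-based single pass, which finds all pairs summing to a target in O(n) expected time versus the O(n²) brute-force double loop. A sketch of that approach:

```python
# Hash-set pass: for each element x, check whether target - x was seen.
def pairs_summing_to(arr, target):
    seen = set()
    out = []
    for x in arr:
        if target - x in seen:
            out.append((target - x, x))
        seen.add(x)
    return out
```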