What more is there?
Pitfalls
Currently the user must wait several seconds for the program to finish analyzing a song before playback begins, which makes for a less than ideal audio and visual experience.
We initially planned to identify a song's key and correlate it with a sentiment; for example, major keys with happiness and minor keys with sadness. However, the library we were using lacked sufficiently clear documentation, and the API we planned to use as a backup was purchased by another company and taken offline. Instead, we identify which chord is being played, but we have not found a way to map a single chord to a sentiment. As a result, the chord determines the function path but is not indicative of a mood. Our visualization also does not currently make the relationship between the function path and the chord clear; making that relationship explicit could improve the user experience.
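The intended chord-to-sentiment mapping can be sketched with simple template matching. This is a minimal, hypothetical illustration (not our actual detection code): it matches a 12-bin chroma vector against major and minor triad templates, then maps the winning chord's quality to a mood label. All names here are illustrative.

```python
# Hypothetical sketch: match a 12-bin chroma vector (per-pitch-class
# energy, C through B) against major/minor triad templates, then map
# the chord quality to a sentiment label.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def triad_template(root, quality):
    """Pitch classes of a triad: root, third (4 or 3 semitones), fifth."""
    third = 4 if quality == "major" else 3
    return {(root + interval) % 12 for interval in (0, third, 7)}

def identify_chord(chroma):
    """Return (chord_name, quality) of the best-matching triad."""
    best, best_score = None, float("-inf")
    for root in range(12):
        for quality in ("major", "minor"):
            # Sum the chroma energy at the template's pitch classes.
            score = sum(chroma[i] for i in triad_template(root, quality))
            if score > best_score:
                suffix = "m" if quality == "minor" else ""
                best, best_score = (NOTE_NAMES[root] + suffix, quality), score
    return best

SENTIMENT = {"major": "happy", "minor": "sad"}

# Chroma with energy at A, C, and E (an A-minor triad).
chroma = [0.0] * 12
for pitch_class in (9, 0, 4):
    chroma[pitch_class] = 1.0

name, quality = identify_chord(chroma)
print(name, SENTIMENT[quality])  # Am sad
```

In practice the chroma vector would come from a feature extractor (e.g. a short-time Fourier transform folded into pitch classes), and real recordings are noisier than this binary example, which is part of why a single chord resists a clean mood mapping.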
Future Work
The next logical step in implementation would be to 'chunk' the audio and text so that analysis and visualization can run simultaneously. We would also try to make the connections between the visuals and the underlying information clearer. After that, we would aim to make the code more flexible and expand our song library so users have more options.
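The chunking idea above can be sketched as a generator that yields fixed-size slices of the audio, so each slice can be analyzed and visualized as soon as it is ready rather than after the whole song is processed. This is a simplified illustration under assumed names; the `analyze` placeholder just computes RMS energy, standing in for whatever per-chunk analysis the real pipeline would do.

```python
def chunks(samples, chunk_size):
    """Yield successive fixed-size chunks of the sample list, so
    downstream analysis can start before the whole song is read."""
    for start in range(0, len(samples), chunk_size):
        yield samples[start:start + chunk_size]

def analyze(chunk):
    """Placeholder per-chunk analysis: RMS energy of the samples."""
    return (sum(x * x for x in chunk) / len(chunk)) ** 0.5

# As each chunk's result arrives, the visualization could update
# immediately instead of waiting for the full-song analysis.
song = [0.0, 1.0, -1.0, 0.5, -0.5, 0.25]
results = [analyze(chunk) for chunk in chunks(song, 2)]
```

Streaming the analysis this way would directly address the several-second startup delay noted under Pitfalls, since playback and visualization could begin once the first chunk is analyzed.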