pennews
www.pennews.net
AI and Music part 2
Sunday Magazine


Ullas Ponnadi


https://www.pennews.net/sunday-magazine/2019/10/20/ai-and-music

Is technology enhancing and aiding the creation of perfect music, thus helping spread its reach? What tools are available for musicians and composers to do so? What is the role of a sound engineer in this process? And will pure, live music lose its way over time?

For musicians who perform on stage, and to aid their practice sessions, there are now electronic tanpuras and shruti boxes, tablas and mridangams, and even an app that can generate accompanying harmonium and tabla patterns dynamically, following the nuances of a Hindustani classical singer! This can be seen at https://www.naadsadhana.com/. These are certainly great tools that aid practice and help musicians go deeper into music.

In the visual media space, as in the movies, the background score that brings out the emotions of a scene requires enormous effort to create, often via on-premise recording, capturing natural sounds and in-sync dialogue delivery. This is now being replaced by pre-stored sounds of various kinds that can capture the emotion of the actor or the scene, with music overlaid on top in the studio, via AI and algorithms that track and follow the emotions, actions, and gestures of the actors, working from either the live shoot or pre-recorded scenes. A sample of such software is at: https://www.aiva.ai/

An artist with a decent-quality voice can now sing anywhere and send in the recording, after which an array of technology and sound engineering can autotune the voice and blend it with pre-created music to produce finished musical tracks.
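At its core, the "autotune" step mentioned above snaps each detected pitch to the nearest note of the equal-tempered scale. A minimal sketch of just that snapping calculation is below; the function name is illustrative, and real pitch-correction tools additionally detect pitch frame by frame and resample the audio smoothly, which is omitted here.

```python
import math

A4 = 440.0  # reference pitch in Hz (assuming standard A4 = 440 Hz tuning)

def snap_to_semitone(freq_hz: float) -> float:
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    # Distance from A4 in semitones, rounded to the nearest whole step
    semitones = round(12 * math.log2(freq_hz / A4))
    # Convert back to a frequency on the equal-tempered grid
    return A4 * 2 ** (semitones / 12)

# A slightly sharp A4 (445 Hz) is pulled back onto the grid at 440 Hz
print(snap_to_semitone(445.0))  # → 440.0
```

A correction engine applies this per short time frame, then shifts the audio toward the snapped frequency, which is why even a wavering take can come out sounding uniformly in tune.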

Several artists now lip sync during live stage programs, and many of the accompanying musicians in a large stage production tend to do the same. This ensures that while the action on stage unfolds, music of consistent quality is delivered to the huge audience...

Music reality shows can be highly unreal, since enormous effort is spent on pre-recording, editing, sequencing, and correcting before the show reaches the audience. This is not to say it is true of all such shows, but it is a definite reality.

In brief, what you hear and see today is quite possibly heavily aided by digital tools that enhance, correct, and then deliver a near-perfect version to the listener. Often tiny phrases and minute interludes are corrected, and they sound perfect when we hear them. AI algorithms act as serious tools in such efforts.

The process of music creation has evolved so much that there is now talk of software creating music all on its own: lyrics written by software, composed and refined by software, and then rendered via computers and digital media networks to the world!

A sample of this can be seen at: https://www.youtube.com/watch?v=XUs6CznN8pw. In this clip, only the music is by AI, not the lyrics.

In this process, do we tend to lose the emotions embedded in musical creations? If the music created today is so perfect, why is it that the music of the past, recorded on ancient technology, sometimes noisy and often imperfect, is still charming and finds a nostalgic audience? More importantly, why do we listen to it over and over again and never tire of it? What magic could have been created and captured during the making of such music that lingers like an everlasting perfume?

Is that also why live stage programs with limited musicians, and classical musicians with only a few accompanying instruments, still find a connection with the audience? Does that mean such music will continue to thrive and find an alternate, possibly more mainstream, audience? Or will it be relegated and become insignificant as AI and its associated algorithms become ever more powerful and perfect?

Let us save that thought, for the concluding part of this series…