When Roger Dannenberg, Robert Rowe and others started creating computer programs that would automatically accompany a live human musical performance well over twenty years ago, our current era of machine learning did not yet exist.
Their pioneering work predates the development of Support Vector Machines, Convolutional Neural Networks, and other powerful modern machine learning algorithms. These recent techniques now underlie much of the software we use every day, such as Google Search and Google Translate, and will soon be seen in our self-driving cars.
When we apply what these pioneers were trying to do to the various fields of live artistic performance (such as dance, acting, and puppetry), we begin to see the modern possibilities for "smart instruments": the piano that has "learned" your musical style, the virtual actor that embodies your unique body language, the paintbrush that paints in ways that you might.
Much of this is already happening, in the work of Aaron Hertzmann and others, and we are poised for it to happen on a much larger scale. To be clear, we are not talking about machines replacing people. The live human performer is still very much in control, but is playing an instrument that has already been infused with styles of human performance, and can therefore be played at a higher level.
Through the use of smart instruments, a single talented performer can conduct an entire symphony, or direct a large virtual acting troupe. There is nothing mystical about this process: the human element is still very much present within these instruments. Such techniques are simply an enhanced way to distribute, and to gain the collective benefits from, old-fashioned human talent.