The Aha moment, part 10

A.I. is having quite a moment. Convolutional neural nets paved the way for today's large models based on Generative Pre-trained Transformers, and the results seem astonishing.

A mere eight days ago, OpenAI unveiled Sora, and we are now starting to see text prompts turn into richly detailed, nearly photorealistic animated videos. In the coming years the technology will only grow more impressive.

Within another decade, talented creators will be able to make compelling feature-length animations simply by describing them well. It will still take great human talent and visual judgement to make a great movie, but the work of applying that talent and judgement will have shifted from manual labor to higher-level cognitive and linguistic skill.

By 2034 there will be another animated movie as compelling and groundbreaking in its own way as Fantasia was in 1940. But unlike the original, the brilliant and highly skilled people who make this new movie will do so by talking to a computer.

To recap:

In 1974 I first saw Fantasia. Inspired by that, in 1984 I created the first procedural shader language, with user-specified matrix operations at every pixel. By 1994, animated feature films were incorporating procedural shaders as standard practice.
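The core idea of a procedural shader can be shown in a few lines: a user-supplied function is evaluated independently at every pixel, mapping coordinates to a color. This toy Python sketch is purely illustrative and is not the actual 1984 shader language; the function names and the sine-ring pattern are my own invention.

```python
import math

def shade(x, y):
    """A toy procedural shader: concentric sine rings around the center.
    (Hypothetical example, not the original shader language.)"""
    r = math.hypot(x - 0.5, y - 0.5)      # distance from the image center
    v = 0.5 + 0.5 * math.sin(40.0 * r)    # ringed intensity in [0, 1]
    return (v, v * 0.6, 1.0 - v)          # map intensity to an RGB triple

def render(width, height):
    """Evaluate the shader once per pixel, like a tiny software rasterizer."""
    return [[shade(px / width, py / height)
             for px in range(width)]
            for py in range(height)]

image = render(64, 64)
```

The same structure, a pure per-pixel function, is what later moved onto graphics hardware.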

By 2004 hardware accelerated shaders powered by graphics processors from Nvidia had become standard in computer games. Around 2014, those Nvidia processors began to be repurposed to train the convolutional neural nets of A.I.

Now in 2024, generative pre-trained transformers are starting to create the first believable short A.I. movies. Ten years from now, in 2034, it will be possible to create new A.I.-enabled versions of Fantasia simply by talking to a computer.

And we will have come full circle.
