I am really looking forward to September.
I find that the trick to effectively teach computer graphics is to create a clear narrative progression from simple to complex. This is not easy, but it can be done.
First I lead the students to create something very simple. It needs to be something that is easy to implement but also gives them a sense of satisfaction and ownership.
Then we gradually add more capabilities. The important thing is that the student sees the clear effect of everything that they add. Nothing should be left to blind faith.
Gradually the student builds up their system, learning skills and gaining confidence as they go along. At each step it is important to suggest ways they can customize their system, so they feel true pride of ownership.
Eventually the student has put together a fairly sophisticated system. Each step along the way was manageable, so the student is never overwhelmed by the process.
By the end of the class, students should have the confidence to strike out on their own. Ideally they will go on to create their own unique computer graphics.
I suspect that this approach might also work well for other topics.
Apparently the FAA rules are about to change. After that, you will be able to buy your very own personal fly-by-wire helicopter, without needing a full pilot’s license. At least in theory.
“Fly-by-wire” simply means that a computer sits between the controls that you manipulate and the controls that actually fly the vehicle. So if you screw up, you are less likely to crash the helicopter.
Still, you’ll need at least 30 hours of training to be allowed to fly your little civilian helicopter, at $300 of training time per hour. Then again, if you can afford the $188,000 you’ll need to buy one of these things, you won’t miss the $9000 it will cost you to not kill yourself.
When I was a little kid, I really wanted my own little personal helicopter. Now that I am a grownup, I am less sure that I want one.
Still, it’s nice to know that I will soon have the option. At least in theory.
To me, using ChatGPT or MidJourney to create something is somewhat akin to using a camera to take a photograph. It might be useful here to think back to the very dawn of photography.
When photographs first appeared, there was some confusion about authorship. Is it the person taking the picture who is responsible for the photo? Or is it the camera? Or is it the subject posing for the photo — since a photograph is just a faithful copy of its subject?
In time, photography came to be seen as an art form in its own right. People stopped thinking of the camera as some sort of magical thing.
Meanwhile, people started to understand that the artist was the photographer, as opposed to the person posing for the picture or the maker of the camera. Taking a good photograph requires making some intelligent aesthetic choices, and it is the photographer who makes those choices.
Analogously, generative AI is creating a sort of copy of reality — albeit a reality that was scraped from many sources. The art resides in the choices made by the person wielding the “camera” to create something new and unique, only in this case the camera is the software being used, and the subject is the data that was gathered by the software.
From this perspective, it makes no sense to talk about an AI creating an original work. Rather, the original work is created by the human being who provides intelligent and creative prompts to the AI.
Today, apparently for the first time, an AI based on ChatGPT attempted to comment on one of my blog posts. It didn’t make any attempt to disguise its identity, and besides, it had that weird pedantic literal way of speaking which ChatGPT shares with no actual human being.
The point that it made was factually correct, yet it entirely missed the point that my post was written with tongue firmly in cheek. This is an error that a human would not have made.
To me, the oddest thing about this episode is that some human being has directed ChatGPT to search the Web for statements to argue with.
As a disembodied entity with no actual lived experience, AI has a long way to go before it can tackle this kind of task. You can’t really respond to something that was said tongue in cheek, if you have neither a tongue nor a cheek.
Today I once again changed the visual theme of this blog. The change doesn’t modify the content at all, but it privileges certain things.
For example, it is now again much easier to jump to previous posts. Given that this daily blog goes all the way back to January 2008 (which makes this the 5717th post), that seems like an important feature, even if only in an archaeological sense.
I will continue to play with it from time to time. But for now, this theme seems to be doing the job well enough.
I would find it embarrassing to be prisoner number P01135809. I mean, what does a number like that say about your character?
At first, it sounds like the beginning of the Fibonacci sequence. But to be correct, the sequence would have needed to begin P0112358…
That missing digit near the start suggests a certain laziness of thinking. And what is up with that random 09 at the end?
I for one would never trust someone who had a prisoner number like that. On the other hand, if all goes well, nobody will ever need to.
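For anyone who wants to check the Fibonacci claim above, here is a throwaway Python sketch (the function name is my own invention) that concatenates the first seven Fibonacci numbers:

```python
def fib(n):
    """Return the first n Fibonacci numbers, starting 0, 1."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

# The first seven Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8,
# so a properly Fibonacci prisoner number would start this way:
print("P" + "".join(str(x) for x in fib(7)))  # → P0112358
```

Sure enough: 0112358, with the 2 that the actual number lazily omits.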
I have been haunted recently by a detail in Isaac Asimov’s Foundation trilogy, which I first read when I was a kid. In the book, there is a technology that lets a widow or widower talk with an animated photo of their deceased spouse.
When you talk to such a photo, some sort of artificial intelligence animates it and gets it talking back to you in the general manner of your now expired husband or wife. You are only experiencing an illusion that the person you loved is still with you, but presumably it is a comforting illusion.
Asimov includes one scene in which two such photos, one of a man and the other of a woman, two total strangers, are randomly placed face to face in a forgotten warehouse. For many years they continue to engage mindlessly in meaningless conversation with one another, until at long last their energy sources run out.
This prescient scene may be a metaphor for where we are heading with generative AI. Perhaps this is the way the world ends — not with a bang, but with meaningless chatter.
At some point, you will simply be able to describe a movie, and it will come to life in real time, thanks to generative AI. But why think of this as an activity for one person?
Eventually this new art form will come to be seen as an opportunity for collaboration, with people contributing according to their skills and interests. Multiple people will drive the creation of different aspects of the emerging story world. One person might create the scenery, while another describes the motivations and background of various characters.
Still another person describes the lighting and mood and general visuals. And then there is the creation of an engaging plot, which requires its own particular way of thinking.
On top of this, these sorts of future movies will be endlessly mutable. You might start with a movie that somebody else has made, and use generative AI to create your own variant.
Of course many questions remain. For one thing, I wonder how the copyright laws will evolve to account for all of these new ways of creating.
I wish, when things get tense between me and people that I care about, that I could have an extra minute to take a little time-out. Maybe there would be a device that I could carry in my pocket that has a little button.
When I press the button, time would stop for just 60 seconds. During that time I could gather my thoughts, figure out what I really want to say and why, and then continue the conversation.
I suppose that is too much to ask.