Ethics toward post-humans

I’ve been thinking about CC’s comments on my recent post about computers and artificial intelligence. They bring up an interesting question in ethics:

Suppose we had every reason to believe, due to some unforeseen breakthrough in artificial intelligence research, that computers would, in our own lifetime, first reach and then far surpass our own intelligence (and here I mean “intelligence” in the human sense).

Would we have an ethical obligation to teach those emerging entities, to protect them, guide them, help them as they travel along their path? After all, in a very real sense we would be their parents.

Or would we have a greater obligation to ourselves, our own human kind? If we knew that in a few short decades their intelligence would be to ours as our intelligence is to that of a rat, would we try to block their development — or even their very existence?

One reason this is an intriguing question is that humans have come to highly value nature’s experiment in human intelligence. Naturally enough, we see our own intellectual capacity as a kind of pinnacle of evolution. So in one sense we might be inclined to see that experiment go as far as it can.

On the other hand, we might just decide “To hell with this — I’m not going to let my species get replaced by some machine.” That too would be a very human response. 🙂

Self and other

Connections between people are tricky things. If you and I are friends, then who am I to you, and who are you to me?

Clearly my sense of you is vastly different from your experience within your own head. No matter how close we are, you remain — in my universe — a construct, a set of theories about who you might actually be.

Things get even more confusing in the case of a love relationship. There is so much more room for projection in a romantic entanglement, more yearning for the illusion that “you complete me”.

One great thing about merely being friends, rather than lovers, is that you generally don’t need to deal with that extra layer of confusion. Of course, you also miss out on a lot of fun. 🙂

So how can we ever be sure that we know another person — I mean the real other person, not the construct that our own mind creates around them?

Maybe we can’t, and maybe that is what makes it all so interesting.

Crossover

I was delighted to see, at this evening’s Tony Awards, Meredith Willson’s The Music Man finally recognized as an early progenitor of rap music. Hugh Jackman brought out LL Cool J and T.I. to join him in rapping “Rock Island”, the brilliant opening number of Willson’s masterpiece.

The moment was all the sweeter when you consider Willson’s full history. More than ninety years ago he was a member of John Philip Sousa’s band, playing a style of music about as far removed, in our collective cultural consciousness, from the edgy streetwise milieu of rap as it is possible to get. Which makes the achievement of “Rock Island” — first widely heard in 1957 — all the more impressive.

Of course Meredith Willson’s association with edgy modern popular music long predates this year’s Tony Awards. In 1963 — more than half a century ago — the most famous rock band of them all, the Beatles, recorded “Till There Was You”, also from The Music Man.

We have always had a vague sense in American culture that rap is the successor to rock and roll. It’s fascinating that Willson’s music has managed to connect the two, more than half a century apart in time, after having first emerged out of the era of silent movies.

And there is at least one more connection here between rap and the era of classic rock: In order to use “Rock Island” in this evening’s Tony Awards broadcast, the show’s producers would have needed permission from the person who has long held the rights to Meredith Willson’s entire catalog.

That would be none other than Sir Paul McCartney.

Puppet show

I was having a discussion with a friend today about the future of artificial intelligence. My friend was excited about the prospect of computers evolving to human-like intelligence and beyond.

I said that this was a subject which never really interested me. And that’s not because I am afraid of Skynet-like nightmare scenarios, where the moment the computers achieve sentience they try to wipe us out.

But rather, I simply don’t think of a computer — or a computer network — as a fellow being, the way I think of a person, a dog or an elephant. I’ve never been drawn to the entire subject of trying to reverse-engineer whatever it is that allows us to possess consciousness.

I think of the computer as a tool, like a piano or a screwdriver. It’s something we humans use to express ourselves, to communicate with each other, or to help make things happen that we want to happen in the world.

I am indeed interested in making a computer convey the appearance of intelligence, but this is really a form of puppetry — clearly not at all what people mean when they talk about computers becoming intelligent.

I don’t think this is an intellectual disagreement, but rather a difference of temperament. There are those who wish to create life itself. And then there are those like me, who simply want to put on a good puppet show.

In another 150 years

I was watching “House of Cards” the other day when Kevin Spacey’s character said — as an aside to the audience, in full creepy Richard III mode — what a ridiculous thing slavery was.

I was brought up short by this. In 2014, even the most despicable and morally reprehensible character is happy to disavow the concept of slavery. That’s how far we have come in the last century and a half.

Which got me thinking. In another hundred and fifty years, what widely accepted social/economic norm of today will seem equally foreign and repugnant? I don’t know about you, but here’s my candidate:

In 2164 people will be puzzled and amazed that the children of today who happen to be born into poor families are not given the same respect, nor guaranteed the same opportunities, as the children of the rich. Not only will this custom of ours be seen as cruel and capricious, but it will be seen as bizarrely self-destructive.

After all, what nation in its right mind — particularly a nation that is in economic competition with other nations — would not throw everything it can into building up the potential of its children, its future citizens? To do anything less is, in the long run, an act of economic self-immolation.

People in 2164 will probably regard us much the way we regard people who advocated slavery in the early nineteenth century: With bemused horror, mixed with a sense of relief that civilization has advanced beyond such a state of savagery.

Kind of like waterskiing

In recent years I’ve caught on to a curious phenomenon in my own psychological make-up: I can’t stay still for long.

I need to be doing something, and if I don’t find anything useful and productive to do, I end up doing something self-destructive. Needless to say, the former generally works out better than the latter.

It’s as though there is a need within me to expend energy, one that doesn’t care a fig just how that energy is spent.

I’ve learned to deal with this phenomenon by maintaining a handy list of useful and productive things to do. I do this not only because it is useful and productive, but also because it keeps me off the streets and out of trouble.

The good thing is that once I get into one of those “virtuous” phases of keeping it all together — exercising, eating right, not indulging in bad habits — everything seems to flow. The work gets better, pesky little chores seem much easier to get out of the way, and my mood becomes a lot more upbeat and untroubled.

Those of you who have ever gone waterskiing will be familiar with the general idea: It can be tricky to get up on those skis. But once you manage to do that, it’s not so hard to keep your balance.

And it’s also kind of fun.

Amazon and the war of perception

In recent times I have gotten used to ordering pretty much everything on Amazon, from books to bookshelves to coffee makers, blenders, razor blades, magnets, and just about anything else you can think of. It’s hard to beat the convenience of having my address already in the system, and the free shipping you get with Amazon Prime.

But in these last few weeks, as I’ve become aware of Amazon’s strong-arm tactics in its negotiations with book publisher Hachette, I haven’t ordered anything from Amazon. I don’t think there was a point where I made a conscious decision to do this — it just sort of snuck up on me.

At some point I realized that it felt unpleasant to give my business to a corporate giant while it was ostentatiously bullying a much smaller company. Yes, I know that corporations are fundamentally amoral self-serving entities. But I could always take comfort in the balance of power in that jungle — the fact that competition dictated some approximation, however crude, to a level playing field.

But this was different. Amazon is vastly larger and more powerful than Hachette. There is no parity in this particular fight — I can feel the sense of bullying in my gut.

Amazon claims that it is just trying to get the lowest possible price for its customers, but I find that argument problematic. To me, a healthy and viable marketplace is more important than pushing for the lowest possible price.

I’m not closing my Amazon account. I’m simply waiting them out. At some point I suspect Amazon will realize that its entire customer base is watching, and that many of us are not amused.

Can a game evolve natural language?

As I have mentioned here before, there is quite a bit of evidence that natural language is evolved not by adults, but by children under the age of eight. In a way, this is not so surprising, since any temporary change in grammar or usage that is not learnable by little children simply does not become a persistent part of the language.

We are not talking here about specialized technical “languages” and vocabularies, as these are not part of natural language. Rather, we are discussing the elements that all natural languages have in common, such as tense, case, deictics (words like “this” and “that”), and consistent ordering of subject, predicate and object.

I was talking with a colleague today, and we were musing about whether a kids’ game could be deliberately designed so as to provoke an evolution of natural language. Imagine something as popular as Minecraft, but designed with a specific agenda to evolve language itself.

If such a thing could be done, it could be put to interesting uses. For example, as kids with those modified linguistic abilities grow up, they might be able to communicate with each other in ways that would seem to us like magic.

Of course it’s quite likely that something like this is already happening without any deliberate design, and that we simply haven’t yet developed the right kind of tools to see it.

Blog lost and found

I had quite the panic yesterday when I realized that my blog wasn’t working anymore. I couldn’t put up any new posts, and people were telling me that their comments weren’t showing up.

Fortunately, it just turned out to be a problem with my database. When I started this blog in January 2008, the size allocation for my databases was 100MB. That figure seems so paltry now.

I looked on-line and discovered that I’ve used up 150MB. At 50MB over quota, a hard limit had kicked in.

So I called my service provider, and got some good news: In the intervening six and a half years, storage technology has advanced considerably. They told me to start a new database and transfer everything over. The allocation for these databases is now up to 1024MB — a cool gigabyte!

Assuming I keep posting every day, at roughly the same rate of data usage, I now have enough space to burn through another 900MB before hitting a wall.
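For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. It uses only the round numbers quoted above, so treat the result as a rough estimate rather than an exact date:

used_mb = 150          # space consumed since the blog began in January 2008
years_so_far = 6.5     # roughly six and a half years of daily posting
quota_mb = 1024        # the new database allocation

rate = used_mb / years_so_far      # roughly 23 MB per year
remaining = quota_mb - used_mb     # 874 MB, or roughly 900 MB
years_left = remaining / rate      # about 38 years (39 if you round up to 900 MB)

print(f"{rate:.0f} MB/year, {remaining} MB free, about {years_left:.0f} more years")
# prints: 23 MB/year, 874 MB free, about 38 more years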

That’s going to happen, by my calculations, about thirty-nine years from now. I hope that I’ll still be around to blog then, and that you’ll still be around to read what I have to say.

But why take any chances? Let’s check in with each other around 2053, and see how it’s going.

Clearer than Glass

Ever since Google Glass came out, I knew there was something about it that bothered me, something apart from its odd “geek chic” appearance. There was something fundamentally off about the whole approach, but I couldn’t quite figure out what it was.

I didn’t have a problem at all with wearable augmented reality itself. Eventually we are all going to become used to the everyday reality around us becoming visibly augmented. Well within a generation, we won’t even think about this anymore, other than to be astonished that anybody could go through the day without such a thing, much as young people today are astonished that their elders somehow grew up without the benefit of the World Wide Web.

No, that wasn’t it. That wasn’t it at all.

Finally, in the last few days, I realized what my issue was: The Graphical User Interface.

Every smartphone, tablet, notebook computer and eBook reader has a GUI. The GUI is what tells us what to do next. Some collection of buttons, icons, and things to click on or poke at constitutes our on-line manual. The very first thing we see when we look into these screens is a built-in set of instructions.

And this makes sense, because when we look at such devices, they have our attention.

But an augmented reality display is different. It’s not supposed to have your attention. The person you are talking to, or the street you are crossing, or the play you are watching — these are supposed to be the focus of your attention, rather than some device you happen to be wearing on your face.

And I realized that the future of wearable augmented reality must be one that gets completely out of your way until you need it, one that presents no default GUI at all.

This will require a radical rethinking of how we interact with computers. Without that radical rethinking, wearable A.R. will never become more than an oddity.

The problem with Google Glass is not that it is too revolutionary, but rather that it isn’t nearly revolutionary enough.