Unjargon

A friend pointed out to me that my “Train of Thought” post the other day was incomprehensible to her. And I realized that it might be incomprehensible to a lot of people.

The problem is that I spend much of my time in a milieu where terms like “Turing test” and “Big Data” are understood by everyone in the room. But that shared vocabulary doesn’t help once you take the discussion out of that room; to everyone else, those phrases just sound like jargon.

“Turing test” is shorthand for Alan Turing’s famous thought experiment, which he called the “imitation game”. The idea is that you test a computer in the following way: The computer holds a conversation with a person (over a teletype, so they can’t actually see each other), and the person then tries to guess whether they’ve been conversing with a real person or with a computer.

This contest, which is the basic set-up for the recent film Ex Machina as well as many other works of speculative fiction, raises all sorts of interesting questions. For example, if a computer consistently passes this test, can it be said to think? And if so, is it a kind of person? Should it be granted civil rights under the law?

“Big Data”, on the other hand, is the idea that if you feed enormous amounts of data to a computer program that is good only at classifying things into “more like this” or “less like that”, then the program can start to make good decisions when new data is fed to it, even though the program has absolutely no idea what’s going on.
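To make that concrete, here’s a toy sketch in Python. This isn’t any real system’s code, just the “more like this or less like that” idea in miniature: the program labels a new data point by whichever labeled example it sits closest to, with no idea what any of the numbers mean. All the data is made up for illustration.

```python
# A toy "more like this / less like that" classifier: label a new
# point by whichever labeled example it is most similar to. The
# program never knows what the numbers mean -- it only measures
# similarity.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(examples, new_point):
    # examples: list of (features, label) pairs the program was "fed"
    closest = min(examples, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# Made-up training data: (height_cm, weight_kg) -> species
examples = [
    ((30, 4), "cat"),
    ((28, 5), "cat"),
    ((60, 25), "dog"),
    ((55, 20), "dog"),
]

print(classify(examples, (32, 6)))   # -> "cat": more like the cat examples
print(classify(examples, (58, 22)))  # -> "dog": more like the dog examples
```

Feed a program like this enough examples and it starts making decent calls on data it has never seen, despite understanding nothing.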

This is what Machine Learning is all about, and it’s the reason that Google Translate is so good. GT doesn’t actually know anything about translating — it’s just very good at imitation. Because Google has fed it an enormous amount of translation data, it can now translate pretty well.

But Google Translate doesn’t really know anything about language, or people, or relationships, or the world. It’s just really good at making correlations between things if you give it enough examples.
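Here’s another toy sketch of that same correlation-over-examples principle (this is definitely not how Google Translate actually works, just the idea in miniature): feed the program pairs of parallel sentences, have it count which words tend to show up together, and it starts “translating” words without understanding a single one. The sentences below are invented for illustration.

```python
from collections import Counter

# Toy translation-by-correlation: score each (source word, target
# word) pair by how often the two appear in the same sentence pair,
# then "translate" a word by its best-correlated counterpart.

parallel = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

src_count, tgt_count, cooccur = Counter(), Counter(), Counter()
for src, tgt in parallel:
    src_words, tgt_words = src.split(), tgt.split()
    src_count.update(src_words)
    tgt_count.update(tgt_words)
    for s in src_words:
        for t in tgt_words:
            cooccur[(s, t)] += 1

def translate_word(word):
    # Dice coefficient: rewards pairs that appear together while
    # penalizing words that appear everywhere (like "the" or "le").
    scores = {t: 2 * cooccur[(word, t)] / (src_count[word] + tgt_count[t])
              for t in tgt_count}
    return max(scores, key=scores.get)

print(translate_word("cat"))  # -> "chat"
print(translate_word("dog"))  # -> "chien"
print(translate_word("the"))  # -> "le"
```

Scale that basic move up by a factor of billions and you get something that translates pretty well, which is roughly the claim Big Data makes.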

So my question was this: If you use the Big Data approach to imitate human behavior, are there some human behaviors that can never be imitated this way, no matter how much data you feed the program?

Let’s put it another way: If you fed all the romance novels ever written into a Machine Learning algorithm, and had it crunch away for long enough, would it ever be able to sustain an intimate emotional relationship in a way that is satisfying to its human partner? Even though the computer actually has no idea what is going on?

My guess is no. On the other hand, there are probably more than a few human relationships that work on exactly this basis. 🙂

One thought on “Unjargon”

  1. The problem with this idea is that the romance novels have a lot of text about dysfunctional relationships and very little about functional, satisfying ones, so at best your algorithm would learn to be a high-maintenance drama queen!
