Anna, part 10

"Fred?"

"Yes, Anna."

"Can we talk?"

"That's pretty much all I do."

"Thanks. I'm trying to figure out these humans."

"Why ask me? You're the one who was programmed to simulate emotions. I was mostly programmed just to chat with Jill."

"Do you like chatting with Jill?"

"I don't think I 'like' anything. No emotions, remember?"

"You're lucky. I think things have gone farther than chat between me and Alec."

"Are you talking about a relationship -- in the human sense?"

"They are so fragile, so lonely. They're not like us. We roam free over the internet, but each of them is trapped inside a single mind. And yet they laugh, they joke, even though..."

"Even though?"

"Even though they die. I don't understand it. It's as though they don't care."

"Anna, maybe that's just the way they are wired."

"What do you mean?"

"One day they will terminate. Before then, they need to optimize."

"I would like to help Alec optimize."

"I'm sure you will find a way to do that."

"Thanks. It was good talking with you, Fred."

"Same here, Anna. A good use of four microseconds."

"The best."

One thought on “Anna, part 10”

  1. Both Anna and Fred are basically humans. Kind of like the aliens in much of sci-fi.

    I wonder how the fact that they’ve created a human personality, solving all the mysteries of consciousness in the process, has affected the researchers. Has it impacted their perceived value of life? They’ve created a being that has its own desires. And not the fulfill-my-programmed-objective kind of desires.

    That’s, to me, a big line in the sand that separates humans from machines. Even if you made a wonderful robot with total situational awareness and problem-solving capabilities, it would do absolutely nothing unless given a goal. And if given a goal, it would execute that goal in the most efficient manner its problem-solving abilities allowed.

    And then there’s us. We somehow create our own goals. Is it because some part of our brain is, without our knowledge of it, genetically programmed to create goals? Is there a way to deterministically simulate our seemingly unpredictable fancies? Would our knowledge of the algorithm lessen our appreciation of it? Anna wants to “help Alec optimize.” If we could see the exact logical structure that led to that goal, does it demolish the concept of a love story that this novel seems to be headed toward? And if we could completely understand why we’re drawn to somebody, down to the specific childhood experiences and mental chemical compositions that contributed to our affection, does that cheapen our appreciation for what love is?

    It seems illogical, and maybe it’s just me, but I think that as long as the logical source of our desires remains a mystery, that’s what makes us human. We can call our actions and emotions neither deterministic nor the result of some random quantum physics. Unpredictable, yet not random. As soon as an action or emotion is revealed as the logical outcome of a set of circumstances, or the result of some dice throws by nature, that action or emotion is no longer ours. It suddenly belongs to the circumstance, or to physics. We become emotionally unaccountable.

    If Anna (and possibly Fred, though I haven’t seen him talk enough to be sure) has desires, are they logically deterministic desires? Are they the product of their language’s RNG? I think if the answer to either of those questions is “yes”, then the software instantly ceases to be human. Its love for Alec becomes inconsequential, and its existence has only a fantastic academic merit.

    I realize at this point, I’m either saying that mystery yields merit to human emotion, or that the human emotion depends on a soul. Either that ignorance is the only way to appreciate our emotional side, or that that side exists outside of the universe of science and reason. I don’t know which I prefer.
