Anna, part 15

“Surely Anna and Fred have seen this coming,” Alec said. They were nearing the lab.

Jill nodded. “There’s no way to know for sure.”

“So the question is,” Bob asked, “whether there is any way to recover all our data.”

Alec stared at his advisor. “Bob, somebody may have just burned down our lab, and you’re worrying about data retrieval?”

Jill looked quizzically from Alec to Bob. “This all seems eerily familiar, like we’ve already had this conversation.”

“I can’t see how,” Alec said. “It’s not like anybody ever set fire to our lab before.”

“I guess that does sound pretty crazy,” Jill said with a shrug.

“What’s crazy is this fire,” Bob said. They had now arrived at their building. A firetruck was parked nearby, and campus security was on the scene.

As Jill started to enter, the guard put his hand up. “I’m sorry, ma’am, but nobody can go in there right now. At least not until the fire department gives us the all clear.”

“Do they know how it started?” Bob asked.

“Right now,” the guard said, “we don’t know very much at all. Sorry I can’t be more helpful.”

“Wow,” Alec said, looking at the smoke drifting out of the lab window. “Guess the University was serious about shutting us down.”

“I know the Dean’s an idiot,” Bob replied, “but I’m pretty sure he wouldn’t burn down our lab.”

Just at that moment the Dean was in his office, speaking on a private line. “It’s been done sir … Yes, burned completely … No, they don’t seem to suspect a thing. To them I’m just a clueless administrator … Yes, of course the A.I. programs have been destroyed. Have I ever let you down?”

3 thoughts on “Anna, part 15”

  1. Continues to intrigue me. Thanks.

    And 10 items down in my RSS feed this evening:

    http://www.theregister.co.uk/2013/11/15/google_thinking_machines/

    Ah… El Reg says…

    This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently from its creators, and its complex cognitive processes are inscrutable. This “thinking” is within an extremely narrow remit, but it is demonstrably effective and independently verifiable.

    Google doesn’t expect its deep-learning systems to ever evolve into a full-blown emergent artificial intelligence, though. “[AI] just happens on its own? I’m too practical – we have to make it happen,” the company’s research chief Alfred Spector told us earlier this year.
