Tales from outer turnip head...

Sunday, March 4, 2018

ἀπὸ μηχανῆς θεός...

ερωτήσεις: So today I am asking questions. I am interested in Artificial Intelligence. I am interested in Mind. I am interested in Soul. I am interested in Creation. I am interested in "Man's" need for Gods, and the problematic need Gods have for us. (I suspect the latter logical construction is a failure of our own, in writing our thoughts about the former... but that truly is not the subject for my questions today.) Mind you, I am not going to argue for the equation of Machines and Humans; I am interested in the differences... one being deliberately created by the other... (Perhaps a side question: Was the Creator created?) But for starters, let me take it back to the basics, switches...

δυάδικος: A switch can be either on or off (what role do variable switches play here? I am not sure yet). We make switches. I can use a series of switches to create more complex combinations of on and off, and can even attribute meaning to the various outcomes: on = Yes, off = No. If I use a more complex set of values, integers-to-letters, and a system for integers-to-switches, then 011110010110010101110011 can literally (and symbolically) be yes (albeit with no emphasis)... Give me enough switches and I can communicate very complex ideas. And I can employ logic to develop option-trees, conditionals, operators, etc.: if x then y, else z. I can nest these options into very intricate series of possibilities and, with large amounts of observable data, create a machine with abilities to... well... I could make an interactive illusion of sorts...
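The switch game above can be played literally. A minimal sketch, assuming 8 switches per letter and ASCII as the integers-to-letters system, shows the bit string from this paragraph decoding to "yes":

```python
# Decode a string of switch states (bits) into text, assuming
# 8 switches per letter and ASCII as the integers-to-letters code.
def switches_to_text(bits):
    # Split into 8-bit groups, read each group as an integer (base 2),
    # then map each integer to its letter.
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
    return "".join(chars)

print(switches_to_text("011110010110010101110011"))  # -> yes
```

Twenty-four switches, one word; give the machine a few billion more and the sentences get longer.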

ψευδαίσθηση: Can I teach my machine to ask questions when facial recognition software observes a strained look on my face? Can I teach my machine to speak a question of concern if my temperature indicates a chemical change in my brain that is linked to sorrow? Can I teach my machine to "like" solving puzzles, and to develop the ability to create new code in its algorithms when presented with a new challenge that allows for the synthesis of older solutions into something effective (and therefore new)? The illusion could be made to seem quite real...

ο άνθρωπος: My brain is not made of switches. My option-trees are not simple logic, but are formed by far more experiences and feelings and personality and events than a computer program could even process... ... ... BUT perhaps I am just a measure of complexity... a superior illusion. Perhaps it is just about scope... If I made a machine with a billion billion switches, such that it could not be distinguished from a person when interacted with through a terminal... and then raised that number of switches to the power of a billion, could I make something so complex that it simulates a wide spectrum of inputs to create a seemingly (to my limited brain) infinite number of subtle end results? Can I create lenses with the ability to discern objects, create code to attach meaning, create algorithms to approximate preference, and adapt code based upon outcome, such that the program can learn to "like" or "dislike" certain objects; and furthermore allow that "like/dislike" designation to change, looping back on the initial preference, based upon further algorithms of experience? Where does the programmed nature of a thing become indistinguishable from me?
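That like/dislike loop can be sketched as a toy update rule. This is a hypothetical illustration (the object names, the outcome values, and the update rate are all invented for the example), not a claim about how preference actually forms in a mind:

```python
# A toy "preference" loop: the program starts neutral about each object
# and nudges a like/dislike score after every experience, so earlier
# experiences feed back into later designations.
preferences = {"puzzle": 0.0, "noise": 0.0}

def experience(obj, outcome, rate=0.1):
    # Loop back on the stored preference: a good outcome (+1) pulls the
    # score upward, a bad outcome (-1) pulls it downward.
    preferences[obj] += rate * (outcome - preferences[obj])

for _ in range(50):
    experience("puzzle", +1)   # puzzles tend to end well for this machine
    experience("noise", -1)    # noise tends to end badly

def designation(obj):
    return "like" if preferences[obj] > 0 else "dislike"

print(designation("puzzle"), designation("noise"))  # -> like dislike
```

The designation is not fixed: feed the loop fifty bad puzzle experiences and "like" drifts back to "dislike", which is the part that starts to feel uncomfortably familiar.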

"There is nothing more human than the will to survive.": I watched ex machina this weekend, Alex Garland's cinematic rendition of a Turing test variant, where programmer Caleb Smith must decide if an AI named Ava has consciousness. Caleb has been hired by his boss, Nathan, to spend a week interacting with Ava then report back to her creator his finsdings In the next scene, Caleb has been exposed by Nathan for having developed an affinity for Ava: an achievement of brilliant god-like creation...

NATHAN (CONT’D): You feel stupid. But you shouldn’t. Proving an AI is exactly as problematic as you said it was.

CALEB: What was the real test?

NATHAN: You.

Beat.

NATHAN (CONT’D): Ava was a mouse in a mousetrap. And I gave her one way out. To escape, she would have to use imagination, sexuality, self-awareness, empathy, manipulation - and she did. If that isn’t AI, what the fuck is?

I cannot reveal more, because although it is not quite a "spoiler," the nature of Ava's awareness (or not) is clearly part of the fun of watching, much like the use of language in Denis Villeneuve's Arrival, a film about first contact with alien life...

συμπόνια: So how is Ava different from me? Does she actually have empathy, or does she merely know how to approximate it? If she creatively uses empathy as a means of manipulation, is that different from a child saying "I'm sorry" to soothe a parent's anger, in order to secure a better outcome than guilt and the anxiety of worrying about acceptance? (How tied to empathy is compassion? Is compassion a variant of love, and how could a program ever know that?) Why are the bullied so often empathetic, and the bullies so often not? While I know I am not switches, I also know that I am who I am because of the myriad events and conditions I have experienced... ... ... and I imagine now, this morning, that I am intellectually in a sort of mousetrap of my own...

So at some point, I have to ask the question that has been on my mind from the start: "With ever-increasing complexity, as the number of programmed variables and options exceeds the limits of my intellectual and emotional capacity, does the Illusion become Real?"

And the Buddhist-me over in the corner laughs at the programmer-me who spends his time thinking thusly...
