Wednesday, February 29, 2012

Response 1 to Thomas Wynne on Yamamoto

In his post on Hiroshi Yamamoto’s Stories of Ibis, Thomas Wynne asks whether humans should fear advanced technology such as that depicted in the book. He suggests that these fearful humans are perhaps actually afraid of mankind’s irresponsibility, and implies that it is this irresponsibility, not the technology itself, that ought to be feared.
For the purpose of this response, let us consider robotic machines, which play an integral role in Stories of Ibis. Without human direction, robots do not function; every function, no matter how rudimentary or automated, requires some degree of human input. At this level of technological advancement, there seems little to fear other than what humans will the machines to do. The argument becomes far more interesting when we consider self-conscious robots. Are humans in danger when machines become self-conscious, as Shalice does in the story “Mirror Girl”? Consider Yamamoto’s description of Shalice’s nature after becoming self-conscious: “She wanted to become friends with people. To live with humans and share in their happiness — that was what Shalice wanted” (Yamamoto, 118). In this story’s vision, Shalice — who is arguably representative of a larger machine population — is not a threat, but rather a compassionate artificial human with the ability to love, despite being confined to a virtual space. Similarly, consider Ilianthos’ personality in “Black Hole Diver.” As the story is set in “the distant future,” it implies that Ilianthos is a self-conscious machine. Still, it longs to be human. Ilianthos narrates:
“I am not programmed to feel lonely or bored. And yet, a faint, forlorn feeling that I am alone in the void lies within me, and it seems stronger now. It is as if Syrinx’s departure has allowed me to understand what humans mean by loneliness. Perhaps it is only in my imagination. But I choose to believe it is real” (Yamamoto, 151).
Of course, the antithesis of these scenarios is best exemplified by the Terminator film series. In this grim example, machines turn on their creators following their attainment of self-consciousness, striving to annihilate the human race. Thus, to answer the overarching question Thomas poses, it seems necessary to consider robotic nature: a natural predisposition of machines to act a certain way, analogous to human nature.
But what dictates robotic nature? Is it the abilities given to these machines at their creation? Is it the purpose for which they were created? Is it the way humans treat them? Any number of questions could follow. I would suggest, however, that robotic nature largely derives from human actions during a robot’s creation. Shalice is created to serve as a companion and is given emotional characteristics like those of humans; she must be nurtured properly to develop a well-rounded personality. (This takes place before her attainment of self-consciousness, but one can argue that the same nurturing would be required after it.) Ilianthos’ personality is not as malleable as Shalice’s, but its role is to serve humans in a kind and respectful manner. In Terminator, by contrast, humans created machines to kill. It makes little sense to program a killing machine with compassion, and it should come as no surprise that these machines’ self-awareness leads to widespread death. The key difference between the robots in Stories of Ibis and those in the Terminator films is that the former are made to love and nurture while the latter are made to kill without mercy. Perhaps this explains why the robots in Stories of Ibis are more likely to strive to be human: because their purpose is to love and help humans, a favorable view of humanity must be instilled in them at their creation. One is then left with the question of how robots evolve. Consider the first follow-up question below: if robots can develop destructive human tendencies, trouble could follow wherever a machine has the ability to do harm. If Shalice were to turn evil, the damage she could inflict would be minimal, since she is confined to a virtual space; a machine with a physical presence in the world would be another matter.
Thus, perhaps the answer to the larger question — with robots in particular — is that it isn’t technology that should be feared, but human irresponsibility. It takes a human to create a machine, and whether that machine loves or kills is up to its creator.

Follow-up questions:
1. If robots possess human traits, could they develop human tendencies, such as destruction of their own kind?
2. What happens when machines built to love begin to malfunction? (Consider HAL from 2001.) Is this still a case of human irresponsibility, or should the machine be held accountable?

Links:
1. http://www.youtube.com/watch?v=1xp4M0IjzcQ “The Universe on My Hands” ends with a quote about the authenticity of friendships forged in virtual space. In the film Catfish, the protagonist forms a relationship online with a person he believes to be a young woman. However, SPOILER ALERT, he finds that the woman he has been chatting with is actually middle-aged and married with children. Feelings that had felt real are shattered when he discovers the truth, raising questions about the authenticity of virtual relationships.
2. http://www.youtube.com/watch?v=1ANUP5-aW4E Thomas explores allegory in “Mirror Girl,” relating the impressionability of artificial intelligence to that of children, and takes up Yamamoto’s implied question: “How would a new intelligence be anything but innocent until it is taught otherwise?” In this deleted scene from Terminator 2, John Connor tries to teach the terminator how to smile. Emotion is entirely foreign to the terminator, and in this respect, even the terminator is innocent.
3. http://www.foxnews.com/scitech/2012/02/29/google-technology-is-making-science-fiction-reality/ Science fiction will soon be a reality! (Or so Google’s executive chairman says.) This story, published a matter of hours ago by The Associated Press, directly addresses the concerns at hand. A woman in the audience worries that future technology could be dehumanizing, but Eric Schmidt gives a brilliant response emphasizing that electronics have on/off switches. Humans are in control.
