Saturday, March 03, 2012

Response 2 to Angela Daniels on Hiroshi Yamamoto's The Stories of Ibis


In our discussion of Yamamoto’s book, Angela asked the following question: “How can we use science fiction to disrupt the tendency towards binary thinking? Is it valuable to try to do so?”

It’s no secret that science-fiction has tried to change our minds about a great many issues over time. Science-fiction extrapolates the course the future may take, but this extrapolation is really a way of speaking to the people of today, and The Stories of Ibis is a wonderful example of science-fiction with a strong message.

Ibis wants humans to break away from their binary thinking by making them realize that the differences between humans and AIs are just that: differences. She finally succeeds in convincing the narrator of this when, at the end, he thinks, “Who would feel inferior for not being able to run as fast as horses do? Who would feel resentful for not being able to fly as birds do? Like Ibis said, this was just a difference in our specs” (422).

Simply being different is not the same as being inferior. As the narrator notes, lacking the ability to fly does not make humans inferior to birds. And yet a weakness that seems to run in humanity’s very blood makes us instantly read difference as lack.

Just look at the ads denouncing high fructose corn syrup. High fructose corn syrup is an ingredient used in many different products in lieu of common table sugar. Many read the name of the ingredient, and because it sounds so odd and different, they have no trouble believing that high fructose corn syrup must be bad. Yet when it comes down to it, high fructose corn syrup is nutritionally much the same as ordinary sugar. Most people understand that sugar is bad for you when eaten in excess, and yet few are crying out for common table sugar to be excluded from their sweet treats. Because high fructose corn syrup was labeled differently than table sugar, people naturally jumped to the conclusion that one must be worse than the other.

So, binary thinking does exist, but how can science-fiction move to change it? Science-fiction works like a mirror, showing us an image that is dissimilar from reality, but similar enough to expose reality’s flaws. In The Stories of Ibis, we are looking at the relationship between AIs and humans. In the works of Octavia Butler, which Angela brought up, we are looking at the relationship between humans and aliens. At the end of the day, though, these works are really critiquing how humans relate to other groups of humans.

It is natural for humans to fall into an “us or them” mentality. We like to belong to something larger than ourselves, whether it is an organization, a religion, a race, a nation, or what have you. We like to be able to say, “I’m part of X group.” And whether stated or implied, this affiliation is often accompanied by the assumption that being part of X group makes one better than those not in X group. It may not be intentionally insidious; it can be as simple as “I’m part of the business fraternity, so that means I’m more qualified than those who are not.” This statement may or may not be true, but the important thing is that so many people do put faith in such claims, and being able to make these claims can make people feel better about themselves.

However, is it right to categorize ourselves by organizing into cliques like these? Business fraternities are mostly harmless and can help people, but what about national divides? Many conflicts in history can be boiled down to “My country is better than your country, so my country deserves what your country has.” Binary thinking, or hierarchical thinking as the example from Butler would phrase it, definitely has its repercussions.

To address Angela’s second point, if science-fiction can make us look in the mirror and see our imperfections, should it? Should science-fiction even bother to try to change the way we think? Is there value in that?

Personally, I think that shaking one’s assumptions is always valuable. In order to grow as an individual, one must have one’s beliefs challenged at some point. If you never had to defend your point of view, you would end up as delusional as the humans in The Stories of Ibis who believe they are fighting a war against the robots.

Binary thinking may be wholly bad, or it may have its merits. But if we never acknowledged and questioned it, we would not be able to grow past it. We have to face our demons if we are ever to control or defeat them.

Questions:
1. In class, we spent a lot of time talking about breaking out of binary thinking by disrupting the binary with a third point, much in the way Butler’s work disrupts the gender binary by adding a third gender. However, is this any better? Does adding a spectrum of shades of gray really make it a better way of thinking, or is it just adding more ways to categorize things as better than or worse than others?
2. It is believable that science-fiction can influence a person’s point of view by exposing flaws in society. Is this influence always a good thing? Can science-fiction be harmful? Is science-fiction just another form of propaganda, for better or worse?


Links:
This is a clip from the original Star Trek episode “Let That Be Your Last Battlefield.” This part of the episode shows the hatred between members of different factions of the same race. The only difference between the two is that one faction is black on the right side while the other is black on the left. Spoilers: it turns out their entire species already killed each other in a massive civil war.


This is a strip from the webcomic Questionable Content. While its main subject matter is not AIs, there are a few AI characters featured (one of which is the character reading in this strip). I think this strip in particular is interesting because, even though the main plot is not about the relationship between AIs and humans, the mere presence of AI characters seems to call for an in-depth analysis such as this, if only to establish whether the relationship of AIs and humans is one of equality or not.

This is an extremely powerful article entitled “The Hidden Message in Pixar Films.” I won’t spoil the main point of the article, as the author goes into great depth about his interpretations and they need to be read in full. But it runs in a similar vein about breaking down binary thinking about human vs. non-human, and shows that a work does not have to be specifically labeled as science-fiction to wrestle with ideas such as these.


Response 1 to Angela Daniels on Hiroshi Yamamoto's The Stories of Ibis

Yamamoto’s robots in Stories of Ibis experience emotion in a wholly different way than humans do; they describe the way they feel by using a word paired with an equation in the imaginary/complex number system: “Love (5+7i)” (p. 334). Angela’s exercise of mapping the emotions mentioned by the robots in Stories of Ibis on a two-dimensional plane was an excellent idea in that it illustrated just how foreign to humans this sort of thinking is, driving home the point that humans, both those in Stories and we readers, cannot fully grasp how an artificial intelligence would deal with emotions. The robots use a word we might be able to understand, such as love or doubt, but they then quantify it with a “fuzzy” number that might approximate the degree to which they feel it.
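
To make the “fuzzy number” idea concrete, here is a minimal Python sketch (my own illustration; nothing from the novel beyond the quoted value) that treats Love (5+7i) as a complex number. The point the robots make is that such a value has two independent components, so it cannot be collapsed into a single percentage without losing information:

```python
import cmath

love = 5 + 7j  # "Love (5+7i)" as quoted on p. 334

# A complex value carries two independent components...
print(love.real, love.imag)  # 5.0 7.0

# ...so reducing it to one number (as the humans in the novel do when
# they read the 5 as "50 percent") throws half the information away.
magnitude = abs(love)        # distance from the origin of the plane
angle = cmath.phase(love)    # direction on the two-dimensional plane
print(round(magnitude, 2), round(angle, 2))  # 8.6 0.95
```

Even this reduction to magnitude and angle is my own framing; the novel deliberately leaves the meaning of the two components undefined, which is rather the point.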

This explanation is given in the context of a machine explaining that “human thoughts are digital” and that we humans want to see things in terms of right/wrong, good/evil, black/white, or any other binary that might be applicable to a particular situation. Angela asks how we might use science fiction in a manner similar to Yamamoto to disrupt this sort of binary thinking and whether it is valuable to do so. Science fiction easily gives the audience a point of view that can be placed outside the normal binaries of human thought (Yamamoto uses robots, Butler has aliens: two common “others” in science fiction). By providing these characters that do not necessarily think as humans do, science fiction authors, filmmakers, et cetera make the audience identify at some level with something decidedly non-human, even if it is only to be afraid of the aliens or robots.

One example of such things that might not be “science fiction” is comic book superheroes such as the X-Men. Most of the X-Men are human in the sense that they were all “normal” at some point in their lives and, through some sort of mutation, became more than human, perhaps transhuman or posthuman (Magneto calls them Homo Superior). The readers of these comics are able to identify with the characters because they are still human at some level: some still have human desires such as finding love or just wanting to be treated like a “normal” human. Time travel is another, and perhaps one of the oldest, ways writers have been able to get their audience to step back from their lives and look at things differently. H.G. Wells’s The Time Machine not only coined the term “time machine” but also introduced society at large to a new form of literature. His Eloi and Morlocks probably represented the aristocrats and working classes of the late nineteenth century, respectively. Introducing something outside what we might consider “normal” is the easiest way to get humans to step outside themselves for a time and perhaps step back in with a new, enlightened point of view.

Different technologies throughout the ages have all had the hopes and dreams of humanity projected onto them, all sorts of things from hiding hair loss to printing replacement organs on an inkjet printer. Humans cannot help but dream of all the great things science can bring us in the future. The characters in the stories Ibis tells the narrator are all using technology to escape from the limits of their everyday lives; even Shion, a piece of advanced technology herself, wishes to overcome her fear of death by turning to books, television, and humans to find a deeper understanding of her own existence. By telling stories to the narrator, Ibis hopes to reveal a truth about humanity’s relationship with technology. The narrator is led to believe that the robots only wish to care for their humans, and Ibis wants the narrator to tell his brethren about the goals of the robots. A great deal of the technologies humans create are made to ease some facet of human life, and in doing so these technologies allow humans to explore other things. Allowing humans to spend time exploring other ideas and concepts is beneficial. The fruits of that exploration may not always benefit humanity, but they occasionally do, and that makes it worthwhile.

Science fiction allows the audience to explore ideas and cultures in a way that can be entertaining while inviting the audience to look at things in a different light than before. Attempts at this may not always be successful, but if the story gets at least one person to see the world in a more enlightened way, then the attempt was worthwhile. One of my favorite things about science fiction is that the minds behind the stories know that what makes a piece of science fiction work is not multimillion-dollar budgets or elaborate plot twists (though those can help make the work engaging) but exposing the audience to different ideas while providing them a lens through which to look at their own world.

Followup Questions:
1. What would be a way to map human emotions using an understandable “fuzzy” system?
1a. Could that system be a single axis spectrum or does it need to be multi-axis? (Like Angela’s plot of the emotions mentioned by the robots)
2. Why do other genres of entertainment, while capable of disturbing the normal modes of human thought, do so less than science fiction?

The Facial Action Coding System:
This is a relatively simple way that has been used to help computers comprehend human emotions based on the facial expressions we use (though not too reliably). FACS breaks down movements and gestures into codes: 0 is the “neutral face,” 1 is raising the inner eyebrow, and 19 is sticking out your tongue. Once all of the movements in a particular expression are accounted for, a computer can make a guess at the emotion the human is portraying: happiness is 6+12 (cheek raising + raising the corners of the lips) while sadness is 1+4+15 (inner brow raised + lowering of the brow + lowering of the lip corners). We can teach computers to recognize our emotions, but they would probably never understand them.
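
As a rough sketch of how such a lookup might work in code (the action-unit combinations for happiness and sadness are the ones named above; the function and everything else is invented for illustration):

```python
# Minimal sketch of a FACS-style lookup: map a set of detected action
# units (AUs) to a candidate emotion. Codes follow the description above:
# 6 = cheek raiser, 12 = raising the lip corners, 1 = inner brow raiser,
# 4 = brow lowerer, 15 = lowering the lip corners.
EMOTIONS = {
    frozenset({6, 12}): "happiness",    # 6+12, as described above
    frozenset({1, 4, 15}): "sadness",   # 1+4+15, as described above
}

def guess_emotion(detected_aus):
    """Return the emotion whose AU set matches exactly, else 'unknown'."""
    return EMOTIONS.get(frozenset(detected_aus), "unknown")

print(guess_emotion([6, 12]))     # -> happiness
print(guess_emotion([1, 4, 15]))  # -> sadness
print(guess_emotion([19]))        # -> unknown (sticking out your tongue)
```

A real system is messier, of course: action units come with intensities and detection errors, so the match is rarely this exact.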

Gwap.com/ESP Game:
Games With a Purpose offers a few different games that help teach computers to solve problems that humans excel at. My favorite is the ESP Game: both players are shown an image and have to describe it; if both players use the same word, they get points and move on to the next image. Each matched word is attached to that image and used to help computers identify images and, hopefully, return more relevant image search results.
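
A toy version of that matching mechanic, as described above (the real game runs as a web service; all names here are invented for illustration):

```python
# Toy sketch of the ESP Game's core loop: two players type words for the
# same image, and the first word they agree on becomes a label for it.
def play_round(image_id, player_a_words, player_b_words, labels):
    """Record the first word both players typed for this image."""
    matches = set(player_a_words) & set(player_b_words)
    if matches:
        label = sorted(matches)[0]  # deterministic pick for the sketch
        labels.setdefault(image_id, []).append(label)
        return label   # both players score and move on
    return None        # no agreement, no label recorded

labels = {}
print(play_round("img42", ["cat", "pet", "whiskers"], ["animal", "cat"], labels))
print(labels)  # {'img42': ['cat']}
```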

Thursday, March 01, 2012

Response 2 to Thomas Wynne on Yamamoto

Thomas posed the question of whether or not our fears of technology are well founded. Prior to reading the Yamamoto book, I had never considered looking at technology from a perspective that did not see it leading to inevitable doom and a world run by human-hating robots (thus, the Shaviro text fell more in line with the thinking I was used to). However, the Yamamoto book opened my eyes and allowed me to think of the human-technology relationship as something that could evolve and better our world.

In the western world, our relationship with technology tends to be framed in a negative light. This narrative of what could happen if technology gets too smart has structured my own understanding of the world, and it is a stereotypical framing of the relationship found throughout popular culture artifacts and political rhetoric. It is clear from the books we have read during the course that the fear of technology is not a novel concept. However, I would argue that the contemporary fear was heightened through the political rhetoric of the Cold War during the 1980s and the artifacts of popular culture produced during that time.

It has become common knowledge among communication scholars that Ronald Reagan’s rhetoric was effective because of the way he spoke. He tended to use transcendence in his speaking, moving away from the day-to-day, lived experiences of American citizens. Instead, he relied not only on nostalgia and a love of the past, but on the inevitable progress that would lead to the future. What this type of political rhetoric allows us to do is think about a better place rather than having to deal with our realities. The effectiveness of Reagan’s rhetoric could be seen, for example, in its juxtaposition with Jimmy Carter’s. Carter asked people to refrain from overspending and asked the American people to see their role in a poor economy. Reagan criticized such rhetoric, blamed the poor economy on the Democrats, and asked the American public to follow his lead towards progress without having to “change” anything (of course, this led to many things, deregulation being perhaps the most notorious).

Reagan’s rhetoric of transcendence also worked to create clear demarcations between good and evil (e.g., good vs. bad capitalism), and nowhere is this more evident than in his framing of the American citizenry against the people of the Soviet Union. A significant element of this binary was the American human versus the Soviet machine. Americans had autonomy and freedom; Soviet citizens were more like trained machines that did everything in support of the communist government. Not only did this allow us to separate our identities from our enemies and continue seeing Soviet citizens as people very different from us, but this rhetoric also shaped artifacts of popular culture, specifically in film, that reaffirmed the differences between humans and machines, differences that, I would argue, continue to frame our understanding of this relationship in contemporary times.

Two examples of such cultural products are The Terminator (1984) and Rocky IV (1985). In both films the villain is framed as a machine-like foreigner. In The Terminator the villain is literally a machine hell-bent on destroying the human race. In Rocky IV the villain is a Soviet Russian named Ivan Drago who speaks, trains, and fights like a machine. He is juxtaposed with the film’s hero, Rocky, who is framed as human in various contexts, never more clearly than during his training, where he becomes one with nature (running in the snow, chopping wood, etc.), setting himself in clear contrast to the (Soviet) machine world of Drago.

There is more depth to both films, and I admit doing a bit of a disservice to the plots of each. However, the duality between human and machine that was laid out in the political rhetoric of Ronald Reagan and later mirrored in many of the cultural artifacts coming from the 1980s does make clear that humans are good and machines are bad. Therefore, to answer Thomas’s question about whether or not our fears of technology make sense, I would say that whether or not they make sense does not matter. The fact is that this is how we, as members of the western world, have been conditioned to understand this relationship. Thus, Yamamoto’s understanding remains very novel to us two decades after the end of the Cold War.

I would like to offer some follow-up questions about how contemporary political rhetoric has changed our understanding of the human-technology relationship. 

1. What effects, if any, has the war on terrorism had in our understanding of this relationship? Has it changed as a result of 9/11? 

2. How do contemporary American films represent this relationship? Are we getting any closer to Yamamoto’s stance or have we remained loyal to Reagan’s rhetoric?

Below are two clips worth viewing. The first is a pre-1980s understanding of the human-technology relationship in the form of The Jetsons. I would argue this is much closer to Yamamoto’s beliefs. The second is a clip from Rocky IV as Drago’s training in a computer lab-like environment is juxtaposed against Rocky’s training in nature. 

http://www.evtv1.com/player.aspx?itemnum=15520

Wednesday, February 29, 2012

Response 1 to Thomas Wynne on Yamamoto

In his post regarding Hiroshi Yamamoto’s Stories of Ibis, Thomas Wynne asks the question of whether humans should fear advanced technology, such as that depicted in Stories of Ibis. He suggests that these fearful humans are perhaps actually afraid of mankind’s irresponsibility. He implies that it is perhaps this irresponsibility that ought to be feared, not technology.
For the purpose of this response, let us consider robotic machines, which play an integral role in Stories of Ibis. Without human direction, robots do not function, as all function, no matter how rudimentary or automated, requires some degree of human input. At this level of technological advancement, there seems little to fear other than what humans choose to do with them. The argument is far more interesting when considering self-conscious robots. Are humans in danger when machines become self-conscious, as Shalice does in the story “Mirror Girl”? Consider Yamamoto’s description of Shalice’s nature after becoming self-conscious: “She wanted to become friends with people. To live with humans and share in their happiness — that was what Shalice wanted” (Yamamoto, 118). In this story’s vision, Shalice, who is arguably representative of a larger machine population, is not a threat, but rather a compassionate artificial human with the ability to love, despite the fact that she is confined to a virtual space. Similarly, consider Ilianthos’ personality in “Black Hole Diver.” Set in “the distant future,” the story implies that Ilianthos is a self-conscious machine. Still, it longs to be human. Ilianthos narrates:
“I am not programmed to feel lonely or bored. And yet, a faint, forlorn feeling that I am alone in the void lies within me, and it seems stronger now. It is as if Syrinx’s departure has allowed me to understand what humans mean by loneliness. Perhaps it is only in my imagination. But I choose to believe it is real” (Yamamoto, 151).
Of course, the antithesis of these scenarios is best exemplified by the Terminator film series. In this grim example, machines turn on their creators following their attainment of self-consciousness, striving to annihilate the human race. Thus, to answer the overarching question Thomas poses, it seems necessary to consider robotic nature: a natural predisposition of machines to act a certain way, analogous to human nature.
But what dictates robotic nature? Is it the abilities given to these machines at their creation? Is it the purpose for which they were created? Is it the way humans treat them? Any number of questions could follow. However, I would suggest robotic nature largely draws from human actions during robot creation. Shalice is created to serve as a companion and is given emotional characteristics like those of humans. She must be nurtured properly to develop a well-rounded personality. (This takes place before her attainment of self-consciousness, but one can argue that the same nurturing would be required after such attainment.) Ilianthos’ personality is not as malleable as Shalice’s, but its role is to serve humans in a kind and respectful manner. In the case of Terminator, humans created machines to kill. It makes little sense to program a killing machine to have compassion, and it should come as no surprise that self-awareness of these machines leads to copious amounts of death. The key difference between the robots in Stories of Ibis and those in the Terminator films is that the former are made to love and nurture, and the latter are made to kill without mercy. Perhaps this can account for why the robots in Stories of Ibis are more likely to strive to be human; their existence is to love and help humans, and thus a favorable view of humans must be given to these robots when they are created. However, one is then left with the question of how robots evolve (consider the first follow-up question below). This could lead to trouble if machines have the ability to do harm; if Shalice were to turn evil, though, the damage she could inflict would be minimal, since she is confined to a virtual space.
Thus, perhaps the answer to the larger question — with robots in particular — is that it isn’t technology that should be feared, but rather human irresponsibility. It takes a human to create a machine. Whether that machine loves or kills is up to the creator.

Follow-up questions:
1. If robots possess human traits, could they develop human tendencies, such as destruction of their own kind?
2. What happens when machines built to love begin to malfunction? (Consider HAL from 2001.) Is this still a case of human irresponsibility, or should the machine be held accountable?

Links:
1. http://www.youtube.com/watch?v=1xp4M0IjzcQ “The Universe on My Hands” ends with a quote about the authenticity of friendships forged in virtual space. In the film Catfish, the protagonist forms a relationship online with a person he believes to be a young woman. However, SPOILER ALERT, he finds the woman he has been chatting with is actually middle-aged and married with children. What had been real feelings are shattered when he discovers the truth, leading to questions of the authenticity of virtual relationships.
2. http://www.youtube.com/watch?v=1ANUP5-aW4E Thomas explored allegory in “Mirror Girl,” which related the impressionability of artificial intelligence to that of children. He explores Yamamoto’s implied question of “How would a new intelligence be anything but innocent until it is taught otherwise?” In this deleted scene from Terminator 2, the character of John Connor tries to teach the terminator how to smile. Emotion is entirely foreign to the terminator, and in this respect, even the terminator is innocent.
3. http://www.foxnews.com/scitech/2012/02/29/google-technology-is-making-science-fiction-reality/ Science fiction will soon be a reality! (Or so Google’s executive chairman says.) This story, published a matter of hours ago by The Associated Press, directly addresses the concerns at hand. Consider how the woman in the audience is concerned that future technology could be dehumanizing. However, Eric Schmidt gives a brilliant response emphasizing that electronics have on/off switches. Humans are in control.

Angela Daniels on Hiroshi Yamamoto's The Stories of Ibis


The second part of Hiroshi Yamamoto’s science fiction work The Stories of Ibis reveals the reason for the downfall of human civilization, and the truth is less sinister than our imaginations would lead us to believe. In the telling of “The Day Shion Came” and “AI’s Story”, Yamamoto creates a world that is much more optimistic than the science fiction of the Western world.

As I thought about this story as a whole, words from our discussion of Shaviro kept coming to the forefront of my mind – science fiction creates an alternative world to examine what is wrong in the present. Yamamoto also presents this point when Ibis says “You recognize that fiction can’t simply be dismissed as ‘just fiction’ – that it is at times more powerful than the truth, that fiction has the power to transcend the truth” (290). With that thought in mind, I present the following quote for exploration:

Human thoughts are digital.

Most people see things as 0 or 1, as black or white. They see nothing in between. All chemicals are dangerous. You are either friend or foe. If you aren’t left-wing, you’re right. If you aren’t conservative, you’re liberal. Everything that great man says must be true. Everyone who thinks differently from us is evil. Everyone in that country – even the babies – is evil.

We TAIs find it surprising that humans have trouble understanding Fuzzy Concepts. When we say “Love (5+7i),” people incorrectly assume that means we only love at 50 percent, or fifty points out of a hundred total. They can’t understand that 5 is a Fuzzy Measurement. How could a concept like love possibly be expressed as an integer? (334-335)

The idea that human beings think in dichotomies has become a recurring theme in our class, but we have never compared it to being digital. It is a comparable concept, though, when examined. Right and wrong are the same as on and off, which all boils down to 0 or 1 in the digital language. What Phoebus asserts in this declaration seems to be that the TAIs, in their ability to use “fuzzy measurements”, are able to sense emotions in a more graduated and less extreme way. When something is neither completely right nor completely wrong, there is room for learning, growth, and development. In this way, Yamamoto signifies that our digital thinking is hindering us from reaching our full potential. If we could think in a more abstract way, utilize the Fuzzy Measurement system or at least not dismiss it as foolishness, we would be better off for it.

This is further expressed at the very end of “AI’s Story” when Ibis tells Hideo, “You don’t need to understand. Just accept it… Rather than avoid the things we do not understand, we can simply accept them. That alone will remove all conflict from the world. That is i.” (398) Instead of avoiding the unknown or different, or labeling it “bad”, we can simply accept that it is unknown and different. From there, we can choose to learn more about it until we can create a knowledgeable fuzzy measurement for it, or we can decide that it is okay for it to remain unknown and move on. Both options are more reasonable than fear and the often negative reactions fear induces.

The artifact I would like to examine is the Lilith’s Brood trilogy by Octavia Butler, which includes the books Dawn, Adulthood Rites, and Imago. I will look more closely at a quotation from Dawn, but I feel the entire trilogy is pertinent to the discussion of Yamamoto. Lilith’s Brood has very similar themes to The Stories of Ibis, including what it means to be human. However, Butler’s tale is much darker and very pessimistic. This is a prime example of cultural differences in handling similar subject matter, as Octavia Butler was an African American woman who grew up in the time of the Civil Rights movement and, from that, had a very specific viewpoint.

Lilith’s Brood is set in a post-apocalyptic world. Instead of humanity realizing they are flawed and leaving the planet for beings more equipped to handle it, as in The Stories of Ibis, the US and Russia have annihilated the planet in a nuclear war. Those that survived were rescued by an alien species known as the Oankali and kept in suspension on their spaceship while the Earth was restored. The first book follows Lilith Iyapo, who is chosen by the Oankali to awaken the first group of people that are to be “mated” with the alien species and begin again on Earth. She is met with fear, distrust, and physical violence. At one point, Lilith is discussing humanity with an Oankali and he mentions that humans have two fatally contrasting characteristics that lead to their demise. The first is intelligence, which he says is the newer characteristic. The second is my artifact, and is written so well that it deserves to be quoted at length:

You are hierarchical. That’s the older and more entrenched characteristic. We saw it in your closest animal relatives and in your most distant ones. It’s a terrestrial characteristic. When human intelligence served it instead of guiding it, when human intelligence did not even acknowledge it as a problem, but took pride in it or did not notice it at all...That was like ignoring cancer. I think your people did not realize what a dangerous thing they were doing. (Butler, 38-39)

A hierarchy can be viewed the same as digital thoughts. In saying that this thing is better than that thing, you are creating rightness for one and wrongness in the other. In this example, that type of thinking led to the destruction of a majority of Earth and humanity. The rebuilding of humanity is also hindered by this thinking as the survivors try to assign blame for their situation to the Oankali and Lilith without examining the hatred and lack of understanding that led to the nuclear war in the first place.

The second book follows Lilith’s son, Akin, who is the first human-born male construct (half Oankali, half human). Prior to his birth, all construct males were Oankali-born, since the Oankali were afraid that human males were too unpredictable. Akin is abducted by a band of resisters that are allowed to live on their own but have been genetically altered so that they cannot procreate. He spends three years with them and comes to the conclusion that humans must be allowed to have their own society and children, and he eventually convinces the Oankali (who operate in a hive-mind/consensus society) to allow the resisters to colonize Mars. This is a very hard decision for them to make, because they are certain the fatal dueling characteristics of humanity will prevail and the resisters will eventually destroy themselves entirely.

Both Yamamoto and Butler use their stories to explore flaws in human thinking. Looking at this artifact through the original lens of science fiction as a tool for revealing current problems in the world, I have come up with the following questions:

How can we use science fiction to disrupt the tendency towards binary thinking? Is it valuable to try to do so?

In examining this question, I decided to try a thought exercise. I started by mapping out some of the fuzzy measurements that Yamamoto mentions, such as Ibis’s perfect love for her master as (3+10i) (Yamamoto, 398). I was hoping that by mapping these varying emotions, a pattern I could recognize or understand would emerge. It did not, but it did get me to examine emotions in a different way. From there, I began to think about a subject that raises my gedoshield, a concept Yamamoto describes as “the phenomenon of people who were convinced they knew the truth unconsciously shutting out information that would correct their misconceptions” (356) and something I had recently had a discussion about with my husband. We were talking about the current divide in the political climate of America, and he was criticizing how Republicans shut out facts at any cost to cling to their beliefs. I interjected that I was equally flawed. When presented with a statistic from a right-leaning institution, I immediately assume that it was taken out of context or question the legitimacy of the study itself. I am right and they are wrong; there is no room for compromise.

To critically analyze this phenomenon within myself, I took the emotion that I have for certain conservatives and attempted to turn it into a fuzzy measurement. I decided that (-8-10i) was my standard for the worst possible actions: rape, pedophilia, or genocide, for example. Would I put Republicanism at the same level of revulsion? Certainly not; that would clearly be unreasonable. After consideration, I decided that my feeling for Republicans can be expressed as (-2-3i). I added this to the fuzzy concepts map and could almost feel my gedoshield weaken a little. My views on the subject have not changed and likely will not, but I feel more open to having a dialogue with someone of the opposing viewpoint. By creating a physical representation for my emotions and acknowledging my own prejudices, I am able to recognize that labeling one party right and the other wrong is counterproductive. Hopefully, this will lead to a little less black and white and a little more “fuzziness” in my own mind. In this way, a concept from a science fiction book will have helped to disrupt my own binary thought processes.
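
For anyone who wants to repeat the exercise, here is a minimal Python sketch (assuming matplotlib is available) that plots the three values mentioned above on the complex plane; the labels and layout are my own:

```python
import matplotlib.pyplot as plt

# The three fuzzy measurements described above, as complex numbers.
points = {
    "Ibis's love for her master": 3 + 10j,  # (3+10i), Yamamoto, 398
    "Worst possible actions": -8 - 10j,     # the anchor for revulsion
    "Feeling toward Republicans": -2 - 3j,  # far milder than the anchor
}

fig, ax = plt.subplots()
for label, z in points.items():
    ax.scatter(z.real, z.imag)
    ax.annotate(label, (z.real, z.imag),
                textcoords="offset points", xytext=(5, 5))

# Draw the axes so the quadrants of the emotional plane are visible.
ax.axhline(0, color="gray", linewidth=0.5)
ax.axvline(0, color="gray", linewidth=0.5)
ax.set_xlabel("real component")
ax.set_ylabel("imaginary component")
plt.show()
```

Seeing the distance between (-2-3i) and (-8-10i) on the page is exactly what makes the gedoshield loosen: the two feelings are plainly not the same point.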

In case any of you are interested, UT-Austin's Cinema Studies department puts out an awesome blog on film, TV, and new media. It's called Flow and you can access it at http://flowtv.org/. I worked as an editor for them when I lived down there and they do a great job of bridging the academia-public divide.

Sam

Tuesday, February 28, 2012

Thomas Wynne on Hiroshi Yamamoto's The Stories of Ibis


“’The accident happened right after I was born, so it doesn’t bother me much. Plus, I have MUGEN Net now. When I’m online, I’m able to live just the way the normal people do. I can go window shopping, see movies, read books. I really love being able to read words that aren’t in Braille.’” (84)
            Yamamoto, The Stories of Ibis, “A Romance in Virtual Space”

“’Yeah,’ I nodded. ‘They probably couldn’t bear the solitude. So they created fantasies to escape reality.’
            ‘But those stories aren’t any less valuable than the truth. At least the heroine recognized that.’” (60)
            Yamamoto, The Stories of Ibis, Intermission 2


Selecting a quote from this work that genuinely reflected the author’s thesis was a bit difficult because the first half of the book is essentially a collection of short stories with a parallel narrative woven between them to tie them together. The first quote comes from the second story in the book, in which a young girl who spends much of her time in a virtual reality world is later revealed to be blind. This girl is able to experience things in this world that she could never experience in the real world because of her disability. I feel that this quote accurately describes in microcosm the point the author is making. The technology in “A Romance in Virtual Space” is often referred to as problematic. Addiction is mentioned, and the interface devices even feature a recommended daily use limit to assuage fears of radiation poisoning. For the girl, however, none of these fears matter, because this device is also the only way that she could ever experience the sensation of sight. The goals of this quote and this story are to highlight the amazing potential a technology like virtual reality has to better the human experience in spite of all of the unfounded fears around it. When looking at the book as a whole, we see that each story has a revelation much like the one from which this quote is derived, and each downplays the modern fear of technology by presenting a scenario where the technology in question has a positive impact on the lives of those interfacing with it. In this regard, the position of the author is exposed and the closest thing to a thesis can be excavated from the text, as it were: specifically, that technology is not to be feared.

The second quote is important as well, because it makes clear the means by which Yamamoto is conveying his message. The narrator raises the point this quote makes quite often in the early intermissions following the stories, and as such draws attention to why the author is using the narrative devices he is using. The author uses powerful, positive fictional stories to act as counterpoints to the aspects of modern or theoretical technology that people fear. Yamamoto is trying to make the reader ask, “Are the technologies that so many fear really that frightening?” After all, most of the fears of the public are founded upon fiction in the first place.

Yamamoto stages this dialogue by pitting the skeptical narrator against the knowledgeable Ibis. In this instance, the narrator takes the place of the reader (this is even alluded to a few times, when Ibis explains that when reading a story the reader essentially role-plays the main character, or narrator) and the author takes the place of Ibis, explaining a world where these technologies can be used positively. The debate between the two of them is essentially the author responding to the skeptical, fearful public as well as to the obvious problem his approach of using fictional works presents, namely “It’s all fiction, why does it matter?”

These debates are punctuated by the intricately woven stories themselves, progressively more abstract science fiction told in such a way that each story resolves one problematic technology while introducing another to be resolved in the next. In doing so, he is making a case that these technologies aren’t as bad as people fear. The first story deals with society’s dubious stance on virtual Internet communities and explains how the bonds formed in these communities are just as real as those formed in “real life.” This also serves as a great frame for the author’s technique, ending with the quote, “An escape from reality? Laugh if you want. To be certain, no such vessel named the Celestial existed in real life. But the bond, faith, and friendship of the crew were undeniably real.” The other stories deal similarly with virtual reality, virtual communities, and artificial intelligence.

While the author’s stance on the fear of these technologies is fairly clear, it can also be reasoned that this piece is a reaction to other fictional works and popular tropes across a variety of media. The concept of internet/virtual world addiction is a fairly hot topic in the news right now, with many news stations decrying the freedom, privacy, and escape both the Internet and virtual worlds grant as potentially dangerous. Many people also believe that the technologies Yamamoto writes about could easily be used to control the population and create dystopian futures, as with Brave New World and Noir. Then we have the giant cloud of fear surrounding AI. Countless movies, books, videogames, and TV shows have been made with networked AIs as the ultimate antagonist; Terminator, Star Trek, and System Shock come to mind. (The intro to System Shock 2 serves as an interesting example of a female AI less enlightened than Ibis - http://www.youtube.com/watch?v=MXPn6wcsUmk ) There are plenty of fictional works that capitalize on this fear to create an interesting narrative. Yamamoto seems to be moving almost in direct opposition to these works, however, glorifying sentient machines and VR while demonizing the people who dismiss the technologies as silly or dangerous. The detective in the first story, for instance, reminded me exactly of the naysayers of yesteryear who looked at children and adults reading comic books and playing video games as ridiculous, unable to see the point. The “color timer” on the VR interface in the second story mirrored the apprehension in the news surrounding cell phone radiation. The fear that a child could be addicted to Internet communication or a virtual world, thus stunting their social interaction, also comes up in the third story, “Mirror Girl.” All of the tales Ibis tells have some level of commentary on recent events.

This is essentially the “limit” of the text. Yamamoto generates a compelling counter to the fear-driven technological fiction and theory of other authors, but never directly says that the more positive world and stories he has crafted represent the way it will or must be. The result is the planting of a seed of doubt: doubt that the creation of AI or virtual reality really will be the end of human civilization. It is up to the reader then to form his own opinion on what must be done, given the possibilities of such technology. As such, the question that Yamamoto raises about the validity of this technophobia still stands. This novel does a very good job of being a counterbalance to the more prevalent fictions where technology can result in the destruction of humanity or freedom, and while it often presents technology in a positive light, it also does so with reservation. The AI in “Mirror Girl” is an enlightened, radiant being because it was essentially raised with humanity’s best intentions in mind. In this story, Yamamoto likens AI to a child. This makes a few logical leaps, but the allegory is fairly powerful. After all, children can turn into terrors, and eventually into vile people, but to what extent is that the fault of the parent? How would a new intelligence be anything but innocent until it is taught otherwise? In this regard, Yamamoto seems to subscribe to the school of thought that humanity is ultimately behind whether or not our technologies will destroy us, not the technologies themselves.

While this doesn’t answer the question of whether fear of powerful technologies like AI is well founded, it does strike at the heart of the issue and allows us to look at it from a different angle. Are these technologies to be feared, or is it mankind’s irresponsibility that is frightening? Do we trust each other enough to believe that an AI would take on the selfless qualities of its creators? Do we love life and each other enough to prevent our society from being consumed by a virtual world? There will be a time when some of the fiction in this book becomes reality. When that time comes, will we be ready? I'd like to think that we will be, that humanity is mature enough as a species to leave behind its questionable history. Although the book raises another possibility, as we see in "A World Where Justice Is Just": perhaps we are just as dangerous as we seem to think, but maybe the machines and AIs that we create will end up saving us from ourselves... hopefully before there is nothing left to save.

Finally, this book immediately made me think of a humorous but equally curious video I stumbled upon a while ago. The video is part of a series entitled “Kids React,” and in this particular case it shows children reacting to a huge Japanese concert in which the star performer is actually a computer-generated image with a computer-generated voice (KIDS REACT to Hatsune Miku http://www.youtube.com/watch?v=egcfC7PCneQ ). The concert itself had a massive showing (as can be seen in the video), but the children seem taken aback by the fact that the “artist” isn’t real, going so far as to say, “It isn’t music because it isn’t made by a person.” It’s a very striking dichotomy between a culture that embraces new and interesting technology like this and one that is inherently skeptical of it. Also, one of the first comments made by the children is, of course, “Robots are going to take over.” Upon reflection, if this is where AI and VR are headed, I wonder if we have anything to worry about at all.