Thursday, March 08, 2012

Response 2 to Melanie Smith and E.M. Forster's 'The Machine Stops'

It is a fascinating thing to watch, and participate in, the coming of age of the most advanced transhuman generation yet to exist, and perhaps even more intimidating to consider what will come of the younger generations, whose childhood and developing years have coincided with the greatest expansion of transhumanist qualities yet seen. So what is happening to these children; what is happening to us? I find it interesting to consider Melanie’s question “Does becoming a transhuman make you less human?” in light of our current trends and relationship with technology. Her more detailed version of the question, though, seems the most provocative to our current understanding of the world: “Does becoming transhuman cause the individual to lose certain traits, generally ascribed to humanity at large, that would hinder them?” As we plunge deep into the seemingly shallow waters of Facebook/Twitter/Google and interconnect our infinite web of intranets into an internet, what does it, and what will it, mean to have a human interaction?

When trying to define the transhuman state as it relates to the human state, we become wrapped up in the semantics of what separates us from any other creature. Melanie briefly describes it from Kuno’s point of view as a way of life: to love, to have a sense of adventure; all of which seem very dormant qualities within Forster’s The Machine Stops. The characters, though, still hold on to many human traits, but now only in a neo-cultural sense. I find this to be a crucial point in understanding what it is to be human: that we are part of a larger and developing culture, and further are educated and raised within a certain cultural viewpoint. To then explore what it is to be a transhuman, one would be merging with machine to heighten and extend one’s cultural understanding. This holds in a broad sense, spanning from the military-industrial complex to the pretentiousness of the art world. All are adopting transhuman tendencies by incorporating machines and computers into their existence: lenses, gears, timers, communication, sight, etc. These are not actual combinations between human and mechanism, either. My point here is that we are exploring transhumanism as a way to extend our abilities from a cultural perspective, and what may change, as Melanie references via the Dresden Codak, is human culture itself.

In the cultural shift from human to transhuman, a story such as The Machine Stops has the most relevance and importance. Yes, there is the possibility of a plunge into Forster’s imagined world, but these culture shifts will also occur at the surface level. The most immediate and over-played argument of our contemporary time is how our personal relationships are affected. I find myself most fascinated with what we consider art, craft, and trade service. In the transhumanist life, these parts of our existence are being collaboratively understood with the machine; the problem being that a machine understands the world in a different light, as was so eloquently illustrated by the Japanese author Yamamoto. This is important because while our brains are not set up like a computer system to run through programs by nature, we are creatures affected by the nurturance of society. We are already so limited by our languages and abilities to express ourselves that we might be able to elevate our understanding of the world through transhumanist tendencies. This is ultimately our current direction, but as we are limited, it is also important that we understand the limits of the machine. It might be that the transhumans are the ones able to experience the greatest sense of existence (enlightenment?) if a diversity between man and machine/computer can be asserted, while the posthuman, whose embrace of hyper-cultural functionality confines them, may be the most limited.

Could the transhuman existence actually be an enlightenment of both human and machine? Then is there an optimistic future for the posthuman?

In transhuman living, you are able to visualize a subject while a machine is only processing code. How can humans capitalize on the use of machines without being confined to the limits of a machine’s capabilities?

Response 1 to Melanie Smith on E.M. Forster's "The Machine Stops"

The question posed in regards to “The Machine Stops” was, “Does becoming transhuman make you less human?” This is an incredibly difficult question. To really answer it, you have to analyze what it means to be human. Philosophers, theologians, and scientists have been struggling with this question for centuries and haven’t really found a concrete answer. I initially posited in class that we were differentiated from other species by our propensity for tool use. It was quickly brought to my attention that many other animals can use tools as well. While humans are certainly more famous for this, chimps and even ravens use tools too. Other animals also demonstrate problem-solving skills like humans. As I began to reflect on what it means to be human, I started to realize it was more of a philosophy. In fact, philosophy is a distinctly human idea. It is then that I began to realize that it isn’t necessarily the use of tools or problem solving that makes humanity very different, but more our imagination.

Our advanced technology is really only a reality because someone along the line dared to believe it could exist. The reason we are able to launch satellites and men and women into space is because our ancestors looked to the stars and said, “I want to go THERE.” People usually strive to be something better than themselves, and this behavior only exists because people can imagine that they truly CAN be something better, something greater. Now, saying imagination is the reason we are different from other animals is a very difficult point to prove. After all, we can create tests to determine the different ways chimps and other reasonably intelligent animals can think about things, and we might be able to take brain scans and all that stuff, but we’ll never know for sure what a chimp is thinking. For all I know, many animals have imaginations… but they definitely haven’t acted on them on the scale that we have.

This concept of imagination being what separates humanity from other animals ties in very well with the story, in that a lack of imagination was the downfall of the humans in “The Machine Stops.” The Machine was, presumably, born from human imagination, specifically the idea that people could create and live in a society free of scarcity. It seemingly accomplished the idea of instant communication across all walks of life, too. The humans before the Machine seemed to be much like the humans of today; they were filled with imagination and pride, constantly trying to become better than nature itself. Then, they became slaves to the Machine. This happened, however, not because of the Machine itself, but rather because of the pervasive apathy that began to rule their lives after living removed from a challenging life and removed from each other. They lost their imaginations, their drives to be more than a lump of flesh, and thus they were unable to imagine a world where the Machine didn’t exist. They were unable to imagine that the Machine could fail, and THAT is why they met such a catastrophic end. In that way, I suppose becoming transhuman correlated to a loss of humanity, but I don’t think one can necessarily blame the technology for this. The humans in the Machine made a choice to abandon imagination and ambition.

Much like with other science fiction stories, it was the people’s own shortcomings – their complacency and close-mindedness in this case – and not the technology itself that led to their downfall. In this fictional world, some humans made the choice to surrender their autonomy and their imagination to the comforting womb of the Machine. I suppose then that this story isn’t necessarily an anti-technology story, but rather a parable about the human condition, specifically the worst parts of it. It could also be seen as a cautionary tale about taking the "easy way out" of life and other situations, especially because of how nonchalantly people in the story made the decision to be euthanized. In this way, I don’t believe that this story necessarily suggests that transhumanity equates to a lack of humanity. Putting it rather crudely, making the choice to abandon what made the humans in the story human is what made them less human. I remain unconvinced that technology and humanity are opposed, and continue to support the idea that to be human is to use technology in some way or at least dream about it.


   1.      In “The Machine Stops,” it was insinuated that the Machine was using the humans. This is a fairly prophetic statement, as the same idea appeared in The Matrix ninety years later. In what way are the technologies of today using us? Do we have a symbiotic relationship with technology or a parasitic one?

   2.     The machine that caused all the trouble in this story was known simply as the Machine, but the airships played a prominent role as well. Are the ways humanity used to use airships fundamentally different from the ways it uses the Machine, or are they one and the same?


This is a Wired article containing several different reactions to the question, “What does it mean to be human?” It provides some very interesting viewpoints from credentialed individuals on the very topic we discussed in class.

As further proof that humans (or even primates, for that matter) aren’t the only tool-using animals, here is a video of a crow using 3 tools in sequence. I’m not sure how long it took this bird to figure out how to do this, but the fact that it can shows a much higher level of intelligence than we give birds credit for.

Seeing as we talked about LSD a bit in class, I checked out the first few pages of this book and did some digging on LSD and spirituality. There’s quite a lot of research on it out there, too. I didn’t read the whole book, but a 40-some-odd page preview should be enough to pique your interest.

Tuesday, March 06, 2012

Hey everyone. I saw this article and thought some of you may find it interesting. It is about technological development and the risk of human extinction.

Melanie Smith on E. M. Forster's "The Machine Stops"

“Cannot you see, cannot all you lecturers see, that it is we that are dying, and that down here the only thing that really lives is the Machine? We created the Machine, to do our will, but we cannot make it do our will now. It has robbed us of the sense of space and of the sense of touch, it has blurred every human relation and narrowed down love to a carnal act, it has paralysed our bodies and our wills, and now it compels us to worship it. The Machine develops--but not on our lines. The Machine proceeds--but not to our goal. We only exist as the blood corpuscles that course through its arteries, and if it could work without us, it would let us die. Oh, I have no remedy--or, at least, only one--to tell men again and again that I have seen the hills of Wessex as Ælfrid saw them when he overthrew the Danes” (13-14).

Far flung from the hopeful world Hiroshi Yamamoto would write about nearly 100 years later, E.M. Forster paints a bleak picture of humanity’s future with technology. 

In 1909, when The Machine Stops was first published, technology was a much different matter than it is today. Radio technology was still in its infancy. The telephone was still relatively new, with Alexander Graham Bell having patented it only 33 years prior. The historic flight of the Wright Brothers had occurred a mere 6 years before. It’s interesting to keep in mind what technologies Forster was surrounded with when writing this bleak tale of technology killing humanity. He did not need supercomputers and ubiquitous networking to feel fear for the coming future.

Forster is arguing that technology is seeking to remove us from the natural world, stripping humans of their very humanity. Human contact, love, and direct experience are all being stripped away in favor of the mechanical. This seems like a fear of becoming Transhuman; the fear that if humanity starts to augment itself with technology, it will become something altogether different and alien. 

The term “transhumanism” was first coined in 1957 by Sir Julian Huxley, an evolutionary biologist and humanist, and brother of the science-fiction writer Aldous Huxley. When he coined the term, he envisioned it as a way for humanity to better itself through science and technology. He left open the possibility that eugenics may be one way to approach this, but transhumanism has since come to refer to any science that may help humans better themselves. 

This is something that The Machine Stops is on the border of, but since the idea had not come to fruition yet, the story stops short of invoking transhumanism. The humans in The Machine Stops are completely cocooned by technology from birth to death, which in its own way can still be seen as transhumanism. They are, after all, using technology to better the lives of the people, at least in the common person’s point of view. However, technology has not become a part of their physical bodies, and does not seem to have augmented their bodies in any way, as we normally think of transhumans.

The question I would like to ask is “Does becoming a transhuman make you less human?” One could argue about the true meaning of “human,” but the question could be rephrased “Does becoming transhuman cause the individual to lose certain traits, generally ascribed to humanity at large, that would hinder them?”  Kuno certainly seems to think so, when he makes the speech above to his mother, Vashti.

Kuno makes it very clear that he believes that humanity is dying because of its dependence on the Machine. Humans no longer touch each other, as is shown in the scenes where Vashti is on her trip in the airships. She is outright appalled when the flight attendant reaches to steady her and keep her from falling. Kuno also drives home this tendency when he talks about the Machine “narrow[ing] down love to a carnal act.” We do not hear much about reproduction in this story, other than the fact that one must apply to become a parent, and that upon birth a parent’s responsibilities are officially over. In short, even the process of falling in love, marriage, and bearing children has now been eliminated and mechanized. There is no more need for love, and perhaps not even sex, in the new mechanized world.

Furthermore, Kuno believes that humanity is now nothing more than “blood corpuscles,” serving as part of the Machine as a way to keep it going rather than the other way around, similar to how humanity is treated in The Matrix. Humans created machines to serve them, but now humans are simply the fuel that sustains the machines.

In this speech, Kuno references the lecturers, the people within the society who study various subjects and transmit their speeches to others. This group of the population is fascinating enough on its own, and, at points, seems deserving of Kuno’s criticisms. From observing Vashti, we can see that she is very far removed from the real world (the planet’s surface, ocean, stars, etc.), though her fellow lecturers are not all so removed. She mentions that some of the lecturers do get permits to explore the outside land or the ocean to further their knowledge of their subjects. While it’s not illegal, Vashti makes it clear that it is not something “spiritually minded people do” (10). Her reaction is most likely not isolated to her, and may reflect something widely accepted within the academic community. Later, when travel outside of the Machine becomes forbidden, most do not find this distressing. In fact, it prompts one lecturer to give a speech about how first-hand knowledge is not only unneeded but undesirable: “Let your ideas be second-hand, and if possible tenth-hand, for then they will be far removed from that disturbing element--direct observation” (16). This point of view, however, is in stark contrast to Kuno’s speech, as he believes the only way to restore humanity is to tell them of his exploits on the hills of Wessex. On one side, direct experience is condemned, and on the other, direct experience is the only way to become truly human.

The artifact I would like to bring in, then, is the webcomic Dresden Codak; specifically, the Hob storyline. The storyline starts with this comic. I highly recommend everyone read the whole comic, especially those interested in transhumanism, futurism, or just science and philosophy. However, I’m going to point out two specific strips from the Hob series. The first is “Metropolis.” Not to give away too much of the plot, this strip explains things that happen in the future (or rather an alternate future) relative to the time of the main plot. It explains that humans created technology to meet all of their needs, and that they soon became dependent on it and grew unable to understand it, as the humans did in The Machine Stops. Unlike them, however, this future world had the transhumans, humans who had integrated machines into their bodies and were able to speak the machine language and act as intermediaries. As far as those who were not augmented were concerned, the machines turned against them, and the resulting war destroyed the mother computer and left the world in ruins.

The next strip I’d like to bring up is “Eloi.” It tells the story from a transhuman point of view. The humans come to beg for their problems to be solved, but the transhuman alludes to the fact that humans are dying out anyway. It’s clear that everything she says goes straight over the humans’ heads, especially when one utters, “Who is Homo sapiens? I don’t understand.” The chilling part of this strip, though, is when the transhuman dismisses the humans with the cold line, “We can give you anything you want. Save relevance.”

The transhuman in “Eloi” may seem cold and uncaring, but from her point of view humanity has reduced her role to that of a glorified babysitter. Think back to the strip “Metropolis” where the story was told from the point of view of the human survivors who destroyed the machines. Think about how they refer to the transhumans. They say that they “sacrificed their humanity” and that their only role was to “ensure that the ever-expanding net still served humanity.” It shows that the common human did not view the transhumans as one of them, and only saw them as another way that the machines would serve them. Not only does that not seem fair, it does not seem to be how the transhuman in “Eloi” would see herself.  

With both works, The Machine Stops and Dresden Codak’s “Hob,” we see two differing points of view on transhumanity. In The Machine Stops, transhumanity is commonly viewed as a good thing, as people even begin to worship the Machine that gives them everything they could need. In “Hob,” people still enjoy what machines can do for them, but those who bridge the gap into transhumanity are seen as giving something up, becoming mere servants to the “real” humans. Both situations end in the destruction of the world.

However, the question I asked earlier is still valid: does becoming a transhuman make one less human? Putting aside the destruction of the world for a moment, the people in The Machine Stops had large advantages in technology. It seemed that sickness, hunger, and poverty had all been eliminated, except of course for those who were exiled. Their society had lost a lot of what we, and Kuno, might describe as humanity; love, a sense of adventure, etc. all seemed to be missing in the humans of that age. But does that make them worse off than people of today? One could argue that love and adventure are dangerous or potentially harmful concepts, and perhaps they are worth trading for technology that could end starvation and other problems.

As for the transhumans in “Hob,” they are obviously mentally superior to their human counterparts, and likely superior in other ways such as physical strength. They were able to understand the changes going on in the evolution of their people, and saw that humans were becoming outdated. The cold and callous way they treated the humans may seem inhumane, but humanity is obviously capable of similar callousness. For whatever reason, people of today often prefer not to think about the poverty, sickness, and starvation going on in other countries. The cold manner of the transhumans is not far flung from that.

The end of “Hob” has a more positive outcome than The Machine Stops, insinuating that both man and machine are needed in order for the two to advance to something greater. In a way, this could also be taken from The Machine Stops; the Machine could not run all by itself, and ultimately needed humans’ help in order for both to continue co-existing. The difference between the two works is that in The Machine Stops, humanity never got the chance to work past its ignorance and enter into a true partnership with the Machine.

Personally, I think humanity can only benefit from the pursuit of science, and from using what we learn from science to better the lives of people. Certainly, technological advances can cause us to lose certain traits. Even now, we see conditions like obesity linked with new technologies like television and the internet as people begin to have less reason to leave their homes for entertainment or information. But the internet is a valuable tool for knowledge and communication, something that most people would not give up for anything. Some would argue that a connection to nature is essential to humanity; farming, hunting, etc. were a way of life for all of our ancestors, though now a much smaller group participates in these activities. Society at large no longer has to work for its food. We have been disconnected from something that was very closely tied to the life of all people in the past, and that disconnect could be seen as a loss for humanity. But to me, it seems we’ve gained more than anything we may have lost. The average person may have no idea of the work that goes into bringing food to the supermarkets, but I don’t think that makes them disconnected from the human condition. If anything, this allows people to open up to new ways to be human, new ways to express ideas and live life. But if The Machine Stops is teaching us anything, it is that we can’t become too disconnected from the knowledge of these vital roots, or our society could crumble just as easily.

Saturday, March 03, 2012

Response 2 to Angela Daniels on Hiroshi Yamamoto's The Stories of Ibis

In our discussion of Yamamoto’s book, Angela asked the following question: “How can we use science fiction to disrupt the tendency towards binary thinking? Is it valuable to try to do so?”

It’s no secret that science-fiction has tried to change our minds about a great many issues over time. Science-fiction of course tries to extrapolate the course the future will take, but this is a method of speaking to the people of today, and The Stories of Ibis is a wonderful example of science-fiction with a strong message.

Ibis wants humans to break away from their binary thinking by making them realize that the differences between humans and AI’s are just that: differences. She succeeds in finally convincing the narrator of this, when at the end he thinks, “Who would feel inferior for not being able to run as fast as horses do? Who would feel resentful for not being able to fly as birds do? Like Ibis said, this was just a difference in our specs” (422).

Simply being different is not the same as being inferior. As the narrator notes, lacking the ability to fly does not make humans inferior to birds. However, there seems to be a weakness running in the blood of humanity itself that makes us instantly take difference to mean deficiency.

Just look at the ads denouncing high fructose corn syrup. High fructose corn syrup is an ingredient used in many different products in lieu of common table sugar. Many read the name of the ingredient, and because it sounds so odd and different, they have no trouble believing that high fructose corn syrup must be bad. However, high fructose corn syrup is basically the same as ordinary sugar when it comes down to it. Most people understand that sugar is bad for you when eaten in excess, and yet it seems most people are not crying out for common table sugar to be excluded from their sweet treats. Because high fructose corn syrup was labeled differently than table sugar people naturally jumped to the conclusion that one must be worse than the other.

So, binary thinking does exist, but how can science-fiction move to change it? Science-fiction works like a mirror, showing us an image that is dissimilar from reality, but similar enough to show us reality’s flaws. In The Stories of Ibis, we are looking at the relationship between AI’s and humans. In the works of Octavia Butler, which Angela brought up, we’re looking at the relationship between humans and aliens. At the end of the day though, these works are really critiquing how humans relate with other groups of humans.

It is natural for humans to fall into an “us or them” mentality. We like to belong to something larger than ourselves, whether it is an organization, a religion, a race, a nation, or what have you. We like to be able to say, “I’m part of X group.” And whether stated or implied, this affiliation is often augmented by the assumption that being part of X group makes one better than those not in X group. It may not be intentionally insidious; it can be as simple as “I’m part of the business fraternity, so that means I’m more qualified than those that are not.” This statement may or may not be true, but the important thing is that so many people do put faith in such claims, and being able to make these claims can make people feel better about themselves. 

However, is it right to categorize ourselves by organizing into cliques like these? Business fraternities are mostly harmless and can help people, but what about national divides? Many conflicts in history could be boiled down to “My country is better than your country, so my country deserves what your country has.” Binary thinking, or hierarchical thinking as the example from Butler would phrase it, definitely has its repercussions.

To address Angela’s second point, if science-fiction can make us look in the mirror and see our imperfections, should it? Should science-fiction even bother to try to change the way we think? Is there value in that?

Personally, I think that shaking one’s assumptions is always valuable. In order to grow as an individual, one must have one’s beliefs challenged at some point. If you never had to defend your point of view, you would end up as delusional as the humans in The Stories of Ibis who believe they are fighting a war against the robots.

Binary thinking may be completely bad, or maybe it does have its merits. But if we never acknowledged it and questioned it, we wouldn’t be able to grow past it. We have to face our demons if we are ever to control or defeat them.

  1.       In class, we spent a lot of time talking about breaking out of binary thinking by disrupting the binary with a third point, much in the way Butler’s work disrupts the gender binary by adding a third gender. However, is this any better? Does adding a spectrum of shades of gray really make it a better way of thinking, or is it just adding more ways to categorize things as better than or worse than others?
  2.       It is believable that science-fiction can influence a person’s point of view by exposing flaws in society. Is this influence always a good thing? Can science-fiction be harmful? Is science-fiction just another form of propaganda, for better or worse?

This is a clip from the original Star Trek episode “Let That Be Your Last Battlefield.” This part of the episode shows the hatred between members of different factions of the same race. The only difference between the two is that one faction is black on the right side while the other is black on the left. Spoilers: it turns out their entire species already killed each other in a massive civil war.

This is a strip from the webcomic Questionable Content. While its main subject matter is not AI’s, there are a few AI characters featured (one of which is the character reading in this strip). I think this strip in particular is interesting because, even though the main plot is not about the relationship between AI and humans, the mere presence of AI characters seems to call for an in-depth analysis such as this, if only to ask whether the relationship of AI’s and humans is one of equality or not.

This is an extremely powerful article entitled “The Hidden Message in Pixar Films.” I won’t spoil the main point of the article, as the author goes into great depth about his interpretations and they need to be read in full. But it goes along a similar vein of breaking down binary thinking about human vs. non-human, and proves that a work does not have to be specifically labeled as science-fiction to wrestle with ideas such as these.

Response 1 to Angela Daniels on Hiroshi Yamamoto's The Stories of Ibis

Yamamoto’s robots in The Stories of Ibis experience emotion in a wholly different way than humans do; they describe the way they feel by using a word and an equation in the imaginary/complex number system: “Love (5+7i)” (p. 334). To help illustrate just how foreign this sort of thinking is to humans, Angela developed an exercise in mapping the emotions mentioned by the robots in Stories of Ibis on a two-dimensional plane. It was an excellent idea in that it further drove home the point that humans, both in Stories and the readers, cannot fully grasp how an artificial intelligence would deal with emotions. The robots use a word that we might be able to understand, such as love or doubt, but they then quantify it with a “fuzzy” number that might approximate the degree to which they feel it.
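As a playful sketch of that exercise, one could literally encode each emotion as a point on the complex plane. Only “Love (5+7i)” comes from the book; the other values here, and the reading of magnitude as intensity and angle as character, are my own invented assumptions, not Yamamoto’s notation:

```python
import cmath

# Sketch of the robots' "fuzzy" emotion notation in The Stories of Ibis.
# "love" = 5+7i is from the book (p. 334); the other entries and the
# magnitude/phase interpretation below are hypothetical.
emotions = {
    "love": 5 + 7j,   # from the text
    "doubt": 2 - 3j,  # hypothetical
    "joy": 6 + 1j,    # hypothetical
}

def intensity(z: complex) -> float:
    """How strongly the emotion is felt: distance from the origin."""
    return abs(z)

def character(z: complex) -> float:
    """Where the emotion sits on the plane: its angle, in radians."""
    return cmath.phase(z)

for name, z in emotions.items():
    print(f"{name}: intensity {intensity(z):.2f}, angle {character(z):.2f} rad")
```

Plotting these points makes the foreignness tangible: two emotions can share a word yet sit far apart on the plane, which is roughly what the mapping exercise asked us to confront.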

This explanation is given in the context of a machine explaining that “human thoughts are digital” and that we humans want to see things in terms of right/wrong, good/evil, black/white, or any other binary that might be applicable to a particular situation. Angela asks how we might use science fiction in a manner similar to Yamamoto’s to disrupt this sort of binary thinking, and whether it is valuable to do so. Science fiction easily gives the audience a point of view that can be placed outside the normal binaries of human thought (Yamamoto uses robots, Butler has aliens: two common “others” in science fiction). By providing characters that do not necessarily think as humans do, science fiction authors/filmmakers/et cetera make the audience identify at some level with something decidedly non-human, even if it is only to be afraid of the aliens or robots.

An example of something that might not quite be “science fiction” is the comic book superhero, such as the X-Men. Most of the X-Men are human in the sense that they were all “normal” at some point in their lives and, through some sort of mutation, became more than human, perhaps transhuman or posthuman (Magneto calls them Homo Superior). The readers of these comics are able to identify with the characters because they are still human at some level: some still have human desires, such as finding love or just wanting to be treated like a “normal” human. Time travel is another, and perhaps one of the oldest, ways writers have been able to get their audience to step back from their lives and look at things differently. H.G. Wells’s The Time Machine not only coined the term “time machine” but also introduced society at large to a new form of literature. His Eloi and Morlocks probably represented the aristocrats and working classes of the late nineteenth century, respectively. Introducing something outside what we might consider “normal” is the easiest way to get humans to step outside themselves for a time, and perhaps step back into themselves with a new, enlightened point of view.

Different technologies throughout the ages have all had the hopes and dreams of humanity projected onto them, from hiding hair loss to printing replacement organs on an inkjet printer. Humans cannot help but dream of all the great things science can bring us in the future. The characters in the stories Ibis tells the narrator are all using technology to escape the limits of their everyday lives; even Shion, a piece of advanced technology herself, wishes to overcome her fear of death by turning to books, television, and humans to find a deeper understanding of her own existence. By telling stories to the narrator, Ibis hopes to reveal a truth about humanity’s relationship with technology. The narrator is led to believe that the robots only wish to care for their humans, and Ibis wants the narrator to tell his brethren about the robots’ goals. A great deal of the technology humans create is made to ease some facet of human life, and in doing so these technologies free humans to explore other things. Allowing humans to spend time exploring other ideas and concepts is beneficial. The fruits of that exploration may not always benefit humanity, but they occasionally do, and that makes it worthwhile.

Science fiction allows the audience to explore ideas and cultures in a way that can be entertaining while inviting the audience to look at things in a different light. Such attempts may not always succeed, but if a story gets even one person to see the world in a more enlightened way, then the attempt was worthwhile. One of my favorite things about science fiction is that the minds behind the stories know that what makes a piece of science fiction work is not multimillion-dollar budgets or elaborate plot twists (though those can help make the work engaging) but exposing the audience to different ideas while providing them a lens through which to look at their own world.

Followup Questions:
1. What would be a way to map human emotions using an understandable “fuzzy” system?
1a. Could that system be a single axis spectrum or does it need to be multi-axis? (Like Angela’s plot of the emotions mentioned by the robots)
2. Why do other genres of entertainment, while capable of disturbing the normal modes of human thought, do so less than science fiction?
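As one hedged sketch of what question 1a's multi-axis mapping might look like, the snippet below places emotions on a two-axis valence/arousal plane and assigns "fuzzy" degrees of membership rather than binary labels. The axes, emotion names, and coordinates are illustrative assumptions of mine, not Angela's actual plot or any established dataset.

```python
import math

# Hypothetical emotion coordinates on a (valence, arousal) plane,
# each axis ranging over [-1, 1]. These values are assumptions
# chosen for illustration only.
EMOTION_POINTS = {
    "happiness": (0.8, 0.5),
    "sadness": (-0.7, -0.4),
    "anger": (-0.6, 0.7),
    "calm": (0.4, -0.6),
}

def fuzzy_emotion(valence, arousal):
    """Return each emotion's degree of membership in [0, 1].

    Membership decreases linearly with distance from the emotion's
    point, so a state can be partly "happiness" and partly "calm"
    at the same time -- fuzzy rather than binary.
    """
    degrees = {}
    for name, (v, a) in EMOTION_POINTS.items():
        dist = math.hypot(valence - v, arousal - a)
        degrees[name] = max(0.0, 1.0 - dist / 2.0)
    return degrees
```

A state at exactly (0.8, 0.5) gets full membership in "happiness" but still retains small nonzero memberships in nearby emotions, which is what distinguishes this from the digital right/wrong thinking the machine in the novel describes.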

The Facial Action Coding System:
This is a relatively simple system that has been used to help computers comprehend human emotions based on the facial expressions we make (though not too reliably). FACS breaks movements and gestures down into codes (0 is the “neutral face,” 1 is raising the inner eyebrow, and 19 is sticking out your tongue). Once all of the movements in a particular expression are accounted for, a computer can make a guess at the emotion the human is portraying: happiness is 6+12 (cheek raising + raising the corners of the lips), while sadness is 1+4+15 (inner brow raised + lowering of the brow + lowering of the lip corners). We can teach computers to recognize our emotions, but they would probably never understand them.

Game:
Games With a Purpose offers a few different games that help teach computers to solve problems humans excel at. My favorite is the ESP Game: both players are shown the same image and must describe it, and if both players use the same word, they score points and move on to the next image. Each match is attached to that image and used to help computers identify images and, hopefully, return more relevant image search results.
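The ESP Game's matching rule is simple enough to sketch in a few lines. This is a toy illustration of the mechanic described above, not the real Games With a Purpose implementation; the function and scoring values are my own placeholders.

```python
def play_round(labels_a, labels_b, points_per_match=10):
    """Sketch of one ESP Game round.

    Both players independently label the same image; only labels
    they agree on score points, and those agreed labels become
    candidate tags for the image.
    """
    matches = set(labels_a) & set(labels_b)
    score = points_per_match * len(matches)
    return matches, score

# Two players describe the same photo; they agree on "dog" and
# "frisbee", so those words become tags for the image.
tags, score = play_round(["dog", "grass", "frisbee"],
                         ["dog", "park", "frisbee"])
```

The clever part of the design is that the scoring incentive (agree with a stranger) naturally filters out idiosyncratic labels, leaving tags most humans would accept.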

Thursday, March 01, 2012

Response 2 to Thomas Wynne on Yamamoto

Thomas posed the question of whether or not our fears of technology are well founded. Prior to reading the Yamamoto book I had never considered looking at technology from a perspective that did not see it leading to an inevitable doom and a world run by human-hating robots (thus, the Shaviro text fell more in line with the thinking I was used to). However, the Yamamoto book opened my eyes and allowed me to think of the human-technology relationship as something that could evolve and better our world.  

In the western world, our relationship with technology tends to be framed in a negative light. This ongoing narrative of what could happen if technology gets too smart has structured my own understanding of the world, and it is a stereotypical framing of the relationship that can be found throughout popular culture artifacts and political rhetoric. It is clear from the books we have read during the course that the fear of technology is not a novel concept. However, I would argue that the contemporary fear was heightened by the political rhetoric of the Cold War during the 1980s and the popular culture artifacts produced during that time.

It has become common knowledge among communication scholars that Ronald Reagan’s rhetoric was effective because of the way he spoke. He tended to rely on transcendence, moving away from the day-to-day, lived experiences of American citizens. Instead, he relied not only on nostalgia and a love of the past but on the inevitable progress that would lead to the future. What this type of political rhetoric allows us to do is think about a better place rather than deal with our realities. The effectiveness of Reagan’s rhetoric can be seen in its juxtaposition with Jimmy Carter’s: Carter asked the American people to refrain from overspending and to see their own role in a poor economy. Reagan criticized such rhetoric, blamed the poor economy on the Democrats, and asked the American public to follow his lead toward progress without having to “change” anything (this, of course, led to many things, deregulation being perhaps the most notorious).

Reagan’s rhetoric of transcendence also worked to create clear demarcations between good and evil (e.g., good vs. bad capitalism), and nowhere is this more evident than in his framing of the American citizenry against the people of the Soviet Union. A significant element of this binary was the American human versus the Soviet machine: Americans had autonomy and freedom, while Soviet citizens were more like trained machines that did everything in support of the communist government. Not only did this allow us to separate our identities from our enemies and continue seeing Soviet citizens as people very different from ourselves, but this rhetoric also affected artifacts of popular culture, specifically film, that reaffirmed the differences between humans and machines and that, I would argue, continue to frame our understanding of this relationship in contemporary times.

Two examples of such cultural products are The Terminator (1984) and Rocky IV (1985). In both films the villain is framed as a machine-like foreigner. In The Terminator the villain is literally a machine hell-bent on destroying the human race. In Rocky IV the villain is a Soviet Russian named Ivan Drago who speaks, trains, and fights like a machine. He is juxtaposed with the film’s hero, Rocky, who is framed as human in various contexts, never more clearly than during his training, when he becomes one with nature (running in the snow, chopping wood, etc.) and thus sets himself in clear contrast to the (Soviet) machine world of Drago.

There is more depth to both films, and I admit to doing a bit of a disservice to the plots of each. However, the duality between human and machine that was laid out in the political rhetoric of Ronald Reagan, and later mirrored in many of the cultural artifacts of the 1980s, makes clear that humans are good and machines are bad. Therefore, to answer Thomas’s question about whether or not our fears of technology make sense, I would say that whether they make sense does not matter. What matters is that this is how we, as members of the western world, have been conditioned to understand the relationship. Thus, Yamamoto’s understanding remains very novel to us two decades after the end of the Cold War.

I would like to offer some follow-up questions about how contemporary political rhetoric has changed our understanding of the human-technology relationship. 

1. What effects, if any, has the war on terrorism had in our understanding of this relationship? Has it changed as a result of 9/11? 

2. How do contemporary American films represent this relationship? Are we getting any closer to Yamamoto’s stance, or have we remained loyal to Reagan’s rhetoric? 

Below are two clips worth viewing. The first is a pre-1980s understanding of the human-technology relationship in the form of The Jetsons. I would argue this is much closer to Yamamoto’s beliefs. The second is a clip from Rocky IV as Drago’s training in a computer lab-like environment is juxtaposed against Rocky’s training in nature.

Wednesday, February 29, 2012

Response 1 to Thomas Wynne on Yamamoto

In his post regarding Hiroshi Yamamoto’s Stories of Ibis, Thomas Wynne asks the question of whether humans should fear advanced technology, such as that depicted in Stories of Ibis. He suggests that these fearful humans are perhaps actually afraid of mankind’s irresponsibility. He implies that it is perhaps this irresponsibility that ought to be feared, not technology.
For the purpose of this response, let us consider robotic machines, which play an integral role in Stories of Ibis. Without human direction, robots do not function, as all function, no matter how rudimentary or automated, requires some degree of human input. At this level of technological advancement, there seems little to fear other than what humans will the machines to do. The argument is far more interesting when considering self-conscious robots. Are humans in danger when machines become self-conscious, as Shalice does in the story “Mirror Girl”? Consider Yamamoto’s description of Shalice’s nature after becoming self-conscious: “She wanted to become friends with people. To live with humans and share in their happiness — that was what Shalice wanted” (Yamamoto, 118). In this story’s vision, Shalice — who is arguably representative of a larger machine population — is not a threat, but rather a compassionate artificial human with the ability to love, despite the fact that she is confined to a virtual space. Similarly, consider Ilianthos’ personality in “Black Hole Diver.” As the story is set in “the distant future,” the story implies that Ilianthos is a self-conscious machine. Still, it longs to be human. Ilianthos narrates:
“I am not programmed to feel lonely or bored. And yet, a faint, forlorn feeling that I am alone in the void lies within me, and it seems stronger now. It is as if Syrinx’s departure has allowed me to understand what humans mean by loneliness. Perhaps it is only in my imagination. But I choose to believe it is real” (Yamamoto, 151).
Of course, the antithesis of these scenarios is best exemplified by the Terminator film series. In this grim example, machines turn on their creators following their attainment of self-consciousness, striving to annihilate the human race. Thus, to answer the overarching question Thomas poses, it seems necessary to consider robotic nature: a natural predisposition of machines to act a certain way, analogous to human nature.
But what dictates robotic nature? Is it the abilities given to these machines at their creation? Is it the purpose for which they were created? Is it the way humans treat them? Any number of questions could follow. However, I would suggest robotic nature largely draws from human actions during robot creation. Shalice is created to serve as a companion and is given emotional characteristics like those of humans. She must be nurtured properly to develop a well-rounded personality. (This takes place before her attainment of self-consciousness, but one can argue that the same nurturing would be required after such attainment.) Ilianthos’ personality is not as malleable as Shalice’s, but its role is to serve humans in a kind and respectful manner. In the case of Terminator, humans created machines to kill. It makes little sense to program a killing machine to have compassion, and it should come as no surprise that the self-awareness of these machines leads to copious amounts of death. The key difference between the robots in Stories of Ibis and those in the Terminator films is that the former are made to love and nurture, and the latter are made to kill without mercy. Perhaps this can account for why the robots in Stories of Ibis are more likely to strive to be human; their existence is to love and help humans, and thus a favorable view of humans must be given to these robots when they are created. However, one is then left with the question of how robots evolve (consider the first follow-up question). Such evolution could lead to trouble if machines have the ability to do harm; if Shalice were to turn evil, the damage she could inflict would be minimal, as she is confined to a virtual space.
Thus, perhaps the answer to the larger question — with robots in particular — is that it isn’t technology that should be feared, but rather human irresponsibility should be. It takes a human to create a machine. Whether that machine loves or kills is up to the creator.

Follow-up questions:
1. If robots possess human traits, could they develop human tendencies, such as destruction of their own kind?
2. What happens when machines built to love begin to malfunction? (Consider HAL from 2001.) Is this still a case of human irresponsibility, or should the machine be held accountable?

1. “The Universe on My Hands” ends with a quote about the authenticity of friendships forged in virtual space. In the film Catfish, the protagonist forms a relationship online with a person he believes to be a young woman. However, SPOILER ALERT, he finds the woman he has been chatting with is actually middle-aged and married with children. What had been real feelings are shattered when he discovers the truth, leading to questions of the authenticity of virtual relationships.
2. Thomas explored allegory in “Mirror Girl,” which related the impressionability of artificial intelligence to that of children. He explores Yamamoto’s implied question of “How would a new intelligence be anything but innocent until it is taught otherwise?” In this deleted scene from Terminator 2, the character of John Connor tries to teach the terminator how to smile. Emotion is entirely foreign to the terminator, and in this respect, even the terminator is innocent.
3. Science fiction will soon be a reality! (Or so Google’s executive chairman says.) This story, published a matter of hours ago by The Associated Press, directly addresses the concerns at hand. Consider how the woman in the audience is concerned that future technology could be dehumanizing. However, Eric Schmidt gives a brilliant response emphasizing that electronics have on/off switches. Humans are in control.