All four articles in this week’s reading express fear, uncertainty, or even doom, whether about the use of technology, its long-term social implications, or human survival. If technology creates so much uneasiness, why do we keep producing it? Are we losing focus on what is really important? Is the innovation of new technology done just for the sake of creating, or is the aim to improve our quality of life? I feel these questions need to be explored in order to have a clear understanding of my role as a designer and the implications of my work for future generations.
Fred Vogelstein’s piece illustrates the tension between creating a useful interface for people and using it as a tool to generate profit. Although Facebook set out to create a “personalized, humanized web,” its focus gradually shifted to increasing the company’s monetary gain. Humanity quickly dissipates in a scenario where making a profit takes higher priority than responsibility to the user. Facebook’s competition with Google further exacerbates this loss of focus. Although Google prides itself on not exploiting its customers’ personal data, it still, disturbingly, takes advantage of Internet search history for targeted advertising.
There is definitely something secretive and deceptive about the way the Internet functions. We are encouraged, and given multiple platforms, to lay our information out on the table, information that we can never take back. This seems unjust, because I believe that all humans are fallible. The issue is discussed at length in Rosen’s article, where he uses the example of Stacy Snyder, who lost her job and ruined her career over an inappropriate photograph on Facebook. In the end, it is always the user who gets blamed. Is that really fair? Do innovators not have a responsibility to save users from themselves? This brings to mind an old English proverb: “the first faults are theirs that commit them, the second theirs that permit them.” Even though it is the user who gives life to technology, I feel that responsibility should be shared with the producer. I strongly agree with the idea of “reputation bankruptcy” presented in Rosen’s article, where personal information is cleared every ten years. We must be compassionate and empathetic; are these not the qualities that make us human?
Having presented the above philosophical conundrums, it is easy to understand Bill Joy’s outlook of doom for the human species as a whole: he believes that humans will grow ever more dependent on technology in their everyday lives as technology “gradually become[s] immortal, intelligent robots.” Fortunately, I have a more optimistic outlook on the future and on the survival of the human species. As machines become more intelligent, so will humans. Humans have a natural ability to adapt. Because technology is ubiquitous, no one will fall behind its growth in our increasingly globalized society. The advance of robotic intelligence seems to have been, and will remain, parallel to human intellectual progression. Therefore, humans will always be able to control what they produce; the problems that keep arising will be problems not of technology but of morality.
Moral responsibility is what I am exploring here as I plan to step out into the sphere of technical innovation and development. After carefully reading the assigned articles, I believe that understanding the human psyche is a very important factor in creating interfaces in the 21st century. There is no doubt that our lives are constantly complicated by technological possibilities and their implications, but we cannot live in fear, shift blame, and point fingers. There is no going back to the old days of a computer-free society. Therefore, we need to find a way to live with what we have created in a more civil, compassionate way, because the only thing that can destroy the human race is humans themselves.