week 1 readings

The first thing I thought of when reading Norman’s essay was: “omg Microsoft Kinect!”

I don’t know how many people have watched these yet, but at this year’s E3 conference Microsoft demoed more Kinect Sports videos for your entertainment:

http://youtu.be/OYLp3ml9KJk (skip to 3:18 for the football match)


http://youtu.be/UDMJbeqmmi8 (for srs, a very serious Star Wars kid)

Most of the audience at that E3 demo was laughing at how stupid the games looked. There was a Kinect game that was supposed to mimic a football match: two guys, one running on the spot towards the screen while his friend yelled out positions, both staring at the screen. Another I remember is the Star Wars game, where a kid swung his hands wildly while the clone army walked towards him like zombies. It was so awful. I kept wishing he at least had a controller, because he looked beyond stupid swinging an invisible lightsaber. (Same goes for the chick swinging a golf-club-that-is-not-a-club.)

The lack of an object makes these gestures seem entirely illogical, because we normally associate an action with an object, much like how verbs tend to act on nouns. By removing the necessary instrument/object reference, you end up with a lot of gestures but nothing to associate them with, and that is difficult to get used to. By contrast, the Nintendo Wii has a Wii Remote, which lets the user feel like the action has meaning towards an object even if the object is dissimilar (in Wii Boxing, for instance, you are basically holding a remote and punching air, but somehow it feels better than punching nothing). I think that may be one of the reasons why the Wii sells better than the Kinect, even though Kinect’s motion sensors are ‘better’. This example illuminates one of the points Norman brings up: that gestural interaction needs to be better researched, and has its own time and place. The most successful games with gestural interaction tend to be those that *are* already based on gestures – for instance Kinect’s Dance Central is an upgraded version of the arcade-only Para Para dance machines, which were themselves an upgrade of the late-90s Dance Dance Revolution (you can buy the DDR mat for home PlayStation use). I really cannot imagine a text-heavy game like Scrabble or an old-school RPG like Suikoden or Xenosaga being played purely on gestures – would you ride a chocobo by squatting in midair??? Granted, Nintendo is developing the Wii U, with its dual-screen interface, for the next Zelda game (I’ll be open to it, but wary about the actual execution), though that isn’t purely gestural either; it’s half-and-half again.

What struck me most about Don Norman’s essay is the familiarity of the argument – then I realised I had heard all this before, but in a different register. Gestural interfaces, Norman proposes, are unnatural because different gestures mean different things in different cultures, and interface designers have to write feedback loops into the system as well as make those loops visible for people to explore. Fair enough. Except that this argument (or most of it, anyway) has *always* existed as far as the human species is concerned. This problem is NOT at all unique to GUIs or computer interfaces; it is a problem that affects most of humanity. That is, the problem of language.

There is really no such thing as a ‘natural’ communication interface. Everything from the behaviour we practice to the language we speak to the gestures we make is acquired from infancy. To speak English, for instance, we have to learn vocabulary, grammatical rules and the structuring of sentences. Our social behaviour is ‘unnatural’ as well: we learn to use the potty and not to dump rubbish in the streets, we learn that we need to wear clothes in public, and so on. Or, as Piaget noted, ‘there is nothing natural about speech’, because our behaviour is mostly acquired.

I’m not very convinced that the way to overcome gestural interfacing problems is to develop help menus and tutorials or to standardise gestures, as he suggests; part of the joy of language is its richness and variety, and to standardise would be to lose much of that. Another possible way would be to create a ‘base’ physical interface (one that reads all physical gestures) and then build the semiotic interaction on top of it. You could use how often and when gestures are used as a kind of ‘fitness’ test to weed out movements, and keep a ‘library’ of common movements so that, say, a headshake gets properly interpreted in context. You would end up with something akin to a culture-specific gestural interface, or a Google Translate for gestural interaction. Plus, like most languages, one can always acquire it. Maybe next time, besides learning English and text-speak, we’ll be learning gestural languages in school as well.
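To make that layering a bit more concrete, here is a toy sketch of what the semiotic layer might look like. Everything in it – the gesture labels, the culture codes, the threshold – is invented for illustration, and it assumes the ‘base’ layer has already classified raw movement into labels like ‘head_shake’:

```python
# Hypothetical semiotic layer sitting on top of a base gesture reader.
# All entries and names here are invented examples.
GESTURE_LIBRARY = {
    ("head_shake", "en-US"): "no",
    ("head_shake", "bg-BG"): "yes",   # in Bulgaria a head shake affirms
    ("thumbs_up",  "en-US"): "approve",
}

usage_counts = {}          # how often each gesture actually gets used
FITNESS_THRESHOLD = 10     # arbitrary cutoff for the 'fitness' test

def interpret(gesture_label, culture):
    # Look the base layer's label up per culture, Google-Translate style.
    usage_counts[gesture_label] = usage_counts.get(gesture_label, 0) + 1
    return GESTURE_LIBRARY.get((gesture_label, culture))

def prune_vocabulary():
    # The 'fitness' test: weed out movements that hardly anyone uses,
    # so the active gestural vocabulary stays small and learnable.
    for key in list(GESTURE_LIBRARY):
        gesture_label, _culture = key
        if usage_counts.get(gesture_label, 0) < FITNESS_THRESHOLD:
            del GESTURE_LIBRARY[key]
```

The design point is that the base layer never changes across cultures; only the lookup table does, which is what makes the gestural language acquirable like any other.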

Learning, or rather self-replication, is the crux of the next reading, by Bill Joy. He paints some truly horrible scenarios based on the development of three technologies (GNR: genetics, nanotechnology and robotics): robots or augmented post-human cyborgs that take over the world by being more efficient and adaptive than human bodies; molecular electronics/nanotechnology that creates weapons of uncontrollable destruction; and genetic modification of bodies and foods that introduces unknown weaknesses, so that the human species self-destructs.

What really drives me up the wall about the essay isn’t so much the ‘omg apocalypse!’ scenario, but that he oversimplifies and makes many assumptions that really need to be questioned. First, he never defines what ‘natural’ is. What is natural? What constitutes a natural being? How does one define ‘natural/organic’ versus ‘machine’? His working definition seems to be: ‘anything that can reproduce = natural, therefore if a machine can reproduce by replication it gains the status of human/organic’. It’s a fairly workable definition except that: 1. not all reproduction is replication (i.e. we are not replicas of our parents); 2. replication does not necessarily give an evolutionary advantage (the same weakness as in GM crops, which aren’t plague-resistant precisely because they have no diversity); 3. it reduces everything to a Nature vs. Machine dichotomy, when in fact there is a huge grey area and it isn’t always one or the other.

The Nature/Machine debate goes all the way back: to Aristotle’s silver chalice (where an object exists through four causes), to Heidegger (for whom a human views a tool as an extension of the body, ready-to-hand, rather than as an apparatus, present-at-hand), to more recent work like Lévy’s ‘cosmopedia’ (a predecessor to Wikipedia, where humans collectively upload their intelligence via machines), and to the Latour/Haraway dialogue. Latour’s actor-network theory gives *both* objects and humans ‘agency’, the ability to affect and leave traces behind; he is one of the few people who really espoused the idea that, within a network, there is very little difference between human/object/machine, only varying levels of agency. Haraway’s cyborg manifesto examined the more ‘spiritual’ side of how a post-human society of ‘cyborg’ beings might exist in the future. I personally think Bill Joy’s view of robotics is rather dated, because we are already *in* that world and have been for a long time. For instance, what does he think of prosthetics research? Prosthetics have been around for a long time, and I can’t imagine a wooden leg being any more ‘natural’ than a mechanical one. Does he feel that a man with a mechanical limb is any ‘less’ human because his limbs are augmented by machine? Are people with wheelchairs, mechanical arms or steel-braced spines less ‘human’ because of the ‘machine’ in them?

The second problem I have with this text is the word ‘intelligence’ – why does he treat all intelligence as equal? Ants are intelligent. The cockroach I fail to kill is intelligent. Plants are intelligent. My cat, who knows when dinnertime is, is also intelligent. My computer is intelligent too: it knows to sleep after 15 minutes and lets me watch YouTube. My classmates, tutors and friends at Parsons are intelligent. Are all of the above intelligent? Yes. Are they all *equally* intelligent? Hell no! (The cat is always the smartest.) And that’s what really bothers me: he never compares intelligences. Intelligence is a scale, a qualitative thing, rather than a set quantity or object.

So let’s assume you create an intelligent computer. This computer runs on genetic algorithms, an evolutionary computing strategy, which means it can chart possibilities, learn methods and adapt to unknown scenarios. It works like this: 1. say your goal is ‘the fastest way to the finish line’; 2. feed in all the candidate data; 3. write a fitness test, e.g. ‘all solutions must be under 10 minutes’; 4. the computer takes all the candidates, generates billions of possibilities/mutations and tests them against your fitness test so only the ‘strong’ solutions survive; 5. a solution is found, or solutions are recombined until a sufficient one is found (a toy version of this loop is sketched below). All of Bill Joy’s nightmares come true – here is a truly ‘intelligent’ computer that can evolve, create new solutions by combining old ones, test for viability and even adapt to various unknown situations. Oh, horror.
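Here is a minimal, hedged sketch of that loop. The ‘finish line’ (a string of all zeros), the cost function and every parameter are invented stand-ins, just to show the mutate/recombine/select cycle in runnable form:

```python
import random

GOAL = [0] * 20                     # pretend the "finish line" is all zeros

def cost(candidate):
    # The fitness test: lower is better. ("All solutions must be under
    # 10 mins" would just be a threshold on this number.)
    return sum(abs(g - c) for g, c in zip(GOAL, candidate))

def mutate(candidate, rate=0.1):
    # Random mutations supply the "billions of possibilities" (here, fewer).
    return [random.randint(0, 9) if random.random() < rate else gene
            for gene in candidate]

def crossover(a, b):
    # Recombine two surviving "strong" solutions into a new candidate.
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

# Seed a population of random candidates.
population = [[random.randint(0, 9) for _ in GOAL] for _ in range(50)]

for generation in range(200):
    population.sort(key=cost)           # rank by the fitness test
    survivors = population[:25]         # only the 'strong' half survives
    if cost(survivors[0]) == 0:         # a sufficient solution is found
        break
    # Refill the population by recombining and mutating survivors.
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(25)
    ]

print("generation", generation, "best", population[0])
```

The thing to notice is that all of the ‘intelligence’ lives inside cost(); the loop itself is dumb bookkeeping, which is exactly the weakness discussed next.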

But what he neglects to mention is that *all* computers run on code, and code by itself has constraints. You can never have a limitless computer, because it is *always* restricted by the rules of its code. (This isn’t only a machine thing; as humans we are always limited by mortality.) You may have an infinitely intelligent computer that can plough through billions of data points in seconds, but this infinitely intelligent computer can only be intelligent in ONE and only ONE way. Machines aren’t like humans in the sense that we can ‘read’ things based on generalities. Take doing laundry: in humanspeak we just say ‘go do your laundry’; in machinespeak we say ‘on this day, at this hour, here are the ways in which you can do laundry, and I want you to optimise the best way of doing laundry’. If the task changes midway from laundry to sweeping, we have to rewrite the entire fitness algorithm and its constraints for the machine to cope with the new goal.
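In terms of the toy GA above, the point looks like this. The chore names and scoring are invented, but the shape is real: the objective is hard-coded, and no amount of evolution against one fitness function teaches the machine the other:

```python
# The machine's entire 'understanding' of laundry is this one function.
def laundry_fitness(plan):
    # Rewards plans that wash before drying and dry before folding.
    order = {"wash": 0, "dry": 1, "fold": 2}
    steps = [order[s] for s in plan if s in order]
    return sum(1 for a, b in zip(steps, steps[1:]) if a <= b)

# Change the chore and a human has to write a brand-new objective;
# the machine cannot derive sweeping from laundry by itself.
def sweeping_fitness(plan):
    return plan.count("sweep") + plan.count("mop")
```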

And that’s why the future DOES need us. It’s true: we are less efficient and more biased than our machine counterparts. We are worse at ploughing through billions of data points and at multitasking many processes at once. We aren’t even very good at finding solutions. What we can do, however, is verify that solutions actually work, and create the ‘rules’ a machine needs in order to perform. We are needed because of the constraints we write for the machine, and the same constraints that allow a machine to work also give us a failsafe: predictable conditions under which it stops.

So even if someone writes a goal like ‘machine, become smarter than every single human alive’, it is still a constraint, because: 1. the goal is bounded by ‘every single human’, so if an alien arrived the machine would have no mandate to out-think the alien; 2. we know the machine will stop at the last human alive; 3. even as it becomes smarter, it cannot do anything with its intelligence, because no goal says ‘become smarter and kill all humans’ – it has knowledge but no powers of execution.

Finally, I find the last bit on morals and the pursuit of science a bit whiffy of bullshit. This is a terrible analogy and I apologise for it (it’s 4am), but he reminds me of those Christian Evangelical groups who believe Abstinence From Sex Is The Only Way while everyone in high school is already ‘doing it’. You just can’t, CAN’T tell people ‘don’t be curious!’, because people will be, always, even when they *know* bad things will happen. Just as abstinence fails and Catholic high schoolers get pregnant, telling people to relinquish weapons and give up research out of ‘moral responsibility’ is a pretty poor tactic. As a whole, humanity has never cared much about long-term effects as long as we get what we want (be it high schoolers having sex, scientists doing their research, or PhDs taking weapons grants) NOW.

I personally believe Planned Parenthood has the right tactic for this kind of thing, because even if the context is different the motives are fairly similar (i.e. people with too much curiosity). The first approach is education: teaching safety and protection. Bad things like the White Plague happen when lab standards get lax, or when people aren’t aware of the hazards of material handling – knowing a little, but not enough. You want this information to be free and easily accessible until it is ingrained in everyone, so people don’t cook up grey goo in their backyards. So when educating scientists, add classes on ethical standards and responsibility alongside practical lab behaviour, so people know and see that their actions have real and painful consequences. Show them how, during Prohibition, some chemist who wanted his drink badly enough adulterated Jamaican Ginger extract with a plasticiser that turned out to be one of the most crippling organophosphates in the world – the same chemical family as nerve agents like Sarin.

Secondly, put in failsafes: make it law that all machines built on genetic or evolutionary systems must have ‘the red button’ – a stop/self-destruct command that overrides all other commands. Open-source it so everyone can use and build on it, then make sure everyone puts it into their programs in case of emergency.
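As a hedged sketch of what that might mean inside the toy GA from earlier – the file-based trigger here is invented purely for illustration, and a real standard would need something far more tamper-proof:

```python
import os

STOP_FILE = "/var/run/red_button"   # hypothetical: pressing the button creates this file

def red_button_pressed():
    return os.path.exists(STOP_FILE)

def evolve(population, fitness, generations=1000):
    for _ in range(generations):
        # The override is checked before every generation, ahead of any
        # other goal, so no optimisation step can route around it.
        if red_button_pressed():
            raise SystemExit("red button pressed: halting evolution")
        # One generation: keep the fitter half and duplicate it
        # (mutation omitted to keep the sketch short).
        population.sort(key=fitness)
        half = population[:len(population) // 2]
        population = half + [list(c) for c in half]
    return population
```

The point of open-sourcing it is exactly the constraint argument above: a standard, inspectable stop rule is one more line of code the machine cannot optimise away.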

Finally – and I don’t know if this is even possible, since it would require far too much cooperation – a greater move towards transparency and global cooperation would be good. We live in a world that gets smaller and smaller as countries become more and more dependent on each other. More transparency means more knowledge shared and fewer ‘secrets’ that could prove disastrous. Take SARS in China: because the government kept the spread quiet, people didn’t know what symptoms to look out for, and the disease spread to other countries while claiming victims. Much of this could have been prevented if governments had been more transparent about what was happening and had cooperated by sharing the information they had. In the same way, governments need to share their research, especially on genetically modified species (which can potentially interbreed with native species, or overrun them and kill off diversity, as happened with introduced species in Australia).

