Google Glass: Artificial Unconscious?
Google Glass is cool. But could it be philosophically dangerous?
Sixty years ago, Ludwig Wittgenstein famously wrote:
“Where does this idea come from? It is like a pair of glasses on our nose through which we see whatever we look at. It never occurs to us to take them off.”
The “idea” in this case was a particular philosophical theory about language. Wittgenstein was saying that other philosophers were making use of this idea without realizing it, unconsciously – so he chose the metaphor of glasses, which sit right before us, filtering what we see, even though we’re rarely aware of them.
Perhaps all technology so far has been an extension of the conscious parts of our mind. Computers let us do the things we consciously choose to do, only better: talk over distances, remember more accurately, see and hear more – on demand.
Google Glass and other smart glasses do all that as well, but I wonder if they’ll soon go one better: they could extend or modify our unconscious mental processes.
Consider, for example, smart glasses set up to detect anything that looks like a spider in front of their camera, and to overlay anything spotted with a flashing red box on the user’s display.
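Concretely, the setup might look something like this minimal sketch. The classifier is stubbed out with made-up detections (the `detect_objects` function and its labels are hypothetical stand-ins for a real vision model running on each camera frame):

```python
# Toy sketch of the spider-highlighting glasses: anything a
# (hypothetical) classifier labels "spider" gets a flashing red
# box added to the display overlay.

def detect_objects(frame):
    # Stand-in for a real vision model. Returns
    # (label, confidence, bounding_box) tuples; the data here
    # is invented purely for illustration.
    return [("spider", 0.92, (40, 60, 12, 12)),
            ("coffee mug", 0.88, (100, 30, 40, 50))]

def spider_overlay(frame, threshold=0.8):
    """Return flashing red highlight boxes for every
    confident spider detection in the frame."""
    return [{"box": box, "color": "red", "flashing": True}
            for label, conf, box in detect_objects(frame)
            if label == "spider" and conf >= threshold]

overlay = spider_overlay(frame=None)
# Only the spider is highlighted; the mug is ignored.
```

The point of the sketch is how little it takes: one filter rule, applied to every frame, and the wearer’s visual field is permanently biased toward one category of object.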
Now, I think this would make you obsessed with spiders. You’d notice them everywhere, and you’d find it hard to concentrate on anything else so long as one was in view. You might like them or hate them, but you would be preoccupied with them, and if you were scared of them, this spider-focus would certainly make matters worse.
Or again, your glasses could analyze the facial expressions of people you meet, perhaps displaying the results (85% happy, and so on) floating above their heads. But what if the algorithm were poorly calibrated, so that it wrongly told you that most people were angry at you? How would that affect you over the long run?
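A toy sketch shows how a small calibration error could flip the readout entirely. The raw scores and the bias term are invented for illustration (`emotion_scores` is a hypothetical stand-in for a real expression model):

```python
# Toy illustration of a miscalibrated emotion readout: a constant
# bias term pushes every face's scores toward "angry".

def emotion_scores(face):
    # Hypothetical raw model output: a genuinely happy face.
    return {"happy": 0.85, "neutral": 0.10, "angry": 0.05}

def miscalibrated(face, angry_bias=1.0):
    """Apply a systematic calibration error, then renormalize
    so the scores still look like valid percentages."""
    biased = dict(emotion_scores(face))
    biased["angry"] += angry_bias
    total = sum(biased.values())
    return {k: v / total for k, v in biased.items()}

scores = miscalibrated(face=None)
top = max(scores, key=scores.get)
# The displayed label is now "angry", even though the raw
# model scored the face as overwhelmingly happy.
```

Because the biased scores are renormalized, the display still looks plausible to the wearer – there is nothing on screen to signal that the calibration, not the face, is the source of the anger.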
I took these examples from recent psychological theories about the cognitive processes in spider phobia and depression (1,2). The original idea was that largely unconscious processes in the mind are (mis)directing attention. But it seems to me that technology could produce the same kind of effects.
These examples are just for illustration. No-one’s going to install an app that does such obvious harm. They show, however, the way in which smart glasses could – unlike existing technology – not just change what we do, but how we see, and therefore how we think.