“Die Gedanken sind frei” – Unknown, a German saying meaning “Thoughts are free”
(for the moment at least)
Over the past few weeks I’ve been actively testing Google Glass. I didn’t realise how few of these devices are actually around until many of my friends asked me to tell them more about how it feels and what you can do with it, so I decided to write an essay about it.
About 20 years ago, I was lucky to get my hands on one of the first mobile phones, and then a few years later on one of the first smartphones (an O2 XDA) when they launched in Germany. Looking back, I realize how much early exposure to new technologies shapes our thinking about the future and about what may come next. So when Google came out with the Explorer program, I immediately applied for it and was happy to finally get to play around with Google Glass.
Note that it wasn’t easy to get Google Glass at all. If you try to get your hands on Google Glass the way I did, you’ll eventually end up with a US address, a US credit card and devices registered with the US-based app stores. Let’s just say: Google is turning you into a US resident by the time you have Glass in your hands, and this is one of the things I hope Google is going to change very soon.
Why is focusing on the US alone – even at this point – a problem?
First of all, if you’re living or travelling outside of the US, you don’t truly own the device, or you’re at least at constant risk of it becoming (as my six-year-old nephew put it) just a neat “new pair of sunglasses”. In the terms and conditions of the program, Google reserves the right to deactivate Glass remotely. Though I paid $1,500 for it, they can still do that (they might have already done so after reading this article) and they won’t have to refund you.
Is it right or wrong? I don’t know. But Google isn’t doing itself a favour by calling it an “Explorer” program when, the moment you start to have fun and explore not just Google Glass technically but, with it, the world, you effectively risk running into trouble one way or another.
Secondly, the focus on the US is understandable for a number of reasons from Google’s perspective, but for the rest of the world – and taking into account my personal experience of having early access to new technologies – it surely creates an imbalance.
This holds true in terms of competitiveness between developers, but also in relation to simply being a citizen of planet Earth. If you’re a developer with access to Google Glass, you could be way ahead of the game already. If you’re not, you can still think as creatively and innovatively as you want, but without proper access to the real experience, it’s just not the same. And while the world certainly has enough problems simply getting food, water, education and care to all its citizens, early access to new technologies on a global scale will eventually play a more important role in the future, as it could significantly aid and accelerate progress on those other matters.
Getting started
Google Glass arrives in a nice gift box which immediately sets expectations high. Clearly everybody has learned from Apple how important the packaging of new products is. Google has done a great job keeping it simple yet informative, and elegant yet environmentally friendly.
The first problem I encountered was that the USB charging cable which was included did not charge the Glasses. You’d think you’d figure this out on day one, but my human mind tricked me into believing it was the battery or the Glasses themselves that were at fault, or perhaps the fact that I was outside the US when powering it up for the first time. A simple solution came with replacing the USB charging cable – so in case Google Glass isn’t charging for you, try that first!
Once Google Glass was charged, it presented itself with a fairly simple and clear guide explaining the basic swiping gestures (the right side of the frame is touch-sensitive and reacts to up/down and forward/backward finger movements) – I felt sorry for my left-handed friends.
There are different ways to connect Google Glass to the internet: either via the Android or the iOS version of the MyGlass app (you must be registered in the US iTunes app store), or by connecting it to your local Wi-Fi network. The latter option is rather complicated, as you first have to create a QR code containing the details of your network before Google Glass can scan and use it. When Google Glass hits the market, I am convinced they will have to solve this differently if they want to reach the masses; anything else is simply neither practical nor feasible.
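For the curious: the QR code is nothing more than a short piece of text encoding your network’s details, which Glass reads with its camera. Below is a minimal sketch of generating one yourself, assuming the common Wi-Fi QR text format and the Python qrcode library; the network name and password are placeholders only.

```python
import qrcode  # third-party library: pip install qrcode[pil]

# Common Wi-Fi QR payload: security type (T), network name (S) and password (P).
# Placeholder values - substitute your own network details.
payload = "WIFI:T:WPA;S:MyHomeNetwork;P:my-secret-password;;"

# Render the payload as a QR image that Glass can scan with its camera.
qrcode.make(payload).save("glass-wifi.png")
```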
Once you’re successfully connected to the internet and logged in with your Google account, though, the fun – and getting a glimpse of the future – can begin.
First interactions
The swiping gestures are reliable and work well, though they’re not as intuitive as what you’re used to from a smartphone. A simple improvement would be to make it very clear during the first tour what the main “back” swiping gesture is and (possibly in Google’s own interest) to relate this to the back-button behaviour on Android smartphones.
Google has also equipped its first Glass device with basic voice recognition features: “OK, glass” is the phrase that is supposed to make Google Glass realize that you’re going to issue an order and that it should get ready to execute your command. It’s just not clear to a first-time user what to do next after issuing the trigger phrase. If you say “OK, glass”, you have to quickly conclude your sentence with what you want it to do, e.g. “take a picture”.
The one thing I (and everybody else I’ve seen playing around with Google Glass) badly missed is a voice-recognized “back” command. I can swipe to go back, but why can’t I simply say “back” and have Google Glass do it? Perhaps Google wants to limit voice recognition to a few terms and do those well, though I’d argue that unless you’re Arnold Schwarzenegger, “back” would be a comparatively easy term to add.
That said, I’m sure that many people will have a tough time simply getting the “OK, glass” phrase right and properly voice-recognized. You’d think it’s easy enough, but Google will have to handle many variants of the phrase in different languages and dialects to enable people to have a great first experience. In the area I originally come from (near Stuttgart, Germany) a very particular type of German is spoken (in fact: Germans don’t call it German!) whose accent is not only tough (or impossible) to get rid of, but even extends into other languages such as English. Swabian (Schwäbisch) is the dialect I am referring to, and when a Swabian-born German speaks English, one could rightly refer to him as speaking Swenglisch instead.
Like good old Swabia, there are many other places around the world with similar challenges. People are usually proud of their heritage and local tongue. If Google Glass doesn’t recognize them well when they simply speak the “OK, glass” phrase (which is likely down to the user not paying attention to the subtleties of the English language), people may not feel welcome or may even feel embarrassed. I’ve witnessed this several times with friends from different backgrounds and nationalities. They didn’t say anything to me and we had a good laugh about it, but I could clearly observe a correlation between their interest and excitement about the product before they had used it for the first time and after they realized how little it seemed to understand them. Google would do well to take this point seriously by launching Glass both globally and locally.
Localized voice recognition (even accounting for accents) is a fickle beast and may be hard (if not impossible) for Google to pull off, but a personal touch like this would give people courage in overcoming some of the oddities that naturally occur in daily life when you start exploring Google Glass. If you’re the only one in a room with those glasses on, you’re already the object of curiosity, envy, uncertainty or fear – depending on the group of people you’re with.
Once you start talking to yourself with “OK, glass”, you could easily be ostracized and possibly even be considered an idiot by the rest of the group. In my opinion, the real breakthrough of wearable technology, whether it’s glasses or something else, will come only if we can create a safe and secure one-way thought bridge between our brain and our devices, one that can be used to send commands from a human to a machine but (at least for now) not the other way around.
Sounds too far out there?
Well, it already exists and it’s only a matter of time until it’s ready for prime time.
Compare it with the good old concept of computers, where the input device (1 – keyboard) was different from the output device (2 – screen) and the computing power was again held in a different box (3 – storage, RAM, motherboard etc.). Google Glass fundamentally challenges this separation as it has all three in one device; plus it is powered by the cloud, thus providing practically unlimited computing power and data access, even today.
My first experience with Google Glass therefore reminded me of an early prototype of a so-called “memex”, or personal outboard memory which Vannevar Bush famously imagined in his 1945 article “As We May Think”.
Not just Google, but several other companies, such as Evernote for example, have embedded the creation of a memex deeply into their purpose and destiny. I just hope that it won’t be one single company owning a fair share of our minds in this respect, but instead an active collaboration between them that will eventually lead to the creation of such a memex, to which – so I hope – Unified Inbox will be able to be a major contributor as well.
Where Google Glass could make a real difference is in its ability both to work as an identifier of what we see and look at (without having access to our innermost thoughts) and to be the display (2) through which we consume the enriched and contextualized information, or access deeper content related to it from the cloud or other (possibly wearable) technology. The name “Mirror API” given to the Google Glass developer program therefore suits it well.
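For developers, this mirror model also means you don’t write apps that run on the device itself; your service simply pushes contextual “cards” into the user’s timeline over HTTPS. Below is a minimal sketch of inserting such a card via the Mirror API, assuming you already have an OAuth access token with the Glass timeline scope; the token and card text are placeholders only.

```python
import requests

# Placeholder OAuth 2.0 access token for a user who has granted the
# https://www.googleapis.com/auth/glass.timeline scope.
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"

# A simple text card to be mirrored onto the user's Glass timeline.
card = {"text": "Hello from the cloud - enjoy exploring!"}

response = requests.post(
    "https://www.googleapis.com/mirror/v1/timeline",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    json=card,
)
print(response.status_code, response.json())
```

Once inserted, the card simply shows up in the wearer’s timeline next to everything else they swipe through.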
What’s missing for me to take it to the next level is the previously mentioned thought-bridge which would allow us to communicate our wishes and desires to the device at high speed while being at ease that the device cannot read more from our minds than what we want it to. Recognizing the thinking pattern and brain waves behind “OK, glass” would therefore be a great first step towards a more practical implementation of wearable technology.
One good side effect of communicating with technology by “trained thought transmission” could be that we naturally train ourselves to discipline our thoughts more. From the cradle to the grave, teaching this to ourselves and to future generations seems to have become a lost art now that we’re overloaded with information and media. Who knows, maybe the very same technology that made us lose part of what makes us human will one day help us find our way again.
A scary new world?
One of the worrisome moments you might experience when you use Google Glass for the first time is realizing how much data is already contained in your Google login. Though it is no different from using your Google account on your Android device, logging into Google Glass very bluntly shows you the power Google already has in and over your life. Amazing as this is, it is equally scary. Luckily, Google gives Glass users a remote-wipe capability, which I tested and which works fine.
That said, it remains both an illusion of privacy and an island of openness. The data still exists on all your other devices and in the cloud, while at the same time you’re unable to receive all your messages in one single place or view. Google is making a great effort combining SMS, Gmail and Hangouts, but with Glass being so intrusive on your attention and able to interrupt you more severely than any smartphone ever could, I am convinced that some unification of apps, notifications, alerts and messaging at the very least will have to occur for Glass and other wearable technology to make us more – and not less – productive.
The worry many have that a user of Google Glass will invade other people’s privacy is something I cannot quite follow after using it in practice. While certain features like winking to take a picture certainly sound like spyware, they’re actually really cool, practical and not that different from discreetly taking a picture of somebody with your smartphone. For the sake of transparency, Google could simply add a red or green light to the front of Glass which is on whenever a picture or a video is being taken, and otherwise off. I wonder whether people would have more peace of mind and whether Glass would be more easily acceptable in society that way.
Society will – either way – be polarized by this new world. There will be people pushing the boundaries towards more use of technology in our lives while others will advocate against it. History shows that when there is enough perceived benefit, especially in the form of convenience, the majority of people tend to easily overlook the disadvantages – and that such benefits always come with a price. The question then will be: what will governments do?
The opinions that exist on embracing technology too much or too little aren’t easily moved on either side. In fact, they’re opposite poles which in the worst case could one day even lead to an uprising within our society. The new dividing line may therefore be called neither left nor right, neither bottom nor top, but be defined simply by our (in)tolerance of how much we see ourselves merging with technology, or potentially how much technology is merging with us. Science-fiction series such as Almost Human, Intelligence or the remake of Battlestar Galactica have been debating this on public TV for some time now.
But what is the difference from what we can do with a smartphone today? Technically it may not appear to be that much. Usability-wise, however, it’s a big difference when you don’t have to stare at a display and theoretically have your hands free while using it. Convenience is a factor: with Glass, Google has created a device where the input–output process is so intertwined and connected to the human mind via the eyes that the quote “The eyes are the window to your soul” takes on a very wide scope. I’m not going to argue that we’re opening our souls for technology to enter just yet. Let’s just say that with Google Glass it is much harder to track, and less obvious to bystanders, what we’re doing with the device compared to a smartphone. Let’s also note that the memory of technology is far more replicable and permanent than our conscious human brains appear to be.
One thing I want to be very clear about: despite raising some points of criticism in this article, I sincerely commend Google for having created Glass. It is truly innovative, and we need more real innovation from big companies that are willing to take it to market instead of resting on their laurels.
I believe Google was also super smart in its marketing to simply take the term “Glass” and turn it into a product name. I’m actively wondering how other manufacturers of similar devices will be able to beat the simplicity and generic use of the term. Every once in a while a company has the opportunity to own a space based on the core value its product brand conveys, and Google is doing just that with Glass. I found it incredibly hard, writing this essay, to put a “Google” in front of every “Glass”. Simply referring to “Glass” and knowing it’s from Google just comes so naturally!
So what can you do with Glass and where is the link to other wearable technologies?
Despite there not being many apps to choose from at this point, one can already pick up on Glass’s usefulness. Apart from standard stuff like taking pictures or videos and sending them around, one quickly discovers that, for instance, getting driving directions is a vastly different (more natural and overall better) experience than doing the same with a smartphone or via the GPS navigation system in your car. It’s simply awesome to become one with the road while at the same time being one with the data that is showing you the directions.
I can certainly see innumerable useful applications for the device in nearly every aspect of our daily lives. The ability to record, analyze/process and provide individualized, contextual data back to the user almost in real time is – without exaggeration – phenomenal. For better or worse, our learning (and the definition of it) will fundamentally (have to) change if this type of technology becomes normal and (somewhat) accepted in society. We will be presented with the opportunity to deal with vast amounts of information in a short time frame. Whether that enables us to make better decisions remains to be seen.
One of the big drawbacks of Google Glass is the pitiful battery life and the need to still pair it with your smartphone. Eventually such glasses will have to stand on their own two feet and be independent of smartphones, especially if they truly want to make a difference. Other wearable technology, for instance vibration- or pressure-based chargers integrated into our shoes, could also help solve the battery problem, provided there is a universal way of transmitting the energy that would allow Google Glass to charge wirelessly – or would you want to run cables from the top to the bottom of your body?
This is also what brings me to the biggest of my personal concerns. Our lives have become flooded with radiation: cell towers, mobile phones, TV and radio waves, Wi-Fi, Bluetooth and many more. I’m not saying that such radiation will have a permanent influence on us as a life form on this planet. But if there is an influence, we’re only going to find out after generations. One thing is for sure: if we could see all the frequencies we have put into the world – frequencies that did not exist in this quantity before us – the world would look very different today than it did just a hundred years ago.
For myself I can only say that after using Glass for ten minutes, I tend to get pains on the right side of my head. The battery, which is integrated at the far right end of the frame, also becomes quite hot, creating a very noticeable temperature difference in that region of the head. Chances are, therefore, that there is not only a difference in temperature and radiation, but also in blood flow and attention (only the right eye interacts with Glass, not the left) with regard to our brain waves and activity.
Looking at the (largely unknown) influences that a technology such as Glass or other wearable devices already impose on us today makes me wonder whether we’re even remotely aware of what future we’re creating at the moment – or whether, as the poet Goethe put it in “Der Zauberlehrling” (“The Sorcerer’s Apprentice”), “Spirits that I’ve cited / My commands ignore” might well happen one day.
I do worry about the day when this may or may not happen, but for the moment I am simply curious as to how it all plays out, and Google Glass gave me an opportunity to force myself to look further ahead and challenge my thinking in this regard. I can only recommend that the reader do the same. Ultimately it prompted in me the question about the future of all connected, wearable technology as well as potential human implants: at which point is a human part of a machine, and when is a machine part of a human?
I have no answer, but would like to close with an old story by the ancient Chinese philosopher Chuang-Tzu, narrated in the 4th century B.C.:
As Tzu-Gung was traveling through the regions north of the river Han, he saw an old man working in his vegetable garden. He had dug an irrigation ditch. The man would descend into a well, fetch up a vessel of water in his arms and pour it out into the ditch. While his efforts were tremendous the results appeared to be very meager.
Tzu-Gung said, “There is a way whereby you can irrigate a hundred ditches in one day, and whereby you can do much with little effort. Would you not like to hear of it?”
Then the gardener stood up, looked at him and said, “And what would that be?”
Tzu-Gung replied, “You take a wooden lever, weighted at the back and light in front. In this way you can bring up water so quickly that it just gushes out. This is called a draw-well.”
Then anger rose up in the old man’s face and he said, “I have heard my teacher say that whoever uses machines does all his work like a machine. He who does his work like a machine grows a heart like a machine, and he who carries the heart of a machine in his breast loses his simplicity. He who has lost his simplicity becomes unsure in the strivings of his soul. Uncertainty in the strivings of the soul is something which does not agree with honest sense. It is not that I do not know of such things; I am ashamed to use them.”