Google Glass – Barely Scratching the Surface


Google Glass © Technews – A Store of Technology

MedTechBoston asks software developer Nathan Cowan to take a peek behind Google Glass. How do you actually use this newfangled device? And what are the implications for medicine, manufacturing and society as a whole? In the first of several blog posts, we hear about this new device through the eyes of an experienced software developer. In addition to Cowan’s user reviews, MedTechBoston plans to roll out a series of stories about Google Glass. Stay tuned!


One of the perks of developing software for the past fifteen years is that you can justify buying the latest and greatest gadgets without feeling too guilty about the money spent. New platforms intrigue both my engineering side (for obvious reasons) and my creative side, because they offer an entirely new way to interpret a platform or device without being too biased by what other people have already done. The opportunity to gain access to a brand new device doesn’t come along very often, so when Google expanded their “Explorer Program,” I jumped at the chance to purchase Glass. Google had the right idea in opening the program to a diverse set of participants. They encouraged not only developers, but also entrepreneurs, adventurers, artists and anyone else with a compelling reason to try out the new platform and potentially become an advocate for the device. I’ve been wearing Google Glass as much as possible to get a feel for its potential and how it might be used now and in the future. Glass offers a unique chance to really rethink solutions and to augment how we interact with our devices. I’m still learning a lot, and I anticipate that my thoughts and feelings on Glass will change over time. Nonetheless, I hope to present a view into Glass, albeit from someone who has barely started to scratch the surface of this device’s potential. Click here for a breakdown of the specs of my personal Google Glass device.

Glass, being a new form of wearable technology, is unlike a cell phone, tablet, or computer. Keeping that in perspective may spare you the initial disappointment I experienced during my first few days with Glass. As an avid Apple user spoiled by the sheer intuitiveness of Apple design, it had been a while since I’d picked up a device and not known how to use it within the first few seconds. After following the instructions to set up my Glass, my next thought was, “Well, what now?” It took a while to get used to interacting with the hardware and working with the software. I expected Glass to behave like my smartphone, and it certainly does not. Having used Glass for such a short period of time, I find it easier to say what Glass is not than what it is. In this first post, I’ll outline my top five takeaways, describe how to use the user interface, called the “timeline,” and toss around some ideas about the future implications of Google Glass in society. I’m sincerely looking forward to watching developers get creative and find completely new ways to use Glass.

Nathan’s Google Glass

So what does Glass have, and what is it missing? Here are my top five takeaways…

  1. Glass is not a standalone device – Glass is heavily reliant on an internet connection and a companion device, such as a cell phone. Glass has no built-in GPS, cellular radio, or messaging; instead it relies on your phone to provide these services. An app called “MyGlass” serves as the conduit between Glass and your phone’s various features. Fortunately, Google recently released an iOS version of MyGlass – until late December, the app was only available on Android. I don’t own any Android devices and the iOS app only just became available, so I haven’t used MyGlass much yet. My understanding is that MyGlass on iOS works much the same as it does on Android, except that messaging (SMS) is not yet supported on iOS. Also, certain types of Glass apps actually run entirely in the cloud and would not be available without an internet connection. Without MyGlass, you can still get Glass online via a Wi-Fi hotspot or Bluetooth tethering.
  2. Glass is not augmented reality – One of the misconceptions I had, and honestly what I was most excited about initially, was that Glass offers an augmented reality experience. Perhaps if you loosened the definition of augmented reality, you could throw Glass into that category, but I conceptualize augmented reality as a visual layer drawn over what your eye is seeing. The misconception comes from how the Glass display is actually used. It’s not a heads-up display, but more of a second screen. The display rests just above your line of sight when you are looking straight ahead; when you want to look at it, you glance up and to the right, out of the corner of your eye. Certainly, Glass provides a flavor of augmented reality, but it’s not the sci-fi vision in which users looking straight ahead see cues populating their field of view in real time.
  3. Glass is probably not safe to use while driving – When I heard that states were passing laws banning Glass while driving, I thought it was an overreaction to a new technology that policymakers don’t yet understand. But even if state governments don’t completely understand Glass, that doesn’t make them wrong on this point. It’s possible that with practice Glass would be safe to use while driving, but it takes a while to learn. It’s certainly not easy (at first) to look out of the corner of your eye while focusing on something so close; I experienced eye fatigue for a while and got a bit of a headache. The problem is that when you look at the Glass display, you lose focus on what is in front of you – the equivalent of looking away. It would be the same as looking down and away from the road while driving: you lose sight of what’s going on. With practice, people can probably get good with Glass and be mostly okay, but a driver who is “mostly okay” still probably shouldn’t be on the road. Considering the distractions Glass may offer, this is something to approach with caution.
  4. The voice recognition is quite good – As an avid Apple user, I’ve used Siri quite a bit and have been frustrated with it most of the time. Because Glass relies almost exclusively on voice for navigation, voice recognition is crucial, and Google doesn’t disappoint here. It rarely misunderstands me, and it clearly uses context in deciding what I said or meant: as I talk, Glass prints out what I’m saying word by word, and words it initially misunderstands are quickly corrected once more context arrives. I’m quite impressed with the fluidity; it’s a joy to navigate Glass by voice.
  5. The battery life is terrible – Yes, the battery life is shameful, but it’s a forgivable sin since Glass is not in full production yet. Still, for Glass to be truly useful, the battery will need to last a full day. Google claims that it lasts a typical day, but I haven’t found this to be the case – unless a “typical day” means you don’t plan to use your Glass much. I can literally watch the battery percentage drop before my eyes while using it. Video and camera use drain the battery quickly; I haven’t done any rigorous testing, but I estimate Glass would last maybe an hour or two with heavy camera usage. Through casual use, it generally does last an entire day. One other annoyance has been hiccups in charging. Every now and then the device is completely dead in the morning after being left on the charger overnight. This may have to do with the power on/off mode – the device might not charge properly if it’s not completely powered off. After experimenting with different charging methods, I still haven’t pinned down the cause. Again, the battery problems are forgivable at this stage of development, but they must be improved, especially if these devices are to be relied upon for real work.


So how do you actually use Google Glass? Let’s explore the Timeline…

You’ll interact with your Glass largely through the timeline. Glass displays “cards” in the timeline; a card is generally output from an app – with a news app, for instance, a card would be a single story headline. Understanding the timeline will save you a lot of frustration. Think of it as a feed, or history, of events triggered by the various apps installed on your Glass – akin to Notification Center on the iPhone and iPad. This is one major difference in how you interact with Glass versus a smartphone. On your smartphone, if you want information from a particular app, you open that app and navigate through it to find what you want. On Glass, that information is contained in your timeline, where all applications appear in chronological order. To illustrate, say you have a CNN app and a New York Times app – simple feeds that provide headlines. On your smartphone, you would open each of these apps independently, scroll through the headlines, and perhaps tap a few that interest you. With the same apps installed on Glass, you wouldn’t open them independently; you’d scroll through your timeline, and the feed items, i.e. “cards,” would appear there. There is no “opening” of apps in this case. It’s important to note, however, that not all apps are configured to work within the timeline – some still have to be opened directly to be used.

A view of Nathan’s timeline.

Cards are shown on the display, and you navigate your timeline by swiping backward or forward on the right ear stem to scroll through past or upcoming events. When you arrive at a card you want to see, tapping a finger on the right ear stem opens it. Depending on the card’s context, different actions are possible: if the card is a link to a video, you can play the video; if it’s a photo you’ve taken, you can share the photo with a contact. Whenever you take an action on Glass, a card is inserted into your timeline. Snap a photo, and it gets pushed to your timeline right next to the “Home/OK Glass” card. Perform a Google search, and a card is inserted containing the search terms; navigate back to that card later and tap it, and it will re-run the search and show you the results. Present and future cards are cards that are currently relevant, and they usually update very frequently. If you are using a timer app, for instance, the timer is always counting and thus currently relevant; the Google Now weather card updates frequently to show current conditions. Finally, the leftmost card, labeled “Settings,” contains settings for your Glass device – network connectivity, the Bluetooth connection, and other configuration options show up in this card’s menu system. Understanding the timeline is crucial to understanding how to interact with Glass.
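For readers who think in code, here is a toy sketch of that mental model. This is my own invention, not Glass’s actual software – the `Card` and `Timeline` names and all the details are hypothetical – but it captures the behavior described above: every action inserts a card next to Home, and swiping forward on the ear stem scrolls deeper into the past.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Card:
    """A single timeline entry, e.g. a photo you snapped or a search you ran."""
    kind: str
    content: str
    created: datetime = field(default_factory=datetime.now)

class Timeline:
    """Toy model of the Glass timeline: past cards accumulate to the right
    of Home, newest first; swiping forward moves further into the past."""

    def __init__(self):
        self.past = []  # cards to the right of Home, index 0 = most recent

    def insert(self, card):
        # Any action (photo, search, ...) pushes a card in next to Home.
        self.past.insert(0, card)

    def swipe_forward(self, position):
        # One swipe on the ear stem moves one card deeper into the past.
        return min(position + 1, len(self.past) - 1)

tl = Timeline()
tl.insert(Card("photo", "IMG_0001.jpg"))
tl.insert(Card("search", "weather boston"))
# The most recent action sits closest to the Home card:
assert tl.past[0].kind == "search"
```

The key design point the sketch illustrates is that the timeline is one shared, chronological feed: apps contribute cards to it rather than owning their own screens.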

Referring to my actual timeline (pictured), along the top you’ll find its cards. The Home card generally appears first when Glass is powered on. To the right are cards from the past; to the left are present and future cards, along with the Settings card. I can navigate to any of these by swiping the ear stem backward or forward. My timeline also illustrates how cards can be drilled into to reveal menus and options; for example, tapping once on Settings reveals a list of settings (pictured).

The timeline is so crucial to understanding Glass that I have devoted much of this first blog post to explaining the user interface. There are other important concepts, such as gestures, types of applications, and basic functions like making phone calls or sending emails. Also, when developing applications for Glass, design decisions need to be made early on about which application programming interface, or API, you’ll use. The two APIs are the Mirror API and the GDK (Glass Development Kit). In a nutshell, apps using the Mirror API actually run in a Google-hosted cloud; a news feed app is a good example of an app that might use it. The GDK is used for more hardware-intensive applications – if you need Glass’s accelerometer, say for a compass app, you’d use the GDK. These concepts will be covered in detail in future posts, as they are quite extensive and there are subsets within each API.
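To make the Mirror API side a little more concrete, here is a minimal sketch of the JSON body a cloud-hosted app builds when inserting a card into a user’s timeline. The endpoint and the `text`, `canonicalUrl`, and `menuItems` fields come from the Mirror API; the headline, URL, and helper function are invented for illustration, and no request is actually sent.

```python
import json

# Timeline collection endpoint of the Google Mirror API (v1).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_item(headline, story_url):
    """Build the JSON body for a simple text card with built-in menu actions."""
    return {
        "text": headline,              # plain-text content shown on the card
        "canonicalUrl": story_url,     # link back to the full story
        "menuItems": [
            {"action": "READ_ALOUD"},  # Glass reads the card aloud
            {"action": "DELETE"},      # removes the card from the timeline
        ],
    }

item = build_timeline_item("Explorer Program expands", "https://example.com/story")
body = json.dumps(item)
# A server-side app would POST `body` to MIRROR_TIMELINE_URL with an
# OAuth 2.0 bearer token authorized for the timeline scope; nothing is sent here.
```

Note how different this is from ordinary mobile development: the app never runs on the device at all. It just hands cards to Google’s cloud, and Glass syncs them into the timeline.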

What’s the big picture? Here’s my take on the impact of Google Glass… 

While wearables started to make a mainstream impact last year, I think 2014 will be the year of the wearable device – smart watches, bracelets, and yes, glasses. What makes a device such as Glass so compelling is the absolute novelty of the platform. It’s a blank slate for entrepreneurs, developers and professionals to really drive this category and apply novel, creative solutions to long-standing problems. My own experience with Glass exemplifies this – from initial disappointment when I first used it to now being able to see its potential. This happens with every device that breaks the mold. I saw the same thing when the iPad first came out: initially it was panned as a big iPhone, but as people got to use it, they realized it was more than a bigger screen, and extremely creative solutions have appeared on the iPad platform. Glass breaks the mold even more.

Glass, as the hardware currently stands, needs work before it can go mainstream. The battery life is a big factor and must be improved if Glass is expected to be used for serious work and business. Gestures and navigation within the OS need improvement and tweaking – for instance, by taking advantage of the forward-facing camera, gestures could be hand movements that never touch the device (important in sterile environments), similar to how Microsoft’s Kinect uses a person’s body as a controller. That said, the hardware problems will be solved. The device as it stands is really a beta, and a pretty good beta at that. The concept of the Glass platform is extremely compelling, and there are problems out there begging for a Glass solution. Combined with another fast-growing area known as the “Internet of Things,” Glass could be an extremely powerful device.

A few broad strokes where I could see Glass having a big impact:

  1. Nursing and general clinical care – Nurses, doctors, and other practitioners interacting with patients could find a powerful tool in Glass. It could be used to pull up a patient’s charts, lab results, and medications during an encounter. Combined with the “Internet of Things,” Glass could roll up monitoring data (blood pressure cuffs, oxygen levels, pulse, etc.) and present it in an easy-to-read format for the nurse or doctor during the encounter, with readings outside normal bounds highlighted or prioritized.
  2. General factory or plant use – Chemical plants and other heavy industrial facilities could leverage Glass to help workers and inspectors quickly identify equipment that needs maintenance or attention. Simply looking around an area could let a worker pull readings, inspection history, and similar data. While Glass is not truly augmented reality, it could still be leveraged to highlight areas that are dangerous or need attention.
  3. Recreation – Glass has obvious applications in the personal space. Its camera alone makes for a unique, GoPro-like, first-person video experience. The implications for shopping and hyper-local applications (say, within a shopping mall) are huge.

Aside from the technical wizardry behind Glass, Glass changes how you interact with your devices, making the interaction almost seamless. Glass brings your device closer to you – in some sense, about as close as you can get short of bio-implantable devices. To snap a photo, you no longer reach into your pocket for your phone; it’s right there, ready at all times. To pull up information on just about any object or person you are looking at – again, it’s right there. I’m still figuring out the ways Glass could be used to solve really challenging problems. It’s a completely new platform waiting for creative folks to push what it can do. But the real power of Glass is not necessarily the device itself; it’s how it fits neatly into the stack of other technology, improving how we connect ourselves to everything else.


Want to learn more about new applications of Google Glass? Stay tuned! MedTechBoston is about to launch an eight-week series on Google Glass, providing our readers with never before seen insights into this new technology. We’ll announce full details on Monday, February 10…

Nathan Cowan

    Nathan Cowan is a software developer and entrepreneur based in Houston, TX. He does a lot of consulting in the medical field and is currently working with MD Anderson Cancer Center, developing software in the clinical research space. He is nearing completion of his second iOS application, a photography and video sharing app (yes, it’s the millionth photo sharing app), which will feature Glass integration. He has an avid interest in mobile development, specifically iOS, and has recently been working with Google Glass. Nathan’s hobbies include photography, reading copious amounts of nonfiction, and coding.
