One of the perks of developing software for the past fifteen years is that you can justify buying the latest and greatest gadgets without feeling too guilty about the money spent. New platforms intrigue both my engineering side (for obvious reasons) and my creative side, because they offer an entirely new way to interpret a device without being too biased by what other people have already done. The opportunity to gain access to a brand-new device doesn’t come along very often, so when Google expanded their “Explorers Program,” I jumped at the chance to purchase Glass. Google had the right idea in opening their program to a diverse set of participants. They encouraged not only developers, but also entrepreneurs, adventurers, artists, and anyone else with a compelling reason to try out the new platform and potentially become an advocate for the device. I’ve been wearing Google Glass as much as possible to get a feel for its potential and how it might be used now and in the future. Google Glass offers a unique chance to really rethink solutions and to augment how we interact with our devices. I’m still learning a lot, and I anticipate that my thoughts and feelings on Glass will change over time. Nonetheless, I hope to present a view into Glass, albeit from someone who has barely started to scratch the surface of the device’s potential. Click here for a breakdown of the specs of my personal Google Glass device.
Glass, being a new form of wearable technology, is unlike a cell phone, tablet, or computer. If you put Glass in that perspective, you might avoid the initial disappointment I experienced during my first few days with it. As an avid Apple user spoiled by the sheer intuitiveness of Apple design, it’s been a while since I’ve picked up a device and not known how to use it within the first few seconds. After following the instructions to set up my Glass, my next thought was, “Well, what now?” It took a while to get used to interacting with the hardware and working with the software. I expected Glass to behave like my smartphone, but it certainly does not. Having used Glass for such a short period of time, it is easier to say what Glass is not than what it is. In this first post, I’ll outline my top five takeaways, describe how to use the user interface known as the timeline, and share my thoughts on the future implications of Google Glass in society. I’m sincerely looking forward to watching developers get creative and find completely new ways to use Glass.
What are the haves and have-nots of Google Glass? Here are my top five takeaways…
So how do you actually use Google Glass? Let’s explore the Timeline…
You’ll interact with your Glass largely through the timeline. Glass presents what it calls “cards” in the timeline; a card is generally a single piece of output from an app. With a news app, for instance, a card would be a single story headline. Understanding the timeline will save you a lot of frustration. Think of the timeline as a feed, or a history of events triggered by the various apps installed on your Glass. It’s akin to Notification Center on the iPhone and iPad. This is one major difference in how you interact with Glass versus a smartphone. On a smartphone, if you want information from a particular app, you tap that app and navigate through it to get what you need. On Glass, that information lives in your timeline, and all applications appear there in chronological order. To better illustrate, let’s go through an example. Say you have a CNN app and a New York Times app on your smartphone. These apps are simply feeds that provide headlines. On your smartphone, you would open each of these apps independently and scroll through the headlines, perhaps tapping on a few that interest you. Now say you have those same apps installed on your Glass. You wouldn’t open them independently; you’d scroll through your timeline, and the feed items, i.e., cards, would appear there. There is no “opening” of apps in this case. However, it’s important to note that not all apps are configured to work within the timeline; some still have to be opened directly to be used.
Cards are shown on the display, and you navigate your timeline by swiping backward or forward on the right ear stem to scroll through events in the past or future. If you arrive at a card you want to see, tapping a finger on the right ear stem opens it. Depending on the card’s context, different actions are possible. If the card is a link to a video, you can play the video; if the card is a photo you’ve taken, you can share the photo with a contact. When you take any action on Glass, a card is inserted into your timeline. If you snap a photo, it gets pushed to your timeline, appearing right next to the “Home/Ok Glass” card. If you perform a Google search, a card is inserted containing the search terms; navigate back to that card later and tap it, and it will re-run the search and show you the results. Present and future cards are cards that are currently relevant, and they usually update very frequently. If you are using a timer app, the timer is always counting and thus is always currently relevant. If you are using the Google Now weather card, it updates frequently to show current conditions. Finally, the leftmost card, labeled “Settings,” contains settings for your Glass device; things such as network connectivity, Bluetooth connections, and other configurations show up in that card’s menu system. Understanding the timeline is crucial to understanding how to interact with Glass.
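To make the navigation model concrete, here is a toy sketch of the timeline as a data structure: an ordered list of cards with a cursor that moves as you swipe the ear stem. The `Card` and `Timeline` classes, their fields, and their method names are my own illustration, not part of any official Glass API.

```python
# Toy model of the Glass timeline. Settings sits at the far left, the
# Home/"Ok Glass" card in the middle, and past events to the right, with
# the newest event landing right next to Home. Names here are illustrative.

class Card:
    def __init__(self, title, actions=None):
        self.title = title
        self.actions = actions or []   # context-dependent menu, e.g. "share"

class Timeline:
    def __init__(self):
        self.cards = [Card("Settings"), Card("Home/Ok Glass")]
        self.cursor = 1                # Glass wakes up on the Home card

    def insert_past(self, card):
        # New events (a photo, a search) appear just to the right of Home.
        self.cards.insert(self.cursor + 1, card)

    def swipe_forward(self):
        # Swipe one way on the ear stem: scroll toward older (past) cards.
        self.cursor = min(self.cursor + 1, len(self.cards) - 1)
        return self.cards[self.cursor]

    def swipe_back(self):
        # Swipe the other way: scroll toward present/future cards and Settings.
        self.cursor = max(self.cursor - 1, 0)
        return self.cards[self.cursor]

    def tap(self):
        # Tapping opens the current card's context-dependent actions.
        return self.cards[self.cursor].actions

t = Timeline()
t.insert_past(Card("Photo", actions=["share", "delete"]))
t.insert_past(Card("Google search: glass specs", actions=["re-run search"]))
```

Swiping forward from Home now reaches the most recent event first (the search), then the older photo, mirroring how new cards stack up next to the Home card.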
Referring to my actual timeline: along the top are the cards currently on it. The Home card generally appears first when Glass is powered on. To the right are cards for events that occurred in the past. To the left are present and future cards, along with the Settings card. I can navigate to any of these cards by swiping backward or forward on the ear stem. My timeline also illustrates how cards can be drilled into to reveal menus and options; for example, tapping once on Settings reveals a list of settings (pictured).
The timeline is so crucial to understanding Glass that I have devoted much of this first post to explaining it. There are other important concepts, such as gestures, types of applications, and basic functions like making phone calls or sending emails. Also, when developing applications for Glass, you’ll need to decide early on which application programming interface, or API, to use. The two options are the Mirror API and the GDK (Glass Development Kit). In a nutshell, apps using the Mirror API actually run in a Google-hosted cloud and push cards to the device; a news feed app is a good candidate for the Mirror API. The GDK is used for more hardware-intensive applications that run on the device itself; if you need to use Glass’s accelerometer, you’d use the GDK. A compass application would be an example of an app built with the GDK. These concepts will be covered in detail in future posts, as they are quite extensive and there are subsets within each of these APIs.
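To give a flavor of the Mirror API side, here is a minimal sketch of what inserting a card into a user's timeline looks like: the cloud-hosted app makes an authenticated REST POST to the Mirror API's `timeline` endpoint. The helper below only builds the request (URL, headers, JSON body), so nothing is actually sent; the access token is a placeholder, and the exact fields should be checked against Google's Mirror API documentation.

```python
import json

# Sketch of a Mirror API timeline.insert request. The endpoint and the
# "text"/"menuItems" fields follow the Mirror API's REST shape; the token
# is a dummy value and the actual HTTP send step is left out on purpose.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_insert(text, access_token, menu_items=None):
    """Return (url, headers, body) for a timeline card insertion."""
    body = {"text": text}
    if menu_items:
        # e.g. [{"action": "DELETE"}] adds a Delete option to the card's menu
        body["menuItems"] = menu_items
    headers = {
        "Authorization": "Bearer " + access_token,
        "Content-Type": "application/json",
    }
    return MIRROR_TIMELINE_URL, headers, json.dumps(body)

url, headers, payload = build_timeline_insert(
    "Breaking: example headline",
    access_token="ya29.EXAMPLE",          # placeholder OAuth token
    menu_items=[{"action": "DELETE"}],
)
```

This is the whole trick of a Mirror API news app: the app never runs on Glass at all. It runs on a server and simply posts headline cards like this into each subscriber's timeline.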
What’s the big picture? Here’s my take on the impact of Google Glass…
While wearables started to make a mainstream impact last year, I think 2014 will be the year of the wearable device: smart watches, bracelets, and yes, glasses. What makes a device such as Glass so compelling is the absolute novelty of the platform. It’s a blank slate for entrepreneurs, developers, and professionals to really drive this category and apply novel, creative solutions to long-standing problems. My own experience with Glass exemplifies this, from initial disappointment when I first used it to now being able to see its potential. This happens with every device that breaks the mold. I saw the same thing when the iPad first came out: initially it was panned as just a big iPhone, but as people got to use the iPad, they realized it was more than a bigger screen, and extremely creative solutions have since appeared on the platform. Glass breaks the mold even more.
Glass, as the hardware currently stands, does need work before it goes mainstream. Battery life is a big factor and must be improved if Glass is expected to be used for serious work and business. Gestures and navigation within the OS also need improvement and tweaking. For instance, by taking advantage of the front-facing camera, gestures could be hand movements that don’t require touching the device at all (important in sterile environments), similar to how Microsoft’s Kinect uses a person’s body as a controller. That said, the hardware problems will be solved; the device as it stands is really a beta, and a pretty good beta at that. The concept of the Glass platform is extremely compelling, and there are problems out there begging for a Glass solution. Combined with another fast-growing area known as the “Internet of Things,” Glass can be an extremely powerful device.
In broad strokes, a few areas where I could see Glass having a big impact:
Aside from the technical wizardry behind it, Glass changes how you interact with your devices, making the interaction almost seamless. Glass takes technology to the next level of bringing your device closer to you; in some sense, it is almost as close as you can get without moving into bio-implantable devices. To snap a photo, you no longer have to reach into your pocket for your phone; it’s right there, ready to use at all times. To pull up information on just about any object or person you are looking at: again, it’s right there. I’m still trying to figure out the ways Glass could be used to solve really challenging problems. It’s a completely new platform waiting for creative folks to push what it can do. But the real power of Glass is not necessarily the device itself; it’s how it fits neatly into the stack of other technologies, improving how we connect ourselves to everything else.
Nathan Cowan is a software developer and entrepreneur based in Houston, TX. He does a lot of consulting in the medical field and is currently working with MD Anderson Cancer Center developing software in the clinical research space. He is nearing completion of his second iOS application, a photography and video sharing app (yes, it’s the millionth photo sharing app) that will feature Glass integration. He has an avid interest in mobile development, specifically iOS development, and has recently been working with Google Glass. Nathan’s hobbies include photography, reading copious amounts of nonfiction, and coding.