
Google Glass – Using and Designing Apps


Google Glass © Technews – A Store of Technology

MedTech Boston asks software developer Nathan Cowan to take a peek behind Google Glass. How do you actually use this newfangled device? And what are the implications for medicine, manufacturing, and society as a whole? In the second of several blog posts, we hear about this new device through the eyes of an experienced software developer. In addition to Cowan’s user reviews, stay tuned to MedTech Boston’s Google Glass Challenge!


Glass has the benefit of a fairly active developer community even though it’s still in its beta phase. There’s actually a decent number of applications available for download today, and the catalog is growing by the week. Even so, the selection is nowhere near the scale of today’s mobile app stores. UX designers will have their work cut out for them as they envision ways to leverage the different types of user input Glass can receive.

Certain features we take for granted may actually present a challenge when developing and designing apps on the Glass platform. For instance, let’s take a look at the lowly login screen: a simple form generally consisting of two text fields, one for a user name and the other for a password. Not much thought is given to this piece of functionality in a mobile or web application. Now, throw this screen on Glass. No keyboard! No obvious way for a user to enter this data. Glass does have a solution for this particular challenge – credentials are generally configured via the MyGlass mobile application or via the Glass developer website. But this illustrates the new thinking that must be applied to the user experience on Glass. Porting apps to the Glass platform will not be as easy as scraping the screens from the mobile version and placing them onto Glass.

Another clever workaround for the lack of a keyboard deals with Wi-Fi hotspot selection. This is now usually configured via the MyGlass mobile application. However, back before that app was available on the iPhone, Google had what I thought was a great way to register Wi-Fi hotspots with Glass, and I think this methodology will be one approach developers use to get around the missing keyboard. Google’s approach was to have you enter the network name and password (or network key) on the Glass developer website. Once this is done, a QR code is generated; you simply look at the QR code (while wearing Glass, of course) and voilà, the network is registered with your Glass. While I’ve heard that QR codes were used fairly extensively overseas, they never seemed to catch on here in the States. QR codes might enjoy a renaissance thanks to Glass.

Setting up Wi-Fi Network via Website

QR Code Containing Network Information
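For the technically curious, here’s roughly what the website side of that flow could look like. This is a minimal sketch using the open-source ZXing library and the common “WIFI:” payload convention that many QR scanners understand – I’m not claiming Glass uses this exact payload format, so treat the string as illustrative:

```java
import com.google.zxing.BarcodeFormat;
import com.google.zxing.WriterException;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

public class WifiQrSketch {
    public static void main(String[] args) throws WriterException {
        // Hypothetical credentials – the same data you'd type into the website.
        String ssid = "MyHomeNetwork";
        String networkKey = "s3cret-key";

        // The widely used (if informal) WIFI: payload convention for QR codes.
        // Glass's actual payload format may differ; this is illustrative only.
        String payload = "WIFI:T:WPA;S:" + ssid + ";P:" + networkKey + ";;";

        // Encode the payload into a 40x40 QR code matrix.
        BitMatrix matrix = new QRCodeWriter()
                .encode(payload, BarcodeFormat.QR_CODE, 40, 40);

        // Dump the matrix as ASCII so the sketch is self-contained;
        // a real website would render it as a PNG instead.
        for (int y = 0; y < matrix.getHeight(); y++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < matrix.getWidth(); x++) {
                row.append(matrix.get(x, y) ? "##" : "  ");
            }
            System.out.println(row);
        }
    }
}
```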

Interacting With Glass…

A UX designer is responsible for the overall user experience with an app. While this may include the placement of visual elements and cues, perhaps the biggest driver for a particular platform is how the user interacts with the app. For a web application, that generally means mouse and keyboard input. For a tablet, inputs include finger gestures, the microphone, and accelerometer data (how you physically hold or move the tablet). Out of the box, there are three main modes of communicating with your fancy new headgear, plus a fourth I’d argue deserves a spot on the list:

  • Voice – This is the main method of navigating Glass and opening apps.
  • Gestures – The right ear stem of Glass is touch sensitive and will respond to taps and swipes.
  • Head Gestures – A very exciting concept using your head’s movements to control Glass!
  • Camera – I’ve added the camera mainly because it is, in fact, used in conjunction with QR codes to input certain pieces of information. I expect this, or at least a flavor of it, to be heavily used. I also expect apps at some point will be able to recognize “air gestures,” or gestures using your hands in front of you (rather than using the ear stem).

 Voice is probably what most people are familiar with in terms of interacting with Glass. To call up the main menu of Glass, a simple “Ok Glass” command is all that is needed. It should be noted that you can also use the ear stem gestures to do this same exact thing. So if you are in a place where it would be awkward to use your voice to talk to Glass, you have the option of using the ear stem to gesture through the menus. As I mentioned in my previous article, Glass’ voice recognition is excellent and much better than Apple’s Siri.
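To give developers a flavor of how voice fits in: with the Glass Development Kit (GDK), the main “Ok Glass” launch command is declared in the app’s manifest, and an activity can additionally opt in to contextual voice commands while it has focus. A minimal sketch, assuming the GDK is on the classpath and a hypothetical menu resource R.menu.main lists the commands:

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;

import com.google.android.glass.view.WindowUtils;

public class VoiceMenuActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Ask Glass to listen for voice commands while this activity is in focus.
        getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
    }

    @Override
    public boolean onCreatePanelMenu(int featureId, Menu menu) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            // R.menu.main is a hypothetical resource listing the spoken commands.
            getMenuInflater().inflate(R.menu.main, menu);
            return true;
        }
        return super.onCreatePanelMenu(featureId, menu);
    }

    @Override
    public boolean onMenuItemSelected(int featureId, MenuItem item) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            // React to the recognized command here, e.g. by item ID.
            return true;
        }
        return super.onMenuItemSelected(featureId, item);
    }
}
```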

Gestures are another important input method; they’re performed via the touch-sensitive right ear stem.

Illustrates “Hot Spot” Area for Gestures

Hand gestures include:

  • Tapping
  • Swiping forward and backward
  • Swiping down

Head gestures include:

  • Head tracking and panning
  • Look up/down

Understanding these input methods is crucial to using applications, and even more so to designing them. These inputs are how users will interact with the app and the device. The lovely thing about constraints is that they force creative and elegant solutions to the problem at hand.
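To make this concrete on the development side: the GDK exposes the ear stem’s taps and swipes through a GestureDetector class. Here’s a minimal sketch of wiring one up in an activity; the log tag and the choice to treat swipe-down as “go back” are mine:

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

public class TouchpadActivity extends Activity {

    private GestureDetector detector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        detector = new GestureDetector(this).setBaseListener(
                new GestureDetector.BaseListener() {
                    @Override
                    public boolean onGesture(Gesture gesture) {
                        switch (gesture) {
                            case TAP:
                                Log.d("Touchpad", "tap");
                                return true;
                            case SWIPE_RIGHT:
                            case SWIPE_LEFT:
                                // Forward/backward swipes along the ear stem.
                                Log.d("Touchpad", "swipe: " + gesture);
                                return true;
                            case SWIPE_DOWN:
                                // Conventionally "go back" / dismiss.
                                finish();
                                return true;
                            default:
                                return false;
                        }
                    }
                });
    }

    // Touchpad input arrives as generic motion events; forward them on.
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return detector.onMotionEvent(event);
    }
}
```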

Head gestures add an exciting element to Glass app design, similar to how the use of the accelerometer on a tablet introduced some cool functionality to tablet apps. Out of the box, head gestures are used in a few different ways. Glass goes to sleep after a set period of time. In order to wake Glass, you need to look up at a certain angle (configurable, mine is set to 30 degrees). Also, looking up and down serves as a way to scroll through a screen with content larger than the screen area. There is a very simple “Compass” application which makes use of head panning to show which direction you are facing. The gyroscope is pretty sensitive and will definitely be a useful input method for apps.
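As far as I know there’s no dedicated head gesture API, but the standard Android sensor stack is available on Glass, so a rough sketch of detecting a “look up” could lean on the rotation vector sensor. The 30-degree threshold mirrors my wake-angle setting, and the axis convention here is an assumption worth verifying on hardware:

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

public class HeadTiltActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Convert the rotation vector into azimuth/pitch/roll.
        float[] rotationMatrix = new float[9];
        float[] orientation = new float[3];
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);

        // orientation[1] is pitch in radians; on a typical axis convention,
        // negative pitch corresponds to tilting the head up.
        float pitchDegrees = (float) Math.toDegrees(orientation[1]);
        if (pitchDegrees < -30f) {
            Log.d("HeadTilt", "Looking up past 30 degrees");
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```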

A cool app…

While I generally don’t have a use for it in my day-to-day life, one of my favorite apps is Translate. Two things I really like about this app are the clear attention paid to little details and the almost “augmented reality” feeling you get while using it.

Translate is an app that will translate signs, such as street signs, in a foreign language to your native tongue. While this application is far from instant translation, it really makes you think about the potential of Glass.

Using the application is simple. Call up the application by saying “Ok Glass…Translate”. Now, all you need to do is look at a sign or text in the foreign language. Note that you do need to set up the app for the two languages you’ll be working with – it doesn’t recognize the language by itself.

Translate – Look at a sign written in Spanish

Once you look at a sign, Glass will place blue corners around the sign of interest. The translation doesn’t happen instantaneously and you do need to remain somewhat motionless during this process. But once Glass produces a translation, you’ll then see this screen.

Translation!

And bingo! You have the English version of the sign. What I absolutely love about this application is the attention paid to the small details – minutiae the developer didn’t have to provide but took the time to implement. For instance, notice that the English translation is in the exact same font, color, and size as the original sign! Including this small design element gives the application an AR feel. While the app, as it stands, can’t be used while driving – it requires focus from a relatively stationary position – it demonstrates the potential of Glass, especially from an AR perspective.

Here’s another example of the app in action with a different sign:

Translate – Spanish to English (“Notice” sign)

I’ve also tried using the application to translate documents, with mixed results. The app isn’t intended for this purpose – it’s geared more towards reading signs and notices – but I did want to push it. It worked to some extent, but it wouldn’t translate every single word and would focus on a small subset of the entire document.

The UX Challenge…

UX designers will have plenty of challenges with Glass. Designers will not only have to employ gestures in such a manner that the device doesn’t get in the way of the user, they will also have to realize that screen real estate on Glass is extremely precious. If mobile screen real estate were California, Glass screen real estate would be New York City. You have a fairly limited visual window in which to present users with information. Application structure will need to ensure that only the most pertinent information is shown on Glass. Another complexity is the lack of a keyboard. We take for granted the ease with which we gather information from the user in a web or mobile application. How will users enter free-form text? How will users select an option from a dropdown list? These are definitely solvable problems, but additional thought will need to go into Glass app design in areas you wouldn’t give a second thought to when deploying onto a mobile or web platform. Solving these problems will convert your Glass app from a passive app into a data-gathering app.
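The GDK nudges you in the right direction here: its card model presents one small screenful of content at a time. A minimal sketch using the GDK’s CardBuilder – the text itself is a made-up example of showing only the most pertinent information:

```java
import android.app.Activity;
import android.os.Bundle;

import com.google.android.glass.widget.CardBuilder;

public class GlanceActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // One card, one message: the whole screen budget on Glass.
        setContentView(new CardBuilder(this, CardBuilder.Layout.TEXT)
                .setText("Next patient: Room 12, 2:30 PM")
                .setFootnote("Tap for details")
                .getView());
    }
}
```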

The challenges don’t stop at the user interface, either. From a developer or architect perspective, how will the Glass app fit into your existing technology stack? Remember, Glass is a fairly dumb device. It lacks built-in GPS. It relies very heavily on both an Internet connection and the mobile device it is paired with (either an Android phone or an iPhone). Complicated image processing? Forget about doing that on the device itself. Tracking the device’s location? Not without a connection to a mobile phone! Depending on the application’s features, perhaps the app can function offline just fine, but I suspect many apps will be heavily reliant on backend services to function usefully. Any application is most useful when it’s contextually aware. This can be as simple as using your location when you search for restaurants, or a banking app showing only the menu items pertinent to paying my bills when that’s what I’m doing. How will Glass gather this type of information easily and seamlessly using only the built-in gestures? Voice could be sufficient in some cases, but there will be times when voice won’t quite be the right choice. Again, UX and technical design will come into play, and the melding of these two disciplines will be the difference between an app that is a joy to use and one that is too cumbersome, clunky, or intrusive.
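That backend reliance is baked into Google’s own Mirror API model, where your server pushes timeline cards to Glass over HTTPS instead of running code on the device. A bare-bones sketch; obtaining the OAuth access token is elided, and the placeholder token is obviously fake:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TimelineInsertSketch {

    // Pushes a simple text card to the wearer's timeline via the Mirror API.
    static void insertCard(String accessToken, String message) throws Exception {
        URL url = new URL("https://www.googleapis.com/mirror/v1/timeline");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Minimal timeline item: just text. Real apps add menus, HTML, etc.
        String body = "{\"text\": \"" + message + "\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Mirror API responded: " + conn.getResponseCode());
    }

    public static void main(String[] args) throws Exception {
        // Placeholder token: obtain a real OAuth 2.0 token server-side.
        insertCard("ya29.EXAMPLE_TOKEN", "Hello from the backend!");
    }
}
```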

Covering the basic ways to interact with Glass, along with a Glass app walkthrough, hopefully conveyed some sense of how Glass works – its limitations as well as its possibilities. As I mentioned in my first blog post, Glass was not what I had expected, even given the fact that I read a good number of blogs and articles before purchasing. It’s difficult to really learn how Glass works; it’s one of those things you have to experience firsthand to appreciate. Given that, I hope this serves as a very basic introduction to using Glass and some of the considerations that must be taken into account should you decide to develop a Glass app. I put a lot of thought into the user experience challenges, including user input, because I firmly believe that UX has a very strong impact on the use of the final product. You can develop a great solution, but if the app is difficult to work with, people may avoid using it altogether.


Want to learn more about new applications of Google Glass? Stay tuned to MedTech Boston’s eight-week series on Google Glass. We’re providing our readers with never-before-seen insights into this new technology from the country’s experts on Glass tech.

Nathan Cowan

    Nathan Cowan is a software developer and entrepreneur based in Houston, TX. He does a lot of consulting in the medical field and is currently working with MD Anderson Cancer Center, developing software in the clinical research space. He is nearing completion of his second iOS application – a photography and video sharing app (yes, it’s the millionth photo sharing app) – that will feature Glass integration. He has an avid interest in mobile development, specifically iOS development, and has recently been working with Google Glass. Nathan’s hobbies include photography, reading copious amounts of nonfiction, and coding.
