We found the Google Glass to be a very interesting device. The following blog post is a summary of the feedback that IIU staff provided after testing out the device.
It was light, fairly comfortable, and the voice recognition worked well. It took good photos, and the wink-detection feature allowed me to easily take a picture without saying a word. The display was not as robust as I was expecting; I expected something larger and with higher resolution. Also, the battery life was surprisingly short, and loading apps was a bit challenging at first.
The biggest challenge I found was actually finding a use case for the device. I could envision scenarios where an individual who occasionally needs to work hands-free (for example, in an encapsulated HAZMAT suit) and be guided by an app through a lengthy procedure might find the Glass an ideal solution, but I imagine these situations are fairly rare.
Despite innovative user-interface mechanisms, the overall experience was awkward, and the capabilities seemed limited. I had a hard time coming up with scenarios where the device would be preferable to a tablet or smartphone. The device did succeed in demonstrating a heads-up display that the user can control via voice or touch, and the display sits unobtrusively in the user’s field of vision.
The device is not a true “hands-free device.” Navigating the user interface using only voice and head-jerk commands was awkward and error-prone. Basic features like searching the internet and viewing web pages offer a very stripped-down experience and are, again, cumbersome to navigate.
The resolution of the HUD limits the device’s usefulness for tasks requiring very precise image analysis. In my opinion, a version with a larger HUD would be better suited to use cases such as viewing detailed images (e.g., x-rays).