Professor, Designer, Husband, Father, Gamer, Bagpiper

{% marginfigure 'glass-dude' 'assets/images/external/glass_photos3.jpg' 'Google Glass original image source' %}
So, Google has finally released some pics and a video showing off their "Project Glass" head-worn display concept.  I have many reactions to the ideas and concepts presented in it, some good and some bad.  I think the glasses exhibit some nice industrial design, for example (although they're still too geeky for broad adoption).  And the idea of them being a stand-alone device is really cool (complete with Android phone functionality and a variety of sensors for understanding and interacting with the world);  it's something I've mocked up in my group, as have others around the world, and have been proposing to research sponsors for years (but most of us don't do hardware, so it's not like we could have ever done this pretty a job!).  So, like many people, I've been waiting for more information on the project!

Alas, though, my main reaction to the video is "Oh no!"

Why oh why, Google, did you feel the need to release a video that your project cannot live up to?  In one simple fake video, you have created a level of hype and expectation that your hardware cannot possibly meet.  I care for two reasons.  First, because the hardware does look nice, and I think there is some interesting potential here.  Second (and more personally), I work in this area (broadly speaking), and in the mid-term this kind of fakery will harm the research prospects of the rest of us.

Why do I say the video is "fake" and that the product can't live up to it?

  • Field of view.  I don't know what the field of view of the video camera used to shoot the video was, and I don't even know what the field of view of the Glasses is.  But the glasses (from the pictures) look to have a relatively limited field of view that is up and to the right of the wearer's right eye;  the video, on the other hand, covers the entire video frame with the supposed display content, giving the impression of complete immersion, which hardware like this cannot possibly achieve.  The image I included above is looking directly ahead, at the camera, and you can see his eyes around the display ... which means he cannot see anything on the display when he looks straight ahead.
  • Stability.  Video-guy is walking around, going about his life.  And the images on the display are rock solid, easy to focus on.  In reality, a head-worn display bounces with every step and head turn, which makes its contents much harder to read on the move.
  • Depth of field.  Everything is in focus, all the time.  Ignoring the glasses, the world around us is not all in focus all the time.  The glasses will likely have a fixed focus distance from the wearer, so the wearer will NOT see the contents of the glasses overlaid on all of these different contexts and scenarios where the virtual display and the world are both in focus.  This matters, because when you refocus on a virtual object some distance in front of you, everything in the physical world (you know, the stuff that matters!) will go out of focus.
  • Image quality.  Amazingly, as display-guy goes from inside to outside, bright daylight to dusk, the contents of the display are uniformly visible.  All while the clear part of the display is perfectly clear.  This isn't possible, using any technology I'm aware of, at least not for full color.  Now, this is the one that I'd love to be wrong on, since companies have been trying this for years.  Microvision's Virtual Retinal Displays were able to achieve this with red-only graphics and half-silvered mirrors that reflected the appropriate wavelength of red.
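The field-of-view point above is just geometry: a display subtends an angle determined by its apparent width and distance, and a small near-eye panel covers only a tiny slice of your vision.  Here is a back-of-the-envelope sketch; the specific numbers are illustrative assumptions, not actual Glass specs.

```python
import math

def angular_fov_deg(width_m: float, distance_m: float) -> float:
    """Horizontal angle (in degrees) subtended by a flat display of the
    given apparent width, viewed from the given apparent distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Assumed example: a virtual image that appears ~0.25 m wide at ~1 m away
# covers only about 14 degrees of the wearer's view ...
small_hud = angular_fov_deg(0.25, 1.0)
print(round(small_hud, 1))  # roughly 14.3

# ... compared to the 120+ degrees of normal human binocular vision,
# which is what the video's full-frame overlays imply.
```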

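The depth-of-field problem can also be made concrete.  Accommodation is usually measured in diopters (the reciprocal of focal distance in meters), and the eye tolerates only a small diopter mismatch before one of the two things you're looking at goes blurry.  This is a rough sketch; the tolerance value is an assumed ballpark figure, not a measured spec.

```python
def diopters(distance_m: float) -> float:
    """Accommodative demand, in diopters, for focusing at a distance."""
    return 1.0 / distance_m

def both_in_focus(d_virtual_m: float, d_real_m: float,
                  eye_dof_d: float = 0.3) -> bool:
    """Rough check: can the eye keep a virtual image (at its fixed focal
    distance) and a real object in focus simultaneously?  eye_dof_d is
    an assumed depth-of-focus tolerance in diopters."""
    return abs(diopters(d_virtual_m) - diopters(d_real_m)) <= eye_dof_d

# A virtual image fixed at 1 m vs. a person standing 3 m away:
# |1.0 - 0.33| = 0.67 D of mismatch -- one of them will be blurry.
print(both_in_focus(1.0, 3.0))  # False
```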
I'm not going to comment deeply on the actual application scenarios.  Some are cute, some seem highly dubious.  None of it is novel; it's pretty much a collection of research ideas going back to Mark Weiser's early Ubicomp vision and the work the wearable computing community has been doing for years.  That's great;  it's nice to see the ideas being taken one step forward!

One closing comment, btw.  To all the press:  this is a heads-up display, it's not "augmented reality".  AR is about putting content out in the world, virtually attaching it to the objects, people and places around you.  You could not do AR with a display like this (the small field of view, and placement off to the side, would result in an experience where the content is rarely on the display and hard to discover and interact with), but it's a fine size and structure for a small HUD.  The video application concepts are all screen-fixed ("heads up" instead of "in the world") for this reason.  This is not a criticism, but we still have a long way to go before someone creates a cheap, potentially usable set of "augmented reality glasses".
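The screen-fixed vs. world-fixed distinction is the crux of the HUD/AR difference.  A minimal sketch, using an assumed pinhole projection model with made-up display intrinsics: world-fixed (AR) content must be reprojected every frame from tracked head pose, while HUD content is just pixels that never move.

```python
import numpy as np

def project_world_point(p_world: np.ndarray,
                        pose_world_to_eye: np.ndarray,
                        f: float = 500.0,
                        cx: float = 320.0,
                        cy: float = 240.0) -> tuple:
    """World-fixed (AR) content: reproject a 3-D world point into display
    pixels each frame, using the current tracked head pose (a 4x4
    world-to-eye transform) and assumed pinhole intrinsics f, cx, cy."""
    p_eye = pose_world_to_eye @ np.append(p_world, 1.0)
    x, y, z = p_eye[:3]
    return (cx + f * x / z, cy - f * y / z)

def hud_position() -> tuple:
    """Screen-fixed (HUD) content needs no tracking at all:
    the same pixels, wherever you look."""
    return (320.0, 40.0)

# A point 2 m straight ahead, with an identity head pose, lands at the
# display center; turn your head and it must move, unlike the HUD label.
print(project_world_point(np.array([0.0, 0.0, 2.0]), np.eye(4)))
```

The reason Glass-style hardware can't do this well isn't the math; it's that with a tiny off-axis display, the reprojected point is almost never inside the visible region.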

In case you missed it, here's the video.

{% youtube 9c6W4CCU9M4 %}
