
Oh no, Google, why that video?

April 5, 2012 at 7:26 am / commercial, displays / 29 comments

So, Google has finally released some pics and a video showing off their “Project Glass” head-worn display concept. I have many reactions to the ideas and concepts presented in it, some good and some bad. I think the glasses exhibit some nice industrial design, for example (although they’re still too geeky for broad adoption). And the idea of them being a stand-alone device is really cool (complete with Android phone functionality and a variety of sensors for understanding and interacting with the world); it’s something I’ve mocked up in my group, as have others around the world, and have been proposing to research sponsors for years (but most of us don’t do hardware, so it’s not like we could ever have done as pretty a job!). So, like many people, I’ve been waiting for more information on the project!

Alas, though, my main reaction to the video is “Oh no!”

Why oh why, Google, did you feel the need to release a video that your project cannot live up to? In one simple fake video, you have created a level of hype and expectation that your hardware cannot possibly match. I care for two reasons. First, because the hardware does look nice, and I think there is some interesting potential here. Second (and more personally), I work in this area (broadly speaking), and in the mid-term this kind of fakery will harm the research prospects of the rest of us.

Why do I say the video is “fake” and that the product can’t live up to it?

  • Field of view. I don’t know what the field of view of the video camera used to shoot the video was, and I don’t even know what the field of view of the Glasses is. But the glasses (from the pictures) look to have a relatively limited field of view, positioned up and to the right of the wearer’s right eye; the video, on the other hand, covers the entire video frame with the supposed display content, giving the impression of complete immersion, which hardware like this cannot possibly achieve. The image I included above shows the wearer looking directly ahead, at the camera, and you can see his eyes around the display … which means he cannot see anything on the display when he looks straight ahead.
  • Stability.  Video-guy is walking around, going about his life.  And the images on the display are rock solid, easy to focus on.
  • Depth of field. Everything is in focus, all the time. Even ignoring the glasses, the world around us is not all in focus all the time. The glasses will likely have a fixed focus distance, so the wearer will NOT see the contents of the glasses in focus against all of these different contexts and scenarios the way the video shows, with the virtual display and the world both sharp. This matters because when you refocus on a virtual object some distance in front of you, everything in the physical world (you know, the stuff that matters!) at other distances will go out of focus; see the back-of-the-envelope sketch after this list.
  • Image quality. Amazingly, as display-guy goes from inside to outside, from bright daylight to dusk, the contents of the display are uniformly visible, all while the clear part of the display remains perfectly clear. This isn’t possible using any technology I’m aware of, at least not in full color. Now, this is the one I’d love to be wrong about, since companies have been trying for years. Microvision’s Virtual Retinal Displays were able to achieve this with red-only graphics and half-silvered mirrors that reflected the appropriate wavelength of red.
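
To make the depth-of-field point concrete, here is a back-of-the-envelope sketch. The eye focuses in diopters (1/distance in meters), and its depth of focus is commonly quoted as roughly ±0.3 D; the specific distances below are illustrative guesses, not measurements of any real device.

```python
# Back-of-the-envelope accommodation check. The +/-0.3 D depth-of-focus
# tolerance is a common rule-of-thumb figure for the human eye; the
# distances are illustrative guesses, not specs of any real device.

DEPTH_OF_FOCUS_D = 0.3  # rough one-sided focus tolerance, in diopters

def diopters(distance_m: float) -> float:
    """Optical power needed to focus at a given distance."""
    return 1.0 / distance_m

def both_sharp(virtual_image_m: float, world_object_m: float) -> bool:
    """Can the display's virtual image and a world object be sharp at once?"""
    gap = abs(diopters(virtual_image_m) - diopters(world_object_m))
    return gap <= 2 * DEPTH_OF_FOCUS_D

# Virtual image fixed at 2 m, while talking to someone 0.5 m away:
print(both_sharp(2.0, 0.5))   # False: a 1.5 D gap, so one of them must blur
# Virtual image at 2 m, reading a storefront sign 20 m away:
print(both_sharp(2.0, 20.0))  # True: only a 0.45 D gap
```

In other words, the everything-in-focus look in the video only works when the virtual image and the thing you are looking at sit at similar optical distances; glance at something up close and the display content must blur, or vice versa.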

I’m not going to comment deeply on the actual application scenarios. Some are cute, some seem highly dubious. None of it is novel; it’s pretty much a collection of research ideas going back to Mark Weiser’s early Ubicomp vision and the work the wearable computing community has been doing for years. That’s great; it’s nice to see the ideas being taken one step forward!

One closing comment, btw. To all the press: this is a heads-up display; it’s not “augmented reality”. AR is about putting content out in the world, virtually attaching it to the objects, people and places around you. You could not do AR with a display like this (the small field of view, and placement off to the side, would result in an experience where the content is rarely on the display and hard to discover and interact with), but it’s a fine size and structure for a small HUD; the sketch below illustrates the difference. The video’s application concepts are all screen-fixed (“heads up” instead of “in the world”) for this reason. This is not a criticism, but we still have a long way to go before someone creates a cheap, potentially usable set of “augmented reality glasses”.
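
Here is a minimal sketch of that difference; the 15-degree display width and 20-degree off-axis offset are assumptions for illustration, not actual Glass specifications.

```python
# Why a small, off-axis display works as a HUD but not for AR. The
# display width and offset below are illustrative guesses, not specs.

DISPLAY_FOV_DEG = 15.0     # assumed angular width of the display
DISPLAY_OFFSET_DEG = 20.0  # assumed offset from straight ahead

def on_display(bearing_deg: float) -> bool:
    """Is a head-relative direction inside the display's window?"""
    return abs(bearing_deg - DISPLAY_OFFSET_DEG) <= DISPLAY_FOV_DEG / 2

# HUD content is drawn at fixed display coordinates, so it is always on
# the display no matter where the wearer looks; nothing to compute.

# AR content is attached to the world: here, a label on a landmark at a
# fixed bearing of 30 degrees. As the head turns, the label's
# head-relative bearing changes, and it lands on the display for only a
# narrow band of head orientations.
for head_yaw in range(0, 91, 15):
    label_bearing = 30.0 - head_yaw
    print(f"head at {head_yaw:2d} deg, label visible: {on_display(label_bearing)}")
```

Only a sliver of head orientations ever puts the world-anchored label on the display, which is exactly the “rarely on the display and hard to discover” problem; screen-fixed HUD content never has this issue.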

In case you missed it, here’s the video.

[Embedded video: “Project Glass: One Day…”]


29 Comments

  1. Jez says:

    Couple of issues with your article.

    First of all, it’s pretty clear that this is a concept video: the title of the video is ‘Project Glass: One Day…’, which is obviously supposed to be read both ways, suggesting that this is a vision for the project and maybe not the final product itself. Plus, it seems that the video was launched by Google as part of a consultation stage for Project Glass, so it is designed to introduce people to AR and inspire thought and comment, rather than being a product launch vid.

    I agree that the first gen of this system will be significantly worse than what you’re seeing in the video, but I think it’s much more important to give people an idea of the basic potential of AR – they’ve obviously reined themselves in massively from exploring the more exciting scope of this sort of system.

    Otherwise, your stability issue seems iffy. Most, if not all, of the overlays aren’t locked to objects in vision, so there’s no reason to think that, as long as they stay in a stable position on the screen, they wouldn’t appear stable to a user. Just look at a scratch on your sunglasses.

    Your depth of field issue is wrong, as you can clearly see in the video that either the overlays on the screen or the normal vision is in focus at any one time, never both. The actual experience may be more extreme, but they’ve certainly not missed it out.

    I think your HUD vs AR differentiation is unnecessary – you may have basic overlays to start off with, but this will quickly develop into a much more immersive experience.

    I can see why you might take issue with the video, but only if you’re seeing it as some sort of launch video for the tech and not what it seemingly is – a bit of buzz-creating fun.

    • Blair MacIntyre says:

      Hi Jez, thanks for the comments.

      Yes, it’s obviously a concept video; it’s just poorly done. Given the context of the video, the project, and the group it’s coming out of, this is not something they just whipped up last week. Given the brainpower of the team behind it, from whom we all know great things can come, I don’t see any reason to assume they didn’t create exactly the video they wanted to create.

      If the goal is to inspire thought and comment, they should have been more careful with their video design. They are presenting a direction they can’t achieve. This is not a marketing commercial (remember the Microsoft book, and the Nokia glasses, both from a few years ago); this is a video from Google’s super-smart-super-secret labs, the folks who brought us the self-driving car.

      Nothing in this video is “AR”, btw. It’s a simple heads-up display. They simultaneously show some nice, well-designed hardware (prototypes? mockups?) that has a display off to the side, and then show a video that covers the entire field of view of the user with 2D (not 3D AR) content. They create the impression that they think it’s a good idea to completely cover someone’s view of the world with a mostly opaque map. Really?

      BTW, the step from 2D HUD to AR is neither inevitable nor quick; the technology requirements are very different. The hardware mockups are well designed for an unobtrusive HUD, not for AR.

      And I think the terminology difference matters. It’s impossible to have a discussion about something if people use the same words for different concepts!

      I don’t think it’s a launch video, and I think it’s creating the wrong kind of buzz and expectation. Perhaps I’m just too close to the subject matter. I don’t think that most people will take this as a totally fake concept video, regardless of what they say. They’ve clearly put a lot of time and thought into this; I wish they’d made a better video that would match a realistic vision of what would be possible in the next few years. They certainly could have!

  2. [...] Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, concurs: “You could not do AR with a display like this. The small field of view, and placement off to the side, would result in an experience where the content is rarely on the display and hard to discover and interact with. But it’s a fine size and structure for a small head-up display.” [...]

  3. [...] who’re warning that what we saw in the promo video may not be anything like the real thing. In a blog post, Blair MacIntyre, the director of Augmented Environments at Georgia Tech, says Google’s video has [...]

  4. Jez says:

    Hi Blair, thanks for getting back to me.

    I’m no expert in this area by any means, so please correct me if I’m wrong, but, apart from the field of vision issues with the video – compared to the first-gen glasses that are currently being developed – I can’t really see any elements of the video that are crazily far out there in terms of what could feasibly come out in the next few years. Most can be achieved with the tech that comes in mobiles now – gyros, GPS, etc. – and wouldn’t need heavy video processing. Give it a couple of generations and faster connectivity, and I can’t see why any of this is impossible. I’m not sure where we are in terms of full field of vision displays, so if we’re way off on them, that could slow the adoption and development of the area. Assuming adoption rates are reasonable, though, wouldn’t rates of hardware development to suit this sort of device’s requirements increase, making this and much more potentially attainable? Seems like this area could also do wonders for the field of robot vision.

    Why is there necessarily a line between a HUD and AR – when does one become the other? Don’t you consider the overlay of a map onto a scene to be augmenting reality? Or when someone develops a blue-sky app that detects the sky and replaces it with one of your choice? Seems like a foggy, gradual, but pretty natural transition rather than a sharp distinction. What’s holding back that transition if, as you say, it’s uncertain? I would have thought that, given the addition of a few more gizmos (IR? depth perception, really good orientation detection, and the pretty shit hot vision processing that you might get on the Gglasses 4), you’re getting close to something really cool in terms of capability.

    Also, you’d kind of assume that those full overlays are at least controlled in some way rather than popping up unbidden. Assuming they are, why is that any more weird than looking down at your phone for a second to check where you are on the map?

    • Blair MacIntyre says:

      Much of what is in the video is possible, I would agree: when it’s situated on a little display up and to the side of your eye, so you can glance at it when you notice it changing, it would even be useful. I talked to various reporters who picked up the story, and some tried to relate that. The issue I have is that the video doesn’t portray it as “off to the side”, and creates a very fake sense of the experience. Worse, they carefully align 2D stuff with the relevant bits of the world to make it appear that the little annotations are linked to the world.

      Going from a small (~15 degree) to a large (90 degree or more) FOV, in a wearable, sleek package, will be very hard. People have been trying for more years than I’ve been doing research (Sutherland built the first see-through HMD back in the 1960s, around when I was born!).

      The line between AR and a HUD is pretty simple. AR implies content aligned with the user’s perception of the world; a HUD is 2D info (perhaps related to the world around you, just as 2D stuff on your phone might be related to the world around you). If it’s not AR, you don’t even need a see-through display, and it may be worse to have one (from a usability perspective) because of the false impression of alignment, and because the content will be harder to read (contrast, clutter, visual confusion, etc.). Overlaying a map on the scene, when the map is just floating in a fixed location relative to the display, is not AR; modifying the sky is (and it would be cool, especially if you synchronized the color to your music ;)

      More gizmos == more bulk and more complexity. You won’t be getting a Kinect into an HMD any time soon.

      Don’t take me for a pessimist; I fully expect to see real AR HMDs before I retire, and look forward to using them. I like the physical construction of the Glasses, and think I would actually use them (if I can code for them). They aren’t AR, but neither is my purely mechanical watch … and I like it and find it quite useful.

  5. Ross Graeber says:

    I’ve been mulling this over as well. They could create some fantastic ambient notifiers with a HUD plus audio. Subtle color changes or motion in the periphery could create both calm and active alerts for a user to adjust their focus when they want.

    But your point remains: it won’t look like this if it’s a traditional LCD or LED HUD projected on some optics. We can’t look at two places at once. So this whole project video is as annoying as it is exciting.

    At the same time, there are some intriguing patents at Google around Virtual Retinal Displays (VRDs):

    http://www.google.com/patents/US5659327

    Do you think that, if they were hiding a VRD behind the ear, it could be a different picture?

    Though, I’d think that speculating about a consumer-priced, tested, portable, and viable VRD might be me starting the rumor mill off with no gas in it.

    It would make for some fun ads poking at Apple’s Retina Display, though.

    • Blair MacIntyre says:

      It would be really interesting if they had a VRD in there. It still wouldn’t look like the video, I don’t think, but it would solve some of the ambient light issues, etc.

  6. [...] Tech’s Augmented Environments Lab director Blair MacIntyre [...]

  7. [...] Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, expressed a similar view. He argues that placing the screen above the line of sight makes it hard for users to notice. MacIntyre also thinks Google has clearly set the bar too high for itself, saying bluntly: “In one simple fake video, Google has created a level of hype and expectation that their hardware cannot possibly match.” [...]

  8. Nicolas says:

    Brother has already released a “Virtual Retinal Display”. They call it the “Airscouter”. Apparently it was already possible two years ago to do the things Google showed in their video. The Brother glasses have an 800×600 resolution, and you don’t need to focus on the projected image. The projected screen is equivalent to a 16″ screen at a distance of 1 meter. Also, their device isn’t much bigger than Google Glass.

    Maybe in another two years it will be a 20″ virtual screen with HD resolution…

    • Blair MacIntyre says:

      Yes, others have been developing VRDs, although I wouldn’t say Brother’s has been “released” (Engadget just posted an article this morning about them finally releasing it in Japan this summer). Like Google’s, the Airscouter is a really small image (16″ at 1 meter is really small); there’s a reason these companies never say what the effective field of view of their displays is, after all, but prefer to quote weird sizes that tend to mislead folks into thinking the displays are bigger than they are (“It’s like a 100″ display, from 20′ away”). I’d also disagree with your characterization that they aren’t much bigger than Google’s prototype; they are huge! (Check out the picture in the Engadget article above.)
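
      To see why those quoted sizes are misleading, here is the arithmetic; the conversion is just geometry, and the two marketing-style quotes above are the inputs.

      ```python
      import math

      def fov_deg(diagonal_m: float, distance_m: float) -> float:
          """Effective angular (diagonal) size of a screen seen from a distance."""
          return math.degrees(2 * math.atan((diagonal_m / 2) / distance_m))

      INCH, FOOT = 0.0254, 0.3048

      # "Equal to a 16-inch screen at 1 meter":
      print(fov_deg(16 * INCH, 1.0))         # about 23.0 degrees diagonal
      # "Like a 100-inch display, from 20 feet away":
      print(fov_deg(100 * INCH, 20 * FOOT))  # about 23.5 degrees, same ballpark
      ```

      Either way you quote it, you get a window of roughly 23 degrees, a small slice of a human visual field that spans well over 180 degrees horizontally.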

      That’s all beside the point, though. My comment about Google’s project can be summarized this way: their display wouldn’t cover your vision with virtual content as their video implied, and the interface they present isn’t really appropriate for display in the center of the user’s visual field. The Airscouter could put content in front of one eye, but if you were going to design an interface for a display that blocks your vision, you wouldn’t want the interface in the Google video.

  9. [...] Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, agreed, and got to the heart of the matter: “Is it augmented reality, or is it location-based notifications? It’s going to generate ideas in people and expectations that just might not match.” [...]

  10. [...] Blair MacIntyre, “Oh no, Google, why that video?” [blog post] [...]

  11. [...] seen in the photos cannot give the experience the video is showing.” Also in the Wired article, Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, raised concerns that Google is raising [...]

  12. My first reaction to the video was the same as yours. A bunch of us (US Navy contractors) did a stereo see-through AR HUD project in 1993. We tracked a maintenance object in 3D with computer vision and rendered corresponding 3D reticles. The issues that you discuss (FOV, light-dimming, focus, even auto-focus based on tracking vergence) are very important for this kind of decent see-through AR system. Blair, I’m very glad you brought these points up!

    So, to those who don’t know: Augmented Reality means many things to many people. To the original researchers it meant that the display is augmented onto the world; for some it was just extra information; for others it is more: a one-to-one mapping of the real world (in the background) with 3D-aligned imagery that fits the real world like a glove (with appropriate holes and transparency). This is the aspect that could/should have been shown in the video – like a wire-frame model of the Moscone Center with a wire-frame landing zone superimposed on the actual one. That would have been AR!

    Your discussion with the others in the comments on VRDs is interesting too. You probably know about Microvision’s earlier work on that, yes?

  13. James Michaels says:

    Interesting post, Blair. It took me a little while to envisage what you are suggesting, but I get your point now.

    What interests me about this project, though, is not so much that it is almost certainly not going to be an AR project – that doesn’t bother me at all; I am happy for the final product (version 1.0) to be a 2D HUD – but the fact that they chose to use that tiny little display just above the eye. Wouldn’t it make more sense to build a larger display that covers the whole eye? That, for me (not the camera, etc.), is what I would like as a consumer. I’m surprised that they can build a 2cm × 2cm display yet not create one the size of a standard glasses lens. (Perhaps I don’t understand the optics here. Does this somehow project onto the eye, creating a larger image?)

    My impression, and please jump in anytime to correct me here, is that the finalized display will be about a thumb-sized viewing area over the top of the eye. This seems like it would be pointless for a large number of different activities, the main ones being viewing the web or playing games, which is what most people buy a smartphone for, in my opinion. Also, viewing YouTube at that size, or watching a movie with subtitles (or using the translation app that is being touted so highly on the internet right now), would be difficult. There is also the element of eyestrain. Wouldn’t the design of the display require the user to be constantly looking upwards when using it? That doesn’t sound good for one’s eyes.

    Basically, I am rather excited by the potential, but all I see from Google is a camera that one wears on one’s head, with various features similar to a smartphone, yet a screen too small to be usable. As a consumer and a lover of technology, I am highly saddened by that concept, but at the same time I respect that this is not a simple creation. There is a ton of R&D going on, and Google is obviously trying to make this a workable project. That said, as the display cannot live up to any reasonable expectations I might have of a smartphone-like device (such as internet browsing, gaming, even messaging in long form or email), it is hard not to feel that this will go down as a product failure for Google.

    (I am also quite fascinated by the concept of a VRD – thank you for the education, folks. I have my doubts that they could miniaturize the technology; however, the device as it stands looks like it could accommodate a small projector above the eye for this purpose. The lens itself could be a temporary measure to avoid showing their trump card – or not.)

    • Blair MacIntyre says:

      Hi James. I agree with you; I think it will be very useful as a 2D HUD. I am excited to see how that succeeds, even though it’s “not AR”.

      Regarding the display: I think they actually chose the right size, for two reasons. First, the optics required to build a larger display, especially one that wouldn’t cause weird visual artifacts when viewed through on a regular basis, aren’t really “there yet”. Bigger would be bulkier and less comfortable, and would thus be the kiss of death for acceptance. Second, regardless of the quality, I think the safety issues surrounding a display that covers your eyes are immense. If you put it on and took it off only when you wanted to use it, then covering your visual field would be fine. But they want to build something that people will leave on all the time. And to do that, you either need to be able to sense an incredible amount about the world around you (to ensure you don’t block things people need to see, etc.), or you need to sit it off to the side, out of the way. The former isn’t possible, so the only practical option is the latter.

      The use case for these displays is not anything you use your phone for right now. It’s not about playing games or watching videos. It’s about the system constantly displaying micro-bits of might-be-useful information based on your location, activity, who’s near you, what’s going on, and so forth. It’s about glanceable content, not “deep” content.

      I expect that over time the system will pair with your phone and become a secondary display to the main phone; I don’t believe they can build any reasonable form of interaction the way they are currently trying to, and the phone (with its touch screen) will be a beautiful complement to it.

      But, regardless of where it goes in the future, the Glasses are about entirely new forms of interaction, and new kinds of applications. They aren’t about moving what you do on your phone or laptop onto the display (except for those things that currently suck on the phone, and would be better up there! :)

  14. James Michaels says:

    Thank you. I really hadn’t thought about it that way, and you’ve really helped me to see how this might work in reality. Good post and comment!

  15. Marc Z says:

    I’m not sure this type of technology can be that successful. Yes, it would be cool because it’s new, but compare this to the movie industry. People still go to movies all the time because of the entertainment atmosphere. We have big 60″ TVs at home, and some even have better theater rooms, but we still want that atmosphere. I would love to test Google Glass out, though.
