What is an MR blog?
How do I make this blog usable and useful in MR? What does that even mean? It might seem like a silly question, but it's actually a serious one. I've been working on web-based AR for quite some time (I started with the Argon project at Georgia Tech in 2010, and now work on WebXR at Mozilla), and one of the things I've long wondered about is what "the things" people will do with AR and VR on the web actually are.
Certainly, focused applications designed for AR and/or VR should be possible (playing games, experiences for training and simulation, and other purpose-built things); I can imagine adding AR or VR content into an entry on this blog, for example. But, if you wanted to experience my website from inside AR or VR, what would that mean? And if I wanted to have some content that is linked to certain areas in AR (e.g., a blog post tied to a location) be available on the desktop in 2D or in VR, how might that be created?
Pondering this raises some interesting ideas about how to mix media for presentation in MR and 2D. For example, perhaps I should create a VR room for my site and lay the various 2D bits out in it. A room for the portfolio that looks like an art gallery, with pictures on the walls and text summaries presented nearby? A linear, temporal presentation of 2D blog posts, or the posts collected in a virtual newspaper? Or perhaps just present the 2D content in a grid of elements around the viewer, which can be touched and brought to the front? Or downplay the 2D content entirely and make elements with 3D content more prominent, so that the AR and VR presentations are similar?
Which raises the question: what kinds of tools do we need for creating "AR or VR blog posts" in the first place? If I wanted to do some commentary at a place, are panoramic photos and videos enough, or just the best I can do right now? What if I want to annotate those photos and videos, and link them together? Or perhaps we need photogrammetry tools that let me "scan a space" and then easily annotate it in 3D, so that the scanned site is usable in VR and (when possible) just the annotations are presented in AR?
But AR presentation assumes we can accurately know where the user is when they return to a site; if we can't solve this "last meter problem" (localizing people to well below a meter), can we even do AR that actually annotates spaces? And even if we can solve it, perhaps I need to be able to present the scanned model anyway, if the interesting physical content was ephemeral or movable (e.g., a concert at a venue, or a protest at a street corner). Should viewers be able to toggle between AR and this "sort of VR" to see the differences?
I'd love to know other people's thoughts on this question of how people create MR stories and commentary, as one form of "the long tail" of web-based MR content. Part of what's holding us back is tools, but a big part of it is not having a clear picture of what this might even be like.
I was thinking about Bluebrain's National Mall project from 2011, where they released their album as an app so that each track could only be heard at particular locations around the National Mall in Washington, DC. http://bluebrainmusic.blogs...
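That kind of location-gating can be sketched with ordinary web-platform geolocation plus a great-circle distance test. The coordinates, radii, and track names below are purely hypothetical, not Bluebrain's actual data:

```javascript
// Great-circle distance between two lat/lon points, in meters (haversine).
function distanceMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Hypothetical geo-anchored tracks: each is audible only inside its radius.
const tracks = [
  { title: "Track A", lat: 38.8893, lon: -77.0502, radius: 150 },
  { title: "Track B", lat: 38.8913, lon: -77.0365, radius: 150 },
];

// Return the tracks audible from the listener's current position.
function audibleTracks(lat, lon) {
  return tracks.filter(
    (t) => distanceMeters(lat, lon, t.lat, t.lon) <= t.radius
  );
}
```

In a real page, the listener's position would come from the Geolocation API (`navigator.geolocation.watchPosition`), and the hard part is everything this sketch leaves out: GPS accuracy, what happens at region boundaries, and what "nearby but not here" should feel like.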
It strikes me that content could be linked not only to specific locations but to certain classes of locations. For example, what might go in a blog that could be accessed from any toilet cubicle in the world, but only from a toilet cubicle?
I agree, broadly being able to associate content with all kinds of context (time, location, objects, people, activity, etc.) would be great. I could cause different parts of the blog to appear in different places, without their being "3D" and attached. But I still feel like there are some big holes in getting from here (2D) to there ...
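One way to imagine that association, sketched as data: each fragment of the blog declares the contexts it binds to, and a matcher surfaces the fragments relevant to the viewer's current context. The context vocabulary and fragment names here are entirely made up for illustration, since what the real vocabulary should be is exactly the open question:

```javascript
// Hypothetical blog fragments, each tagged with the contexts it binds to.
// An empty context object means "matches anywhere".
const fragments = [
  { id: "intro", contexts: [{}] },
  { id: "gallery-notes", contexts: [{ place: "art-gallery" }] },
  { id: "cubicle-post", contexts: [{ placeClass: "toilet-cubicle" }] },
  { id: "evening-note", contexts: [{ timeOfDay: "evening" }] },
];

// A required context matches when every key it specifies equals the
// viewer's current value for that key.
function matches(required, current) {
  return Object.entries(required).every(([k, v]) => current[k] === v);
}

// Pick the fragments that should surface in the viewer's current context.
function visibleFragments(current) {
  return fragments.filter((f) => f.contexts.some((c) => matches(c, current)));
}
```

The same data could drive a 2D page (show everything, grouped by context), a VR presentation (place fragments in rooms), or AR (surface them only when the context is physically true), which is one small step toward closing that 2D-to-MR gap.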