One of the biggest buzzwords in technology at the moment is the idea of Augmented Reality, or AR as it's become known to its friends. Smartphone users will know it through apps like Google Goggles or Street View on the G1, both of which involve waving your phone out in front of you and looking at the world on your 3-inch LCD display along with a few computerised annotations.
This is undeniably AR, but it's hard to see what all the excitement is about from those examples alone. As it happens, the scope is much bigger, and one way to answer the question in the title is to get a better understanding of what we're already using to change how we see the world in the present.
Augmented Reality has actually been sitting quite comfortably in the consumer realm for some time now. Any time you see a real-world image enhanced by any kind of computer-generated content overlay, you're looking at AR. For example, when Alan Hansen on Match of the Day stops the action during football analysis and starts highlighting a team's shambolic defending with circles, arrows for where the forwards were moving and great yellow stripy zones of space that no one was watching - he's using AR. In this case, the reality is the recorded footage of a football game and he's augmenting it using some kind of mouse and computer setup.
So to that extent, AR is already here and very much a part of our lives, but perhaps not something that each person on their own has ever used. That's where all this is starting to change, though, and why the name AR has turned up on the agenda.
There are three ingredients you need for AR and, until recently, normal people have not had access to all three in any kind of meaningful way. You need a real-world situation, you need some extra information to augment it with, and you need a device for viewing it on. In the case of our football example, the real world is the recording of the football game, Alan Hansen is the database of information and either his computer or your TV is the viewing device.
What's changed recently is that this database of information has become more readily available. Alan Hansen, although admittedly knowledgeable about the beautiful game, is not necessarily the fountain of all football. In fact, there is a rather marvellous invention that's absolutely full of huge amounts of free data that we now have access to whenever we want – the Internet – and computers to view the AR through.
But with just these tools alone, it's been no real surprise that its use hasn't exactly exploded. Much of the way AR has worked has meant holding some object up to a webcam, which a piece of pre-installed software can recognise and then call up the right information concerning this object either from the Web, or more likely from a database that comes with the application.
As a result, you get very clever and entertaining, but ultimately pointless examples of AR such as in the clip below. You don't have to watch it all to get the point.
Without meaning any disrespect to Total Immersion and what they've done, they're essentially using AR as a marketing gimmick and none of it is particularly useful to the consumer sitting at home in front of their machine.
“Well, right now most mobile phones simply aren’t equipped with enough horsepower to do some of the face, image and scene recognition tasks possible on the PC, but reality happens all around people, not just at their desk where the computer is. So, reality must be in a position to be augmented any time, any place for AR to really kick off”.
"Starting this year, mobile devices are appearing that are up to this task of doing AR properly and a healthy installed base will be in place well before 2015. The mobile phone, and, more importantly, the person carrying it, are the final destinations for the AR value proposition and will be the 2015 home of the majority of AR applications”.
Of course, the other bonus of our new and improved pocket computers, rather than just their mobility and connectivity, is that they have more than just cameras to get a measure of their surroundings. There are microphones that can detect wind or sound, accelerometers for movement, digital compasses to tell which direction we're facing and proximity sensors as well. Now we're in a place where we can really experiment with AR on a personal level and explore our worlds in a whole new way.
At the moment, though, it's just the beginning of all this. The technology is all there. We're waiting for the application developers to realise the potential of what's already around and start writing programs that can detect and understand parts of our surroundings, call up the appropriate information from the Internet - without bothering us to do so - and then display it on the screen in an effective and user-friendly way.
Fortunately this is becoming easier and easier with the increased richness of cloud-accessible information, a more reliable mobile broadband infrastructure to connect to the data and smartphone platforms like iPhone OS and Android, which developers are getting excited about writing for, both in terms of the creative freedom they offer and the potential financial gains too. So, now that the framework is arriving for AR, what are its actual uses going to be for us?
AR expert Kanwar Chadha, CMO of wireless and location specialists CSR and SiRF, points to a number of important environments where the public will very quickly see the benefits of using a handset or pocket tablet to enhance the 3D world.
“Navigation is a big first area for AR. The normal map systems we have at the moment are just not intuitive. The trouble is that the human mindset does not relate to streets, but to points of interest. We associate with areas rather than the road names. People have traditionally navigated with landmarks and not streets. If you go to places like India or Japan, the buildings aren't even numbered sequentially. They give a reference to when they were constructed rather than an order. Traditional methods of navigation do not work here. So, what AR offers here is a richer visualisation and a better perspective of what's in front of you in a way that real visual data doesn't”.
The issue that Kanwar points out is that many of the POIs we're looking for are hidden. With an application on your phone that recognises images of street scenes through the camera and ties them in with, say, the Google Street View database, it could then show you exactly where you are on your device, which direction you need to walk in or even strip away the buildings in front of you to leave the POI to which you're trying to navigate. That's all pretty basic stuff that can be done now.
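To make the basic stuff concrete, here's a minimal sketch of the core sum an AR browser of this kind performs: work out the compass bearing from your GPS position to a POI, compare it with the direction the camera is facing, and decide where (if anywhere) on the preview to draw the marker. The function names, the 60-degree field of view and the 320-pixel screen width are illustrative assumptions, not taken from any real app.

```python
import math

def bearing_to_poi(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user (lat1, lon1)
    to the POI (lat2, lon2), in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def marker_screen_x(poi_bearing, compass_heading, fov_deg=60, screen_width=320):
    """Horizontal pixel position for the POI marker, or None if the
    POI is outside the camera's field of view."""
    # Signed angle between where the camera points and where the POI lies
    offset = (poi_bearing - compass_heading + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None  # off-screen: nothing to draw
    return int(screen_width * (0.5 + offset / fov_deg))
```

A POI due east of you while you face east lands dead centre of the preview; as you turn away, the marker slides off the edge and disappears, which is exactly the behaviour early AR browsers exhibited.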
Education and tourism will quickly follow as growth areas for AR in the next few years. Think how artefacts in museums could come to life when you hold up your pocket device and view them. A specific app written by the British Museum or even an independent developer could pull in videos, annotations and images of a Roman Legionary when the camera recognises the empty suit of armour in front of you. Suddenly exhibitions come to life.
The same is true in the classroom. How much more informative and inspirational would 3D graphics or footage of the human body and its internal organs, muscles, bones and tissues be, in action on your device, rather than just flat and still on the page of a textbook?
AR will doubtless become an interesting social tool as well in the very near future. There are enough tagged photographs of people on Facebook, Flickr and all the others to be able to pull in information on strangers you meet whilst out and about, just by holding your camera up to them and letting some face recognition software do the magic and search the social networks. We might want to be even more careful about privacy come 2015.
According to Kanwar, though, it is not the why of AR so much as the how that will take it from tech geek toy to mainstream mainstay.
“I don't really believe in killer apps. In fact, I quite oppose this way of thinking. Technology becomes mainstream when it becomes seamless. The user doesn't want to be troubled by having to move the data from A to B to C. It all has to work at the touch of a button or less. Making it accessible in a user-friendly way is where the challenge lies”.
“The test of a mainstream technology for me is when my wife and kids don't worry about why it works or why it doesn't and in the next two years AR will clearly have reached that level. The first flush of devices will be clunky at best but, as the content becomes available, there will be three stages”:
“The first will be the interested phase where people are aware of it but not too fussed, the second will be the immersive phase where people are asking for it in museums, schools and all sorts of other places and the third will be the phase - probably in more like 10 years' time - where we take it for granted and rather than asking if environments offer it, people will be asking why on Earth they don't”.
As the sophistication of the software grows, the devices we use to augment reality will also diversify. Why just stick to the mobile phone or pocket tablet? All you need is something with an internet connection, and one obvious space that already exists, so far hugely untapped, is the car.
In fact, as futurologist and mobile service specialist Tim Haysom of the Open Mobile Terminal Platform (OMTP) points out, the car is home to probably the most powerful potential AR device out there at the moment.
“On a mobile phone, the sensors we have might be the camera and accelerometer, but if you move that onto the car there are a whole load more. There are outside temperature sensors, engine temperature sensors, speed sensors, direction linked in with satnav - you have a massive, massive amount of information coming in from the car, which is already used for engines to feed back and regulate themselves. There's no reason why these can't detect real-time information about our environment and be used to pull in relevant augmenting information from the cloud”.
"Imagine your dashboard as a touchscreen and everything that's running on there as a web application - a little widget. So, it's a web app that, 99% of time, all it's doing is pulling speed information from the car and displaying it in a customised graphic of your choice - digital, analogue, Times New Roman font or whatever. But, it might also link in through GPS or a satnav system and mix that information with data on what the speed limit of the road being driven on is. So it might then light up a different colour on the speedo to let you know you're going too fast”.
One obvious barrier that pops up, of course, is that a lot of the potential is going to require your car being connected to the Internet, but a rip-roaring LTE link-up is not the order of the day. For a start, the connection doesn't need to be that fast: as Tim points out, we're talking about tiny bits of information here, a lot of which could be handled by GPRS alone.
There's plenty of scope for coping with patchy coverage as well. Local information can be cached in case the link goes down in the further reaches of rural areas. Not only that, but coverage should be more or less ubiquitous come 2015; you can read more about that in our mobile networks of the future piece later in the week.
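The caching idea is a standard fallback pattern: refresh a local copy whenever the link is up, and serve that copy when it isn't. A minimal sketch, assuming a JSON feed of local information and a hypothetical cache file in the temp directory:

```python
import json
import os
import tempfile
import urllib.request

# Hypothetical location for the last good copy of the local data
CACHE_PATH = os.path.join(tempfile.gettempdir(), "ar_local_info.json")

def fetch_local_info(url):
    """Fetch augmenting data over a flaky link, falling back to the
    last successfully downloaded copy when the connection is down."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            data = json.load(resp)
        with open(CACHE_PATH, "w") as f:
            json.dump(data, f)          # refresh the cache while the link is up
        return data
    except OSError:
        # Rural dead spot: serve the cached copy if we have one
        if os.path.exists(CACHE_PATH):
            with open(CACHE_PATH) as f:
                return json.load(f)
        raise
```

For the trickle of data a dashboard widget needs, even a cache that is minutes old is usually good enough, which is why GPRS-class links are workable here.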
Once connections are even better, the applications can become more involved as well. We could have head-up displays on our windscreens, as fighter pilots already have, augmenting our view of the road. Satnavs are all well and good, but they don't tell you what's around the corner. A HUD could display the real curves of the road round a bend and help us to drive in foggy and other treacherous conditions. It could also pull in traffic information and show the real-time positions of cars in front of us or those coming in the other direction.
"All of the playing cards are laid out to do this”, says Haysom. “There's nothing in what I've said that's magic. All of it is pretty much there and it can all be developed now. Getting it built into your Fiat Panda I think is a long shot for five years because manufacturers have long lead times. However, if you were taking about a top of the range Mercedes in five years, to give an example of a high end car, I would say yes. Yes, why shouldn't they have spent the time and effort to make sure they've got it right”.
If these ideas and possibilities for the car of 2015 tickle your fancy, then take a look at our 'How will we drive in 2015?' Future Week piece in the coming days but, of course, AR goes beyond this device as well. As AR guru Ken Blakeslee points out, the technology for reading and viewing the augmented data must eventually be so subtle as to be only detectable by the recipient themselves for a smoother, less self-conscious experience. Aural augmentation is the easy part.
“Most people these days walk around wearing ear buds or Bluetooth headsets to listen to music or be at the ready for that important call. There could then be a direct alert, for example, that informs a hay fever sufferer that the pollen count is getting unusually high”.
The tricky part is getting it to our eyes in a less obtrusive manner than having to hold up and wave around our expensive phones in an unfamiliar city in order to work out where we're going.
“Futurists predict that contact lenses that act as a visual overlay will appear in 10 to 20 years but, until then, video eyewear glasses are available now from companies like Vuzix for example, that will fill the bill nicely and I am quite aware of technology waiting in the wings from companies like this to enable normal looking fully see-through AR glasses starting in the next year or so”.
Once such devices are in place, then the possibilities start to become mind-blowing. Within five years there's no reason why we shouldn't be out there jogging in our Nike Sport glasses, which bring up information on our heart rates, pulled in from sensors against our temples, and running times in front of our eyes as well as even adding a visual warning for pollen information if that's important too.
Start connecting them and you can even add mapping data as well as real-time progress of other people you're racing against. The sky's the limit - or perhaps the need to still be able to keep your eyes watching where you're going is.
Perhaps the ultimate in AR prototype technologies that exist at the moment is the SixthSense setup shown off at the TED conference in December. The camera rig is still a little on the clunky side, but it really gives an excellent idea of how this kind of seamless integration of AR into our lives could take place.
If you're still not convinced that this kind of thing is useful in the future, then you can take AR into the professional world – firemen with HUDs attached to their helmets with infra-red temperature views and information on building structure and safety, or surgeons with blood pressure, oxygenation levels and heart rates all displayed as they look down at the patient without having to ask or turn away. These uses will save lives. They can even help less experienced professionals perform more complex procedures.
Even now, Columbia University is developing an interesting application known as Augmented Reality for Maintenance and Repair that could see us all doing the repairs on our own vehicles, guided by a set of goggles and a database of parts, procedures and engines. The point is that it's impossible not to see that this stuff is going to be huge, not in five years' time, but any minute from now. In fact, according to Ken Blakeslee, by 2015, we won't notice it's there any more.
“Consumers won’t be wanting, buying or using a service called 'AR'. AR will go the way of AI (Artificial Intelligence). Never referred to, just part and parcel in some way of most every valuable application used, but embedded invisibly”.
So, now that we have the pieces in place, it's time for this future to begin.
You can find out more about the shape of the future of Augmented Reality from Ken Blakeslee coming soon on Pocket-lint.
If you enjoyed this article, then head over to our Future Week homepage where you'll find a collection of features on what gadgets will be like in the year 2015.