And now for a bit of speculation… The latest episodes of Sherlock, the current BBC/Masterpiece series starring Benedict Cumberbatch, will soon be seen as the successor to Star Trek. Star Trek is now hailed for predicting so many of our electronic devices; Sherlock is predicting augmented reality with its Mind Palace.
The Mind Palace is a place Sherlock can go inside his head to visualize things he’s seen and associate them with other knowledge. That other knowledge appears as labels superimposed over the visual image. It’s very neat to see.
Several nice augmented reality applications already exist. One of my favorites is Star Chart. I can hold up my iPad to a clear night sky, and it will display the names and pictures of the constellations and heavenly bodies wherever I point its camera. But these things are only the start.
Google is busily putting the Mind Palace together. Its Project Tango has just released a smartphone-sized device that can do simultaneous localization and mapping (SLAM). That is a lot less clumsy than carrying around an Xbox with a Kinect sensor. It lets you map everything you see: the interior of your house, and, beyond that, anywhere you go.
Once you have this kind of map, you, or somebody with access to your map, can put labels on it. You can put a virtual note on your special friend’s bedside table. A supermarket, for example, can give you a SLAM map of its premises: you type in what you want to find, and the map superimposes the product’s location on your augmented vision, whether that’s on a smartphone, a tablet, or Google Glass.
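To make the supermarket example a little more concrete, here is a minimal sketch, in Python, of what the lookup-and-overlay step might look like. Everything in it (the StoreMap class, the toy pinhole projection, the coordinates) is made up for illustration; a real AR stack would get the camera pose from SLAM tracking and do a proper 3D projection.

```python
# Hypothetical sketch: a store's SLAM map as labeled 3D points, plus a lookup
# that projects a product's location into the shopper's camera view.
# All names here (StoreMap, project_to_screen, etc.) are invented.

from dataclasses import dataclass

@dataclass
class Label:
    name: str
    x: float  # meters, in the store's map frame
    y: float
    z: float

class StoreMap:
    def __init__(self, labels):
        self._labels = {lbl.name.lower(): lbl for lbl in labels}

    def find(self, query: str):
        """Return the labeled point matching the shopper's search, if any."""
        return self._labels.get(query.lower())

def project_to_screen(point, camera_pose, focal_px=500, cx=320, cy=240):
    """Toy pinhole projection: map-frame point -> pixel coordinates.

    camera_pose is assumed to be just the camera's position (no rotation)
    to keep the sketch short; real AR would use the full pose from SLAM.
    """
    px, py, pz = camera_pose
    dx, dy, dz = point.x - px, point.y - py, point.z - pz
    if dz <= 0:           # behind the camera, nothing to draw
        return None
    u = cx + focal_px * dx / dz
    v = cy + focal_px * dy / dz
    return int(u), int(v)

# Usage: find "peanut butter" and get the pixel where the overlay label goes.
store = StoreMap([Label("peanut butter", 12.0, 1.5, 30.0),
                  Label("olive oil", 4.0, 1.2, 18.0)])
hit = store.find("peanut butter")
if hit:
    print(project_to_screen(hit, camera_pose=(10.0, 1.6, 0.0)))
```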
Goo-goo-goo-ga-ly Eyes
Since eyes are my favorite search device, I’ll go with Glass. By combining your maps with everybody else’s, or at least with strategic ones such as satellite maps and the maps of people you choose, you should soon be able to turn all the reality you see into augmented reality, with Mind Palace-like superimpositions over your vision. Once the Internet of Things gets going, all kinds of devices will be able to feed information into your map.
Add tomorrow’s Google Now into the mix, and you can be supplied with whatever information your linked computers want to give you. Perhaps you will be able to pin things to it, Pinterest-style, so that categories of things, or patterns, you’ve decided matter show their labels whenever you see them. Perhaps you will be able to pick among categories of labels. After all, you can tell Glass what to look for.
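Picking among categories of labels could be as simple as a filter over the shared map. Here is a toy Python sketch, with invented label data, of what “show me only the categories I’ve pinned” might look like:

```python
# Hypothetical sketch: keep every label in the shared map, but only render
# the categories the wearer has pinned. All data here is invented.

labels = [
    {"text": "constellation: Orion",  "category": "astronomy"},
    {"text": "peanut butter, aisle 7", "category": "shopping"},
    {"text": "note for a friend",      "category": "personal"},
]

pinned_categories = {"astronomy", "personal"}   # what you've told Glass to show

def visible_labels(all_labels, pinned):
    """Filter the overlay down to the pinned categories."""
    return [lbl for lbl in all_labels if lbl["category"] in pinned]

for lbl in visible_labels(labels, pinned_categories):
    print(lbl["text"])
```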
Sooner or later, you’ll be able to see your Mind Palace without having to visualize it with your mind. Google will do that for you.
Now, how far can this go? Sherlock is able to infer lots of things about somebody just by noting a detail of their expression, or fingernails, or dress, or an out-of-place speck of dirt. It’s not hard to imagine combining facial recognition technology with kinesics to turn you into a “Lie to Me” expert. You’ll get a label on a liar’s forehead (within certain limits of probability).
And what else? Find anomalies in your map and ask for possible explanations. Isn’t that what Sherlock or the guy in Psych does? Only instead of your lousy memory, you’ve got access to a lot, a LOT, of indexed information. Let the crowd do the indexing.
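What might “find anomalies in your map” look like in code? Here is a rough Python sketch under the assumption that both the stored map and today’s scan arrive as sets of occupied grid cells; it just diffs the two and groups neighboring new cells into clusters worth asking about. The representation and the function name are invented for illustration.

```python
# Hypothetical sketch: flag "something here that wasn't here before" by
# diffing today's scan against the stored map. Both are assumed to be sets
# of occupied voxel coordinates (integer grid cells).

def find_anomalies(stored_map: set, current_scan: set, min_cluster: int = 5):
    """Return clusters of new occupied cells, ignoring tiny specks of noise."""
    new_cells = current_scan - stored_map
    clusters, seen = [], set()
    for cell in new_cells:
        if cell in seen:
            continue
        cluster, frontier = set(), [cell]
        while frontier:                       # naive flood fill over neighbors
            c = frontier.pop()
            if c in seen or c not in new_cells:
                continue
            seen.add(c)
            cluster.add(c)
            x, y, z = c
            frontier += [(x + dx, y + dy, z + dz)
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         for dz in (-1, 0, 1)]
        if len(cluster) >= min_cluster:
            clusters.append(cluster)
    return clusters  # each cluster is a candidate "ask for an explanation" item

# Usage: a single new 6-cell blob in an otherwise unchanged room.
room = {(x, y, 0) for x in range(10) for y in range(10)}
scan = room | {(4, 4, 1), (4, 5, 1), (5, 4, 1), (5, 5, 1), (4, 4, 2), (5, 5, 2)}
print(find_anomalies(room, scan))
```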
You, or your local surveillance camera, may be able to tell when somebody’s carrying a concealed weapon or a bomb. Build a database of how different fabrics hang when made into clothing. Differentiate between normal drape and the drape when there’s an object in a pocket or under a coat. Study the shape of the anomaly. Combine that with infrared measurements around the area, or with how the person is using their hands differently. And so on. Of course, mistakes can be made. So, maybe, don’t shoot.
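Here is a back-of-the-envelope Python sketch of that “combine the clues” step. The weights, the drape-anomaly score, and the infrared reading are all invented; the only point is that the output is a probability, not a verdict.

```python
# Hypothetical sketch: merge a drape-anomaly score from a (made-up) clothing
# model with an infrared reading, using a simple weighted log-odds rule.
# Every number here is invented for illustration.

import math

def log_odds(p):
    return math.log(p / (1 - p))

def concealed_object_probability(drape_anomaly: float, ir_delta_c: float) -> float:
    """drape_anomaly: 0..1 score of how far the fabric hangs from 'normal'.
    ir_delta_c: temperature difference (deg C) around the suspect area."""
    prior = 0.01                       # assume concealed objects are rare
    score = log_odds(prior)
    score += 3.0 * drape_anomaly       # invented weight for the drape cue
    score += 0.8 * max(ir_delta_c, 0)  # invented weight for the infrared cue
    return 1 / (1 + math.exp(-score))  # back to a probability

# Even a "strong" reading only yields a probability -- so, maybe, don't shoot.
print(round(concealed_object_probability(drape_anomaly=0.7, ir_delta_c=2.0), 3))
```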
Sticking with Sherlock plots, your AR device may eventually be able to help you in a fist fight. Perhaps it can give you an estimate of how long you have to get in the first punch, sort of like one of those countdown stoplights. Perhaps it can outline where you should punch or kick, combining knowledge of pressure points with a prediction of where the other person will be by the time your fist gets there. How far will people with access to this technology pull ahead of people without it?
I find my mind spinning off all kinds of scenarios based on what these technologies could do, and they go a long way beyond worrying about how much information you can fit on the face of a wristwatch, which seems to be about all the trade press can handle.
And the thing is, when you combine Glass with Project Tango, Google Now and the Internet of Things, we are building the underpinnings for all this sci-fi stuff at breakneck speed. This could start getting serious a lot sooner than reasonable people expect. Maybe most of it has already been done for the military.