Google Glass is Dead! Long Live Sirious iGlasses!

If you really want to sync with me on this subject, read this article first, where the early problems with Google Glass are relentlessly noted. Even better, watch this Saturday Night Live skit, which resonates with many tech embarrassments over the years. Otherwise, just admit the bloom is coming off the Glass rose.

It seems that what is happening to Google Glass is what has happened to several past technologies: ISDN, PDAs, video conferencing, even PCs and laptops. The closer they got, the farther away they looked. The devil in the details can't be seen until you see the details. The thing about the future is that it's easy to see the possibilities. It's the timing that kills you.

Somebody smart once said it’s like being on a mountaintop and seeing across to the next mountaintop, but forgetting having to climb down your mountain and up the other one to get there. It’s easy to see the publicity videos for Glass and imagine great things, without knowing whether the technology is there yet.

So Google Glass is on the way down its mountain. I don’t think the clever people at Google are going to mind that much. They took the sensible step of not betting the farm on this beta, and they will learn a gigantic amount from it. I even think I know what it will take to go up the other side.

The comparisons between Glass and the Newton are apt in more ways than their both being objects of ridicule. Both tried to reach the full potential of their devices right away, and they didn't get some of it right. In the Newton's case, there was no direct fallback position. That was left to other vendors. Google may be able to fall back (while appearing to move forward), but, if not, others will, and who knows who will get it right in the end?

It makes me think of something Richard Dawkins wrote about evolution, some time ago. He was talking about how eyes evolved, how it happened slowly over millennia, and how every incremental step in the process, even vision that was only perceiving a slight difference between light and shadow, must have had survival value. Otherwise, evolution would never have gotten to eyes.

Compared to the iPhone, RIM’s first BlackBerry (if it was even called that back then) was like that rudimentary eye. It looked like a pager and had, what, six lines of display, or was it four? You scrolled with a wheel. But it got my email to me before I walked into the office. The BlackBerry evolved, model by model, and pretty soon (at least in geological time) the President couldn’t live without one. And this halting progress provided real improvements step-by-step, at a pace that allowed the company to stay in business and then make money.

So maybe that is what will happen to heads-up displays (assuming there simply isn't some wonderful military product we can declassify). The next step may be a device that displays a few features simply enough to work well, with less battery drain, fewer (literal) headaches, less conflict with normal vision, and lower cost. The big question may be which features those are, and that is where all the real-world feedback Google is getting really matters.

One stumbling block remains, however. Sometimes you don’t realize how far back you have to go to find the right point to jump on the evolutionary curve. I got sucked into that once, with virtual reality. I tried to sell my boss on a virtual trade show, where people online would walk into a full-surround 3D trade show with booths and products and avatars. The technology wasn’t really ready, but, more fundamentally, there was an intellectual flaw that the wonder of VR had somehow obscured.

I got it some months later when I wondered why anybody would shop in a supermarket without signs over the aisles, or detailed labels on the cans. Basically, writing is a wonderful invention for finding and describing things, and giving it up for immersion into 3D representations of things is silly for many purposes, even if it’s great for movies and games. So now we have virtual trade shows that use what you see in Web browsers, and that has worked, more or less.

Is there a similar kind of blind spot in our fascination with heads-up displays? That's the question that intrigues me now. For example, Google has created some nice blind spots in the way it's positioned Glass.

Why should your heads-up display be part of a complete, standalone product? Glass is that, with a CPU, independent Wi-Fi connectivity, storage, and all the rest. That’s certainly great positioning for Google, who can present Glass as the successor to the smartphone and/or the tablet. It’s the next big thing! But maybe it’s the wrong thing; maybe we should distribute the components differently.

When Steve Jobs came back to Apple, he soon repositioned the Mac as the hub of users' multimedia devices. You would connect your camera, video camera, and microphone. You would feed in your CDs and, ah, connect your iPod. The Mac would be the place to work with all the material from those peripherals and to connect to the Internet. So if the Mac became the hub for your media, why can't the iPhone become the hub, or perhaps the heart, of your senses? And your heart could talk to your higher self in the cloud!

We could have iGlasses (or perhaps Sirious iGlasses). We could have iGloves. We could have iTeeth. I knows; I can’t forget iNose.

Now it's possible that Google may not be willing to reduce Glass to a peripheral, even though it already needs a phone to connect to cellular networks, and even though Google could address issues like size, battery life, and processing power that way. They may have a marketing vision they're not willing to give up. That's sort of what happened to the Newton. But Apple did manage to come back with the iPod, originally a true peripheral. Less was more.

BTW, I really like the idea of iGloves. They may be vastly better than an iWristwatch if they allow you to type just by making the motions with your fingers, on any surface, or maybe even without a surface. I’m not completely kidding about iTeeth either. One of our customers, Sonitus, already makes hearing aids that use teeth. But I digress.

Another competitor could be Fujitsu. Check out this video of Fujitsu technology making paper come alive, so you can edit an ordinary paper book just by touching it. What if you put some of that technology into glasses and made them a peripheral that works with Android?

Rethinking where to put what applies to other markets, too. Consider the i-mate, a Windows 8 smartphone that becomes your office PC when you put it into a docking station.

All I’m seeing in the media, however, are suggestions for fixing Glass. Congratulations, Google, you’ve got them enthralled. I can suggest stuff like that, too: put a little red light on Glass that tells people in front of you when it’s recording. Maybe even add a flash, suitable for video lighting or for self-defense. But coming up with ideas like that is not nearly as fun as thinking outside the frames.

Tim Haight
About the Author
I'm VP of Technology Services for CGNET. I love to travel and do IT strategic planning.
