Will Microsoft's 'Minority Report' UI leap-frog Apple?

Five years ago, all of the major U.S. operating system makers were planning to own the future of the PC user interface. That future was the multitouch UI, which relies on gestures and physics to create a satisfying touchy-feely replacement for clunky keyboards and mice.

In 2007, that new interface hit in a big way. Microsoft shipped Surface. Apple shipped the iPhone. And Google shipped Android.

Though Microsoft was first to market, I'm giving Apple the nod as the multitouch pioneer. The iPhone and iPad have done more to bring multitouch interfaces to the mainstream than any other products. When people describe that type of UI, the most helpful line is "iPhone-like user interface," because everyone knows what that means. From a business perspective, Apple has made by far the most money from multitouch. Apple wins!

The next big win, however, could belong to Microsoft.

Wave your hands in the air like you just don't care

Many of the users I talk to say they'll never give up physical keyboards. That's because they think the future involves as much typing as the present.

Right now, we type everything -- emails, documents, URLs, commands. On our mobile devices, we text like crazy.

Doing all of that typing with an on-screen keyboard is unpleasant. No question about it.

I believe we will still have keyboards in the future -- though they will be made out of software, rather than plastic and springs. But most of us will barely use them.

That's because the main input will be Siri-like voice control and Dragon-like dictation. And "commands" will be communicated via gestures executed by touching the screen or by moving your hands in midair.

Right now, here's how you use email: You sit down at a PC and use a mouse to open an email application with a double-click. Or you double-click to open a Web browser and type in the URL of a cloud-based email service. Looking at the inbox, you click on the first message, scan it, then click on a button to delete it. Then you click to view another message, click Reply, then type out a response: "Sounds good, Steve. Talk to you soon." You then click the Send button.

It's all keyboards and mice.

Five years from now, here's how that same activity will play out: Your PC will be a giant TV-screen-size display set at an angle, with the bottom of the screen at waist level, and the top of the screen at about collarbone height. Your PC will replace the desk entirely. As you walk toward the system, you'll say: "Open email." By the time you sit down, you'll be looking at that first message. A wave of your hand, like you're shooing a fly away, will archive the message and open the next one. You'll reply by saying: "Reply and say, 'Sounds good, Steve. Talk to you soon,' and send it." The punctuation will be taken care of for you, unless you specify punctuation.
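To make that spoken reply concrete, here is a toy sketch of how such a command might be parsed. The grammar, regex and field names are invented for illustration; a real assistant would use far more sophisticated speech and language models.

```python
# Toy sketch: parsing a spoken "reply and send" command.
# All names and the grammar here are hypothetical.
import re

def parse_reply_command(utterance):
    """Extract the dictated body from a command like
    "Reply and say, '...' and send it"."""
    m = re.search(r"reply and say,?\s*'(.+)'", utterance, re.IGNORECASE)
    if not m:
        return None
    # Strip the trailing comma speech often leaves, then close the
    # sentence with a period unless the user dictated punctuation.
    body = m.group(1).strip().rstrip(",").strip()
    if body and body[-1] not in ".!?":
        body += "."
    return {"action": "reply",
            "body": body,
            "send": "send it" in utterance.lower()}

cmd = "Reply and say, 'Sounds good, Steve. Talk to you soon,' and send it"
print(parse_reply_command(cmd))
# → {'action': 'reply', 'body': 'Sounds good, Steve. Talk to you soon.',
#    'send': True}
```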

When you're done using email, you'll close the application with a gesture that's similar to rudely telling someone to leave the room (holding your hand bent at the wrist and then straightening it away from your body).
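Under the hood, recognizing a gesture like that flick comes down to watching tracked hand positions over time. Here is a minimal illustrative sketch, not the Kinect SDK's actual API: the frame format and thresholds are hypothetical, but the idea, requiring enough horizontal travel within a short time window, is the core of simple swipe detection.

```python
# Illustrative swipe ("shoo away") detector over tracked hand positions.
# Frame data shape and thresholds are invented for illustration.

SWIPE_DISTANCE = 0.35   # meters of horizontal travel required
SWIPE_WINDOW = 0.5      # seconds in which that travel must occur

def detect_swipe(frames):
    """frames: list of (timestamp_sec, hand_x_meters) tuples.
    Returns True if the hand moved SWIPE_DISTANCE to the right
    within SWIPE_WINDOW seconds."""
    for i, (t0, x0) in enumerate(frames):
        for t1, x1 in frames[i + 1:]:
            if t1 - t0 > SWIPE_WINDOW:
                break                      # too slow: not a flick
            if x1 - x0 >= SWIPE_DISTANCE:
                return True                # fast rightward travel
    return False

slow_drift = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.4)]   # hand drifting idly
quick_flick = [(0.0, 0.0), (0.1, 0.15), (0.3, 0.4)]  # deliberate flick
print(detect_swipe(slow_drift))   # → False
print(detect_swipe(quick_flick))  # → True
```

The same distance-over-time filtering is what keeps ordinary hand movement from accidentally archiving your mail.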

There's no mouse. And not only is there no physical keyboard, there's no on-screen keyboard, either. This is why people won't miss physical keyboards.

You'll also have the option to manipulate things with iPhone-like multitouch gestures. It'll be your choice.

We've all become familiar with touch interfaces. Voice is growing fast. But where will in-the-air gesture technology come from?

Here comes Microsoft

The voice and gesture bits of that scenario may come not from Apple, but from Microsoft.

It turns out that Microsoft has a "Kinect for Windows" project, headed by one Craig Eisler.

Unless you have a life, you probably know that Kinect is Microsoft's motion-detection gesture control for the Xbox 360 gaming platform. Kinect is incredibly good, cheap and fun to use. It also has rudimentary voice command.

Eisler dropped a bombshell in a blog post last week by saying that Kinect for Windows will launch in "early 2012."

The PC version will be based on the Xbox technology, but it will be "optimized" with hardware and software for "PC-centric scenarios," according to Eisler.

Specifically, the Windows version of Kinect will be physically smaller and will register gestures that are closer to the screen -- a capability that Microsoft engineers call "Near Mode." And, of course, it will control a PC, rather than a game system.

What's exciting about Kinect for PCs is what third-party developers might do with it.

Kinect for Xbox 360 turned out to be a tinkerer's dream device, with hundreds of hackers, researchers and inventors using it for amazing innovations.

Now, Microsoft is fostering that kind of activity with a Kinect for Windows PC development kit, as well as an initiative for funding and mentoring Kinect-related projects.

Microsoft recently launched a new program called Kinect Accelerator, which is a Microsoft-funded incubation program for 10 startups of the company's choice. Each chosen participant will get $20,000 and some really good publicity.

The Accelerator program is run in partnership with TechStars, a startup accelerator that funnels venture capital seed money to young companies and provides mentoring.

What all this means is that by this time next year, we can expect not only to be controlling our Windows PCs with Kinect-like gestures, but also to be able to buy a range of applications that leverage that interface.

There's no telling what these might be like. You can imagine working like Tom Cruise in the movie Minority Report, or playing Kinect-like games, but on your PC.

To me, the most interesting application may be software that interprets your body language and unintended gestures to know what you're doing. For example, what if your PC could tell when you're on the phone, or not looking at the screen? What if your PC could tell not only how many people are looking at your screen, but who those people are? If that were possible, it could understand user context and be more responsive to what you're doing.
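A rough sketch of what that context awareness might look like in code, assuming the tracker can report each person's head orientation and whether a phone is at someone's ear. Every name and data shape here is hypothetical, not any real SDK:

```python
# Illustrative "user context" logic over hypothetical body-tracking data.
# Field names, the yaw convention and the policy are all invented.

FACING_THRESHOLD = 30  # degrees of head yaw still counted as "looking"

def screen_context(people):
    """people: list of dicts with 'name', 'head_yaw_deg' (0 = facing
    the screen) and 'on_phone'. Returns a simple context summary."""
    viewers = [p["name"] for p in people
               if abs(p["head_yaw_deg"]) <= FACING_THRESHOLD]
    on_phone = any(p["on_phone"] for p in people)
    return {
        "viewers": viewers,
        "viewer_count": len(viewers),
        # Hold notifications while you're on a call, or hide private
        # ones when more than one person is watching the screen.
        "suppress_notifications": on_phone or len(viewers) > 1,
    }

room = [
    {"name": "Mike",  "head_yaw_deg": 5,  "on_phone": False},
    {"name": "Guest", "head_yaw_deg": 80, "on_phone": False},
]
print(screen_context(room))
# → {'viewers': ['Mike'], 'viewer_count': 1, 'suppress_notifications': False}
```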

Best of all, we can all look forward to a computing environment that's controllable by touch, voice and in-air gestures. Sooner or later, that magical combination will appear on all of the major platforms -- but Microsoft just might get there first.

Mike Elgan writes about technology and tech culture. You can contact Mike and learn more about him at Elgan.com

