Wednesday, January 13, 2016

Failure before ubiquity - and undoubtedly some lessons to be learned


On a rare day off (love you, missus) I've been doing the usual Reddit browsing and stumbled across this article on why laptop computers were a failure - from 1985, no less. Take a read, pop on back - I'll wait.

http://www.nytimes.com/1985/12/08/business/the-executive-computer.html

I don't think any sane person today would say that laptop computers, and mobile computing in general, are a failure. Nearly every pain point mentioned in the article has since been conquered by technological and process improvements - better screens and batteries, vastly improved computational power, the internet (and mobile connections to it), app stores - the list is enormous, and each brought paradigm shifts in use cases and general usage. Nearly all of those have been to the advantage of some new industry, and to the detriment of an old, established one. Anyone involved in print media, especially newspapers, will hopefully be nodding in agreement.

There are some technological paradigm shifts coming - some are here now if you look - that are going to have similar swan songs written for them, and then become so ubiquitous that people in 2050 will look back at today, shaking their heads and thinking "Boy, were they wrong". AI, automation, robots and VR/AR are the cases I'm most interested in, and I think VR is about to be the next chunk of tech that people buy and scratch their heads over, quickly declaring "Well, that doesn't work because thing X is a bit rubbish".

Here's a mental exercise - what are the things that, in 2017, will stop people from realising they're using something that's going to become ubiquitous but just doesn't work properly yet? Because those are the areas where the fun is going to be (and possibly the money). I'm going to pick VR as my biggest near-term interest, but the same exercise can be applied to automated cars, rocketry, medicine - pick your poison.

For VR, here are my top three:

HCI (human-computer interaction)


Remember that scene where Scotty talks to the mouse? Funny at the time, blindingly obviously necessary right now. My typing speed is, if I might toot my own horn, pretty bloody good - if I needed to get a job as a secretary, I'd probably do just fine. I've used keyboard input for computers since I was a wee bairn, and it's a very normal part of my day. But that's not going to work in VR or AR, and that level of control and speed is exactly what decent computer interaction requires. What's going to change here?

Obvious candidates are speech recognition and gesture recognition, and both are on the cusp of breaking into normal usage. Google Now, Alexa, Cortana, CMU Sphinx and a variety of other controllable speech interfaces are out there now, and they're bleeding awesome. Why are we not using them every day? Possibly because the input tools themselves are not pervasive (although computers on wrists are the obvious channel, as is your home being mic'd up). Possibly because we don't have a really compelling reason to right now, but that's going to change as IoT devices, and especially VR/AR devices, need interfaces where we are not able to, or can't be bothered to, walk over and type stuff. I don't know many people who just talk into thin air yet, but I do know folks (including myself) who will happily talk to a computer - I use "OK Google" pretty much every day.
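
If you fancy playing with this yourself, here's a minimal sketch that wires a microphone up to CMU Sphinx via the third-party Python SpeechRecognition package (pip install SpeechRecognition pocketsphinx). It's purely illustrative - it says nothing about how Google Now, Alexa or Cortana work under the hood - but it will get words out of thin air and into a program:

```python
# Minimal offline voice input sketch using the "speech_recognition" package
# with its CMU PocketSphinx backend. Illustrative only.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    # Adapt to background noise, then grab one utterance from the mic.
    recognizer.adjust_for_ambient_noise(source)
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # Offline recognition via CMU Sphinx - no cloud service required.
    text = recognizer.recognize_sphinx(audio)
    print("Heard:", text)
except sr.UnknownValueError:
    print("Could not understand the audio")
```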

Gesture rec is a bit further out. Tracking of some form is required, and not everyone is living in a space that is understood by computers. Would it seem strange to have something like a Kinect in every room? Because that's what's required right now. The ideal is to have sensors tracking your fingers, face and pose, with a feed into any apps that fancy interacting. None of this stuff is sci-fi, though - it's all eminently achievable right now. I've messed with the APIs for Kinect, Leap Motion and a few others (and I've been doing work around this stuff in the games industry for years now!), and it's technically ready. It's certainly not consumer ready, not by a long shot. I'm excited to get my hands on Constellation and Lighthouse (and more Lighthouse!), as they are going to be the first real generation of hardware that lets you feed things directly into VR apps natively, but this is really going to be the year where we see the space explode in understanding, technology and capabilities. Wiimote and PS Move, your time may be over shortly.
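
To make the gesture side a bit more concrete, here's a toy sketch that turns a stream of tracked hand positions into a "swipe left/right" event. The tracker itself is deliberately abstracted away - the sample rate, thresholds and units below are made up for illustration, and any real Kinect or Leap Motion pipeline would be considerably smarter:

```python
# A toy swipe detector over a stream of tracked hand positions.
# Samples are hypothetical (timestamp, x, y, z) readings in seconds/metres.
from collections import deque

class SwipeDetector:
    def __init__(self, window_s=0.4, min_distance_m=0.25):
        self.window_s = window_s              # how much history to consider
        self.min_distance_m = min_distance_m  # horizontal travel needed
        self.samples = deque()                # (t, x, y, z) tuples

    def update(self, t, x, y, z):
        """Feed one tracked sample; return 'left', 'right' or None."""
        self.samples.append((t, x, y, z))
        # Drop samples older than the window.
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        dx = self.samples[-1][1] - self.samples[0][1]
        if dx > self.min_distance_m:
            self.samples.clear()
            return "right"
        if dx < -self.min_distance_m:
            self.samples.clear()
            return "left"
        return None

# Usage with fake data: a hand moving steadily to the right.
detector = SwipeDetector()
for i in range(10):
    gesture = detector.update(t=i * 0.05, x=i * 0.04, y=1.2, z=0.5)
    if gesture:
        print("Swipe", gesture)
```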

You've then got to do something with the speech and the gestures - and that's going to be very interesting to watch!

Motion


Knowing what you're doing is one thing. Knowing where you do it is another. This requires that the device you're interacting with knows both where you are and where you're looking. Gyros and accelerometers get you a tiny part of the way, GPS is another part of the picture, and the tracking capabilities of the systems just discussed get you a bit closer. The APIs and hardware for this stuff are crazy good now and going to get better very quickly.
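
As a rough illustration of why gyros and accelerometers only get you part of the way, here's a sketch of a classic complementary filter: the gyro is smooth but drifts, the accelerometer is gravity-referenced but noisy, and blending the two gives you a usable orientation (though still not position, which is why external tracking still matters). The rates and constants below are invented for the example:

```python
import math

def complementary_filter(pitch_deg, gyro_rate_dps, accel_x, accel_y, accel_z,
                         dt, alpha=0.98):
    """One step of a complementary filter for pitch.

    Integrates the gyro for smoothness, then gently pulls the estimate
    towards the pitch implied by the accelerometer's gravity reading.
    """
    # Pitch implied by gravity direction, from the accelerometer alone.
    accel_pitch = math.degrees(math.atan2(accel_x,
                                          math.hypot(accel_y, accel_z)))
    # Integrate the gyro, then blend towards the accelerometer reading.
    gyro_pitch = pitch_deg + gyro_rate_dps * dt
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# Usage: 100 Hz samples, device held still but the gyro has a 0.5 deg/s bias.
pitch = 0.0
for _ in range(200):
    pitch = complementary_filter(pitch, gyro_rate_dps=0.5,
                                 accel_x=0.0, accel_y=0.0, accel_z=9.81,
                                 dt=0.01)
print(f"Estimated pitch after 2 s: {pitch:.2f} degrees")  # stays small, not drifting
```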

Oculus has very good seated-scale tracking, Vive has very good room-scale tracking. Of course, your definition of room scale may vary - church halls and multi-user shared spaces are rooms by my definition, and accurately tracking everyone within those spaces (ideally non-intrusively) is an interesting challenge right now. Once that's nailed, though... Of course, there are many other contenders about to properly change the space. I'm not talking about iBeacon or any of that other nonsense designed to tell things that you're close enough to purchase junk - I mean sub-centimeter-accurate position and orientation information for you and your things.

One of the biggest perceptual disconnects with VR is that you are often expected to move in an environment where that motion isn't matched in the real world. Our brains really don't like this. Walking around in a virtual space where the real world and the projected space match, perceptually, suddenly makes it all click (and I don't mean this metaphorically). If you've yet to try out the Vive, that's likely your best route to understanding what I'm talking about right now.

Computers Understanding the Environment


I'm not sure of the best term for this, to be honest - computers have been understanding the environment (directly, to a greater or lesser extent) ever since we started using them as tools. However, right now, it doesn't tend to feel very personal.

Your personal devices understanding the space around you, however - that is where the gold is. Tango, RealSense and various other devices that do area learning and spatial recognition, and feed that stuff back into your system - they're here, and they are very much in the developer-sphere right now, but boy are they going to explode really soon.

Laser scanners, photogrammetry, SLAM, lightfield capture (both static and dynamic) and other mechanisms to accurately represent world geometry - they all feed into this. They're all very hard to do, but they are becoming better and cheaper every day.
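
To give a flavour of the plumbing underneath, here's a sketch of the back-projection step that depth-sensor pipelines rely on - turning a depth image into a 3D point cloud using pinhole camera intrinsics. The intrinsic values and the synthetic depth image are invented for illustration, not taken from any particular sensor:

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an Nx3 point cloud.

    fx, fy, cx, cy are pinhole camera intrinsics (focal lengths and
    principal point, in pixels).
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Usage with a synthetic 4x4 depth image: a flat surface 2 m away.
depth = np.full((4, 4), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(cloud.shape)   # (16, 3)
```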

Once the computational system knows the difference between a chair and the floor, all kinds of awesome ensues. This stuff is critical for AR, but it's a massive multiplier to the capabilities of VR systems too - just look at Vive's Chaperone system for one obvious example.
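
As a taste of how a system might tell the floor from the furniture, here's a toy RANSAC plane fit that pulls the dominant plane out of a point cloud - in a living-room scan, that's usually the floor. I'm not claiming this is how Chaperone or anything else actually does it; real systems are far more sophisticated, and every value below is made up for the example:

```python
import numpy as np

def fit_floor_plane(points, iterations=200, threshold_m=0.02, rng=None):
    """Toy RANSAC: find the dominant plane in an Nx3 point cloud.

    Returns (normal, d, inlier_mask) for the plane n.p + d = 0 with the
    most inliers.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iterations):
        # Fit a candidate plane through three random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        # Count points that lie close enough to the candidate plane.
        distances = np.abs(points @ normal + d)
        inliers = distances < threshold_m
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Usage: a flat floor at y=0 plus a blocky "chair" floating above it.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-2, 2, 500),
                         np.zeros(500),
                         rng.uniform(-2, 2, 500)])
chair = rng.uniform(0.3, 0.8, size=(100, 3))
normal, d, mask = fit_floor_plane(np.vstack([floor, chair]))
print("floor points found:", mask.sum())   # 500 of the 600 points
```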

You take photos and videos right now, yes? I'll lay good odds that you've probably got a video-capable camera within 5 meters. When you can record everything around you so that it can be played back in VR - would you want that? Why wouldn't you want that?

Putting it all together


Soon, baby. Soon.


1 comment:

  1. One obstacle to ubiquity is the rather firm distinction between AR and VR. In order to be ubiquitous, one device will need to comfortably show you the unvarnished world as it is all day long, augment spontaneously, and close out the real world on demand substituting something else.
