I think it’s safe to say that Microsoft’s Kinect for the Xbox 360 has been a success. Despite the limitations of the device, it seems to have caught on to a large (and growing) extent in the marketplace, and in so doing has brought gesture-recognition technology into the mainstream. In addition to the games that, predictably, awkwardly shoehorn the Kinect’s abilities into existing frameworks, there are now quite a few games that use it to break new ground in their users’ experience. So, that being said, the question is: Where does gesture-recognition technology go from here, both in the gaming world and for more practical applications?
A little while ago, I interviewed Michel Tombroff, the CEO of a company called SoftKinetic. SoftKinetic is a four-year-old Belgian company that develops gesture-recognition hardware and software for a variety of applications, using 3D cameras that are — I’m given to understand, as this is outside my area of expertise — far more advanced than the Kinect’s technology. From the demo videos I’ve seen, including the one of SoftKinetic’s game DanceWall from Intel’s CES booth above and more on their YouTube channel, it seems much less prone to lag and erroneous gesture interpretation than the Kinect has been in my experience.
So here’s my interview with Tombroff. I think he has some really interesting insight into what advances in the technology are coming down the road, and what kind of effects those advances will have on games and other applications.
GeekDad: How did the company get started working on gesture-recognition technology?
Michel Tombroff: SoftKinetic was initially a research project on both the hardware and software sides: the camera began as a university project, and the software came through an interactive agency. The initial vision was to define the best possible human interaction for navigating virtual spaces. Two of the co-founders are architects; they wanted people to visit real-time 3D virtual spaces without any device. After considerable research, they realized they were paving the way for a disruptive technology, hoping the cost of hardware would one day allow anyone to interact with the digital world through gestures. This was the beginning of the adventure, and the founders were able to attract investors to support the project. The company was officially formed in July 2007.
GD: How far do you think the technology is from widespread adoption?
MT: We’re already entering the second generation of the technology in the coming months, and it has reached another level of quality. The first consumer product to hit the market was Kinect, a few months ago. And just today we revealed our collaboration with EEDOO, the leading digital entertainment products and online services provider in China, which announced the upcoming release of iSec, a gesture-based platform powered by our 3D camera, DepthSense, as well as by one of the games from our internal studio, DanceWall.
The next consumer wave will likely come from the smart TV industry, which is preparing to embed 3D cameras into the frame of the television. This will allow gesture-controlled interaction to replace or complement the remote control, while also offering games and other value-added services, like video conferencing with background suppression or enhancement.
GD: Do you think we’ll reach Minority Report-type interfaces soon? Have we (you) already?
MT: Actually, you should see what our internal concept team comes up with every day: Minority Report is already outdated.
More seriously, there has been a lot of progress made on the subject in the past few years, and sometimes the reality is beyond the fiction. We’ve collaborated with clients using our middleware since Q1 2008, and that gave us an idea of where the usage could be in the coming years.
Check out the corporate demo we put together for CES 2011; it gives you an idea of the range of user experiences you could have tomorrow with gesture recognition.
GD: How do you deal with accidental gestures people might make?
MT: This is for our iisu software team to take care of. Working with solid middleware is critical for developers to build polished, reliable user experiences without having to worry too much about accidental gestures. In other words, the raw data coming out of the camera requires advanced filtering and algorithmic treatment to turn it into usable, reliable data. This represents many years of software engineering on our end.
GD: The educational benefits of the technology seem very promising, but how do you handle the price threshold? I’d imagine it’d be out of the price range most schools could afford.
MT: Thanks to the intuitiveness of the technology, any group of people, from children to seniors, grasps it immediately. This makes the technology an extraordinary educational tool: for rehabilitation, industrial simulation, sports training, remote education, and more.
We understand the educational value of gesture-recognition technologies, and we’ve offered a freemium model for our middleware since the Game Developers Conference in San Francisco back in March 2011. Now developers can request our middleware free of charge for non-commercial use, so that they can experiment with gesture recognition and build new types of projects. The next step is for companies and groups of individuals to provide applications for kids to use, to learn about physics, anatomy, sports, literally anything! By physically engaging in the experience, they retain much more of what they learn.
GD: Where do you see the technology going from here?
MT: This is just the beginning, really. The arrival of gesture-recognition technology is as disruptive as the arrival of tactile interfaces on smartphones. It’s game-changing, and it will take a bit of time for the industry to absorb this new technology and turn it into great products. Gaming was the starting point; you can expect the entire TV experience to be gesture-based before long, and multiple new usages to emerge. In the commercial space, a few industries have already caught the movement, including health, sports and fitness, and digital signage.
GD: What does the technology provide in an educational environment that can’t be achieved some other, possibly cheaper, way?
MT: Again, I think it’s all about the immersion the experience offers. You can put any exercise into practice by doing it right away. A picture is worth a thousand words; a gesture-based experience is probably worth a thousand pictures.
For any educational context that requires performing specific moves (think about learning a sport, or safety training, for example), gesture is the cheapest possible technology to use: no specific hardware to wear, just a distant camera tracking the individuals in front of it.
GD: Do you think the technology will, as Microsoft seems to hope, revolutionize the video game industry?
MT: It has started to. The gaming industry has been focused on consoles for many years, and traditional consoles were hard to play compared to the way you play on a smartphone or a tablet today. Nintendo paved the way for a major shift in the industry with the release of the Wii, and Microsoft took it to another level with Kinect. It’s about expanding the user base, allowing the entire family to play together instead of just the adolescents of the house.
GD: What else do you think geeky parents would be interested to know about the company and/or technology?
MT: First, I think geeky parents will have even more fun with their kids thanks to gesture-recognition technologies! But they should be patient: new technologies take time to mature, and the application of their dreams may still be a few years away.
I’d like to tell geeky parents to join the gesture revolution. If they or their kids are programmers or technical designers, they should get a free license of Unity3D (a 3D game development platform), request a free license of iisu, and start experimenting with the technology: we’d love to see what they come up with.