Published on December 15th, 2015 | by Giovanni Laquidara
Back in August 2011, Marc Andreessen declared in the Wall Street Journal that "Software is eating the world".
And now? How is it doing? Almost everything around us is based on software, and every business is or will at some point be based on software.
Nowadays every one of us has software running in our pockets (on a smartphone) and on our wrists (on a smartwatch).
We drive software-based cars, and soon they will become software-driven cars.
We use software to play, to organize our lives, to look for our mates and to communicate with them (Tinder and WhatsApp, I'm thinking of you!).
What more do we need from the digital arena? We can already do all the things our fathers only dreamt of!
I believe it's not another "what" we are looking for, but an easier, faster and safer way to access software functions.
In other words, the next big focus of interaction is going to be user-centered and natural, like a gesture: the most intuitive and fastest means of communication.
Interfaces will be suited to human needs: computers will understand our inputs, and not vice versa.
They will turn transparent (thinking of interfaces as Graphical User Interfaces), fully and fluidly empowering people to access the potential of software.
Natural user interfaces hence rely on technologies (or combinations of technologies) such as multi-touch, gesture recognition, speech recognition, motion sensing, body tracking and so on.
What’s happening in gesture recognition?
Systems are tracking user movements and translating them into instructions! We have only scratched the surface with systems like the Nintendo Wii, PlayStation Move and Microsoft Kinect.
This revolution started in our living rooms through videogames, and it's becoming much more valuable in everyday life. BMW has recently released a car whose multimedia system is controlled by the driver's gestures, Samsung and Google are studying gesture controllers based on wearables, and the brand new Apple TV allows software control through a gesture-based remote.
This will lead to the development of a soft sign language between us and the machines.
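To make the "movements translated into instructions" idea concrete, here is a minimal sketch (hypothetical, not tied to any particular motion-sensing SDK) of how tracked hand positions might be classified as a swipe and mapped to a command:

```python
# Hypothetical sketch: translate tracked hand positions into an instruction.

def classify_swipe(positions, min_distance=0.3):
    """Classify a sequence of (x, y) hand positions as a swipe gesture.

    positions: list of (x, y) samples from a motion sensor, normalized 0..1.
    Returns "swipe_right", "swipe_left", or None if the movement is too small.
    """
    if len(positions) < 2:
        return None
    dx = positions[-1][0] - positions[0][0]  # net horizontal displacement
    if dx >= min_distance:
        return "swipe_right"
    if dx <= -min_distance:
        return "swipe_left"
    return None

# Map recognized gestures to application instructions (names are invented).
COMMANDS = {"swipe_right": "next_track", "swipe_left": "previous_track"}

gesture = classify_swipe([(0.2, 0.5), (0.5, 0.5), (0.8, 0.5)])
print(COMMANDS.get(gesture))  # a rightward hand motion → "next_track"
```

Real recognizers are of course statistical and multi-dimensional; the point is only the pipeline: sensor samples in, symbolic instruction out.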
And Speech recognition?
Lately our mobiles are packed with assistants listening to our voices (Google Now, Apple's Siri and Microsoft's Cortana, among others).
They are able to analyze the user's voice and translate voice commands into instructions.
These interfaces are very helpful in emergency situations, for example when hands can't be used to access the smartphone.
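The "voice command into instruction" step can be sketched very simply. In this toy example (the phrases and instruction names are invented), the speech engine is assumed to have already produced a transcript; we only illustrate matching it against known commands:

```python
# Hypothetical sketch: map a transcribed utterance to an instruction.
# A real assistant's speech engine would supply the transcript.

INSTRUCTIONS = {
    "call emergency": "dial:112",
    "send my location": "share_location",
    "read last message": "tts_last_message",
}

def interpret(transcript):
    """Translate a transcribed voice command into an instruction, if any."""
    text = transcript.lower().strip()
    for phrase, instruction in INSTRUCTIONS.items():
        if phrase in text:
            return instruction
    return None

print(interpret("Please call emergency now"))  # → "dial:112"
```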
About Touch recognition:
Disney, for example, is studying touch-recognition objects, which means the next generation of software could react to what we are touching.
Are we touching a steering wheel? Then better deactivate distractions within the car environment...
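The steering-wheel scenario boils down to context-dependent behavior. A tiny sketch (object and state names invented) of software adapting to what the sensor layer reports the user is touching:

```python
# Hypothetical sketch: adapt notification behavior to touch context.

def on_touch_detected(touched_object, notifications_enabled):
    """Return the new notification state given what the user is touching."""
    # Hands on the steering wheel: silence distractions inside the car.
    if touched_object == "steering_wheel":
        return False
    return notifications_enabled

print(on_touch_detected("steering_wheel", True))  # → False
```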
About IoT Interfaces:
“Mirror Mirror on the wall”… won’t be any longer a phrase from a bedtime story.
Objects are also interfaces. With the rise of the Internet of Things, every possible object will be connected.
Simply moving an object from the kitchen to the bedroom will be a meaningful interaction to the listening powers of the connected home, and in itself it will carry the value of an interface.
And the "Things" are effectively becoming countless, in smart-city landscapes as well as in our very private homes. Our toothbrushes count the time it takes to clean our teeth. Soon they could communicate with our smart mirror and react to our actions. Every action is set to become a smart action.
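The "moving an object is an interaction" idea can be sketched as a rule engine reacting to location-change events reported by a tagged object (all object, room and action names here are invented for illustration):

```python
# Toy sketch: treat an object's movement between rooms as an interface.

RULES = {
    # (object, new_room) → action the connected home should take
    ("book", "bedroom"): "dim_bedroom_lights",
    ("toothbrush", "bathroom"): "start_brushing_timer",
}

def on_object_moved(obj, old_room, new_room):
    """React to a location-change event reported by the object's tag."""
    if old_room == new_room:
        return None  # no actual movement, nothing to do
    return RULES.get((obj, new_room))

print(on_object_moved("book", "kitchen", "bedroom"))  # → "dim_bedroom_lights"
```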
This is the playground of our startup. We will dive deep into interfaces, pushing code towards this new perspective.