Over the past year a lot has happened in the field we call the Internet of Things. Predicting that in the future everything will be connected isn’t really new; we have been talking about a connected future of Universal Object Interaction for years. Still, 2015 was the year when more products entered the mainstream market, and more systems and frameworks were developed and built with connectivity in mind. And that is a huge step towards a connected future.
But as this article highlights, we are not there yet. Not every IoT product is as smart as the Nest learning thermostat or as connected as the Philips Hue lamps. Many so-called smart products are relatively dumb: designed within a closed system and capable of performing only one specific task. These products cannot talk to each other, and most of them are only smart in combination with your phone.
“Which brings us to the real dilemma the Internet of Things is facing as we come to the end of 2015: how the hell are all these things going to work together? Apple has Homekit; Google has Brillo and Nest; Microsoft has Windows; Samsung has SmartThings. There’s Wemo and Wink and Zigbee and Z-Wave and Thread and I’m not even making any of these up. You can control some things with your fitness tracker, some with a universal remote, and pretty much all of them with your phone. Some of the protocols overlap and support each other; others are more exclusive. But there’s no simple plug-and-play option, no way to walk out of Best Buy with something you know is going to work.”
But do we really want to control every physical object from the flat interface of our phones, ignoring the fact that we have operated the physical world with our hands for thousands of years? We have designed objects that are functional, pleasing, and intuitive enough to operate with muscle memory alone – without any need to think about the tasks we are performing. We need to design tools that help us merge the physical and digital world into one.
One project that took interface design to a new level in 2015 is Project Soli by Google. With Soli, your hands and their physical movements can become the interface of the future – an interface that enables screen- and touch-less interactions and instead focuses on intuitive use.
“The Soli sensor can track sub-millimeter motions at high-speed and accuracy. It fits onto a chip, can be produced at scale, and can be used inside even small wearable devices.”
Project Soli still gets us jumping with excitement, but so far you cannot order your own radar sensor from Google. We certainly cannot wait for that day, but until then we are on the lookout for other promising concepts. One that caught our attention lately is the Reality Editor from the MIT Media Lab – a project that truly thinks outside the box that is our screens (mobile or desktop) in order to connect the world around us. The Reality Editor is built on simple universal rules, identified in research, that are common to all physical objects.
“A true solution would be that the digital and the physical becomes truly merged and one would not be able to separate what is physical and what is digital.”
“The Reality Editor is a new kind of tool for empowering you to connect and manipulate the functionality of physical objects. Just point the camera of your smart phone at an object and its invisible capabilities will become visible for you to edit. Drag a virtual line from one object to another and create a new relationship between these objects. With this simplicity, you are able to master the entire scope of connected objects.”
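The interaction the quote describes – pointing at an object and dragging a line to another – boils down to building a graph of relationships that route one object's output into another object's input. Here is a minimal sketch of that idea in Python; the names (`SmartObject`, `link`, `emit`) are hypothetical illustrations, not OpenHybrid's actual API:

```python
# A toy model of the Reality Editor's core idea: objects expose
# capabilities, and "drawing a line" between two objects creates a
# relationship that forwards one object's output to another's input.
# All class and method names here are invented for illustration.

class SmartObject:
    def __init__(self, name):
        self.name = name
        self.links = []   # (source_capability, target_object, target_capability)
        self.state = {}   # last value received per input capability

    def link(self, capability, target, target_capability):
        """Connect one of this object's outputs to another object's input."""
        self.links.append((capability, target, target_capability))

    def emit(self, capability, value):
        """Fire an output; every object linked to it receives the value."""
        for cap, target, target_cap in self.links:
            if cap == capability:
                target.receive(target_cap, value)

    def receive(self, capability, value):
        self.state[capability] = value


# Point the phone at a wall switch, drag a virtual line to a lamp:
switch = SmartObject("wall-switch")
lamp = SmartObject("desk-lamp")
switch.link("toggled", lamp, "power")

switch.emit("toggled", True)
print(lamp.state["power"])  # True
```

The appeal of the model is that the user never writes this code: the camera view makes the graph visible, and the drag gesture is the `link` call.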
The best part about the Reality Editor? It ain’t just a research project: you can actually download the app, use their open-source platform OpenHybrid, and start building today. We can’t wait to see how connected the world will be by the end of this year.