My goal is to build an Augmented Reality view over a house full of IoT sensors and actuators.
The approach is to start with some Raspberry Pies running sensors or actuators and exposing them through REST interfaces, then to turn them into BLE beacons as well, using the beacon signals to drive and position an Android Augmented Reality app.
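To make that concrete, here is a minimal sketch of the R-Pi side, assuming a Python web framework such as Flask; the endpoint path, port and the read_temperature() helper are illustrative stand-ins rather than a fixed design:

```python
# Minimal sketch of one R-Pi exposing a temperature sensor over REST.
from flask import Flask, jsonify

app = Flask(__name__)

def read_temperature():
    # Placeholder for real sensor access (e.g. a DS18B20 on the GPIO pins).
    return 21.5

@app.route("/sensors/temperature")
def temperature():
    return jsonify({"celsius": read_temperature()})

if __name__ == "__main__":
    # Listen on all interfaces so the Android app can reach it over WiFi.
    app.run(host="0.0.0.0", port=8080)
```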
The beacons give distance information with which to place the AR view, and also broadcast URLs linking to the R-Pies' objects being viewed, which the app fetches over WiFi.
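The distance side of that is the standard log-distance path-loss trick used for beacon ranging; something along these lines, where the path-loss exponent is an assumption that would need tuning per room:

```python
# Rough distance estimate from a beacon's RSSI using the log-distance
# path-loss model. tx_power is the calibrated RSSI at 1 m (broadcast by
# most beacon formats); the exponent n (2.0 in free space, higher indoors)
# is an assumption and needs tuning per environment.
def estimate_distance_m(rssi, tx_power=-59, n=2.0):
    return 10 ** ((tx_power - rssi) / (10 * n))

print(estimate_distance_m(-71))  # a -71 dBm reading suggests roughly 4 m
```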
Those objects will carry coordinate information to anchor them in the combined view, and the 3D objects of many R-Pies will be linked up into a single virtual place overlaying your house. You will then be able to see what's going on in the R-Pies, virtually, in the form of 3D objects at those URLs.
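I haven't fixed a schema yet, but an object fetched from one of those URLs might look something like this, with the field names and URLs purely illustrative:

```python
# One possible shape for an object served by an R-Pi: its current state,
# coordinates anchoring it in the combined view, and links to objects on
# other R-Pies that stitch the rooms into one virtual place.
kitchen_thermometer = {
    "is": "sensor",
    "label": "kitchen thermometer",
    "celsius": 21.5,
    "position": {"x": 2.4, "y": 1.1, "z": 0.9},  # metres within the room
    "within": "http://rpi-kitchen.local:8080/rooms/kitchen",
    "links": ["http://rpi-hall.local:8080/rooms/hall"],
}
```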
You spark up the Android app and, as you wander around the house, you see an AR view of your IoT world: the state of all your sensors and actuators, which you can read and interact with.
You could see the temperature or occupancy in every room, check the doors and windows, read the weather centre in the garden, see if you have a phone message and check on the house electricity consumption. You could touch actuators to turn on the lights, set the temperature, set your home media centre to record a show, etc.
Then you would also be able to see other, completely virtual, objects: 3D signs whose text is some derivative of the state of the IoT world, such as "all doors and windows locked", or a big red button labelled "turn off all the lights".
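Deriving that sign text is simple enough; a sketch, assuming the door objects expose a boolean "locked" field (the URLs and field names here are made up):

```python
import requests

DOOR_URLS = [
    "http://rpi-hall.local:8080/sensors/front-door",
    "http://rpi-kitchen.local:8080/sensors/back-door",
]

def fetch_object(url):
    # Stand-in for fetching an R-Pi object over WiFi.
    return requests.get(url, timeout=2).json()

def sign_text():
    # The virtual sign's text is derived from the state of real objects.
    if all(fetch_object(u).get("locked") for u in DOOR_URLS):
        return "all doors and windows locked"
    return "something is still open!"
```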
You could switch to moving around via on-screen controls, so that you could interact with your whole house from the comfort of your armchair or bed. By setting a bookmark, you could even visit your house while away and wander around it from the poolside.
Ultimately, you'd have R-Pies all around your house, perhaps talking to devices directly from the I/O pins, perhaps through Bluetooth, ZigBee or Z-Wave. Each R-Pi would publish its objects into the shared, linked-up virtual world.
The interactions and mashups between your things, and between them and the all-virtual objects, will be user-programmable through simple rules that match conditions and set new states, as sketched below.
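A minimal sketch of what I mean by rules, assuming the R-Pies accept state updates by HTTP POST (all names and URLs below are illustrative):

```python
import requests

# A rule matches a pattern against a watched object's state and, on a
# match, pushes a new state to a target object.
RULES = [
    {
        "when": {"occupied": True},
        "watch": "http://rpi-lounge.local:8080/sensors/occupancy",
        "then": {"on": True},
        "target": "http://rpi-lounge.local:8080/actuators/light",
    },
]

def matches(pattern, state):
    # A pattern matches when every key/value pair it names is present.
    return all(state.get(k) == v for k, v in pattern.items())

def run_rules():
    for rule in RULES:
        state = requests.get(rule["watch"], timeout=2).json()
        if matches(rule["when"], state):
            requests.post(rule["target"], json=rule["then"], timeout=2)
```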
I think your plans sound great from a hobbyist's perspective. AR, IoT and contextual computing all share some theoretical overlap, and this will make a great learning experience for you personally and for others who follow your work.
However, I would challenge the user experience element: mostly it seems you are replicating existing physical controls in virtual space. For example, when I get home I don't check the temperature of each room or check to see if anyone is in the bedroom, so I wonder what problem this is solving or what new value it adds. It does have the benefit of consolidating readouts in one virtual location, but how many of those readouts do I regularly want to view?
If you are interested in viewing this from a UX angle: given that you are going to this extent to prove out an IoT concept in your home, I think you might be able to get more value out of it. This is a pretty interesting discussion of the UX of the IoT at O'Reilly OSCon:
http://radar.oreilly.com/2013/11/podcast-the-internet-of-things-should-work-like-the-internet.html
In it they discuss the idea that the IoT should sink into the background, provide sensible defaults, monitor and make reasonable adjustments, and rarely, if ever, tell you what it is doing.
I've attempted to address the issues that you and the excellent O'Reilly panel raise in my "Manifesto for the Internet of Things" blog post.