Friday, 31 January 2014

Venice Vacation

No blog today, or rather no content, because I'm in Venice.

I may show you some holiday snaps tomorrow..

Thursday, 30 January 2014

The Internet of Meeting Rooms

Today, I was yet again hunting for a meeting room in the huge office building I work in. Obviously, the meeting room number gives nothing away - adjacent numbered rooms could be on opposite sides of the floor.

It occurred to me that I'd like to be able to hold up NetMash in "look around" mode, and see all of the meeting rooms nearby - like X-Ray vision.

This would of course be driven by BLE beacons inside each meeting room, advertising the URL of their virtual room/place object. Assuming my phone is on the office WiFi, NetMash could fetch and render those place objects for me, plus all the contents.

An obvious object to have on the virtual wall inside a room would be a "card" describing the event - a meeting booking. This would be a 3D representation of a JSON object encoding an iCalendar event.
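
To make that concrete, here's a rough sketch in Java of the kind of data such a card might carry. The field names are only my guesses at an iCalendar-flavoured JSON object - not the real NetMash format:

  // A hypothetical meeting "card", built as a plain Java map for illustration.
  // The iCalendar-style fields and the place URL are my guesses, not the real
  // NetMash / Object Network schema.
  import java.util.List;
  import java.util.Map;

  public class MeetingCard {

    public static void main(String[] args) {
      Map<String,Object> meeting = Map.of(
        "summary",   "Weekly IoT catch-up",
        "dtstart",   "2014-01-30T10:00:00Z",
        "dtend",     "2014-01-30T11:00:00Z",
        "attendees", List.of("alice@example.com", "bob@example.com"),
        "within",    "http://office-hub:8080/meeting-room-42"    // hypothetical place URL
      );
      System.out.println(meeting);
    }
  }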

Of course, occupied rooms would also contain all the avatars of the people in them. I could find the room, then as I approached, ping a message to all of my colleagues.

There could be an object that captured the whiteboard work as an image that we could virtually take away with us.

And, obviously, there would be objects for lighting and heating. Being listed in the event object's invitees, I could turn all the lights off and back on again as I approached, for dramatic effect.


Wednesday, 29 January 2014

The Internet of Farm Yields - from Monsanto?

With Google's acquisition of Nest fresh in our minds, this article now makes even more alarming reading.

Apparently, Monsanto are offering farmers a system that can measure crop yields over their fields by following them around using GPS while monitoring the rate of harvest at each point they cross. All this gets pushed up into Monsanto's servers.

The farmer then gets the payback: an automated planting system that adjusts the amount and type of seed planted at each point in the field.

But of course, the farmers have trusted Monsanto with their field, planting and harvesting data and are letting the company control their planting in unprecedented detail.

I suppose I trust Google enough with my personal life - perhaps I'll learn to regret that - but if I were a farmer, would I trust Monsanto?

I was going to search for some juicy stories about their unethical tactics, but found that just searching for "Monsanto" alone threw up enough material.


Open & Local Farming

Obviously, it would be perfectly possible to do all of this in an "open and local, not proprietary and cloud" way.

The farmer would get all the above benefits, plus control and privacy, plus the benefits of being able to work with other local and global farmers and keen technologists to create systems that do much more than whatever Monsanto want.


Tuesday, 28 January 2014

Slightly Better AR Microlocation Algorithm

There are two parts to Augmented Reality: knowing which way you're looking around you, and knowing where you are. I've started the first one, in one axis at least. I've also started the second, by jumping right up to the nearest object.

As I showed in that article, it's a bit rough: if the nearest object is the light, I only see the light, up close, without a surrounding place. Only if no beacon is near do I see the place the light is in, with that light and any other lights.

While waiting for the mathematical energy and inspiration to tackle proper trilateration, what I actually want is for the 3D AR view to always be in the place that the surrounding objects or Things occupy, rather than jumping off to the object alone.

Then, when I move in close to a beacon on a Thing, I want the user object or avatar in that place - and their 3D view - to move smoothly closer to the 3D representation of that Thing within the place.


Tricky Algorithm

Now, it's not tested, but here's the algorithm I worked out on the train today:

  If the nearest object is a place:
    If already in that place:
      If looking up close at an object:
        initiate zoom to the middle of the place
    Else:
      jump to the middle of the place
  Else:
    If already in the place the object is in:
      If not already looking at the object:
        initiate zoom up to the object
    Else:
      jump to the middle of the place
      initiate zoom up to the object

Way more complex than I ever imagined! And that's not even close to a trilateration algorithm.

The action "jump to the middle of the place" means resetting the 3D view to a new place URL and putting the user in the middle of the room, or whatever the place is. Arbitrary, but on average not a bad choice, I hope.

By "initiate zoom..", I mean kick off a thread to move the user's avatar smoothly to a target destination position in the place. Since at this point I only have the URL of the target object, I also need to look up its position coordinates from the place it's in, or the coordinates of the middle of the place.

On top of that, for zooming up to an object, I need to stop before I hit its actual position. So I need to get its bounding box and turn that into a bounding circle radius, so that I know how far back from the object to stop.
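
For the record, here's a minimal Java sketch of that zoom thread and the stop radius - placeholder names and step sizes, not the actual NetMash code:

  // Minimal sketch of "initiate zoom": a thread that glides the avatar towards a
  // target position, stopping short by the target object's bounding radius.
  // Class and method names, and the step sizes, are placeholders.
  public class ZoomSketch {

    // half the diagonal of the bounding box gives a bounding sphere radius
    static double boundingRadius(double w, double h, double d) {
      return Math.sqrt(w*w + h*h + d*d) / 2;
    }

    static void zoomTo(double[] avatar, double[] target, double stopRadius) {
      new Thread(() -> {
        for(int step = 0; step < 100; step++) {
          double dx = target[0] - avatar[0];
          double dy = target[1] - avatar[1];
          double dz = target[2] - avatar[2];
          double dist = Math.sqrt(dx*dx + dy*dy + dz*dz);
          if(dist <= stopRadius) break;                    // close enough - don't hit the Thing
          double move = Math.min(0.1 * dist, dist - stopRadius);
          avatar[0] += dx/dist * move;
          avatar[1] += dy/dist * move;
          avatar[2] += dz/dist * move;
          try { Thread.sleep(30); } catch(InterruptedException e) { return; }
        }
      }).start();
    }
  }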

Only two weeks to go to the end of this ThoughtWorks exercise..


Monday, 27 January 2014

Simple but Effective Lighting Behaviour via the IoT

This last weekend, I replaced all those LED bulbs in the kitchen with dimmable ones, and put a dimmer switch in. They are still super-white bulbs, which is ideal for seeing while doing jobs.

But as soon as we first dimmed them down, we hit upon an obvious issue: they're still super-white, only dimmer.

What we would like is for them to go warm-coloured when they're dimmed!

You don't need pure white light if you're not working by it, and if you dim the lights, you normally want to set a softer mood, or make the room suitable for relaxing and chatting. White light isn't the right light for that.


Object Net IoT Approach

So obviously, that gets me thinking about the Object Network solution.

The ultimate IoT solution would be to have individual control over each bulb's RGB levels via some decent radio control such as ZigBee, Z-Wave, Bluetooth or 6LoWPAN. The state the dimmer switch was in - off, or on and the control level set - could be fed into the I/O ports of a Raspberry Pi which would also terminate all the radio.

The practical, DIY solution? You can buy GU10 RGB LED units that come with an IR controller. The controller seems to have only a limited set of colour presets, but it may be possible to trace the protocol and set any RGB value. You'd need a dimmer box containing an IR LED to control the bulbs across the kitchen (don't stand in the way!), and a variable resistor for the control. These would all be wired into a nearby Raspberry Pi's I/O pins via some adaptor circuitry.

The R-Pi would be best in or near the dimmer box, so that it can have a BLE beacon advertising the URL of an object representing the state of the control. This object would sit within a place object (the kitchen) that also has the 3D light objects overhead. The lights may or may not be beacons, too, depending on their technology.


The Magic Bit

Then a simple Cyrus rule in the R-Pi would set all the bulbs to white at the maximum control setting, and shift them to increasingly warmer colours and lower brightness as the dimmer control is turned down.
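
The real rule would be Cyrus, but the mapping itself is simple enough to sketch in Java - the warm-white colour numbers here are just my guesses, not calibrated values:

  // Sketch of the dimmer-level to colour mapping: full white at level 1.0, blending
  // towards a dimmer, warmer (orange-ish) colour as the level drops. The warm
  // end-point RGB values are assumptions, not calibrated numbers.
  public class WarmDimmer {

    // level is the dimmer control position, 0.0 (off) to 1.0 (full)
    static int[] rgbForLevel(double level) {
      double[] warm  = { 1.0, 0.55, 0.25 };   // assumed warm colour at low levels
      double[] white = { 1.0, 1.0,  1.0  };
      int[] rgb = new int[3];
      for(int i = 0; i < 3; i++) {
        double c = warm[i] + (white[i] - warm[i]) * level;   // warm..white blend
        rgb[i] = (int)Math.round(255 * c * level);           // and dim it overall
      }
      return rgb;
    }

    public static void main(String[] args) {
      for(double level: new double[]{ 1.0, 0.6, 0.3, 0.1 }) {
        int[] c = rgbForLevel(level);
        System.out.println(level + " -> " + c[0] + "," + c[1] + "," + c[2]);
      }
    }
  }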

You could of course override the control level in the app, or directly set any bulb colour as usual. From the coffee shop or down the garden.

Or you could fiddle with the rules to make the lights go blue and surge like waves instead, when you turn down the control..

Sunday, 26 January 2014

Android sensors for AR

I've added a simple class to NetMash called Sensors, which I use to pick up the phone orientation and drive the 3D view. This is only active once I set the "Around" menu item. I currently only use "azimuth" to pan around the room.

That's something I learned doing this project: "azimuth" is panning around you (also called "yaw"); "pitch" is looking up and down; and "roll" is tipping or rotating the device while looking at the same thing.

The Android documentation appears to have some rather odd axis conventions to the eyes of someone who hasn't taken the trouble to work out all the maths, but working code beats theory every time.

Obviously, the code was more-or-less copied from multiple StackOverflow posts, but there's not one post or article anywhere I could find that gave me complete code like the simple class I ended up with. The smoothing algorithm I invented is probably far too simplistic, but I can refine it.
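
For anyone after the same shortcut, here's roughly the shape of it - a cut-down sketch of the standard Android recipe, not the actual NetMash Sensors class:

  // Cut-down sketch of the standard Android recipe: accelerometer + magnetometer
  // in, azimuth/pitch/roll out, with a crude exponential smoothing on the azimuth.
  // Not the real NetMash Sensors class.
  import android.content.Context;
  import android.hardware.Sensor;
  import android.hardware.SensorEvent;
  import android.hardware.SensorEventListener;
  import android.hardware.SensorManager;

  public class SensorsSketch implements SensorEventListener {

    private final float[] gravity = new float[3];
    private final float[] geomag  = new float[3];
    private final float[] rotation = new float[9];
    private final float[] orientation = new float[3];
    public  float azimuth;  // smoothed, in radians

    public SensorsSketch(Context context) {
      SensorManager sm = (SensorManager)context.getSystemService(Context.SENSOR_SERVICE);
      sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                                SensorManager.SENSOR_DELAY_UI);
      sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                                SensorManager.SENSOR_DELAY_UI);
    }

    @Override public void onSensorChanged(SensorEvent event) {
      if(event.sensor.getType() == Sensor.TYPE_ACCELEROMETER)  System.arraycopy(event.values, 0, gravity, 0, 3);
      if(event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) System.arraycopy(event.values, 0, geomag,  0, 3);
      if(SensorManager.getRotationMatrix(rotation, null, gravity, geomag)) {
        SensorManager.getOrientation(rotation, orientation);
        // crude low-pass filter; a real one needs to handle the +/-PI wrap-around
        azimuth = azimuth + 0.1f * (orientation[0] - azimuth);
      }
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
  }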


Actual Positions

Now that I can pan around my 3D place, it's pretty obvious that the room isn't exactly oriented square to North. But more importantly, the positions of the lights bear no relation to their actual positions in the room around me.

Setting Thing positions to their actual locations in the room will have to be done manually for now, when I run NetMash on each Pi. They will then be notified to the place object which can pick them up in the initial discovery rule.

I'm still intending to video all of this working..

Saturday, 25 January 2014

Augmented Reality and Magic

I was out with my daughter today and we went to a garden centre.

To be more precise, I was out with both my daughters - one went indoor climbing with her friend, and the elder and I went off for a hot frothy milk and a coffee, respectively, in the garden centre.

Garden centres are strange places - for a non-gardener like me it's mostly just stuff for other people to buy - mostly, it has to be said, for a generation that arrived before I did.

But this one, like most, had a book section. Again, mostly gardening and cookery books and books about how the town looked 100 years ago. Aeroplane books and war books.

And children's books. Which is where I get to the point of all this.

We picked up this Augmented Reality Fairy book. Now, the daughter that nagged me to buy it is the kind that never gives up and gets very enthusiastic for a while, then moves on to the next opportunity very quickly. So I was a little reluctant at first.

But it was discounted down to only a fiver, and had marker-based AR, which I always find fun to see, and guessed the rest of the family would enjoy too. So I bought it.

Now I have to say that I was almost as excited to try it out as my daughter, and she and I sat together at the Mac and got it going. She was delighted, of course.

You point the webcam at the open page and it detects which page it is and creates a superimposed fairy scene. You can activate various things, like getting a fairy to appear and cast fairy dust around.

The grand finale is to hold a card disc in your palm and entice a fairy by hitting various keys to add fruit and flowers to a cup. So it's like you're holding this fairy in the palm of your hand.

When the family saw all this, there was plenty of "wow"ing and "ooh"ing.

But really, it was pretty basic - just a simple animated 3D scene. It was the fact that it keyed itself onto a physical thing - the book page or the disc - that gave it such a compelling edge. So engaging was this illusion, that my daughter apologised to the fairy when she turned the page and made her vanish!


Augmented Reality and Magic

For a long time, computers have lived in an abstract virtual place in our lives - only interacting properly with reality when printing something out on paper, or arranging for a package to arrive the next day from Amazon. Maybe video calling has a bit of that reality-engagement, too.

Smartphones are better at integrating into our physical lives, with their sensors for orientation and GPS and their cameras.

But when you combine those sensors and a 3D display within an AR app, you can create magic.

The kind of magic that is possible when the unlimited creative universe of the virtual can start to invade our physical environment in truly tangible and compelling ways.

Friday, 24 January 2014

The Internet of 3D sensors and actuators

The sensors and actuators of the IoT tend to be small-scale: temperature, light, etc.

The key factor is the merging of real and virtual, the bringing of Things into the Internet, and allowing communication in both directions: from real to virtual, and from virtual back to real.

On a slightly bigger scale, in the Object Network people are first class and bring their own more complex sensors - for gestures, location, orientation, etc. - and actuators - like screens, glasses, vibration and speakers.

Now the whole thing begins to fill our 3D space - as the Object Network's ability to view the IoT in Augmented Reality indicates.


The 3D IoT

Taking this concept of a 3D IoT to its conclusion:

3D IoT sensors can include full 3D sensor devices that suck in the shape of the world, and sensors like those on the Wii for picking up the motion of your hands and feet, or the Leap Motion that can pick up hand gestures.

3D IoT actuators can include bigger, wall-sized screens, holographic displays, 3D solid displays and 3D printers.

All these technologies enable what were once futuristic applications - instead of bending over a tablet and stabbing at a tiny screen, we'll be standing up, and moving our whole bodies around, interacting with surround-vision displays and holographic objects.

Imagine sculpting a virtual object with your hands, then printing it out. That is a true blend of real and virtual - the "lights and thermostats" Internet of Things will seem rather lame in comparison.

Of course, the ultimate sensor/actuator combo is the robot - a device that can exist in both real and virtual domains simultaneously - it could have a 3D virtual representation showing more about its state and rules, which could be as interactive as the physical robot.

Telepresence robots are a variant of this, merging a real person into a virtual person and back into a nearly-real person again.

The benefit of the Object Network approach to the IoT is that it starts off seeing your world in 3D, so all this is native to its way of working and interacting.

Thursday, 23 January 2014

Programming the IoT = Programming Parallel Computers

It's well-known that Moore's Law is over for single-CPU processing, and that the only way forwards is multi-core and parallel processing.

So given that, the way that we program these chips may have to change. Obviously, if you can easily split your application up into parallel, independent threads then you're fine to carry on programming in single-threaded Java. That's Web servers, of course. Even if your application has inter-dependent threads, you may still be able to battle on with the corresponding Java code and win.

But for any interesting interactive application, different programming models and languages are needed to take away the pain of properly exploiting parallel hardware - to make all that threading code go away in the same way as Java made all that memory management code go away.


The Internet of Processor Things

Now, at the same time that we're considering putting large numbers of small processors together in the same box, we're also considering scattering large numbers of small processors all around us - the sensors and actuators of the Internet of Things.

In fact, there could be more of those processors per hectare than was planned for in the extrapolation of the Moore line: in your 2014 house you may have 16 processors in Things and mobile devices, but only a couple of quad-core desktop machines. And the same ratio may hold true in the work environment. This ratio of scattered processors to co-located ones will probably only get higher.

Indeed, why do we need all the processors to be in the same box? Not all the processes need to share the I/O to the user, so what keeps them together is mainly the need to communicate through a single physical memory and disk.

Which is the main problem with scattering: the processors have to wait longer to communicate. But we don't even know if that's a problem: it's application dependent. And in a future where your applications are sensors and actuators, multiple mobile devices, Augmented Reality, immersive Virtual Worlds and gestural interactions, it could be that wireless data exchange is perfectly good enough.


Declarative Programming

Either way, whether you're programming scattered or co-located processors, you should be able to program without worrying about concurrent thread interactions and process distribution.

You should be able to program independent active and interactive objects without needing to know yet whether they're microns or miles apart. Only timeouts should change. This isn't RPC.

Imperative, threaded programming will have to give way to Declarative programming - where we tell the computer What we want, not How to do it. It will be up to the implementation of the language to handle the mapping to threads and then to processors, local or remote.

Since we don't know right now what ratio of scattered to co-located will turn out to be best in general or in specific applications, a programming model like that in Cyrus, that lets us split and recombine our active objects with perhaps only minor code adjustments, will have a massive advantage.



Wednesday, 22 January 2014

Minecraft, Augmented Reality, and the Internet of Things...

Augmented Reality is a fundamentally 3D technology - you look around a 3D space from the vantage point of your mobile device. Thus it's a short distance from AR to full Virtual Worlds, such as Minecraft and Second Life. You can show 3D representations of any IoT Thing around you, as I have shown in my experiments, but also show any virtual 3D object you like. The "place" or room is an example of a virtual 3D object. I've also mentioned being able to move from your own house into your grandparent's, which jumps the user from AR to VW, since you jump from navigating by moving the device to navigating by on-screen controls.

You can leave notes and signs around the AR world, you can have abstract objects such as one saying "all locks closed" or a big red button to turn off all the lights in your house. You can pick up a link to your grandparent's "all locks closed" indicator, and put it onto the virtual wall of your living room, to check at any time with just a wave of your phone.

A coffee shop could have offer tickets placed in their AR place - that you could pick up and pin to the virtual wall of your AR home. Those tickets would occupy the same place as the 3D representation of the IoT light on your table, or the IoT jukebox that can take suggestions for what to play next.


One App to Rule Them All

An important difference with the Object Network approach is that there isn't an app for the coffee shop, an app for the light, one for the jukebox and one for picking up offer tickets - there is only one app (currently NetMash) that, like a browser, can be used to engage with any and all players in the Object Network: any shop that has Object Net beacons, any light or thermostat that operates according to Object Net principles, and so on.

This way, you interact in an environment that seamlessly merges real and virtual, and lets you seamlessly move from place to place owned by different people, as you go through your day. From house owned by you, to street owned by the local authority, to the coffee shop owned privately, to the library, to the park. From interacting with real Things to interacting with virtual objects. All with just one app.

To achieve this level of seamless interoperability requires that everyone simply publishes their JSON or Cyrus objects in the same way, in the same formats, all linked up with URLs. Obviously harder to do than to say.

But the Web has done it, so perhaps we can.


Minecraft-style building

Since we intend to empower the users over this VW/AR/IoT "fabric" - its data and its rules - we should also allow them the same ease of building within it - especially their home and shop places.

So clearly we need to give them the same abilities, the same tools and materials, that are provided for this in Minecraft! Voxels and hand-held tools and inventories, in other words. That way, a shop owner can delegate the building of her virtual shop "place" to her 8-year-old, and concentrate on the offer objects.

Tuesday, 21 January 2014

Security, Patching and the IoT: Buy Slaves not Masters!

I'm actually really glad that people are using insecure and unpatchable IoT devices to send spam. I'll explain in a minute.

That news broke just a week after this rant by some quite angry and bitter-sounding person at ArsTechnica. Perhaps he wished he had bought a regularly-updated Nexus instead of a Samsung phone, but I digress..

For a balanced view of all this, turn to the expert: Bruce Schneier had this prescient article on Wired, a couple of days earlier than that rant, and this on the Guardian from May last year.

Bruce tells us that these devices are usually running old, unpatched, vulnerable software, and updates are unlikely to be made available - even less likely to be applied if they are.


Solutions

Like I said, I'm actually glad that this high-profile hacking report has come out right here at the start of 2014, just when the IoT is hotting up. If there were a few smaller attacks here and there, they might not have been noticed under all the hullabaloo. But this one even made it into the mainstream popular press.

Which will focus everyone, who wants to make the IoT work, on solutions.

One solution is to wrap the insecure and seldom-updated manufacturers' devices with, say, Raspberry Pi hubs or controllers that run Ubuntu and open source middleware, and to have a regular software update process running on that, just like you would on your laptop.

You manage security at the layer above, and work around proprietary access methods and known vulnerabilities and bugs from that level.

Security and privacy are of course big challenges for the IoT - so this is a great time to open up the discussion about open standards and open source.

Fear the silo and the walled garden, and the consumer device software that tries to take too much away from you!

Buy slaves, not masters.

Monday, 20 January 2014

IoT Protocols

Just for my future reference, not as a definitive comparison, here're some notes on the various IoT protocols that I'm aware of.


Application Level

HTTP - the granddaddy protocol, always a safe bet - RFC

CoAP - the HTTP-alike, RESTful alternative for small devices - RFC-ID

MQTT - from IBM

Comparison of CoAP and MQTT.


Transport Level

IPv6 - designed to give every Thing in the universe an IP address - RFC


Wireless Level

WiFi/Direct - almost every house has a WiFi LAN; 5GHz option; star topology - Pi example

Bluetooth 4.0/BLE - popularised as iBeacon by Apple, but handy for beacons and low-power sensors; star topology - Pi example

Z-Wave - Home Automation; ~900MHz not 2.4GHz like the rest, so greater range, less interference, bigger antennae; mesh topology - Pi example

ZigBee - Home Automation; mesh/star; based on 802.15.4 - Pi example

6LoWPAN - IPv6 based; mesh/star; based on 802.15.4 - RFC - Pi example

NFC - exchange by touching things together; P2P - Pi example

RFID - for cheaply tagging things; P2P - Pi example

Comparison of WiFi, BLE, ZigBee, NFC, and others.

Comparison of ZigBee and 6LoWPAN, the two 802.15.4 protocols.


Some are proprietary, some are the products of more or less closed industry consortia, some have RFCs. Here's a bigger list.

I'm looking at HTTP over WiFi and BLE right now, and will probably look at CoAP over 6LoWPAN at some point.


Sunday, 19 January 2014

The Economics of LED Bulbs

My friend Francis Mahon is on a mission. By trade, Francis is an "oil man", but he sees his future as being in sustainable, low-energy alternatives. As we drove around my home area, Francis was pointing at shops filled with halogen bulbs and at photovoltaic and solar water heating arrays.

Often, he just walks into a shop or pub and chats to the manager, then follows up with a spreadsheet showing the economics of a full refit with LED bulbs, which he then supplies and fits himself. Not a living yet, but every little helps - the planet, that is.

This weekend, after fitting out our bathroom in bright, pure-white LED bulbs, we went out to buy a new light fitting for the kitchen that would take these GU10 240V 5W LEDs instead of the MR16 12V 25W halogen ones I had been using, which need a mini power station cooking away behind them - the heavy, inefficient transformer unit.

Despite taking five times less power, the new bulbs make the kitchen even brighter than the bathroom - so bright, in fact, that we're going to have to install dimmable bulbs and an LED dimmer switch instead.

The light fitting we bought came with old-school halogens, and Francis told me simply to drop them into the recycling station - new and unused! It's just not right to put them back into the market in any way and thereby cost both the purchaser and the planet unnecessarily.

I haven't checked out the maths, but Francis assures me that it's always economically better for you to change all your bulbs to LED right now - not to wait until they pop - even if you've just put new ones in, or got new ones with the light fitting, as I had.

Roughly speaking, the benefits are: brighter, whiter, broader light; 5-10 times less power consumption; 10-30 times longer life and thus half the replacement cost at current unit prices - although unit prices will certainly be significantly lower by the time you need to change them, 5 or 20 years from now.
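
A back-of-envelope example, with my own assumed numbers rather than Francis's spreadsheet: swap one 25W halogen for a 5W LED, run it 4 hours a day, at 15p per kWh:

  (25W - 5W) x 4 hours x 365 days = 29.2 kWh/year, or about £4.40/year per bulb

So a bulb costing a couple of quid pays for itself well within the year, before you even count the halogens you no longer have to replace.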

You can buy new bulbs online for a couple of quid. I'm already half way there, but I plan to go all-LED this year: the only thing that will delay me is the need for new fittings and LED dimmer switches, and my search for bright, IoT-ready RGB bulbs.

Saturday, 18 January 2014

Three lights in both a real and a virtual room

You may have noticed that I've been a bit quiet on the development front recently - I only have 45 minutes a day on the train, and I've been using that to tidy and bugfix in NetMash.

So just to show that things still work after dibbling with the code, here are some photos of the latest actual Augmented Reality Internet of Things in action.

I've hit the menu option "Around", which searches for beacons in range and picks up their URLs.


There are three lights around, all in the same place: "Room of Things". The app has decided I'm not right up close to any of them, so it's showing me the place they all belong to.


I move to the first light (OK, it's my laptop pretending to be a light, but you can see the USB BLE sticking out there). The view jumps in to focus on that light.


Over to the first Pi, advertising the light that's green. The view jumps to that, now.


And finally to the only actual light - the RG(B) one on my Christmas Pi. You can't tell from the photo, but the LED really is red like its 3D view. Touching the red cube adjusts the LED colour.

Obviously jumping in and out like this is a bit coarse, so I need to come up with a smoother algorithm for figuring out where the phone is. And I need to glide in to the lights in the actual place, not jump to a view with just the light alone.

And of course, it would be nice if things in the place were in their actual relative positions, and panning around panned around the 3D place view, in proper AR style.

I'll get onto it, on the next train to work...

Friday, 17 January 2014

Internet of Things Events, Conferences and Meetups

Here's what I found from a quick Google for interesting IoT events in 2014 (European ones highlighted for my own future convenience):

IEEE World Forum on Internet of Things (WF-IoT) 6-8 March, Seoul, Korea
M2M Conference 24-25 April, London
Thingscon 2-3 May, Berlin
IoT-SoS 16 June, Sydney
IoT Week 16-20 June, London
Intelligent Environments 30 June-4th July, Shanghai
Smart Systech 1-2 July, Dortmund
Future IoT & Cloud 27-29 August, Barcelona
MIT's IoT Conference 6-8 October, Cambridge, MA
Internet of Things Conference 12-13 November, London

As I discover more, I shall update this list.

Some more interesting links:

IoT Calls for Papers
IoT Events on Twitter
IoT London Meetup Group


Talking of the latter - the London IoT Meetup Group - I, and hundreds of others every month, have not been able to attend any of the monthly meetings, due to the maximum capacity of the venue: 95. There is a desperate need for a second London group! If you are interested in joining me in setting something up, let me know @duncancragg.

Thursday, 16 January 2014

Wifi Direct, BLE, Zigbee or Z-Wave? AR versus IoT? Ask Google Trends!

I use Google Trends a lot to decide where things are going and what to focus the Object Net on, or what technologies to use to implement it.

Here's an interesting graph comparing WiFi Direct, Bluetooth 4.0, iBeacon, Zigbee and Z-Wave - here's the scoop: WiFi Direct demolishes the lot!

Here's one comparing Augmented Reality, Layar and Google Glass with the Internet of Things. AR in all its forms is still ahead of the IoT, but after a big up-spike in 2009, interest has been gently declining for four years. However, specific AR products - Layar and Glass - are very active.

More interesting insights come from clicking through the "Regional interest" tabs just below: compare where ZigBee and Z-Wave get their respective support, for example.

Of course, these graphs should only really be used for picking up indicative clues, and entertainment. Look at where "Layar" gets most of its support - Indonesia!

Wednesday, 15 January 2014

Nest and Google: Closed and Cloud

No blog dedicated to the Internet of Things would be complete without some kind of reasonably timely analysis of the recent purchase of Nest by Google...

Here's a pretty good summary of the basics, on Mashable, to save me repeating it all. And here's a good article on the issues with the automatic features of Nest, and the infamous software upgrade - keep reading into the comments.

My view is that Google has been gradually getting deeper into all of our lives, and this is an obvious next step for them: right into our homes. Nest is another service that depends on remote servers for its normal operation - servers that can collect lots of information about you and your family.

In my Manifesto for the Internet of Things, and in a follow up article, I argued that the IoT should be open and local, not closed and remote.

So in that light, even though not unexpected, this development is not a good direction for the Internet of Things from the end-user's perspective. But you need to have real, substantial examples of what you don't want, in order to focus your mind on what you do want.

There are interesting, challenging and exciting times ahead..

Tuesday, 14 January 2014

Half way through my 60 Days of Things

I'm now half way through my 60 Days of Things. I started this blog on the 14th of December last year, without intending to blog every day. But I happened to have something to say every day, and once I'd established that regularity, I decided to keep it up. I do have rather a lot to say about the Object Net, and my other blog is full of me saying it in long posts. This way I get to write shorter posts, yet more often.

So, looking back at what I've blogged: here are articles describing the development progress in NetMash:

I also wrote some more philosophical and background articles on the Object Network for AR and the IoT:

And I showed off some hardware:

In the next 30 days .. well, you'll just have to stay tuned!

Monday, 13 January 2014

The Functional Observer Programming and Distribution Model

I thought I'd pick up on the final two paragraphs in my recent post on Monads, where I described an alternative approach to using them: simply accept objects and their states, and then have pure functional transitions between those states.

This approach is the basis of the programming and distribution model I call "Functional Observer". Functional Observer is the foundation of the Object Network.

In Functional Observer, an object may observe other objects through links. When an observed object's state changes, the observer object may change its own state. That new state is a pure function of the new state of the observed object, the states of other linked objects, and the current state of the observer.

Below is an example in a picture: there's a Ticket object linking to its corresponding Order object. For a given state of the Ticket and the linked and observed Order, the Ticket's next state is a pure function of those two visible states:
State can be either pulled when needed, or pushed when updated.
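
Here's the same example as a rough Java sketch - not Cyrus, and the state names are made up - just to show that the transition really is a single pure function over the visible states:

  // Sketch of the Ticket-observes-Order example: the Ticket's next state is a pure
  // function of its own current state and the observed Order's state. The state
  // names ("pending", "paid", "ready") are made up for illustration.
  public class FunctionalObserverSketch {

    record Order(String status) { }
    record Ticket(String status, Order order) { }

    // Pure function: no side effects; the same inputs always give the same Ticket.
    static Ticket nextTicket(Ticket current, Order observedOrder) {
      if(current.status().equals("pending") && observedOrder.status().equals("paid")) {
        return new Ticket("ready", observedOrder);
      }
      return current;   // otherwise nothing to do
    }

    public static void main(String[] args) {
      Ticket ticket = new Ticket("pending", new Order("unpaid"));
      Order  order  = new Order("paid");                          // observed state changed
      System.out.println(nextTicket(ticket, order).status());     // prints "ready"
    }
  }

Whether the Order's new state arrives by pull or by push, the function itself stays the same.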

Functional Observer is the basis of FOREST (Functional Observer REST), the Object Network's distribution model, and of Cyrus, its native programming language.

I'll be elaborating more on this family in future posts.

Sunday, 12 January 2014

Linking Things up: from house to street to city to the world

Yesterday I referred to the way that, with the Object Network, you could have a link to a grandparent's house, or rather its virtual places, from yours, so that you could navigate virtually there just like you can navigate around your own house, checking on the lights and locks and perhaps medical sensors.

This only needs a URL - say from your hallway place object to their hallway place object. You could walk to your own hallway holding up the AR view, then carry on walking on the screen right into the grandparent's house to see that everything was all right.

Actually, you could link to the council-provided street object, and then either physically or virtually go out into the street. You could read a note telling you when the next recycling collection was due. Not IoT, more abstract-virtual than real Things. But the Object Net doesn't distinguish between them. There could also be actual public or council-run Things, perhaps to do with lighting or traffic sensors.

Of course, other folk out in the street looking around would be visible to you, and you could chat if you were friendly.

Maybe if you looked back at their houses, you'd see, not their private IoT network, but a set of publicly-visible places, perhaps advertising some event or hobby or interest. Or virtual seasonal decorations, that sync up with the real Things.

If you walked, in reality or virtually, into town, you would be met by onward links coming out from the places you've been to, or new ones advertised by beacons you pass. You've seen all the examples of course, of how shops and art galleries could be augmented. But in the Object Net, they can be linked up, too.

In fact, the Object Net can form a seamless, global network of Things all linked up. No silos, not even any "applications". It will be an unbroken fabric of linked-up real and virtual objects that surrounds you everywhere you go.

It's not just a static "read-only" fabric either: it's interactive. You could be allowed to leave notes for friends in certain places, and you'd be able to pick up a link to anything you see and then plonk it down anywhere you are allowed to.

When entering the park on your bike, you could grab the link of a notice telling you today's park closing hours, and attach it to your virtual wall at home, ready to read it when you return, simply by holding up your AR view on your phone to the real wall.


Saturday, 11 January 2014

The Person as a First Class Object or Thing

Following on from my post a couple of days ago, I'd like to elaborate a little more on the relationship between Augmented Reality and the Internet of Things within the Object Network. I alluded to the fact that the IoT can naturally embrace AR, and this is particularly true in the Object Net, because there, a person is just another "Thing"!

In fact, you've hopefully seen my 3D screenshots of a place with lights and a light sensor. Well, here is what another person using NetMash in the same room looks like:



OK, a little basic, and slightly unnerving, for an avatar, but that's just detail at this stage.. This was me visiting the room using NetMash on my daughter's phone, to be precise.

To the Object Net IoT, the user object is not so different from a light or switch: it has a URL and both sensor and actuator elements. The sensors include: GPS location, BLE location, orientation, currently looking at, currently touching or gesturing, currently saying, etc. Even heart rate and other Quantified Self parameters could be added to the user object. The actuators include: screen - tablet or glasses; GUI or 3D; camera background, wall screens; sounds and vibrations, etc.

Other IoT approaches afford rather limited interaction: directly with the actual lights and switches, or via apps with dedicated GUIs. The point of an AR interface in the Object Net is to make the user a first-class player in its IoT, able to interact within the same logical space as all the other Things - to explore, to observe, and to act and interact just like them.

The user gets to move around and see everything there is to see about the surrounding Things at a rich, logical level - their full internal state, the links between them, their animation rules. A user can interact in a more complete, seamless and more consistent way than directly with the physical Things themselves.

Being first class also means that users can themselves be treated just like any other Thing - by lights, etc - and can feed or trigger their rules in the same way.

They can also virtually meet other users, perhaps from rooms in another town - linked and joined together with theirs by simple URLs - maybe to keep an eye on a grandparent, and their locks and lights and health.

Friday, 10 January 2014

Monads and Cyrus

A question came up on the ThoughtWorks tech forum yesterday:

"What actually is a monad, and why should I care?"

Various people had a go, or directed the OP to their favourite exposition. Not wanting to miss this opportunity to promote my Cyrus language, I also replied:
____

This is just a practical approximation to give you a feel for how I see the whats and whys of Monads. Happy to take corrections if I've actually got it all completely wrong. And apologies to anyone who feels patronised by my jolly style or who is a Haskell programmer and whose teeth start to grate..


A Pure Functional World

Say you love functions sooo much that you want everything to be a function in your programs. So that's:

output=f(input, input, ..).

Everywhere. And for the same inputs, you always get the same outputs.

Programs are built by chaining functions:

final-output = f1( f2(3), f3(4,"banana") )


No State?!?

First thing this means is no state - you can't have persistent state. State, as in "the value that something takes at some point", can only be represented by those input numbers and strings: 3, 4, "banana", and the outputs of functions.

So you can have "state", as long as it keeps being juggled transiently between the inputs and outputs of functions, maybe recursively.

But you really want state, because the Real World has absolutely bloody tons of it. Computers and programs that do real work have user interfaces and databases.

So you're just going to have to juggle and maintain those transient values between functions, and do some recursion to keep the plates spinning.

If only there were a way to make all this value juggling and plate spinning easier. A functional programming pattern of some sort.. it would have to have an academic, intimidating name, to put people off thinking that State is Great, or anything like that.


Enter The Monad...

In fact, all you need to do is to juggle bigger, smarter values!

A Monad is just an aggregate, a container, a wrapper, a raiser-upper of state which allows you to still be completely functional and yet still pass around all the state you need in your entire program.

Indeed, to make the point: you could go really nuts and have a program like this, for your database:

end-of-day-database-state = f5( f4( f3( f2( f1(start-of-day-database-state)))))

where each fN() is a transaction, or a selector that carries forward the selections plus the entire database.

A better notation, perhaps for browser GUIs, would be:

f1(starting-DOM-state).f2().f3().f4().f5()

which can pass the DOM state through, along with, again, selections - a subset or working set. Look familiar?

So, in general, you want to apply little functions to little bits inside the whole passed/juggled state. You never change the whole database or the whole GUI at once. Perhaps just a field in the database or GUI. You may have intermediate states that get added to the aggregate and passed on to the next stage.

Actually, f1() in both examples is special, because it takes a "simple" state or value and raises it up to this wrapper/aggregate/super-value - it creates a Monad. It's like $() in JQuery. Similarly f5() is special, at least in the database example, as it goes back again to the simple state. The simple state doesn't know about all this aggregate, compounding stuff: it doesn't have the functions or methods that juggle and spin plates.
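
If you want to see that "raiser-upper" shape in code, here's a toy in Java - not a law-abiding monad, just the wrap-then-chain pattern described above:

  // Toy "raiser-upper": wrap a value, then chain functions over it, $()-style.
  // It ignores the monad laws and error handling - it's only meant to show the
  // shape of the f1(state).f2().f3() chaining above.
  import java.util.function.Function;

  public class Wrapped<T> {

    private final T value;
    private Wrapped(T value) { this.value = value; }

    static <T> Wrapped<T> of(T value) { return new Wrapped<>(value); }  // the f1() / $() step

    <R> Wrapped<R> then(Function<T,R> f) { return new Wrapped<>(f.apply(value)); }

    T get() { return value; }                        // back down to the simple state

    public static void main(String[] args) {
      String endOfDay = Wrapped.of("start-of-day-state")
                               .then(s -> s + " +txn1")
                               .then(s -> s + " +txn2")
                               .get();
      System.out.println(endOfDay);
    }
  }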


Why not just have explicit, big state objects instead of Monads, and only use pure functions to transition those objects between states?

Exactly. It would be way easier than all this juggling and plate-spinning, and much easier to see what was going on. You could give those state objects URLs, and everything. :-)

Thursday, 9 January 2014

Augmented Reality and the Internet of Things

As I explained a while back, this blog is all about my exploring the relationship between Augmented Reality and the Internet of Things.

In my Manifesto for the Internet of Things, I described my vision:

To invisibly merge the real and the virtual, creating ambient and ubiquitous interaction, and to empower people over the control and sharing of their physical items, virtual data and the rules that animate them.

Of course, none of this is particularly original. Mark Weiser's vision of Ubiquitous Computing from the Eighties amounts to much the same thing:

Ultimately, computers would "vanish into the background," weaving "themselves into the fabric of everyday life until they are indistinguishable from it."

However, do a Google search for "Augmented Reality" "Internet of Things", and there are very few who share the view that Ubiquitous computing can be realised by the combination of these two hot topics.

Here's a Wired article from 2010. And ABI Research agrees. That's pretty much it.

But it seems obvious to me that, if we are to be surrounded by a fabric of reality merging with the virtual via the Internet of Things, we ourselves will play within that fabric using what we today call "Augmented Reality", via smartphones, tablets, glasses and projectors, or holographic projections and wall-sized screens that react to our gestures.

Maybe IoT people just naturally embrace AR without a second thought. Not sure if AR evangelists are so aware of the potential of the IoT, though.

Wednesday, 8 January 2014

Navigating Things and Places Around You

Some small progress, from 20 minutes here and there on the train to and from Waterloo...

I've added a menu item to switch to navigating NetMash by Things around you.

It starts you off viewing a 3D place or "room" containing the three light objects as you've seen before, then if you move your phone towards a broadcasting Thing, it jumps to showing you that 3D object up close. If you move back, you are back in the place again.

Here's the menu, showing the "Around" option:


You can also see, underneath that, the "Objects Around" object, which not only lists the three broadcasting Light Things I had before, but now has their common place object too: the "Room of Things". This link is found by grabbing the "within" links of the lights. The place is given a fixed distance of 20.

So, if you're in "Around" mode, the nearest item in the list above is jumped to - which is the place "Room of Things", unless you move the phone to within 20 of one of the BLE dongles. It's a clunky algorithm for now, but it will be refined.
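
Spelled out, it's just a nearest-distance pick with the place pinned at 20 - something like this sketch, with made-up names rather than the real NetMash ones:

  // Sketch of the "Around" jump choice: pick the nearest advertised object, with
  // the common place object given a fixed distance of 20, so it wins unless a
  // real beacon comes closer than that. Names are made up, not NetMash's.
  import java.util.Map;

  public class AroundSketch {

    static String objectToJumpTo(Map<String,Double> distanceByURL, String placeURL) {
      String nearest = placeURL;     // the place sits at a fixed "distance" of 20
      double nearestDistance = 20.0;
      for(Map.Entry<String,Double> e: distanceByURL.entrySet()) {
        if(e.getValue() < nearestDistance) {
          nearest = e.getKey();
          nearestDistance = e.getValue();
        }
      }
      return nearest;
    }
  }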

I'd show you screenshots of the 3D stuff, but you've already seen them: it's the jumping that is important. When I've smoothed it out enough for a video, I shall do that instead.

Tuesday, 7 January 2014

IoT Rules: Event->Action versus State->State

A core difference between the Object Network approach to the IoT and every other that I've seen is in its programming model.


The Event-Action Programming Model

Other rule systems - or programming models within existing languages such as Javascript - are based on the Event-Action approach. This starts with an event, possibly coming from a subscription, such as "light level is now low", "owner is nearly home", "someone has entered the room", "switch has been turned off", "room is too hot", "it's turned night time", and so on. The rule matches such an event, then triggers an action: "turn on the light", "turn on the heating", etc.

This style is very simple and obvious, so does allow non-programmers to create rules quite easily.

However, it is very fine-grained, limited to simple parameters and actions. It very quickly hits a limit for more interesting programming of behaviour.

Having to model every input as a specific event, and every output as a specific action is not just low-level, but also highly prescriptive - your rules say "exactly in this case, you must do this".

In an environment of large numbers of independent but (hopefully) co-operating components, this can be a brittle model.

These approaches compound this because they usually have a central controller that expects all entities around to become slaves. Sometimes that controller is even outside of your own network.


The Object Network Programming Model

The Object Network programming model is more general and broad, and allows each Thing or object in the system to have complete autonomy.

In the Object Net, instead of events you have observed states, which may be complex descriptions of the whole of a Thing's current state, or even the current state of many linked Things, in your house or on the public Internet. Instead of actions you have new states for objects. Those new states may in turn be observed by other Things.

It is also a simple programming model, but a much more powerful one.

Every Thing or object is responsible for its own state, which depends on the state of other Things or objects around it, that it has links to:

When a light decides to observe a switch, it can set its own state to "on" when the switch's state is "on". Several lights could observe that switch state, or another state, and come on at the same time. Or another light can observe the first light and other lights around and set its own light levels to match them, or perhaps match their colours. The heating and lighting can observe the time of day and behave differently according to other states of the Things they are observing. A heater may refuse to turn on if some other state it is aware of, or rule it is following, is overriding it. A light can scan around to see if there's a light-level sensor that it can react to.
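
In the Object Net these would be Cyrus rules, but the shape can be sketched in Java - made-up names and thresholds, purely to show state-to-state rather than event-to-action:

  // Sketch of state-to-state: the light's next state is a pure function of its own
  // current state and the states it observes (switch, light sensor, time of day).
  // All names and the 0.3 threshold are made up for illustration.
  public class LightRuleSketch {

    record LightState(boolean on) { }
    record Observed(String switchState, double lightLevel, int hourOfDay) { }

    static LightState nextState(LightState current, Observed observed) {
      boolean dark    = observed.lightLevel() < 0.3;      // assumed threshold
      boolean evening = observed.hourOfDay() >= 18;
      boolean wantOn  = observed.switchState().equals("on") && (dark || evening);
      return wantOn == current.on()? current : new LightState(wantOn);
    }
  }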

Nothing is run centrally, so Things and objects in the network just interact at their own pace and according to their own rules. Rules and states can be more like intentions than commands, allowing a more loosely-coupled federation of intelligent and interacting Things.

Further, much more context and surrounding state can be taken into account at once in those rules, because the rules have the entire local and global Object Network before them, on which to base their state transitions.


Monday, 6 January 2014

Open and Local, not Closed and Cloud

I came across this excellent article by T.Rob Wyatt yesterday. Go and read it..

OK, so if you didn't jump that link, here're some juicy quotes for you:

What is holding up the Internet of Things is that people do not want to buy devices that deeply penetrate their veil of personal privacy and then send fine-grained data about them back to device manufacturers.

And:

Screw in a Philips Hue bulb and all of a sudden that switch on the wall is worse than useless. You have to duct tape it in the “on” position to make the bulb fully functional and then are forced to use your phone to control the bulb. What LED “smart” bulbs need is a wall switch that passes power straight through but sends commands to the bulb using the API. 

And the Big One:

It is the open, local API that is missing from the Internet of Things.


Open and Local, not Closed and Cloud

Consider the Ninja Sphere - it depends on its remote servers to manage and run your rules.

Consider the British Gas Hive offering, which was splattered across Waterloo this morning:


The website you use to control your heating is hosted by British Gas, not by you. British Gas gets to see every single heating adjustment you make, via the website or the app. And there's no API, at least not yet. Another app, another silo, another lock-in, another privacy leak.

The IoT will blossom once everyone just accepts the inevitable and opens up all their devices on the local network.

You should be able to choose a favourite way to see and control your home, office or factory, regardless of the provenance and technology of the sensors and actuators: the (single) app you like, the rule system you like. It should all just work together, with hardly any set-up, and nothing should be let in or out without you knowing and agreeing.

My Manifesto for the IoT describes this vision, and this blog documents my own explorations of it.

Obviously, I believe that one day everyone's favourite choice will be the NetMash app and NetMash servers, implementing the Object Network approach, programming in Cyrus rules.


And Finally..

Here's a nice bit of sloganising from the article by T.Rob:

Hardware is the new software.
Crowdfunding is the new VC.
Makers are the new kingmakers.




Sunday, 5 January 2014

Scanning BLE adverts from Linux

As I mentioned before, I need my Pies to be able to see each other's BLE adverts so that new Things can discover existing ones and find a place to belong to. I even suggested that a place could advertise itself directly, instead of jumping via a Thing object's "within:" link. That will be needed for the first Thing, at least.

Also, one day I'll want all my Pies to be scanning for mobile advertisers, such as people, robots, dogs and cars. Android currently can't itself be a beacon, but when it can, I'll need my Pies to be able to spot the app.

So I did a little research and came up with a possible answer. Unfortunately, as I said before, there doesn't seem to be a BLE API for Java yet, so we still have to call out to the command line. And the tools we have are apparently undocumented and rather clunky to use.


Clunky Method

First you kick off hcidump:

root@duncan-dell/0:~ -> jobs
[1]  + Running                       hcidump -x -R

You need "-R" to show the raw data and "-x" for hex dumping.

Now call "lescan", and watch the output of hcidump:

root@duncan-dell/0:~ -> hcitool lescan > & out

The following is the output of hcidump:

< 01 0B 20 07 01 10 00 10 00 00 00 
> 04 0E 04 01 0B 20 00 

Don't know what that is.

< 01 0C 20 02 01 01 
> 04 0E 04 01 0C 20 00 

Or that.

> 04 3E 2A 02 01 03 00 B3 F1 C6 72 02 00 1E 02 01 1A 1A FF 4C 
  00 02 15 C0 A8 00 12 1F 92 B5 0D C3 24 A3 7F 7A 66 00 00 00 
  00 00 00 00 C2 
> 04 3E 2A 02 01 03 00 87 6C C8 72 02 00 1E 02 01 1A 1A FF 4C 
  00 02 15 C0 A8 00 11 1F 92 4B CF D9 1F 26 0E F6 E2 00 00 00 
  00 00 00 00 BC 

Aha! There're my two Pies. I've highlighted the MAC address, which is reversed. The hex following is our advertising data, containing the URLs.

Interestingly, there's a number in the last octet of the data. I did some tests to see if that was the RSSI, but it didn't seem to change in any correlation with the distance. The "1E" after the MAC tells us that there are 30 octets following, which does indeed fall one short of the end. More on this below.

It then hangs, presumably continuing to scan. When you kill it, hcidump splutters a bit:

< 01 0C 20 02 00 01 
> 04 0E 04 01 0C 20 00 

Here's the rather uninteresting output saved by the actual lescan command:

root@duncan-dell/0:~ -> cat out
LE Scan ...
00:02:72:C6:F1:B3 (unknown)
00:02:72:C8:6C:87 (unknown)

I believe that "unknown" refers to the fact that it found the "FF", or manufacturer-specific data, in the octet string. See below for more on that.
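
Until there's a proper BLE API for Java, NetMash will have to drive these tools from the command line - something like this rough sketch, which assumes hcitool and hcidump are on the path, that we're running as root, and that the raw report layout is as shown above:

  // Rough sketch: kick off "hcitool lescan" to start scanning, run "hcidump -x -R",
  // and pull the (reversed) MAC out of each LE advertising report line.
  // Assumes root, and the raw layout shown above.
  import java.io.BufferedReader;
  import java.io.InputStreamReader;

  public class BLEScanSketch {

    public static void main(String[] args) throws Exception {
      new ProcessBuilder("hcitool", "lescan").start();           // leave it scanning
      Process dump = new ProcessBuilder("hcidump", "-x", "-R").start();
      BufferedReader in = new BufferedReader(new InputStreamReader(dump.getInputStream()));
      for(String line; (line = in.readLine()) != null; ) {
        String[] hex = line.trim().split("\\s+");
        // "> 04 3E .." is an HCI event packet carrying an LE Meta Event (0x3E)
        if(hex.length > 14 && hex[0].equals(">") && hex[1].equals("04") && hex[2].equals("3E")) {
          StringBuilder mac = new StringBuilder();
          for(int i = 13; i >= 8; i--) {                         // octets 8..13, LSB first
            mac.append(hex[i]); if(i > 8) mac.append(":");
          }
          System.out.println("advert from " + mac);
        }
      }
    }
  }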


RSSI?

In pursuit of the RSSI, I ran hcidump without the raw mode flag and got more promising-looking output:

< HCI Command: LE Set Scan Parameters (0x08|0x000b) plen 7
    type 0x01 (active)
    interval 10.000ms window 10.000ms
    own address: 0x00 (Public) policy: All
> HCI Event: Command Complete (0x0e) plen 4
    LE Set Scan Parameters (0x08|0x000b) ncmd 1
    status 0x00

Ah! So that's what that meant.

< HCI Command: LE Set Scan Enable (0x08|0x000c) plen 2
    value 0x01 (scanning enabled)
    filter duplicates 0x01 (enabled)
> HCI Event: Command Complete (0x0e) plen 4
    LE Set Scan Enable (0x08|0x000c) ncmd 1
    status 0x00

Uh-huh..

> HCI Event: LE Meta Event (0x3e) plen 42
    LE Advertising Report
      ADV_NONCONN_IND - Non connectable undirected advertising (3)
      bdaddr 00:02:72:C8:6C:87 (Public)
      Flags: 0x1a
      Unknown type 0xff with 25 bytes data
      RSSI: -62
> HCI Event: LE Meta Event (0x3e) plen 42
    LE Advertising Report
      ADV_NONCONN_IND - Non connectable undirected advertising (3)
      bdaddr 00:02:72:C6:F1:B3 (Public)
      Flags: 0x1a
      Unknown type 0xff with 25 bytes data
      RSSI: -68

There's the supposed RSSI, then - but it's the same as that "random" octet - and it similarly doesn't change even when I press the Pi right up against the laptop BLE.

Notice the bit where it says it doesn't understand all this Apple-ese (0xff).

< HCI Command: LE Set Scan Enable (0x08|0x000c) plen 2
    value 0x00 (scanning disabled)
    filter duplicates 0x01 (enabled)
> HCI Event: Command Complete (0x0e) plen 4
    LE Set Scan Enable (0x08|0x000c) ncmd 1
    status 0x00

Killing the lescan.

This is all extremely clunky, but it can be made to work, as long as I find out what's up with the RSSI. Someone else had the same issue, it seems. I can at least set up the light object place links this way instead of hard-coding them.