Monday, 12 May 2014

Welcome to ThoughtWorks Subscribers!

You may have come to this blog from my article just published on ThoughtWorks' Insights page, "CoAP and a Web of Things Watching Things". Welcome!

This blog is actually my ThoughtWorks 60-day Internet of Things project - my main blog is "What Not How". I summarised my progress in this project after 30 days and after 60 days. These two pages briefly describe, and link to, each day's page for the preceding month.

There are three main articles to read here that explain the Object Network approach to the Internet of Things:
I also have three articles that mention CoAP:
You can contact me on Twitter if you want to discuss anything about what you see here or on my main blog.

Friday, 14 February 2014

Moving off Blogger..

This is my last post on this blog. Blogger and Google have been a bit of a disappointment, so this seems like a good time to wrap up and find an alternative.

What I want in a blog service:
  • bug-free editor - I don't actually need WYSIWYG if it's that hard to do; HTML would be fine
  • reliable server to save mid-edit to, and which doesn't push down the whole page - that I'm in the middle of editing - with a message telling me that once again it failed to auto-save
  • a preview with links that work, so I can test them there
What I want from Google's search services:
  • if they run a service like Blogger, they should put the pages from it in their index, preferably when they're published
  • for "site:object-network.blogspot.co.uk" to work
I'm also looking for alternatives to Feedly, the feed reader that pulls in new pages in less than a second, but then stubbornly refuses to acknowledge updates from then on.

You don't know what you do want, until you know what you don't want. I now know more about what I want. So thanks, Google, for that!

And for giving me a free blog, which wasn't so bad, really.

Thursday, 13 February 2014

Concluding my 60 Days of Things

I'm now nearly at the end of my 60 Days of Things. As I said when I was half-way: "I started this blog on the 14th of December last year, without intending to blog every day. But I happened to have something to say every day, and once I'd established that regularity, I decided to keep it up. I do have rather a lot to say about the Object Net, and my other blog is full of me saying it in long posts. This way I get to write shorter posts, yet more often."

It's been a lot of fun, doing the coding and hardware stuff and writing these posts. I almost never knew what I was going to write as I sat down every evening, but never once failed to find inspiration.

So here's another summary in two parts. First, what I achieved for NetMash:

Rather more articles in the "general" category this time:

I also went on a short trip to Venice and linked to some snaps.

I'm not sure if I'll continue the pleasant discipline of daily blogging, now that my 60 days are up, but I do hope to.

I'll certainly keep blogging on the Augmented Reality Internet of Things manifested by the Object Network.

I've only just started...


Wednesday, 12 February 2014

Amazing Innovation at Raspberry Pi and Arduino Meetup

Tonight I attended the Surrey Geeks Meetup in the lovely offices of the generous Kyan in Guildford. The topic was essentially anything to do with Raspberry Pi and Arduino, and there was some pretty innovative stuff being talked about.

The organiser, Jon Nethercott, kicked off talking about the Arduino boards and the projects he had constructed, including an amazing capacitance meter that required no additional hardware. You could push a capacitor into two A-D pins and the display shield would quickly tell you what value it had, from 1pF up to hundreds of uF.

It has two ways of calculating the value: from 1pF to 1nF it measures the ratio of the capacitance of the test capacitor against the residual 25pF capacitance of the internal circuitry! I've done a lot of electronics in my early life (really early life - I built my first computer in 1977 using the 1802 CMOS microprocessor), but I've never, as far as I recall, had to work out the voltage distribution of two capacitors in series!

Jon explained it to me, and it actually works out quite intuitively: the bigger capacitor develops the smaller voltage because, from an electron's point of view, it behaves like a lower resistance - it has more "space" for electrons to flow into.
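Working it through for myself - this is my reconstruction of the principle, not Jon's actual circuit - two capacitors in series carry the same charge, so if the A-D pin reads the voltage $V_{ref}$ across the internal reference capacitance $C_{ref}$ (the ~25pF) against the supply $V_{cc}$:

$$V_{ref} = V_{cc}\,\frac{C_{test}}{C_{test}+C_{ref}} \quad\Longrightarrow\quad C_{test} = C_{ref}\,\frac{V_{ref}}{V_{cc}-V_{ref}}$$

The bigger $C_{test}$ is, the more of $V_{cc}$ ends up across the reference - which is exactly the "bigger capacitor, smaller voltage" intuition.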

Above 1nF, which is very large relative to the 25pF residual capacitance, Jon switches to an alternative technique: using an internal pull-up resistance to charge the test capacitor, then measuring the voltage developed and the time taken to reach it. Once again, not a scrap of extra external circuitry, since this time it relies on an internal resistance. Cool.

Related to the work I've been doing, Richard Jelbert showed us his Pi for cars with a BLE beacon attached. This could be used to drive an app on the driver's phone to pick up a small number of events broadcast from the Pi, including from sensors. For example: when the driver enters the car, starts it, stops it .. or crashes it! This could be used to reward clean drivers with lower insurance, without any inconvenience to the driver, who otherwise has to keep messing with the app controls at the start and end of the journey.

Richard also showed us his prototype for a Bitcoin vending machine. Seriously: a Bitcoin vending machine. You put in some cash and get a printed slip with two QR codes on it: the public and the private key for your entry on the blockchain.

My colleague in government and another meetup organiser, David Carboni, told us of his plans to enhance local neighbourhood safety with an automatic number plate recognition system. Every participant in a street would have a Pi tracking cars via its camera, and they could share the information to track unfamiliar cars. The approach would involve quite a bit of image processing to normalise the image and then extract the characters. There is this software which may be interesting to look at.

Next up, a Research Assistant from Guildford University, James Mithen, told us of his plans to get into ARM code and write another operating system for the Pi, as a fun exercise...

Finally, I stood up and told everyone about the Augmented Reality Internet of Things idea, with a mention of Minecraft house-modelling to get them all thinking I'm a nutter. It worked. They did.

We all talked more than we hacked or wired, which was great - and there's always next time to play with kit.

Really exciting stuff. And great pizza and great beer. Thanks to the organisers and hosts.

Tuesday, 11 February 2014

Seamlessly copying data between adjacent machines

This morning on the train I wanted to work on a document that was in a draft email that I could access from Google by 3G on my Android phone. But I wanted to work on the document on my laptop, which doesn't have 3G.

After my 25-minute journey was done, I still hadn't solved the problem - how do I easily and reliably move data from the device in my hand to the one six inches below it?

The very fact that I had to weigh up all the options shows that it's just not something you can do instinctively.

I'm not asking for answers, by the way, I know about all the options (Bluetooth, hotspot, tethering). It's their reliability and ease of use that's part of the problem.

On the way back, there was a massive video advert in Waterloo station for the Audi car company. It said (I think; I was rushing past) "Number of Audi drivers in the station: 4567", and it was slowly incrementing. I presumed that they made that up - it seemed high - or used Twitter or something.

Then I thought, well, it'd be fun to offer iPhone-using Audi drivers an app which could be a beacon saying: "I'm an Audi driver!". Then if they were told to walk past the advert, well, you get the idea.

These two incidents got me thinking about commodity technologies for easy and reliable proximal data exchange.

And it's really still too hard to seamlessly move data between machines and devices that are right next to each other!

In our workplace, faced with the need to get a file from one PC to the one next to it, even techies have been known to send the data across the Atlantic via the US, because it's easier to email it thousands of miles than figure out a direct route of three feet.


Non-Solutions

I've listed some commodity wireless technologies already: Bluetooth, Wifi and Mobile data. You could add QR codes, NFC and RFID to those, of course, but they are still less common.

Now, I've got a low tolerance for poor usability, but surely everyone hesitates before considering the buggy, unreliable and cognitively complex Bluetooth approach. I can't even think how I'd do it, to be honest. I know it involves some kind of "pairing", and lots of failed transfers.

Wifi requires you to be logged in to a network with complex passcodes, and even then you'll probably need to play with IP numbers to do local file transfer.

Mobile data is not always available, is slow and unreliable when it is, and requires one proprietary intermediary or another.


Solutions

So, back to the Audi example: what if it was as easy as (a) being near and (b) setting the intent to share something, anything?

I should be able to pull up the draft email I had on my phone, hit "Copy to adjacent device .." and enter a 3-digit number (to prevent others being able to see it casually; say only one transfer is possible and it times out after a minute, to make it even more secure).

Now if I hit "Look for local data" or something on the other machine (laptop), I just need to enter the 3-digit number and it's there in seconds.

Similarly: PC #1: right click on file "Copy to adjacent device .. ". 3-digit number. PC #2 "Look for local data". 3-digit number. It's there.

The 3-digit number would be all you needed to confirm the particular transfer - useful when others were active around you - and you wouldn't need to see or choose the filename being offered or anything.

I should be able to pull up a photo of myself that I use in public, or enter details into a profile document - including the car I drive - then hit "Publish to adjacent devices ..". No 3-digit number this time, of course. The peer device will "Look for local data" and suck it all in, then filter out the interesting stuff to stick up onto that 12-foot screen.

That should be built into every single smart device we use.

We could implement it in BT 4, WiFi Direct, whatever. It just needs to always be there, always work, and be that simple.
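To make the idea concrete, here's a minimal sketch of the kind of exchange I have in mind, over plain UDP broadcast on the local network. The port number, the message format and the single-datagram payload are all my own assumptions - this illustrates the interaction, it isn't a real protocol or product API:

import java.net.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AdjacentCopy {
    static final int PORT = 42424;  // assumed well-known local port - pure invention for this sketch

    // Sender side: "Copy to adjacent device .." - wait for someone to claim the 3-digit code.
    static void offer(String code, Path file) throws Exception {
        try (DatagramSocket sock = new DatagramSocket(PORT)) {
            sock.setSoTimeout(60_000);                       // the offer times out after a minute
            byte[] buf = new byte[64];
            DatagramPacket claim = new DatagramPacket(buf, buf.length);
            sock.receive(claim);                             // blocks until a claim arrives (or times out)
            String msg = new String(claim.getData(), 0, claim.getLength(), StandardCharsets.UTF_8);
            if (!msg.equals("claim " + code)) return;        // wrong code: ignore
            byte[] data = Files.readAllBytes(file);          // one small datagram; a real transfer would chunk
            sock.send(new DatagramPacket(data, data.length, claim.getAddress(), claim.getPort()));
        }
    }

    // Receiver side: "Look for local data" - broadcast the code and collect whatever is offered.
    static byte[] lookFor(String code) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.setBroadcast(true);
            byte[] msg = ("claim " + code).getBytes(StandardCharsets.UTF_8);
            sock.send(new DatagramPacket(msg, msg.length,
                      InetAddress.getByName("255.255.255.255"), PORT));
            byte[] buf = new byte[64 * 1024];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            sock.setSoTimeout(5_000);
            sock.receive(reply);
            return java.util.Arrays.copyOf(reply.getData(), reply.getLength());
        }
    }
}

The receiver broadcasts the claim, so neither side needs to know the other's IP address - being on the same LAN and knowing the 3-digit code is enough.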

Monday, 10 February 2014

Seven uses of the Raspberry Pi Camera

My Christmas Pi came with a 5Mp (2592x1944) f/2.9 camera module, which I intend to use as my light level sensor. I also want to use it to detect the colour of the ambient light, so that my light Things can match it.

Further along, I expect I can use it for motion detection and to recognise QR codes, car number plates and faces. And to take pictures; almost forgot. But, baby steps..

For light level and ambient colour temperature, I'd need to point the camera at a sheet of white paper.

The challenge I face is to arrange one of the following:

(a) get the exposure - which I presume corresponds to shutter speed in some way, as it's a fixed aperture - and AWB colour value, set in the EXIF

(b) disable auto-exposure and auto-AWB, and do my own sampling of the exposure to find out how much light there is, then take the average colour of the image.

(c) get the exposure and disable AWB or vice-versa, depending on the EXIF

I've been looking at the documentation and sources for information today, but it's quite hard to find what I want. For option (b), I think --shutter and --awb should allow me to set the series of shutter speeds and to turn AWB off, respectively.
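As a sketch of option (b), assuming those --shutter and --awb options do what I think they do (the flag spellings and values below are my guesses, not yet tested on the Pi), the sampling side could be as simple as:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class LightSensor {
    public static void main(String[] args) throws Exception {
        // Fixed exposure, AWB off - the microsecond shutter value is an assumption.
        new ProcessBuilder("raspistill", "-o", "/tmp/sample.jpg",
                           "--shutter", "10000",
                           "--awb", "off")
            .inheritIO().start().waitFor();

        BufferedImage img = ImageIO.read(new File("/tmp/sample.jpg"));
        long r = 0, g = 0, b = 0, n = 0;
        for (int y = 0; y < img.getHeight(); y += 8)       // sample a grid rather than every pixel
            for (int x = 0; x < img.getWidth(); x += 8) {
                int rgb = img.getRGB(x, y);
                r += (rgb >> 16) & 0xff; g += (rgb >> 8) & 0xff; b += rgb & 0xff; n++;
            }
        // With the exposure fixed, average brightness tracks the light level, and the
        // R:G:B ratio approximates the colour of the ambient light on the white card.
        System.out.printf("average RGB: %d %d %d%n", r / n, g / n, b / n);
    }
}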

It's possible that I need to do (c) - there's something in the EXIF called Light Value but I can't see a colour temperature parameter.

On Wednesday, a group of us are getting together to hack some stuff, so perhaps I'll experiment then.

PS The title today is a dig at those awful traffic-hunting titles you see these days..

Sunday, 9 February 2014

Augmented Reality position from beacon strength

Well hopefully you all enjoyed my 51 seconds of fame yesterday.

Although in that video, I skilfully managed to make it look as if my position, relative to all those BLE beacons, was being accurately tracked, the fact is that my algorithm is pretty dumb:

It puts you in the middle of the room or "place" unless you get close to a beacon, in which case it gently moves you in towards the 3D virtual representation of the Thing object corresponding to that beacon, as advertised by its URL. When you go out of range, it glides you back to the middle again.

It's pretty effective for a simple algorithm, I hope you'll agree, but of course it does have limitations.

The main issue is that the RSSI signal strength is very jumpy, which means you can sometimes oscillate around when at the borderline. Both for the simple algorithm and for a future trilateration one, I'd need to filter or smooth the distance calculation better.

What I'll try is a smoothing algorithm that works asymmetrically. When the signal strength goes up, it's pretty much guaranteed to be because you've moved closer, due to the laws of physics. The same laws of physics also dictate that when the signal strength goes down, it's because of some random interaction with a plane going overhead and the phase of the moon.

So I'd have it work like this: moving in immediately sets a closer distance, but moving out is treated with greater suspicion - maybe smoothing it out over three samples.
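Something like this minimal sketch - the three-sample window and the names are illustrative, it's not the NetMash code:

public class BeaconDistance {
    private final double[] recent = new double[3];   // last three raw distance estimates
    private int count = 0;
    private double smoothed = Double.MAX_VALUE;

    public double update(double rawDistance) {
        recent[count++ % recent.length] = rawDistance;
        if (rawDistance < smoothed) {
            smoothed = rawDistance;                   // closer: believe it immediately
        } else {
            double sum = 0; int n = Math.min(count, recent.length);
            for (int i = 0; i < n; i++) sum += recent[i];
            smoothed = sum / n;                       // further away: only believe the trend
        }
        return smoothed;
    }
}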


Other Jobs

I had to set up the positions of the light objects manually for the video, and that will always be the case to some extent. I imagine the new Thing would have a guess where it is when installed, then the user could nudge it into its proper place in the 3D view. The initial position could be roughly worked out at the same time as fetching the URL of the nearest place from nearby beacons, which also has to be hand-configured currently.

Another thing is that I should demo more than one place, and move between two linked rooms. Most of the code is already there for that, I think.

I also don't track up-down orientation yet, just compass rotation. Again, it's not as big a loss as you might think, but I should add that sooner or later.

Saturday, 8 February 2014

Very short video demo of AR+IoT in the Object Net

I spent all day today, thanks to my understanding family, preparing this 51-second video for you, about the work I've been doing for the ThoughtWorks 100 Days of Hardware:


It's my first ever YouTube video, and I'm quite pleased with it. I'm also pretty happy with the overall results I've achieved in the snippets of coding time I've had.


Mashability

Soon into my first practice take, I realised I couldn't actually touch the screen to change the light colour for the video, since I was holding my wife's iPhone in the other hand.

So I got the URL of the light up in my browser and typed in this thought-free code to make the light blink automatically, between red and green:
{ is: cuboid rule
  Timer: 0 => 2000
  light: (1 => 0) (0 => 1) 0
}
{ is: cuboid rule
  Timer: 0 => 2000
  light: (0 => 1) (1 => 0) 0
}
The point isn't that I could have written one rule rather than two, but that I could tap this in quickly between video takes, in a browser editing page, and immediately see the light flash - both on my phone on the table and on the Pi next to it.

Which is one of the points of the Object Network. Instant gratification!

Friday, 7 February 2014

Which Contact and Event Formats for the UK Government?

Today, Paul Downey and I submitted two "challenge suggestions" to the UK Government's Standards Hub.

Now, we both work at the Cabinet Office, so I guess we have a head start in knowing about all this, as insiders .. but no-one will find the standards we're proposing to be in any way controversial.

We suggested that the UK Government should pick a standard for contact information exchange and calendar event information exchange. I won't bias the process by naming the obvious *ahem*vcard*cough* standards to *ahhh*icalendar*choo* at least give due consideration to, for these needs.

In true UK Government style (or the style of any Government I imagine), this process is done politely and at a slowww pace. Several packets of tea are consumed from the start to the end of the process.

To evidence this, there are already two accepted standards: paraphrasable as "Use UTF-8!" and "Use URLs!".

Baby steps, baby steps.

It's a long way to go from this to the Object Network contact and event formats, but once we've got something in the Standards Hub for contact and event, the next one to try for is: "A textual structured data encoding format with maps and lists, that is useful for moving data into and out of APIs".

That way we can then go on to suggest such an encoding of all the data available from our Government, which could include those contact and event types.

Not sure if we'd have to get HTTP - sorry, a hypermedia/data transfer protocol - in first, though.

Anyway, I've got plenty of time...

Thursday, 6 February 2014

Bidirectional CoAP

I mentioned before that CoAP would seem to be a good protocol to implement FOREST over, as most implementations implement the observe spec.

CoAP is based on HTTP, and thus inherits its asymmetric client-server model. However, it is also built over UDP, which is another way that CoAP loosens up HTTP - and one that can help implement FOREST. FOREST's basic mode of interaction is peer-to-peer, with clients able to be servers and vice-versa.

So if I use CoAP, I'll have UDP packets for requests going in both directions between peers, and UDP packets for responses also going back in each direction. Even though CoAP isn't specified to be bidirectional, I could easily implement a bidirectional version of it once I have a unidirectional implementation.

If the code is all in NetMash, which is used on both clients and servers, then the bottom of the CoAP stack will be simply sending and receiving UDP packets: there's no separate TCP connection in each direction. You could have a single UDP port number at each end.
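A rough sketch of what that bottom layer looks like - one socket doing everything. The port handling, buffer size and dispatch are assumptions, and this is obviously not a CoAP implementation:

import java.net.*;

public class Peer {
    private final DatagramSocket sock;

    public Peer(int port) throws Exception { sock = new DatagramSocket(port); }

    // Outgoing traffic: our own requests (GETs, observe registrations) and our
    // responses to the peer's requests all leave through the same socket.
    public void send(byte[] message, InetAddress peer, int peerPort) throws Exception {
        sock.send(new DatagramPacket(message, message.length, peer, peerPort));
    }

    // Incoming traffic: one receive loop handles responses to our requests as well
    // as spontaneous requests and observe notifications arriving from peers.
    public void run() throws Exception {
        byte[] buf = new byte[1152];                    // CoAP's suggested maximum message size
        while (true) {
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            sock.receive(p);
            dispatch(p);                                // request or response? check the header
        }
    }

    private void dispatch(DatagramPacket p) { /* parse header, match tokens to outstanding requests */ }
}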

If you need to keep receiving updates to your cache of a response to an earlier request (which it seems is a feature being added to HTTP/2), then you're already getting towards a bidirectional protocol, since a spontaneous packet can now come back at the client instead of it always being the initiator.

I played with such a bidirectional protocol of my own for an earlier version of the Object Net, back in 2005, but then switched to using just dual HTTP channels and long-polling to cover asymmetric infrastructure. It's nice to be able to re-visit the concept with the IoT and CoAP.

Wednesday, 5 February 2014

What is The Object Network again?

I've been working on this project without really stopping to give an overview of all the elements of the Object Network. You can work it out from the articles and links, but here's everything summarised in one place.


Functional Observer

This is the basic programming and interaction model underlying everything. In a nutshell:

An object’s state is set as a Function of its current state plus the state of other objects it Observes through links. 

So imagine an object representing a light that is shining yellow. It links to another object representing the values of a dimmer. The brightness of the light depends on the value of the dimmer, so as the dimmer value observed by the light reduces, the light calculates a new RGB setting based on whatever colour it's set to, modulated by the dimmer value it can see through the link.

In the style of a spreadsheet, whenever the light's own colour setting or the value on the remote dimmer changes, the light has work to do: recalculating its current output RGB values.
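As a toy illustration of that - the class and field names here are made up for the example, not the NetMash API:

public class Light {
    double[] colour = { 1.0, 1.0, 0.0 };   // the yellow this light is set to shine
    double[] rgb    = { 1.0, 1.0, 0.0 };   // the output it actually drives
    Dimmer dimmer;                          // the peer object it observes through a link

    // Called whenever this object's own state, or the state of an object it observes,
    // changes: the new state is a pure function of current state plus observed state.
    void evaluate() {
        double level = dimmer.value;        // the dimmer value seen through the link
        rgb = new double[] { colour[0] * level, colour[1] * level, colour[2] * level };
    }
}

class Dimmer { double value = 1.0; }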

I mentioned Functional Observer here and here.


FOREST

Functional Observer REST simply allows our objects to reside on different host machines. They talk RESTfully over HTTP in JSON. They have URLs and exchange state using GET and POST.

I mentioned FOREST here and here.


Cyrus

In order to know what state to set itself - what dependencies it has on peer objects and its current state - an object needs to be programmed or "animated". It could be programmed in Java - and that's exactly what I do for some functionality in NetMash.

But we can create a language that is a direct mapping onto the Functional Observer model above. The "Function" part can be pure: you don't need I/O or side-effects when all of that is taken care of in the objects you're rewriting.

Cyrus is a pure Functional Observer language. Being homoiconic, it is based on the JSON of the objects it is animating, but with much noisy syntax removed. Cyrus rules have their own URLs, of course.

I mentioned Cyrus here, here and here.


Object Network Types

These distributed, interacting objects can form a global Object Network or graph. But only if they all look pretty much the same.

So the Object Network defines a number of simple and stable formats within JSON for common needs, such as contacts, events, feeds, articles, media, GUI layouts, users, 3D objects, IoT Things, Cyrus rules, etc.

These types can also be represented in the cleaner Cyrus syntax.

I mentioned Object Network Types here and here.


NetMash

All of this is implemented in the NetMash Java code. NetMash is an Android app and a Java server. They share the same Object Network core code, including the implementation of the Cyrus language.

This whole blog is full of screenshots of NetMash in action.

Tuesday, 4 February 2014

Internet of Things and Augmented Reality in Retail

Today I received my latest copy of the Thoughtworks Perspectives mailing. This month's issue was a special on Retail. Our European Head of Retail, Mark Collin, has been interviewed on the "Essential Retail" site about the latest trends.

That's where I discovered a new term: "Phygital", which of course means the merging of physical and digital. Cute. "Internet of Things" seems quite a mouthful in comparison.
.. it will become a necessary reality that retailers have to find ways to subtly and seamlessly incorporate digital into a store experience and not just for technology sake or as a gimmick but to tackle core retailing issues like real time inventory, faster checkout, everything in my pocket (mobile – payment, loyalty, receipts, rewards, etc). The behind the scenes analytics opportunities are the significant side benefit to a customer experience premised on digital. 
- Mark Collin
Here's another, related article on the Thoughtworks website: Will iBeacon Further Enable the Passion of Shopping? And another from a Thoughtworker's own blog: Introduction to iBeacons by Andrew McWilliams.


Retail: The Thoughtworks Application of IoT/AR

So this got me thinking that, soon enough, I'm going to want to explain how my 60 Days of Things - which is coming to an end pretty soon - will benefit ThoughtWorks.

And the obvious applications are all in retail, in interacting with customers within some environment, such as shops, malls, airports, stations, libraries, museums, theme parks, cinemas, sports centres, swimming pools, holiday camps, and so-on.


IoT/AR in Retail

Currently, all the demos we have been doing inside TW, and the typical example ideas that people have been coming up with in this area, are about "IoT plus 2D smartphone app" interactions.

I can't see anyone else that has spotted the potential of combining "IoT plus Augmented Reality".

Tomi Ahonen, a mobile industry analyst whose predictions are extremely reliable, believes that Augmented Reality is the next big wave after mobile.

Just sayin'...


Monday, 3 February 2014

FOREST: a higher-level RESTful interaction model

I was having a good chat with my buddy Jim Barritt today, and the subject came up of how one could describe the difference between my FOREST approach to REST and the widely-used, traditional AtomPub style.


AtomPub

Quick summary of the AtomPub style: you have a client and a server where the server is a lot like a database of articles and HTTP is used by the client to edit those articles.

So the editor client says "POST" and a new article is created. It says "PUT" and the article is updated. "DELETE" is pretty obvious. The client, and indeed the world, can "GET" to read an article.

The client basically runs things and the server more-or-less does what it's told, modulo whatever it needs to do to ensure security and integrity. There are other bits like Media Types and Link Relations, but that's basically the model.


Almost Database Integration

When people want to "do REST", if they think they want to do it "properly", chances are they'll use this approach as their paradigm. I was slightly involved in the creation of the AtomPub spec, so I'm not knocking it, at least for this use case - editing articles.

Trouble is, it only makes sense if your application is a lot like a database, where you have some data that you want to create, update, delete or read. So, in order to do what they believe to be "proper REST", people end up forcing their inter-server interaction protocols into this simple, low-level, data read-write model.

And that feels uncomfortably close to a database integration style!


FOREST

In contrast, in the FOREST style, two application servers talk to each other at a higher level, as peers - they can be both client and server to one another.  A peer can GET the data of another - pull or poll - and can POST its own data to another - push.

It's a simple, symmetric model of interaction where the application protocol is like a two-way conversation. Being RESTful, all such data can be found at their URLs - this isn't just a substitute for messaging. PUT and DELETE aren't used at all, because the interaction model isn't about just low-level editing of data.

I'll be illustrating FOREST with an example on this blog, to show how such a conversation between peers can proceed. You'll see how it enables interactions that are at a higher, domain level.

Sunday, 2 February 2014

Beacons that Move

Up to now, I've been talking about BLE beacons attached at fixed points around the house, office, shop, park, etc. The moving bit is you; or rather your Android device, that picks up the URLs and the signal strengths and drives a 3D AR app that maps out your surroundings.

Beyond that mode, I did mention that a fixed beacon could also scan around it - when first installed - to find out what place it was in and whereabouts it sat.

So the other two combinations left involve the Android device itself broadcasting, either to the fixed beacons or to other Android phones. Unfortunately, Android can't yet broadcast as a beacon through BLE - see this issue here and vote on it by starring. Android needs "Peripheral Mode" support, which iPhones do already have.

Once your phone can also act as a beacon, it can broadcast the URL of your person or avatar object.

The first benefit of this would be more accurate location - the surrounding Things would know how far you are from them and could collaborate on trilaterating your position, which you could combine with your own trilateration of their positions for more accurate results.


Publishing You

If your device were broadcasting, it could be used to notify surrounding people and machines of your presence, identity and other parameters and links you want to make public, through a packet of JSON fetched through the advertised URL.

This would allow various applications such as: automatically paying for a service by walking through a gate, exchanging behavioural tracking for store discounts and a conference birds-of-a-feather locator. You could even advertise that it's your birthday, or that you like rock climbing, your blog URL or your relationship interests. A more private view could show your health to your doctor.

You don't actually need BLE to do something like this: when you have WiFi switched on, your device gives away its unique MAC address while scanning for networks. Now you just need to map from that to your personal URL, which could be done through a MAC-to-URL lookup service a bit like DNS.


Beacons that Move

While waiting for Android to get peripheral mode and enable this huge range of applications, there are other examples of physical objects that can move and can be tagged with a beacon.

The TI SensorTag, the Ninja tag, the Chipolo, the Light Blue Cortado - all allow BLE tracking of the location of things they're attached to, or of values of their sensors, such as accelerometers.

Like you, your car can have its own URL, allowing similar applications such as automatic payment of tolls, fuel and parking fees, detection of presence for tracking in the home or the street, perhaps again in exchange for information or discounts. A more private view could show you the health of the car and allow you to set certain parameters and rules of operation.

Android better fix that peripheral mode if its Open Auto Alliance is to beat iOS, though..

Finally, robots can have "beacons that move", allowing all the above functionality plus a whole lot more, around collaboration and coordination of their joint activities.

Saturday, 1 February 2014

Venice Holiday Snaps.. Not Today

I was hoping to show you some holiday snaps from Venice today, but I can't upload from my phone into the Blogger editor.

So here's my Twitter link, anyway, which has some photos: https://twitter.com/duncan__cragg


Friday, 31 January 2014

Venice Vacation

No blog today, or rather no content, because I'm in Venice.

I may show you some holiday snaps tomorrow..

Thursday, 30 January 2014

The Internet of Meeting Rooms

Today, I was yet again hunting for a meeting room in the huge office building I work in. Obviously, the meeting room number gives nothing away - adjacent numbered rooms could be on opposite sides of the floor.

It occurred to me that I'd like to be able to hold up NetMash in "look around" mode, and see all of the meeting rooms nearby - like X-Ray vision.

This would of course be driven by BLE beacons inside each meeting room, advertising the URL of their virtual room/place object. Assuming my phone is on the office WiFi, NetMash could fetch and render those place objects for me, plus all the contents.

An obvious object to have inside a room on the virtual wall would be a "card" describing the event - a meeting booking. This would be a 3D representation of a JSON object encoding an iCalendar event.

Of course, occupied rooms would also contain all the avatars of the people in them. I could find the room, then as I approached, ping a message to all of my colleagues.

There could be an object that captured the whiteboard work as an image that we could virtually take away with us.

And, obviously, there would be objects for lighting and heating. Being listed in the event object invitees, I could turn all the lights off and back on again as I approached, for dramatic effect.


Wednesday, 29 January 2014

The Internet of Farm Yields - from Monsanto?

With Google's acquisition of Nest fresh in our minds, this article now makes even more alarming reading.

Apparently, Monsanto are offering farmers a system that can measure crop yields over their fields by following them around using GPS while monitoring the rate of harvest at each point they cross. All this gets pushed up into Monsanto's servers.

The farmer then gets the payback: an automated planting system that adjusts the amount and type of seed planted at each point in the field.

But of course, the farmers have trusted Monsanto with their field, planting and harvesting data and are letting the company control their planting in unprecedented detail.

I suppose I trust Google enough with my personal life - perhaps I'll learn to regret that - but if I were a farmer, would I trust Monsanto?

I was going to search for some juicy stories about their unethical tactics, but found that just searching for "Monsanto" alone threw up enough material.


Open & Local Farming

Obviously, it would be perfectly possible to do all of this in an "open and local, not proprietary and cloud" way.

The farmer would get all the above benefits, plus control and privacy, plus the benefits of being able to work with other local and global farmers and keen technologists to create systems that do much more than whatever Monsanto want.


Tuesday, 28 January 2014

Slightly Better AR Microlocation Algorithm

There are two parts to Augmented Reality: knowing which way you're looking around you, and knowing where you are. I've started the first one, in one axis at least. I've also started the second, by jumping right up to the nearest object.

As I showed in that article, it's a bit rough: if the nearest object is the light, I only see the light, up close, without a surrounding place. Only if no beacon is near do I see the place the light is in, with that light and any other lights.

While waiting for the mathematical energy and inspiration to tackle proper trilateration, what I actually want is for the 3D AR view to always be in the place that the surrounding objects or Things occupy, rather than jumping off to the object alone.

Then, when I move in towards a beacon on a Thing, I want the user object or avatar in that place - and their 3D view - to move smoothly closer to the 3D representation of that Thing object within the place.


Tricky Algorithm

Now, it's not tested, but here's the algorithm I worked out on the train today:

  If the nearest object is a place:
    If already in that place:
      If looking up close at an object:
        initiate zoom to the middle of the place
    Else:
      jump to the middle of the place
  Else:
    If already in the place the object is in:
      If not already looking at the object:
        initiate zoom up to the object
    Else:
      jump to the middle of the place
      initiate zoom up to the object

Way more complex than I ever imagined! And that's not even close to a trilateration algorithm.

The action "jump to the middle of the place" is resetting the 3D view to a new place URL and putting the user in the middle, of the room, etc. Arbitrary, but on average not a bad choice, I hope.

By "initiate zoom..", I mean kick off a thread to move the user's avatar smoothly to a target destination position in the place. Since at this point I only have the URL of the target object, I also need to look up its position coordinates from the place it's in, or the coordinates of the middle of the place.

On top of that, for zooming up to an object, I need to stop before I hit its actual position. So I need to get its bounding box and turn that into a bounding sphere radius, so that I can stop the zoom that far short of it.
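Here's roughly what I have in mind for that calculation - vector maths inlined, names illustrative only:

public class ZoomTarget {
    // Half the diagonal of the bounding box gives a bounding-sphere radius.
    static double boundingRadius(double w, double h, double d) {
        return 0.5 * Math.sqrt(w*w + h*h + d*d);
    }

    // Target point for the zoom: on the line from the user to the object,
    // stopping the bounding radius short of the object's position.
    static double[] stopPoint(double[] user, double[] object, double radius) {
        double dx = object[0]-user[0], dy = object[1]-user[1], dz = object[2]-user[2];
        double dist = Math.sqrt(dx*dx + dy*dy + dz*dz);
        if (dist == 0) return user.clone();               // already there
        double k = Math.max(0, (dist - radius) / dist);   // never overshoot back past the user
        return new double[] { user[0] + k*dx, user[1] + k*dy, user[2] + k*dz };
    }
}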

Only two weeks to go to the end of this ThoughtWorks exercise..


Monday, 27 January 2014

Simple but Effective Lighting Behaviour via the IoT

This last weekend, I replaced all those LED bulbs in the kitchen with dimmable ones, and put a dimmer switch in. They are still super-white bulbs, which is ideal for seeing while doing jobs.

But as soon as we first dimmed them down, we hit an obvious issue: they're still super-white, only dimmer.

What we would like is for them to go warm-coloured when they're dimmed!

You don't need pure white light if you're not using them to work by, and if you dim them, you normally want to set a softer mood, or make it suitable for relaxing and chatting. White light isn't the right light for that.


Object Net IoT Approach

So obviously, that gets me thinking about the Object Network solution.

The ultimate IoT solution would be to have individual control over each bulb's RGB levels via some decent radio control such as ZigBee, Z-Wave, Bluetooth or 6LoWPAN. The state the dimmer switch was in - off, or on and the control level set - could be fed into the I/O ports of a Raspberry Pi which would also terminate all the radio.

The practical, DIY solution? You can buy GU10 RGB LED units that come with an IR controller. They seem to have only a limited set of colour settings, but it may be possible to trace the protocol and set any RGB value. You'd need a dimmer box containing an IR LED to control the bulbs across the kitchen (don't stand in the way!), and a variable resistor for the control. These would all be wired into a nearby Raspberry Pi's I/O pins via some adaptor circuitry.

The R-Pi would be best in or near the dimmer box, so that it can have a BLE beacon advertising the URL of an object representing the state of the control. This object would sit within a place object (the kitchen) that also has the 3D light objects overhead. The lights may or may not be beacons, too, depending on their technology.


The Magic Bit

Then a simple Cyrus rule in the R-Pi would set all the bulbs to white at maximum control setting, but increasingly to warmer colours while decreasing brightness as the dimmer control is turned down.
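I haven't written that rule yet, but the mapping I have in mind is something like this - sketched in Java rather than Cyrus, with the warm endpoint and the linear blend being arbitrary choices of mine:

public class WarmDim {
    // level is the dimmer control in 0..1; returns RGB in 0..1.
    static double[] colourFor(double level) {
        double[] warm = { 1.0, 0.6, 0.3 };                      // warm endpoint at low levels (a guess)
        double[] rgb = new double[3];
        for (int i = 0; i < 3; i++) {
            double blended = 1.0 + (warm[i] - 1.0) * (1 - level);  // blend white towards warm
            rgb[i] = blended * level;                              // then dim overall
        }
        return rgb;
    }
}

At full control level that gives bright pure white; as the control is turned down the output both dims and slides towards a warm orange.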

You could of course override the control level in the app, or directly set any bulb colour as usual. From the coffee shop or down the garden.

Or you could fiddle with the rules to make the lights go blue and surge like waves instead, when you turn down the control..

Sunday, 26 January 2014

Android sensors for AR

I've added a simple class to NetMash called Sensors, that I use to pick up the phone orientation, to drive the 3D view. This is only active once I set the "Around" menu item. I currently only use "azimuth" to pan around the room.

That's something I learned doing this project: "azimuth" is panning around you - and is also called "yaw", "pitch" is looking up and down, and "roll" is tipping or rotating the device while looking at the same thing.

The Android documentation appears to have some rather odd axis conventions to the eyes of someone who hasn't taken the trouble to work out all the maths, but working code beats theory every time.

Obviously, the code was more-or-less copied from multiple StackOverflow posts, but there's not one post or article anywhere I could find that gave me complete code like the simple class I ended up with. The smoothing algorithm I invented is probably far too simplistic, but I can refine it.
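For the record, the core of it boils down to something like this - a minimal reconstruction rather than the actual Sensors class, with the smoothing left out:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class Orientation implements SensorEventListener {
    private final float[] gravity = new float[3], geomagnetic = new float[3];
    public volatile float azimuth, pitch, roll;            // radians

    public void register(SensorManager sm) {
        sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                            SensorManager.SENSOR_DELAY_UI);
        sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                            SensorManager.SENSOR_DELAY_UI);
    }

    @Override public void onSensorChanged(SensorEvent e) {
        if (e.sensor.getType() == Sensor.TYPE_ACCELEROMETER)
            System.arraycopy(e.values, 0, gravity, 0, 3);
        if (e.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD)
            System.arraycopy(e.values, 0, geomagnetic, 0, 3);
        float[] r = new float[9], o = new float[3];
        if (SensorManager.getRotationMatrix(r, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(r, o);
            azimuth = o[0]; pitch = o[1]; roll = o[2];      // still needs smoothing
        }
    }

    @Override public void onAccuracyChanged(Sensor s, int accuracy) { }
}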


Actual Positions

So now I can pan around my 3D place, it becomes pretty obvious that the room isn't exactly oriented square to North.  But more importantly, the positions of the lights bear no relation to their actual positions in the room around me.

Setting Thing positions to their actual locations in the room will have to be done manually for now, when I run NetMash on each Pi. They will then be notified to the place object which can pick them up in the initial discovery rule.

I'm still intending to video all of this working..

Saturday, 25 January 2014

Augmented Reality and Magic

I was out with my daughter today and we went to a garden centre.

To be more precise, I was out with both my daughters - one went indoor climbing with her friend, and the elder and I went off for a hot frothy milk and a coffee, respectively, in the garden centre.

Garden centres are strange places - for a non-gardener like me it's mostly just stuff for other people to buy - mostly, it has to be said, for a generation that arrived before I did.

But this one, like most, had a book section. Again, mostly gardening and cookery books and books about how the town looked 100 years ago. Aeroplane books and war books.

And children's books. Which is where I get to the point of all this.

We picked up this Augmented Reality Fairy book. Now, the daughter that nagged me to buy it is the kind that never gives up and gets very enthusiastic for a while, then moves on to the next opportunity very quickly. So I was a little reluctant at first.

But it was discounted down to only a fiver, and had marker-based AR, which I always find fun to see, and guessed the rest of the family would enjoy too. So I bought it.

Now I have to say that I was almost as excited to try it out as my daughter, and she and I sat together at the Mac and got it going. She was delighted, of course.

You point the webcam at the open page and it detects which page it is and creates a superimposed fairy scene. You can activate various things, like getting a fairy to appear and cast fairy dust around.

The grand finale is to hold a card disk in your palm and entice a fairy by hitting various keys to add fruit and flowers to a cup. So it's like you're holding this fairy in the palm of your hand.

When the family saw all this, there was plenty of "wow"ing and "ooh"ing.

But really, it was pretty basic - just a simple animated 3D scene. It was the fact that it keyed itself onto a physical thing - the book page or the disc - that gave it such a compelling edge. So engaging was this illusion, that my daughter apologised to the fairy when she turned the page and made her vanish!


Augmented Reality and Magic

For a long time, computers have lived in an abstract virtual place in our lives - only interacting properly with reality when printing something out on paper, or arranging for a package to arrive the next day from Amazon. Maybe video calling has a bit of that reality-engagement, too.

Smartphones are better at integrating into our physical lives, with their sensors for orientation and GPS and their cameras.

But when you combine those sensors and a 3D display within an AR app, you can create magic.

The kind of magic that is possible when the unlimited creative universe of the virtual can start to invade our physical environment in truly tangible and compelling ways.

Friday, 24 January 2014

The Internet of 3D sensors and actuators

The sensors and actuators of the IoT tend to be small-scale: temperature, light, etc.

The key factor is the merging of real and virtual: bringing Things into the Internet, and allowing communication in both directions between real and virtual.

On a slightly bigger scale, in the Object Network people are first class and bring their own more complex sensors - for gestures, location, orientation, etc. - and actuators - like screens, glasses, vibration and speakers.

Now the whole thing begins to fill our 3D space - as the Object Network's ability to view the IoT in Augmented Reality indicates.


The 3D IoT

Taking this concept of a 3D IoT to its conclusion:

3D IoT sensors can include full 3D sensor devices that suck in the shape of the world, and sensors like those on the Wii for picking up the motion of your hands and feet, or the Leap Motion that can pick up hand gestures.

3D IoT actuators can include bigger, wall-sized screens, holographic displays, 3D solid displays and 3D printers.

All these technologies enable what were once futuristic applications - instead of bending over a tablet and stabbing at a tiny screen, we'll be standing up, and moving our whole bodies around, interacting with surround-vision displays and holographic objects.

Imagine sculpting a virtual object with your hands, then printing it out. That is a true blend of real and virtual - the "lights and thermostats" Internet of Things will seem rather lame in comparison.

Of course, the ultimate sensor/actuator combo is the robot - a device that can exist in both real and virtual domains simultaneously - it could have a 3D virtual representation showing more about its state and rules, which could be as interactive as the physical robot.

Telepresence robots are a variant of this, merging a real person into a virtual person and back into a nearly-real person again.

The benefit of the Object Network approach to the IoT is that it starts off seeing your world in 3D, so all this is native to its way of working and interacting.

Thursday, 23 January 2014

Programming the IoT = Programming Parallel Computers

It's well-known that Moore's Law is over for single-CPU processing, and that the only way forwards is multi-core and parallel processing.

So given that, the way that we program these chips may have to change. Obviously, if you can easily split your application up into parallel, independent threads then you're fine to carry on programming in single-threaded Java. That's Web servers, of course. Even if your application has inter-dependent threads, you may still be able to battle on with the corresponding Java code and win.

But for any interesting interactive application, different programming models and languages are needed to take away the pain of properly exploiting parallel hardware - to make all that threading code go away in the same way as Java made all that memory management code go away.


The Internet of Processor Things

Now, at the same time that we're considering putting large numbers of small processors together in the same box, we're also considering scattering large numbers of small processors all around us - the sensors and actuators of the Internet of Things.

In fact, there could be more of those processors per hectare than was planned for in the extrapolation of the Moore line: in your 2014 house you may have 16 processors in Things and mobile devices, but only a couple of quad-core desktop machines. And the same ratio may hold true in the work environment. This ratio of scattered processors to co-located ones will probably only get higher.

Indeed, why do we need all the processors to be in the same box? Not all the processes need to share the I/O to the user, so it's mainly the need to communicate through a single physical memory and disk.

Which is the main problem with scattering: the processors have to wait longer to communicate. But we don't even know if that's a problem: it's application dependent. And in a future where your applications are sensors and actuators, multiple mobile devices, Augmented Reality, immersive Virtual Worlds and gestural interactions, it could be that wireless data exchange is perfectly good enough.


Declarative Programming

Either way, whether you're programming scattered or co-located processors, you should be able to program without worrying about concurrent thread interactions and process distribution.

You should be able to program independent active and interactive objects without needing to know yet whether they're microns or miles apart. Only timeouts should change. This isn't RPC [pdf].

Imperative, threaded programming will have to give way to Declarative programming - where we tell the computer What we want, not How to do it. It will be up to the implementation of the language to handle the mapping to threads and then to processors, local or remote.

Since we don't know right now what ratio of scattered to co-located will turn out to be best in general or in specific applications, a programming model like that in Cyrus, that lets us split and recombine our active objects with perhaps only minor code adjustments, will have a massive advantage.



Wednesday, 22 January 2014

Minecraft, Augmented Reality, and the Internet of Things...

Augmented Reality is a fundamentally 3D technology - you look around a 3D space from the vantage point of your mobile device. Thus it's a short distance from full Virtual Worlds, such as Minecraft and Second Life. You can show 3D representations of any IoT Thing around you, as I have shown in my experiments, but also show any virtual 3D object you like. The "place" or room is an example of a virtual 3D object. I've also mentioned being able to move from your own house into your grandparents', which jumps the user from AR to VW, since you jump from navigating by moving the device to navigating by on-screen controls.

You can leave notes and signs around the AR world, you can have abstract objects such as one saying "all locks closed" or a big red button to turn off all the lights in your house. You can pick up a link to your grandparent's "all locks closed" indicator, and put it onto the virtual wall of your living room, to check at any time with just a wave of your phone.

A coffee shop could have offer tickets placed in their AR place - that you could pick up and pin to the virtual wall of your AR home. Those tickets would occupy the same place as the 3D representation of the IoT light on your table, or the IoT jukebox that can take suggestions for what to play next.


One App to Rule Them All

An important difference with the Object Network approach is that there isn't an app for the coffee shop, an app for the light, one for the jukebox and one for picking up offer tickets - there is only one app (currently NetMash) that, like a browser, can be used to engage with all and any players in the Object Network: with any shop that has Object Net beacons, any light or thermostat that operates according to Object Net principles, and so on.

This way, you interact in an environment that seamlessly merges real and virtual, and lets you seamlessly move from place to place owned by different people, as you go through your day. From house owned by you, to street owned by the local authority, to the coffee shop owned privately, to the library, to the park. From interacting with real Things to interacting with virtual objects. All with just one app.

To achieve this level of seamless interoperability requires that everyone simply publishes their JSON or Cyrus objects in the same way, in the same formats, all linked up with URLs. Obviously harder to do than to say.

But the Web has done it, so perhaps we can.


Minecraft-style building

Since we intend to empower the users over this VW/AR/IoT "fabric" - its data and its rules - we should also allow them the same ease of building within it - especially their home and shop places.

So clearly we need to give them the same abilities, the same tools and materials, that are provided for this in Minecraft! Voxels and hand-held tools and inventories, in other words. That way, a shop owner can delegate the building of her virtual shop "place" to her 8-year old, and concentrate on the offer objects.

Tuesday, 21 January 2014

Security, Patching and the IoT: Buy Slaves not Masters!

I'm actually really glad that people are using insecure and unpatchable IoT devices to send spam. I'll explain in a minute.

That news broke just a week after this rant by some quite angry and bitter-sounding person at ArsTechnica. Perhaps he wished he had bought a regularly-updated Nexus instead of a Samsung phone, but I digress..

For a balanced view of all this, turn to the expert: Bruce Schneier had this prescient article on Wired, a couple of days earlier than that rant, and this on the Guardian from May last year.

Bruce tells us that these devices are usually running old, unpatched, vulnerable software, and updates are unlikely to be made available - even less likely to be applied if they are.


Solutions

Like I said, I'm actually glad that this high-profile hacking report has come out right here at the start of 2014, just when the IoT is hotting up. If there were a few smaller attacks here and there, they may not have been noticed under all the hullabaloo. But this one even made it into the mainstream popular press.

Which will focus everyone, who wants to make the IoT work, on solutions.

One solution is to wrap the insecure and seldom-updated manufacturers' devices with, say, Raspberry Pi hubs or controllers that run Ubuntu and open source middleware, and to have a regular software update process running on that, just like you would on your laptop.

You manage security at the layer above, and work around proprietary access methods and known vulnerabilities and bugs from that level.

Security and privacy are of course big challenges for the IoT - so this is a great time to open up the discussion about open standards and open source.

Fear the silo and the walled garden, and the consumer device software that tries to take too much away from you!

Buy slaves, not masters.

Monday, 20 January 2014

IoT Protocols

Just for my future reference, not as a definitive comparison, here're some notes on the various IoT protocols that I'm aware of.


Application Level

HTTP - the granddaddy protocol, always a safe bet - RFC

CoAP - the HTTP-alike, RESTful alternative for small devices - RFC-ID

MQTT - from IBM

Comparison of CoAP and MQTT.


Transport Level

IPv6 and see here - designed to give every Thing in the universe an IP address - RFC


Wireless Level

WiFi/Direct - almost every house has a WiFi LAN; 5GHz option; star topology - Pi example

BlueTooth 4.0/BLE - popularised as iBeacon by Apple, but handy for beacons and low power sensors; star topology - Pi example

Z-Wave - Home Automation; ~900MHz not 2.4GHz like the rest, so greater range, less interference, bigger antennae; mesh topology - Pi example

ZigBee - Home Automation; mesh/star; based on 802.15.4 - Pi example

6LoWPAN - IPv6 based; mesh/star; based on 802.15.4 - RFC - Pi example

NFC - exchange by touching things together; P2P - Pi example

RFID - for cheaply tagging things; P2P - Pi example

Comparison of Wifi, BLE, ZigBee, NFC, and others.

Comparison of ZigBee and 6LoWPAN, the two 802.15.4 protocols.


Some are proprietary, some are the products of more or less closed industry consortia, some have RFCs. Here's a bigger list.

I'm looking at HTTP over WiFi and BLE right now, and will probably look at CoAP over 6LoWPAN at some point.


Sunday, 19 January 2014

The Economics of LED Bulbs

My friend Francis Mahon is on a mission. By trade, Francis is an "oil man", but he sees his future as being in sustainable, low-energy alternatives. As we drove around my home area, Francis was pointing at shops filled with halogen bulbs and at photovoltaic and solar water heating arrays.

Often, he just walks into a shop or pub and chats to the manager, then follows up with a spreadsheet showing the economics of a full refit with LED bulbs, which he then supplies and fits himself. Not a living yet, but every little helps - the planet, that is.

This weekend, after fitting out our bathroom in bright, pure-white LED bulbs, we went out to buy a new light fitting for the kitchen that would take these GU10, 240V 5W LEDs instead of the MR16 12V 25W halogen ones I have been using, that need a mini-power station cooking away behind them - the heavy, inefficient transformer unit.

Drawing a fifth of the power, the new bulbs make the kitchen even brighter than the bathroom - indeed, so bright that we're going to have to install dimmable bulbs and an LED dimmer switch instead.

The light fitting we bought came with old-school halogens, and Francis told me simply to drop them into the recycling station - new and unused! It's just not right to put them back into the market in any way and thereby cost both the purchaser and the planet unnecessarily.

I haven't checked out the maths, but Francis assures me that it's always economically better for you to change all your bulbs to LED right now - not to wait until they pop - even if you've just put new ones in, or got new ones with the light fitting, as I had.

Roughly speaking, the benefits are: brighter, whiter, broader light; 5-10 times less power consumption; 10-30 times longer life and thus half the replacement cost at current unit prices - although that will certainly be significantly lower by the time you need to change them 5 or 20 years from now.
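To put rough numbers on it (my own back-of-envelope assumptions, not Francis's spreadsheet): swapping one 25W halogen for a 5W LED that's on for three hours a day saves about

$$(25 - 5)\,\mathrm{W} \times 3\,\mathrm{h/day} \times 365 \approx 22\,\mathrm{kWh\ per\ year}$$

which at 15p/kWh is roughly £3.30 a year per bulb - more than many LED bulbs now cost, before you even count the longer life.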

You can buy new bulbs online for a couple of quid. I'm already half way there, but I plan to go all-LED this year: the only thing that will delay me is the need for new fittings and LED dimmer switches, and my search for bright, IoT-ready RGB bulbs.

Saturday, 18 January 2014

Three lights in both a real and a virtual room

You may have noticed that I've been a bit quiet on the development front recently - I only have 45 minutes a day on the train, and I've been using that to tidy and bugfix in NetMash.

So just to show that things still work after dibbling with the code, here are some photos of the latest actual Augmented Reality Internet of Things in action.

I've hit the menu option "Around", which searches for beacons in range and picks up their URLs.


There are three lights around, all in the same place: "Room of Things". The app has decided I'm not right up close to any of them, so it's showing me the place they all belong to.


I move to the first light (OK, it's my laptop pretending to be a light, but you can see the USB BLE dongle sticking out there). The view jumps in to focus on that light.


Over to the first Pi, advertising the light that's green. The view jumps to that, now.


And finally to the only actual light - the RG(B) one on my Christmas Pi. You can't tell from the photo, but the LED really is red like its 3D view. Touching the red cube adjusts the LED colour.

Obviously jumping in and out like this is a bit coarse, so I need to come up with a smoother algorithm for figuring out where the phone is. And I need to glide in to the lights in the actual place, not jump to a view with just the light alone.

And of course, it would be nice if things in the place were in their actual relative positions, and panning around panned around the 3D place view, in proper AR style.

I'll get onto it, on the next train to work...

Friday, 17 January 2014

Internet of Things Events, Conferences and Meetups

Here's what I found from a quick Google for interesting IoT events in 2014 (European ones highlighted for my own future convenience):

IEEE World Forum on Internet of Things (WF-IoT) 6-8 March, Seoul, Korea
M2M Conference 24-25 April, London
Thingscon 2-3 May, Berlin
IoT-SoS 16 June, Sydney
IoT Week 16-20 June, London
Intelligent Environments 30 June-4th July, Shanghai
Smart Systech 1-2 July, Dortmund
Future IoT & Cloud 27-29 August, Barcelona
MIT's IoT Conference 6-8 October, Cambridge, MA
Internet of Things Conference 12-13 November, London

As I discover more, I shall update this list.

Some more interesting links:

IoT Calls for Papers
IoT Events on Twitter
IoT London Meetup Group


Talking of the latter - the London IoT Meetup Group - I, and hundreds of others every month, have not been able to attend any of the monthly meetings, due to the maximum capacity of the venue: 95. There is a desperate need for a second London group! If you are interested in joining me in setting something up, let me know @duncancragg.

Thursday, 16 January 2014

Wifi Direct, BLE, Zigbee or Z-Wave? AR versus IoT? Ask Google Trends!

I use Google Trends a lot to decide where things are going and what to focus the Object Net on, or what technologies to use to implement it.

Here's an interesting graph comparing WiFi Direct, Bluetooth 4.0, iBeacon, Zigbee and Z-Wave - here's the scoop: WiFi Direct demolishes the lot!

Here's one comparing Augmented Reality, Layar and Google Glass with the Internet of Things. AR in all its forms is still ahead of the IoT, but after a big up-spike in 2009, has been in gently declining interest for four years. However, specific AR products - Layar and Glass - are very active.

More interesting insights come from clicking through the "Regional interest" tabs just below: compare where ZigBee and Z-Wave get their respective support, for example.

Of course, these graphs should only really be used for picking up indicative clues, and entertainment. Look at where "Layar" gets most of its support - Indonesia!