<b>Building The Object Network</b><br />
My Explorations into the Internet of Things using the Object Network approach. By Duncan Cragg.<br />
<br />
<b>Welcome to ThoughtWorks Subscribers!</b> (2014-05-12)<br />
<br />
<span style="font-family: inherit;">You may have come to this blog from my article just published on ThoughtWorks' Insights page, "</span><a href="http://www.thoughtworks.com/insights/blog/coap-and-web-things-watching-things">CoAP and a Web of Things Watching Things</a>"<span style="font-family: inherit;">. Welcome!</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">This blog is actually my ThoughtWorks 60-day Internet of Things project - my main blog is "<a href="http://duncan-cragg.org/blog/">What Not How</a>". I summarised my progress in this project <a href="http://object-network.blogspot.co.uk/2014/01/half-way-through-my-60-days-of-things.html">after 30 days</a> and <a href="http://object-network.blogspot.co.uk/2014/02/concluding-my-60-days-of-things.html">after 60 days</a>. These two pages briefly describe, and link to, each day's page for the preceding month.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">There are three main articles to read here that explain the Object Network approach to the Internet of Things:</span><br />
<ul>
<li><a href="http://object-network.blogspot.co.uk/2014/02/what-is-object-network-again.html">What is The Object Network again?</a></li>
<li><a href="http://object-network.blogspot.co.uk/2014/01/iot-rules-event-action-versus-state.html">IoT Rules: Event->Action versus State->State</a></li>
<li><a href="http://object-network.blogspot.co.uk/2013/12/links-between-thing-objects.html">Links between Thing Objects</a></li>
</ul>
<div>
I also have three articles that mention CoAP:
<br />
<ul>
<li><a href="http://object-network.blogspot.co.uk/2014/02/bidirectional-coap.html">Bidirectional CoAP</a></li>
<li><a href="http://object-network.blogspot.co.uk/2013/12/coap-and-forest.html">CoAP and FOREST</a></li>
<li><a href="http://object-network.blogspot.co.uk/2014/01/iot-protocols.html">IoT Protocols</a></li>
</ul>
You can <a href="https://twitter.com/duncancragg">contact me on Twitter</a> if you want to discuss anything about what you see here or on my main blog.</div>
<div>
</div>
<b>Continuing over there..</b> (2014-03-01)<br />
<br />
I've moved my ongoing work back on to my main blog, with the post "<a href="http://duncan-cragg.org/blog/post/object-network-approach-augmented-reality-and-inte/">The Object Network Approach to Augmented Reality and the Internet of Things</a>".<br />
<br />
See you there!<br />
<br />
<b>Moving off Blogger..</b> (2014-02-14)<br />
<br />
This is my last post on this blog. Blogger and Google have been a bit of a disappointment, so this seems like a good time to wrap up and find an alternative.<br />
<br />
What I want in a blog service:<br />
<ul>
<li>bug-free editor - I don't actually need WYSIWYG if it's <i>that</i> hard to do; HTML would be fine</li>
<li>reliable server to save mid-edit to, and which doesn't push down the whole page - that I'm in the middle of editing - with a message telling me that <i>once again</i> it failed to auto-save</li>
<li>a preview with links that work, so I can test them there</li>
</ul>
<div>
What I want from Google's search services:</div>
<div>
<ul>
<li>if they run a service like Blogger, they should put the pages from it in their index, preferably when they're published</li>
<li>for "site:object-network.blogspot.co.uk" to work</li>
</ul>
<div>
I'm also looking for alternatives to Feedly, the feed reader that pulls in new pages in less than a second, but then stubbornly refuses to acknowledge updates from then on.</div>
</div>
<div>
<br />
You don't know what you <i>do</i> want, until you know what you <i>don't</i> want. I now know more about what I want. So thanks, Google, for that!<br />
<br />
And for giving me a free blog, which wasn't so bad, really.</div>
<b>Concluding my 60 Days of Things</b> (2014-02-13)<br />
<br />
<span style="font-family: inherit;">I'm now nearly at the end of my <a href="http://object-network.blogspot.co.uk/2013/12/60-days-of-things.html">60 Days of Things</a>. As I <a href="http://object-network.blogspot.co.uk/2014/01/half-way-through-my-60-days-of-things.html">said when I was half-way</a>: "<span style="background-color: white; line-height: 18px;">I started this blog on the 14th of December last year, without intending to blog every day. But I happened to </span><i style="background-color: white; line-height: 18px;">have</i><span style="background-color: white; line-height: 18px;"> something to say every day, and once I'd established that regularity, I decided to keep it up. I do have rather a lot to say about the Object Net, and </span><a href="http://duncan-cragg.org/blog/" style="background-color: white; line-height: 18px; text-decoration: none;">my other blog</a><span style="background-color: white; line-height: 18px;"> is full of me saying it in long posts. This way I get to write shorter posts, yet more often."</span></span><br />
<span style="font-family: inherit;"><span style="background-color: white; line-height: 18px;"><br /></span></span>
<span style="font-family: inherit;"><span style="background-color: white; line-height: 18px;">It's been a lot of fun, doing the coding and hardware stuff and writing these posts. I almost never knew what I was going to write as I sat down every evening, but never once failed to find inspiration.</span></span><br />
<span style="font-family: inherit;"><span style="background-color: white; line-height: 18px;"><br /></span></span>
<span style="font-family: inherit;"><span style="background-color: white; line-height: 18px;">So here's another summary in two parts. First, what I achieved for NetMash:</span></span>
<br />
<ul>
<li><span style="line-height: 18px;">showed screenshots of my basic </span><a href="http://object-network.blogspot.co.uk/2014/01/three-lights-in-both-real-and-virtual.html" style="line-height: 18px;">jumping-around AR view</a></li>
<li><a href="http://object-network.blogspot.co.uk/2014/01/android-sensors-for-ar.html" style="line-height: 18px;">added "azimuth"</a><span style="line-height: 18px;"> - or panning - to the AR view</span></li>
<li><span style="line-height: 18px;">thought about the more </span><a href="http://object-network.blogspot.co.uk/2014/01/slightly-better-ar-microlocation.html" style="line-height: 18px;">gentle AR positioning algorithm</a></li>
<li><span style="line-height: 18px;">made a video of my </span><a href="http://object-network.blogspot.co.uk/2014/02/very-short-video-demo-of-ariot-in.html" style="line-height: 18px;">smooth AR IoT</a><span style="line-height: 18px;"> demo</span></li>
<li><span style="line-height: 18px;">discussed some </span><a href="http://object-network.blogspot.co.uk/2014/02/augmented-reality-position-from-beacon.html" style="line-height: 18px;">improvements and enhancements</a><span style="line-height: 18px;"> I need to do next</span></li>
<li><span style="line-height: 18px;">had some thoughts about the </span><a href="http://object-network.blogspot.co.uk/2014/02/seven-uses-of-raspberry-pi-camera.html" style="line-height: 18px;">ways I'd use the Pi camera</a></li>
<li><span style="line-height: 18px;">met a bunch of fellow geeks to </span><a href="http://object-network.blogspot.co.uk/2014/02/amazing-innovation-at-raspberry-pi-and.html" style="line-height: 18px;">talk about Pies</a></li>
</ul>
<div>
<span style="line-height: 18px;"><br /></span>
<span style="line-height: 18px;">Rather more articles in the "general" category this time:</span></div>
<div>
<ul>
<li><span style="line-height: 18px;">gave my thoughts on the <a href="http://object-network.blogspot.co.uk/2014/01/nest-and-google-closed-and-cloud.html">Nest and Google</a> situation</span></li>
<li><span style="line-height: 18px;">showed how I used <a href="http://object-network.blogspot.co.uk/2014/01/wifi-direct-ble-zigbee-or-z-wave-ar.html">Google Trends</a></span></li>
<li><span style="line-height: 18px;">listed interesting <a href="http://object-network.blogspot.co.uk/2014/01/internet-of-things-events-conferences.html">conferences and meetups</a></span></li>
<li><span style="line-height: 18px;">had some thoughts on the <a href="http://object-network.blogspot.co.uk/2014/01/the-economics-of-led-bulbs.html">economics of LED bulbs</a></span></li>
<li><span style="line-height: 18px;">listed the interesting <a href="http://object-network.blogspot.co.uk/2014/01/iot-protocols.html">IoT protocols</a></span></li>
<li><span style="line-height: 18px;">pondered the ever-present <a href="http://object-network.blogspot.co.uk/2014/01/security-patching-and-iot-buy-slaves.html">security issues</a> of the IoT, and some solutions</span></li>
<li><span style="line-height: 18px;">gave my vision of a <a href="http://object-network.blogspot.co.uk/2014/01/minecraft-augmented-reality-and.html">seamless 3D world</a> built like Minecraft</span></li>
<li><span style="line-height: 18px;">compared massive grids of Things to <a href="http://object-network.blogspot.co.uk/2014/01/programming-iot-programming-parallel.html">massively parallel computers</a></span></li>
<li>projected forwards into the <a href="http://object-network.blogspot.co.uk/2014/01/the-internet-of-3d-sensors-and-actuators.html">3D IoT</a> with 3D sensors, printers and screens</li>
<li>enjoyed the magic of an <a href="http://object-network.blogspot.co.uk/2014/01/augmented-reality-and-magic.html">AR fairy book</a> with my family</li>
<li>considered how to <a href="http://object-network.blogspot.co.uk/2014/01/simple-but-effective-lighting-behaviour.html">dim lights to warm</a> in the Object Net</li>
<li>thought about Monsanto and <a href="http://object-network.blogspot.co.uk/2014/01/the-internet-of-farm-yields-from.html">open and local farming</a></li>
<li>got lost at work, and considered the <a href="http://object-network.blogspot.co.uk/2014/01/internet-of-meeting-rooms.html">IoT office</a></li>
<li>imagined how <a href="http://object-network.blogspot.co.uk/2014/02/beacons-that-move.html">moving beacons</a> would help people and cars publish their presence</li>
<li>compared the Object Net's domain-level <a href="http://object-network.blogspot.co.uk/2014/02/forest-higher-level-restful-interaction.html">FOREST</a> approach to AtomPub</li>
<li>decided that ThoughtWorks <a href="http://object-network.blogspot.co.uk/2014/02/internet-of-things-and-augmented.html">Retail</a> could use all of this stuff</li>
<li>finally got around to summarising the <a href="http://object-network.blogspot.co.uk/2014/02/what-is-object-network-again.html">Object Network</a> for you</li>
<li>got excited about <a href="http://object-network.blogspot.co.uk/2014/02/bidirectional-coap.html">bidirectional CoAP</a></li>
<li>reported on my involvement with basic <a href="http://object-network.blogspot.co.uk/2014/02/which-contact-and-event-formats-for-uk.html">UK Government Web Standards</a></li>
<li>imagined a future where you could <a href="http://object-network.blogspot.co.uk/2014/02/seamlessly-copying-data-between.html">easily copy between adjacent machines</a></li>
</ul>
<div>
<br />
I also went on a short <a href="http://object-network.blogspot.co.uk/2014/01/venice-vacation.html">trip to Venice</a>, and linked to some <a href="http://object-network.blogspot.co.uk/2014/02/venice-holiday-snaps-not-today.html">snaps</a>.</div>
<div>
<br />
I'm not sure if I'll continue the pleasant discipline of daily blogging, now that my 60 days are up, but I do hope to.<br />
<br />
I'll certainly keep blogging on the Augmented Reality Internet of Things manifested by the Object Network.<br />
<br />
I've only just <i>started</i>...</div>
<div>
<br /></div>
</div>
<b>Amazing Innovation at Raspberry Pi and Arduino Meetup</b> (2014-02-12)<br />
<br />
Tonight I attended the <a href="http://www.meetup.com/SurreyGeeks/events/145043832/">Surrey Geeks Meetup</a> in the lovely offices of the generous <a href="http://kyan.com/about_us">Kyan</a> in Guildford. The topic was essentially anything to do with Raspberry Pi and Arduino. There was some pretty innovative stuff being talked about.<br />
<br />
The organiser, <a href="http://www.meetup.com/SurreyGeeks/members/61941422/">Jon Nethercott</a>, kicked off talking about the Arduino boards and the projects he had constructed, including an amazing <a href="http://wordpress.codewrite.co.uk/pic/2014/01/25/capacitance-meter-mk-ii/">capacitance meter</a> that required no additional hardware. You could push a capacitor into two A-D pins and the display shield would quickly tell you what value it had, from 1pF up to hundreds of uF.<br />
<br />
It has two ways of calculating the value: from 1pF to 1nF it measures the ratio of the capacitance of the test capacitor against the <i>residual 25pF capacitance of the internal circuitry</i>! I've done a lot of electronics in my early life (really early life - I built my first computer in 1977 using the <a href="https://en.wikipedia.org/wiki/RCA_1802">1802 CMOS microprocessor</a>), but I've never, as far as I recall, had to work out the voltage distribution of two capacitors in series!<br />
<br />
Jon explained it to me, and it actually works out quite intuitively - the bigger capacitor develops the smaller voltage because, from an electron's point of view, it is similar to a lower resistance: it has more "space" for electrons to flow into.<br />
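<br />
To make the capacitive-divider arithmetic concrete, here's a minimal sketch of the calculation - my own illustration of the principle, not Jon's actual code - assuming roughly 25pF of internal capacitance and a 10-bit A-D reading:<br />
<pre style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">// A minimal sketch of the capacitive-divider calculation (an illustration of the
// principle, not Jon's actual code). The test capacitor sits between a driven pin
// and the A-D pin, with ~25pF of internal (stray) capacitance from the A-D pin to
// ground; the two capacitors share the same charge, so the A-D pin voltage splits
// in proportion to C_test / (C_test + C_internal).
public class CapDivider {

    static final double C_INTERNAL_PF = 25.0;   // residual capacitance of the internal circuitry
    static final double ADC_MAX       = 1023.0; // 10-bit Arduino A-D reading at Vcc

    // Given the raw A-D reading taken just after driving the other pin high,
    // recover the unknown capacitance: C_test = C_internal * Vadc / (Vcc - Vadc).
    static double testCapacitancePF(double adcReading) {
        return C_INTERNAL_PF * adcReading / (ADC_MAX - adcReading);
    }

    public static void main(String[] args) {
        // a reading of about half of Vcc means C_test is roughly equal to C_internal
        System.out.printf("ADC 512  -> %.1f pF%n", testCapacitancePF(512));
        // the bigger capacitor develops the smaller voltage across itself, so the
        // A-D reading heads up towards ADC_MAX
        System.out.printf("ADC 1000 -> %.1f pF%n", testCapacitancePF(1000));
    }
}</pre>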
<br />
Above 1nF, which is very large relative to the 25pF residual capacitance, Jon switches to an alternative technique: using an internal pull-up resistance to charge the test capacitor, then measuring the developed voltage and the time taken to reach it. Once again, not a scrap of extra external circuitry, since this time it relies on an internal <i>resistance</i>. Cool.<br />
<br />
Related to the work I've been doing, <a href="http://www.meetup.com/SurreyGeeks/members/127100992/">Richard Jelbert</a> showed us his Pi for cars with a BLE beacon attached. This could be used to drive an app on the driver's phone to pick up a small number of events broadcast from the Pi, including from sensors. For example: when the driver enters the car, starts it, stops it .. or crashes it! This could be used to reward clean drivers with lower insurance, without any inconvenience to the driver, who otherwise has to keep messing with the app controls at the start and end of the journey.<br />
<br />
Richard also showed us his prototype for a Bitcoin vending machine. Seriously: a Bitcoin vending machine. You put in some cash and get a printed slip with two QR codes on it: the public and the private key for your entry on the blockchain.<br />
<br />
My colleague in government and another meetup organiser, <a href="http://www.meetup.com/SurreyGeeks/members/8326010/">David Carboni</a>, told us of his plans to enhance local neighbourhood safety with an automatic number plate recognition system. Every participant in a street would have a Pi tracking cars via its camera. They could share the information to track strangers' cars. The approach would involve quite a bit of image processing to normalise the image and then extract the characters. There is <a href="http://javaanpr.sourceforge.net/">this software</a>, which may be interesting to look at.<br />
<br />
Next up, a Research Assistant from Guildford University, <a href="http://www.meetup.com/SurreyGeeks/members/104501852/">James Mithen</a>, told us of his plans to get into ARM code and write another operating system for the Pi, as a fun exercise...<br />
<br />
Finally, I stood up and told everyone about the <a href="http://object-network.blogspot.co.uk/2014/02/very-short-video-demo-of-ariot-in.html">Augmented Reality Internet of Things</a> idea, with a mention of Minecraft house-modelling to get them all thinking I'm a nutter. It worked. They did.<br />
<br />
We all talked more than we hacked or wired, which was great - and there's always next time to play with kit.<br />
<br />
Really exciting stuff. And great pizza and great beer. Thanks to the <a href="http://www.meetup.com/SurreyGeeks/">organisers</a> and <a href="http://kyan.com/about_us">hosts</a>.<br />
<b>Seamlessly copying data between adjacent machines</b> (2014-02-11)<br />
<br />
This morning on the train I wanted to work on a document that was in a draft email that I could access from Google by 3G on my Android phone. But I wanted to work on the document on my laptop, which doesn't have 3G.<br />
<br />
After my 25-minute journey was done, I still hadn't solved the problem - how do I easily and reliably move data from the device in my hand to the one six inches below it?<br />
<br />
The very fact that I had to think through all the options shows that it's just not something you can do instinctively.<br />
<br />
I'm not asking for answers, by the way, I know about all the options (Bluetooth, hotspot, tethering). It's their reliability and ease of use that's part of the problem.<br />
<br />
On the way back, there was a <a href="http://lbbonline.com/news/audi-to-set-london-waterloo-in-motion/">massive video advert in Waterloo station for the Audi car company</a>. It said (I think; I was rushing past) "Number of Audi drivers in the station: 4567", and it was slowly incrementing. I presumed that they made that up - it seemed high - or used Twitter or something.<br />
<br />
Then I thought, well, it'd be fun to offer iPhone-using Audi drivers an app which could be a beacon saying: "I'm an Audi driver!". Then if they were told to walk past the advert, well, you get the idea.<br />
<br />
These two incidents got me thinking about <i>commodity</i> technologies for <i>easy and reliable</i> proximal data exchange.<br />
<br />
And it's really still too hard to seamlessly move data between machines and devices that are <i>right next to each other!</i><br />
<br />
In our workplace, faced with the need to get a file from one PC to the one next to it, even techies have been known to send the data across the Atlantic via the US, because it's easier to email it thousands of miles than figure out a direct route of three feet.<br />
<br />
<br />
<b>Non-Solutions</b><br />
<br />
I've listed some commodity wireless technologies already: Bluetooth, Wifi and Mobile data. You could add QR codes, NFC and RFID to those, of course, but they are still less common.<br />
<br />
Now I've got a low tolerance for poor usability, but surely everyone hesitates before considering the buggy, unreliable and cognitively complex Bluetooth approach. I can't even think how I'd do it, to be honest. I know it involves some kind of "pairing", and lots of failed transfers.<br />
<br />
Wifi requires you to be logged in to a network with complex passcodes, and even then you'll probably need to play with IP numbers to do local file transfer.<br />
<br />
Mobile data is not always available, is slow and unreliable when it is, and requires some or other proprietary intermediary.<br />
<br />
<br />
<b>Solutions</b><br />
<br />
So, back to the Audi example: what if it was as easy as (a) being near and (b) setting the intent to share something, anything?<br />
<br />
I should be able to pull up the draft email I had on my phone, hit "Copy to adjacent device .." and enter a 3-digit number (to prevent others being able to see it casually; say only one transfer is possible and it times out after a minute, to make it even more secure).<br />
<br />
Now if I hit "Look for local data" or something on the other machine (laptop), I just need to enter the 3-digit number and it's there in seconds.<br />
<br />
Similarly: PC #1: right click on file "Copy to adjacent device .. ". 3-digit number. PC #2 "Look for local data". 3-digit number. It's there.<br />
<br />
The 3-digit number would be all you needed to confirm the particular transfer - perhaps when others were active around you - and you wouldn't need to see or choose the filename being offered or anything.<br />
<div>
<br /></div>
I should be able to pull up a photo of myself that I use in public, or enter details into a profile document - including the car I drive - then hit "Publish to adjacent devices ..". No 3-digit number this time, of course. The peer device will "Look for local data" and suck it all in, then filter out the interesting stuff to stick up onto that 12-foot screen.<br />
<br />
That should be built into every single smart device we use.<br />
<br />
We could implement it in BT 4, WiFi Direct, whatever. It just needs to always be there, always work, and be that simple.<br />
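<br />
None of this exists yet, of course, but to show how little machinery the 3-digit-code exchange would need, here's a rough, purely illustrative sketch over local multicast UDP - the group address, port and wire format are all made up:<br />
<pre style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">import java.net.*;
import java.nio.charset.StandardCharsets;

// A rough, purely illustrative sketch of the "Copy to adjacent device" idea over
// local multicast UDP. The group address, port and wire format are all made up.
// Run "offer &lt;3-digit code&gt; &lt;text&gt;" on one machine and "fetch &lt;3-digit code&gt;" on
// the other; the one-shot offer only answers a request carrying the matching code.
public class AdjacentCopy {

    static final String GROUP = "239.255.77.77"; // hypothetical local multicast group
    static final int    PORT  = 7777;            // hypothetical well-known port

    public static void main(String[] args) throws Exception {
        if (args[0].equals("offer")) offer(args[1], args[2]);
        else                         fetch(args[1]);
    }

    // Offering side: wait up to a minute for a request with the right code, reply once.
    static void offer(String code, String payload) throws Exception {
        try (MulticastSocket sock = new MulticastSocket(PORT)) {
            sock.joinGroup(InetAddress.getByName(GROUP));
            sock.setSoTimeout(60_000);                 // the offer times out after a minute
            byte[] buf = new byte[512];
            DatagramPacket req = new DatagramPacket(buf, buf.length);
            sock.receive(req);                         // blocks until someone asks
            String asked = new String(req.getData(), 0, req.getLength(), StandardCharsets.UTF_8);
            if (asked.equals("GET " + code)) {         // only the matching 3-digit code gets the data
                byte[] out = payload.getBytes(StandardCharsets.UTF_8);
                sock.send(new DatagramPacket(out, out.length, req.getAddress(), req.getPort()));
            }
        }
    }

    // Fetching side: send "GET &lt;code&gt;" to the group and print whatever comes back.
    static void fetch(String code) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            byte[] out = ("GET " + code).getBytes(StandardCharsets.UTF_8);
            sock.send(new DatagramPacket(out, out.length, InetAddress.getByName(GROUP), PORT));
            byte[] buf = new byte[512];
            DatagramPacket resp = new DatagramPacket(buf, buf.length);
            sock.setSoTimeout(5_000);
            sock.receive(resp);
            System.out.println(new String(resp.getData(), 0, resp.getLength(), StandardCharsets.UTF_8));
        }
    }
}</pre>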
<b>Seven uses of the Raspberry Pi Camera</b> (2014-02-10)<br />
<br />
My <a href="http://object-network.blogspot.co.uk/2013/12/raspberry-pi-for-christmas-dessert.html">Christmas Pi</a> came with a 5Mp (2592x1944) f/2.9 camera module, which I intend to use as my light level sensor. I also want to use it to detect the colour of the ambient light, so that my light Things can match it.<br />
<br />
Further along, I expect I can use it for <a href="http://rbnrpi.wordpress.com/project-list/setting-up-wireless-motion-detect-cam/">motion</a> <a href="http://www.raspberrypi.org/phpBB3/viewtopic.php?f=43&t=44966">detection</a> and to recognise QR codes, <a href="http://www.raspberrypi.org/forum/viewtopic.php?f=37&t=58257">car number plates</a> and faces. And to take pictures; almost forgot. But, baby steps..<br />
<br />
For light level and ambient colour temperature, I'd need to point the camera at a sheet of white paper.<br />
<br />
The challenge I face is to arrange one of the following:<br />
<br />
(a) get the exposure - which I presume corresponds to shutter speed in some way, as it's a fixed aperture - and AWB colour value, set in the EXIF<br />
<br />
(b) disable auto-exposure and auto-AWB, and do my own sampling of the exposure to find out how much light there is, then take the average colour of the image.<br />
<br />
(c) get the exposure and disable AWB or vice-versa, depending on the EXIF<br />
<br />
I've been looking at the <a href="http://elinux.org/Rpi_Camera_Module">documentation</a> and <a href="https://github.com/raspberrypi/userland/tree/master/host_applications/linux/apps/raspicam">sources</a> for information today, but it's quite hard to find what I want. For option (b), I think <span style="font-family: Courier New, Courier, monospace;">--shutter</span> and <span style="font-family: Courier New, Courier, monospace;">--awb</span> should allow me to set the series of shutter speeds and to turn AWB off, respectively.<br />
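<br />
For option (b), here's roughly how I imagine the sampling going - an untested sketch that shells out to raspistill with a fixed shutter speed and AWB off, then averages the pixels; the flag values and file paths are just my guesses:<br />
<pre style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// A rough sketch of option (b): take a small still with a fixed shutter speed and
// AWB off, then average the pixels to estimate light level and ambient colour.
// Untested - the raspistill flags are the ones mentioned above (--shutter in
// microseconds, --awb off); the specific values are guesswork.
public class AmbientLight {

    public static void main(String[] args) throws Exception {
        File jpg = new File("/tmp/ambient.jpg");
        // fixed exposure, so the pixel values actually tell us about the light
        new ProcessBuilder("raspistill", "--nopreview", "--timeout", "1000",
                           "--width", "320", "--height", "240",
                           "--shutter", "10000",            // 10ms shutter, fixed
                           "--awb", "off",                   // no auto white balance
                           "--output", jpg.getPath())
            .inheritIO().start().waitFor();

        BufferedImage img = ImageIO.read(jpg);
        long r = 0, g = 0, b = 0;
        long n = (long) img.getWidth() * img.getHeight();
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                r += (rgb >> 16) & 0xff;
                g += (rgb >> 8)  & 0xff;
                b +=  rgb        & 0xff;
            }
        }
        // average brightness ~ light level; the R:G:B ratio ~ ambient colour
        System.out.printf("level=%.1f  colour=(%d,%d,%d)%n",
                          (r + g + b) / (3.0 * n), r / n, g / n, b / n);
    }
}</pre>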
<br />
It's possible that I need to do (c) - there's something in the EXIF called <a href="http://www.raspberrypi.org/phpBB3/viewtopic.php?t=44784&p=356222">Light Value</a> but I can't see a colour temperature parameter.<br />
<br />
On <a href="http://www.meetup.com/SurreyGeeks/events/145043832/">Wednesday</a>, a group of us are getting together to hack some stuff, so perhaps I'll experiment then.<br />
<br />
PS<i> The title today is a dig at those awful traffic-hunting titles you see these days..</i><br />
<br />
<b>Augmented Reality position from beacon strength</b> (2014-02-09)<br />
<br />
Well hopefully you all enjoyed my <a href="http://object-network.blogspot.co.uk/2014/02/very-short-video-demo-of-ariot-in.html">51 seconds of fame</a> yesterday.<br />
<br />
Although in that video, I skilfully managed to make it look as if my position, relative to all those BLE beacons, was being accurately tracked, the fact is that my <a href="http://object-network.blogspot.co.uk/2014/01/slightly-better-ar-microlocation.html">algorithm</a> is pretty dumb:<br />
<br />
It puts you in the middle of the room or "place" unless you get close to a beacon, in which case it gently moves you in towards the 3D virtual representation of the Thing object corresponding to that beacon, as advertised by its URL. When you go out of range, it glides you back to the middle again.<br />
<br />
It's pretty effective for a simple algorithm, I hope you'll agree, but of course it does have limitations.<br />
<br />
The main issue is that the RSSI signal strength is very jumpy, which means you can sometimes oscillate around when at the borderline. Both for the simple algorithm and for a future trilateration one, I'd need to filter or smooth the distance calculation better.<br />
<br />
What I'll try is a smoothing algorithm that works asymmetrically. When the signal strength goes up, it's pretty much guaranteed to be because you've moved closer, due to the laws of physics. The same laws of physics also dictate that when the signal strength goes down, it's because of some random interaction with a plane going overhead and the phase of the moon.<br />
<br />
So I'd have it work like this: moving in immediately sets a closer distance, but moving out is treated with greater suspicion - maybe smoothing it out over three samples.<br />
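<br />
Something like this, perhaps - an untested sketch of that asymmetric smoothing, where the three-sample figure is just the guess from above:<br />
<pre style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">// An untested sketch of the asymmetric smoothing described above: a stronger
// (closer) reading is believed immediately, a weaker (further) reading is only
// believed once it has persisted for a few samples. The sample count of 3 is
// just the guess from the text.
public class AsymmetricSmoother {

    private double distance = Double.MAX_VALUE; // current smoothed distance estimate
    private int    furtherCount = 0;            // consecutive samples that looked further away
    private static final int SUSPICION_SAMPLES = 3;

    // Feed in each new raw distance estimate (e.g. derived from RSSI); get back
    // the smoothed distance to use for positioning.
    public double update(double rawDistance) {
        if (rawDistance <= distance) {
            distance = rawDistance;        // moving in: trust it straight away
            furtherCount = 0;
        } else {
            furtherCount++;                // moving out: treat with suspicion
            if (furtherCount >= SUSPICION_SAMPLES) {
                distance = rawDistance;
                furtherCount = 0;
            }
        }
        return distance;
    }

    public static void main(String[] args) {
        AsymmetricSmoother s = new AsymmetricSmoother();
        double[] raw = { 4.0, 1.5, 6.0, 6.0, 1.2, 5.0, 5.0, 5.0 };
        for (double d : raw) System.out.printf("raw=%.1f smoothed=%.1f%n", d, s.update(d));
    }
}</pre>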
<br />
<br />
<b>Other Jobs</b><br />
<br />
I had to set up the positions of the light objects manually for the video, and that will always be the case to some extent. I imagine the new Thing would have a guess where it is when installed, then the user could nudge it into its proper place in the 3D view. The initial position could be roughly worked out at the same time as <a href="http://object-network.blogspot.co.uk/2014/01/scanning-ble-adverts-from-linux.html">fetching the URL of the nearest place</a> from nearby beacons, which also has to be hand-configured currently.<br />
<br />
Another thing is that I should demo more than one place, and move between two linked rooms. Most of the code is already there for that, I think.<br />
<br />
I also don't track up-down orientation yet, just compass rotation. Again, it's not as big a loss as you might think, but I should add that sooner or later.<br />
<b>Very short video demo of AR+IoT in the Object Net</b> (2014-02-08)<br />
<br />
I spent all day today, thanks to my understanding family, preparing this 51-second video for you, about the work I've been doing for the ThoughtWorks <a href="http://object-network.blogspot.co.uk/2013/12/60-days-of-things.html">100 Days of Hardware</a>:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/3eQtzyWN7lQ?feature=player_embedded' frameborder='0'></iframe></div>
<br />
It's my first ever YouTube video, and I'm quite pleased with it. I'm also pretty happy with the overall results I've achieved in the snippets of coding time I've had.<br />
<br />
<br />
<b>Mashability</b><br />
<br />
Soon into my first practice take, I realised I couldn't actually touch the screen to change the light colour for the video, since I was holding my wife's iPhone in the other hand.<br />
<br />
So I got the URL of the light up in my browser and typed in this thought-free code to make the light blink automatically, between red and green:<br />
<pre class="cyrus-readonly" style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">{ is: cuboid rule
Timer: 0 => 2000
light: (1 => 0) (0 => 1) 0
}</pre>
<pre class="cyrus-readonly" style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">{ is: cuboid rule
Timer: 0 => 2000
light: (0 => 1) (1 => 0) 0
}</pre>
<span style="font-family: inherit;">The point is not whether I should have written one rule rather than two, but that I could tap this in quickly between video takes, in a browser editing page, and immediately see the light flash - both on my phone on the table and on the Pi next to it.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Which is one of the points of the Object Network. Instant gratification!</span><br />
<b>Which Contact and Event Formats for the UK Government?</b> (2014-02-07)<br />
<br />
Today, <a href="https://twitter.com/psd">Paul Downey</a> and I submitted two "challenge suggestions" to the UK Government's <a href="http://standards.data.gov.uk/">Standards Hub</a>.<br />
<br />
Now, we both work at the Cabinet Office, so I guess we have a head start in knowing about all this, as insiders .. but no-one will find the standards we're proposing to be in any way controversial.<br />
<br />
We suggested that the UK Government should pick a standard for <a href="http://standards.data.gov.uk/challenge/exchange-contact-information">contact information exchange</a> and calendar <a href="http://standards.data.gov.uk/challenge/exchange-calendar-events">event information exchange</a>. I won't bias the process by naming the obvious *ahem*<a href="http://tools.ietf.org/search/rfc6350">vcard</a>*cough* standards to *ahhh*<a href="http://tools.ietf.org/html/rfc5545">icalendar</a>*choo* at least give due consideration to, for these needs.<br />
<br />
In true UK Government style (or the style of any Government I imagine), this process is done politely and at a slowww pace. Several packets of tea are consumed from the start to the end of the process.<br />
<br />
To evidence this, there are already <a href="http://standards.data.gov.uk/challenges/completed">two accepted standards</a>: paraphrasable as "<a href="http://standards.data.gov.uk/profile/cross-platform-character-encoding-profile-agreed">Use UTF-8</a>!" and "<a href="http://standards.data.gov.uk/profile/persistent-resolvable-identifiers-standards-profile">Use URLs</a>!".<br />
<br />
Baby steps, baby steps.<br />
<br />
It's a long way to go from this to the Object Network <a href="http://duncan-cragg.org/777-777.json">contact</a> and <a href="http://duncan-cragg.org/blog/post/basics-object-network/">event</a> formats, but once we've got something in the Standards Hub for contact and event, the next one to try for is: "A textual structured data encoding format with maps and lists, that is useful for moving data into and out of APIs".<br />
<br />
That way we can then go on to suggest such an encoding of all the data available from our Government, which could include those contact and event types.<br />
<br />
Not sure if we'd have to get HTTP - sorry, a hypermedia/data transfer protocol - in first, though.<br />
<br />
Anyway, I've got plenty of time...<br />
<br />
<b>Bidirectional CoAP</b> (2014-02-06)<br />
<br />
I <a href="http://object-network.blogspot.co.uk/2013/12/coap-and-forest.html">mentioned before</a> that <a href="https://datatracker.ietf.org/doc/draft-ietf-core-coap/">CoAP</a> would seem to be a good protocol to implement <a href="http://link.springer.com/chapter/10.1007/978-1-4419-8303-9_7">FOREST</a> over, as most <a href="https://en.wikipedia.org/wiki/Constrained_Application_Protocol">implementations</a> implement the <a href="http://tools.ietf.org/html/draft-ietf-core-observe-11">observe spec</a>.<br />
<br />
CoAP is based on HTTP, and thus inherits its asymmetric client-server model. However, it is also built over UDP - another way in which CoAP loosens up HTTP - and that can help implement FOREST. FOREST's basic mode of interaction is peer-to-peer, with clients able to be servers and vice-versa.<br />
<br />
So if I use CoAP, I'll have UDP packets for requests going in both directions between peers, and UDP packets for responses also going back in each direction. Even though CoAP isn't specified to be bidirectional, I could easily implement a bidirectional version of it once I have a unidirectional implementation.<br />
<br />
If the code is all in NetMash, which is used on both clients and servers, then the bottom of the CoAP stack will be simply sending and receiving UDP packets: there's no separate TCP connection in each direction. You could have a single UDP port number at each end.<br />
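<br />
Stripped right down, the bottom of that stack might look something like this - a sketch of the single-socket idea only, not of a real CoAP implementation, and with a made-up payload convention:<br />
<pre style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A sketch of the single-UDP-socket idea only - not a real CoAP implementation,
// and the payload convention is made up. One socket per peer, on one port, both
// sends and receives; whether a datagram is a "request" or a "response" is decided
// by looking at the payload, not by which side opened a connection (there isn't one).
public class UdpPeer {

    private final DatagramSocket socket;

    public UdpPeer(int port) throws Exception {
        socket = new DatagramSocket(port);   // the same socket serves both directions
    }

    // Push our state (or a request for theirs) at a peer - no connection needed.
    public void send(String payload, InetSocketAddress peer) throws Exception {
        byte[] data = payload.getBytes(StandardCharsets.UTF_8);
        socket.send(new DatagramPacket(data, data.length, peer));
    }

    // Receive loop: requests and responses both arrive here, from any peer.
    public void receiveLoop() throws Exception {
        byte[] buf = new byte[1024];
        while (true) {
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            socket.receive(p);
            String payload = new String(p.getData(), 0, p.getLength(), StandardCharsets.UTF_8);
            InetSocketAddress from = (InetSocketAddress) p.getSocketAddress();
            if (payload.startsWith("GET ")) {
                send("{ \"is\": \"light\" }", from);   // answer a request with our object's state
            } else {
                System.out.println("update from " + from + ": " + payload);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        UdpPeer peer = new UdpPeer(Integer.parseInt(args[0]));
        // optionally kick off by asking another peer for a (hypothetical) object path
        if (args.length > 1) peer.send("GET /o/uid-1234", new InetSocketAddress(args[1], Integer.parseInt(args[2])));
        peer.receiveLoop();
    }
}</pre>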
<br />
If you need to keep receiving updates to your cache of a response to an earlier request (which it seems is a feature being added to <a href="http://www.igvita.com/2013/06/12/innovating-with-http-2.0-server-push/">HTTP/2</a>), then you're already getting towards a bidirectional protocol, since a spontaneous packet can now come back at the client instead of it always being the initiator.<br />
<br />
I played with such a bidirectional protocol of my own for an earlier version of the Object Net, back in 2005, but then switched to using just dual HTTP channels and long-polling to cover asymmetric infrastructure. It's nice to be able to re-visit the concept with the IoT and CoAP.<br />
<b>What is The Object Network again?</b> (2014-02-05)<br />
<br />
I've been working on this project without really stopping to give an overview of all the elements of the <a href="http://the-object.net/">Object Network</a>. You can work it out from the articles and links, but here's everything summarised in one place.<br />
<br />
<br />
<b>Functional Observer</b><br />
<br />
This is the basic programming and interaction model underlying everything. In a nutshell:<br />
<br />
<i>An object’s state is set as a Function of its current state plus the state of other objects it Observes through links. </i><br />
<br />
So imagine an object representing a light that is shining yellow. It links to another object representing the values of a dimmer. The brightness of the light depends on the value of the dimmer, so as the dimmer value observed by the light reduces, the light calculates a new RGB setting based on whatever colour it's set to, modulated by the dimmer value it can see through the link.<br />
<br />
In the style of a spreadsheet, whenever the light's own current colour or the current value on the remote dimmer changes, the light has work to do: recalculating the RGB values of its current output.<br />
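<br />
In plain Java, that evaluation step might look something like this - an illustrative sketch of the model, not how NetMash actually codes it:<br />
<pre style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">// An illustrative sketch of the Functional Observer evaluation step for the light
// above - not how NetMash actually codes it. The light's new state is a pure
// function of its own current state (its colour) plus the state of the dimmer
// object it observes through a link.
public class LightObject {

    double[] colour = { 1.0, 1.0, 0.0 };   // current state: shining yellow
    double[] light  = { 1.0, 1.0, 0.0 };   // current RGB output

    // Called whenever this object's state or an observed object's state changes,
    // spreadsheet-style; "dimmer" is the value seen through the link.
    void evaluate(double dimmer) {
        light = new double[] { colour[0] * dimmer,
                               colour[1] * dimmer,
                               colour[2] * dimmer };
    }

    public static void main(String[] args) {
        LightObject l = new LightObject();
        l.evaluate(0.5);   // the observed dimmer value drops to 50%...
        System.out.printf("light RGB = (%.2f, %.2f, %.2f)%n", l.light[0], l.light[1], l.light[2]);
    }
}</pre>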
<br />
I mentioned Functional Observer <a href="http://object-network.blogspot.co.uk/2014/01/iot-rules-event-action-versus-state.html">here</a> and <a href="http://object-network.blogspot.co.uk/2014/01/the-functional-observer-programming-and.html">here</a>.<br />
<br />
<br />
<b>FOREST</b><br />
<br /><a href="http://link.springer.com/chapter/10.1007/978-1-4419-8303-9_7">Functional Observer REST</a> simply allows our objects to reside on different host machines. They talk RESTfully over HTTP in JSON. They have URLs and exchange state using GET and POST.<br />
<br />
I mentioned FOREST <a href="http://object-network.blogspot.co.uk/2013/12/coap-and-forest.html">here</a> and <a href="http://object-network.blogspot.co.uk/2014/02/forest-higher-level-restful-interaction.html">here</a>.<br />
<b><br /></b>
<b><br /></b>
<b>Cyrus</b><br />
<b><br /></b>
In order to know what state to set itself - what dependencies it has on peer objects and its current state - an object needs to be programmed or "animated". It could be programmed in Java - and that's exactly what I do for some functionality in NetMash.<br />
<br />
But we can create a language that is a direct mapping onto the Functional Observer model above. The "Function" part can be pure: you don't need I/O or side-effects when all of that is taken care of in the objects you're rewriting.<br />
<br />
<a href="http://the-cyrus.net/">Cyrus</a> is a pure Functional Observer language. Being <a href="http://c2.com/cgi/wiki?HomoiconicLanguages">homoiconic</a>, it is based on the JSON of the objects it is animating, but with much noisy syntax removed. Cyrus rules have their own URLs, of course.<br />
<br />
I mentioned Cyrus <a href="http://object-network.blogspot.co.uk/2013/12/here-is-what-cyrus-rule-look-like-to.html">here</a>, <a href="http://object-network.blogspot.co.uk/2013/12/discovery-and-set-up-in-object-network.html">here</a> and <a href="http://object-network.blogspot.co.uk/2014/01/monads-and-cyrus.html">here</a>.<br />
<br />
<br />
<b>Object Network Types</b><br />
<b><br /></b>
These distributed, interacting objects can form a global Object Network or graph. But only if they all look pretty much the same.<br />
<br />
So the Object Network defines a number of simple and stable formats within JSON for common needs, such as <a href="http://duncan-cragg.org/777-777.json">contacts</a>, <a href="http://duncan-cragg.org/blog/post/basics-object-network/">events</a>, <a href="http://duncan-cragg.org/blog/atom.json">feeds</a>, <a href="http://duncan-cragg.org/blog/post/building-object-network/atom.json">articles</a>, media, GUI layouts, <a href="http://object-network.blogspot.co.uk/2014/01/the-person-as-first-class-object-or.html">users</a>, <a href="https://github.com/DuncanCragg/Cyrus/blob/master/src/server/vm1/iot.db">3D objects</a>, <a href="https://github.com/DuncanCragg/Cyrus/blob/master/src/server/vm1/iot.db">IoT</a> <a href="http://object-network.blogspot.co.uk/2013/12/screenshots-of-android-app-viewing.html">Things</a>, <a href="http://netmash.net/o/uid-6f1a-4c7c-d111-2679.cyr">Cyrus</a> <a href="http://netmash.net/o/uid-2f18-945a-c460-9bd7.cyr">rules</a>, etc.<br />
<br />
These types can also be represented in the cleaner Cyrus syntax.<br />
<br />
I mentioned Object Network Types <a href="http://object-network.blogspot.co.uk/2014/01/minecraft-augmented-reality-and.html">here</a> and <a href="http://object-network.blogspot.co.uk/2013/12/links-between-thing-objects.html">here</a>.<br />
<b><br /></b>
<b><br /></b>
<b>NetMash</b><br />
<b><br /></b>
All of this is implemented in the <a href="http://netmash.net/">NetMash</a> Java code. NetMash is an Android app and a Java server. They share the same Object Network core code, including the implementation of the Cyrus language.<br />
<br />
This whole blog is full of screenshots of NetMash in action.<br />
<b>Internet of Things and Augmented Reality in Retail</b> (2014-02-04)<br />
<br />
<span style="font-family: inherit;">Today I received my latest copy of the <a href="http://info.thoughtworks.com/perspectives-subscription.html">Thoughtworks Perspectives</a> mailing. This month's issue was a special on Retail. Our European Head of Retail, Mark Collin, has been <a href="http://www.essentialretail.com/news/article/52b4196378aa4-qa-european-head-of-retail-at-thoughtworks-mark-collin">interviewed</a> on the "Essential Retail" site about the latest trends.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">That's where I discovered a new term: "Phygital", which of course means the merging of physical and digital. Cute. "Internet of Things" seems quite a mouthful in comparison.</span><br />
<blockquote class="tr_bq">
.. it will become a necessary reality that retailers have to find ways to subtly and seamlessly incorporate digital into a store experience and not just for technology sake or as a gimmick but to tackle core retailing issues like real time inventory, faster checkout, everything in my pocket (mobile – payment, loyalty, receipts, rewards, etc). The behind the scenes analytics opportunities are the significant side benefit to a customer experience premised on digital. </blockquote>
<blockquote class="tr_bq">
- <i>Mark Collin</i></blockquote>
<div>
Here's another, related article on the Thoughtworks website: <a href="http://www.thoughtworks.com/pt/insights/blog/will-ibeacons-further-enable-shopping">Will iBeacon Further Enable the Passion of Shopping?</a> And another from a Thoughtworker's own blog: <a href="http://www.jahya.net/blog/?2013-10-introduction-to-ibeacons">Introduction to iBeacons</a> by Andrew McWilliams.</div>
<span style="font-family: inherit;"><br /></span>
<br />
<b>Retail: The Thoughtworks Application of IoT/AR</b><br />
<br />
So this got me thinking that, soon enough, I'm going to want to explain how my <a href="http://object-network.blogspot.co.uk/2013/12/60-days-of-things.html">60 Days of Things</a>, which I'm coming to the end of, will benefit Thoughtworks.<br />
<br />
And the obvious applications are all in retail, in interacting with customers within some environment, such as shops, malls, airports, stations, libraries, museums, theme parks, cinemas, sports centres, swimming pools, holiday camps, and so-on.<br />
<br />
<br />
<b>IoT/AR in Retail</b><br />
<br />
Currently, all the demos we have been doing inside TW, and the typical example ideas that people have been coming up with in this area, are about "IoT plus 2D smartphone app" interactions.<br />
<br />
I can't see anyone else who has spotted the potential of combining "<a href="http://object-network.blogspot.co.uk/2014/01/augmented-reality-and-internet-of-things.html">IoT plus Augmented Reality</a>".<br />
<br />
Tomi Ahonen, a mobile industry analyst whose predictions are extremely reliable, believes that <a href="http://communities-dominate.blogs.com/brands/2014/01/there-are-some-early-ar-numbers-all-looking-very-good-for-augmented-reality.html">Augmented Reality</a> is the next big wave after mobile.<br />
<br />
Just sayin'...<br />
<br />
<b>FOREST: a higher-level RESTful interaction model</b> (2014-02-03)<br />
<br />
I was having a good chat with my buddy <a href="https://twitter.com/jimbarritt">Jim Barritt</a> today, and the subject came up of how one could describe the difference between my <a href="http://link.springer.com/chapter/10.1007/978-1-4419-8303-9_7">FOREST</a> approach to REST and the widely-used, traditional <a href="http://tools.ietf.org/search/rfc5023">AtomPub</a> style.<br />
<br />
<br />
<b>AtomPub</b><br />
<br />
Quick summary of the AtomPub style: you have a client and a server where the server is a lot like a database of articles and HTTP is used by the client to edit those articles.<br />
<br />
So the editor client says "POST" and a new article is created. It says "PUT" and the article is updated. "DELETE" is pretty obvious. The client, and indeed the world, can "GET" to read an article.<br />
<br />
The client basically runs things and the server more-or-less does what it's told, modulo whatever it needs to do to ensure security and integrity. There are other bits like Media Types and Link Relations, but that's basically the model.<br />
<br />
<br />
<b>Almost Database Integration</b><br />
<br />When people want to "do REST", if they think they want to do it "properly", chances are they'll use this approach as their <a href="http://www.merriam-webster.com/dictionary/paradigm">paradigm</a>. I was slightly involved in the creation of the AtomPub spec, so I'm not knocking it, at least for this use case - editing articles.<br />
<br />
Trouble is, it only makes sense if your application is a lot like a database, where you have some data that you want to create, update, delete or read. So, in order to do what they believe to be "proper REST", people end up forcing their inter-server interaction protocols into this simple, low-level, data read-write model.<br />
<br />
And that feels uncomfortably close to a database integration style!<br />
<br />
<br />
<b>FOREST</b><br />
<b><br /></b>
In contrast, in the <a href="http://link.springer.com/chapter/10.1007/978-1-4419-8303-9_7">FOREST</a> style, two application servers talk to each other at a higher level, as peers - they can be both client and server to one another. A peer can GET the data of another - pull or poll - and can POST its own data to another - push.<br />
<br />
It's a simple, symmetric model of interaction where the application protocol is like a two-way conversation. Being RESTful, all such data can be found at their URLs - this isn't just a substitute for messaging. PUT and DELETE aren't used at all, because the interaction model isn't about just low-level editing of data.<br />
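<br />
As a very rough sketch of the mechanics - the object URL and JSON below are made up for illustration, not the real Object Network formats - the two directions between peers look like this:<br />
<pre style="background-color: #fafaff; border: 1px solid rgb(221, 221, 221); line-height: 12pt; margin: 7pt; padding: 0.5em; white-space: pre-wrap;">import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// A very rough sketch of the two FOREST directions between peers - the URLs and
// JSON here are made up for illustration, not the real Object Network formats.
public class ForestPeer {

    // Pull (or poll): GET the current state of a peer's object from its URL.
    static String pull(String objectUrl) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(objectUrl).openConnection();
        try (Scanner s = new Scanner(c.getInputStream(), "UTF-8")) {
            return s.useDelimiter("\\A").next();
        }
    }

    // Push: POST our own object's state to the peer, so it can observe us in return.
    static void push(String peerUrl, String ourObjectJson) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(peerUrl).openConnection();
        c.setRequestMethod("POST");
        c.setDoOutput(true);
        c.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = c.getOutputStream()) {
            out.write(ourObjectJson.getBytes(StandardCharsets.UTF_8));
        }
        c.getResponseCode();   // just drive the request; no PUT or DELETE anywhere
    }

    public static void main(String[] args) throws Exception {
        // hypothetical object URL and JSON, for illustration only
        String theirs = pull("http://peer.example/o/uid-1234.json");
        push("http://peer.example/o/uid-1234.json", "{ \"is\": \"user\", \"looking-at\": \"...\" }");
        System.out.println(theirs);
    }
}</pre>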
<div>
<br /></div>
I'll be illustrating FOREST with an example on this blog, to show how such a conversation between peers can proceed. You'll see how it enables interactions that are at a higher, domain level.<br />
<b>Beacons that Move</b> (2014-02-02)<br />
<br />
Up to now, I've been talking about BLE beacons attached at fixed points around the house, office, shop, park, etc. The moving bit is you; or rather your Android device, which picks up the URLs and the signal strengths and drives a 3D AR app that maps out your surroundings.<br />
<br />
Beyond that mode, I did <a href="http://object-network.blogspot.co.uk/2013/12/discovery-and-set-up-in-object-network.html">mention</a> that a fixed beacon could also <a href="http://object-network.blogspot.co.uk/2014/01/scanning-ble-adverts-from-linux.html">scan around it</a> - when first installed - to find out what place it was in and whereabouts it sat.<br />
<br />
So the other two combinations left involve the Android device itself broadcasting, either to the fixed beacons or to other Android phones. Unfortunately, Android can't yet broadcast as a beacon through BLE - see <a href="http://code.google.com/p/android/issues/detail?id=59693">this issue here</a> and <i>vote on it by starring</i>. Android needs "Peripheral Mode" support, which iPhones do already have.<br />
<br />
Once your phone can also act as a beacon, it can broadcast the URL of your person or avatar object.<br />
<br />
The first benefit of this would be more accurate location - the surrounding Things would know how far you are from them and could collaborate on trilaterating your position, which you could combine with your own trilateration of <i>their</i> positions for more accurate results.<br />
<br />
<br />
<b>Publishing You</b><br />
<br />
If your device were broadcasting, it could be used to notify surrounding people and machines of your presence, identity and other parameters and links you want to make public, through a packet of JSON fetched through the advertised URL.<br />
<br />
This would allow various applications such as: automatically paying for a service by walking through a gate, exchanging behavioural tracking for store discounts and a conference birds-of-a-feather locator. You could even advertise that it's your birthday, or that you like rock climbing, your blog URL or your relationship interests. A more private view could show your health to your doctor.<br />
<br />
You don't actually need BLE to do something like this: when you have WiFi switched on, your device <a href="http://www.raspberrypi.org/phpBB3/viewtopic.php?f=37&t=47059">gives away its unique MAC address</a> while scanning for networks. Now you just need to map from that to your personal URL, which could be done through a MAC-to-URL lookup service a bit like DNS.<br />
<br />
<br />
<b>Beacons that Move</b><br />
<br />
While waiting for Android to get peripheral mode and enable this huge range of applications, there are other examples of physical objects that can move and can be tagged with a beacon.<br />
<br />
The <a href="http://mike.saunby.net/2013/04/raspberry-pi-and-ti-cc2541-sensortag.html">TI SensorTag</a>, the <a href="https://www.kickstarter.com/projects/ninja/ninja-sphere-next-generation-control-of-your-envir">Ninja tag</a>, the <a href="https://www.kickstarter.com/projects/1015015457/chipolo-bluetooth-item-finder-for-iphone-and-andro">Chipolo</a>, the <a href="https://launch.punchthrough.com/">Light Blue Cortado</a> - all allow BLE tracking of the location of things they're attached to, or of values of their sensors, such as accelerometers.<br />
<br />
Like you, your car can have its own URL, allowing similar applications such as <a href="http://blog.automatic.com/every-automatic-road-just-became-ibeacon/">automatic payment</a> of <a href="http://www.macrumors.com/2014/01/28/automatic-ibeacons/">tolls, fuel and parking fees</a>, detection of presence for tracking in the home or the street, perhaps again in exchange for information or discounts. A more private view could show you the health of the car and allow you to set certain parameters and rules of operation.<br />
<br />
Android better <a href="https://plus.google.com/+AndroidDevelopers/posts/eUNiV1RAVCw">fix that peripheral mode</a> if its <a href="http://gigaom.com/2014/01/06/google-unveils-open-automotive-alliance-featuring-gm-audi-nvidia-and-others/">Open Auto Alliance</a> is to <a href="http://beekn.net/2014/02/ibeacon-auto/">beat</a> iOS, though..<br />
<br />
Finally, robots can have "beacons that move", allowing all the above functionality plus a whole lot more, around collaboration and coordination of their joint activities.<br />
<br />
<b>Venice Holiday Snaps.. Not Today</b> (2014-02-01)<br />
<br />
I was hoping to show you some holiday snaps from Venice today, but I can't upload from my phone into the Blogger editor.<br />
<br />
So here's my Twitter link, anyway, which has some photos: <a href="https://twitter.com/duncancragg">https://twitter.com/duncancragg</a><br />
<br />
<b>Venice Vacation</b> (2014-01-31)<br />
<br />
No blog today, or rather no content, because I'm in Venice.<br />
<br />
I may show you some holiday snaps tomorrow..<br />
<br />
<b>The Internet of Meeting Rooms</b> (2014-01-30)<br />
<br />
Today, I was yet again hunting for a meeting room in the huge office building I work in. Obviously, the meeting room number gives nothing away - adjacent numbered rooms could be on opposite sides of the floor.<br />
<br />
It occurred to me that I'd like to be able to hold up NetMash in "look around" mode, and see all of the meeting rooms nearby - like X-Ray vision.<br />
<br />
This would of course be driven by BLE beacons inside each meeting room, advertising the URL of their virtual room/place object. Assuming my phone is on the office WiFi, NetMash could fetch and render those place objects for me, plus all the contents.<br />
<br />
An obvious object to have inside a room on the virtual wall would be a "card" describing the event - a meeting booking. This would be a 3D representation of a <a href="http://duncan-cragg.org/blog/post/basics-object-network/">JSON object</a> encoding an <a href="http://the-object.net/123-456.json">iCalendar event</a>.<br />
<br />
Of course, occupied rooms would also contain all the avatars of the people in them. I could find the room, then as I approached, ping a message to all of my colleagues.<br />
<br />
There could be an object that captured the whiteboard work as an image that we could virtually take away with us.<br />
<br />
And, obviously, there would be objects for lighting and heating. Being listed in the event object invitees, I could turn all the lights off and back on again as I approached, for dramatic effect.<br />
<br />
<br />Duncan Cragghttp://www.blogger.com/profile/03169236130970103361noreply@blogger.com0tag:blogger.com,1999:blog-4333818336952661438.post-66150345586791910332014-01-29T15:06:00.001-08:002014-01-29T15:06:25.076-08:00The Internet of Farm Yields - from Monsanto?With <a href="http://object-network.blogspot.co.uk/2014/01/nest-and-google-closed-and-cloud.html">Google's acquisition of Nest</a> fresh in our minds, <a href="http://www.npr.org/blogs/thesalt/2014/01/21/264577744/should-farmers-give-john-deere-and-monsanto-their-data">this article</a> now makes even more alarming reading.<div>
<br /></div>
<div>
Apparently, Monsanto are offering farmers a system that can measure crop yields over their fields by following them around using GPS while monitoring the rate of harvest at each point they cross. All this gets pushed up into Monsanto's servers.</div>
<div>
<br /></div>
<div>
The farmer then gets the payback: an automated planting system that adjusts the amount and type of seed planted at each point in the field.</div>
<div>
<br /></div>
<div>
But of course, the farmers have trusted Monsanto with their field, planting and harvesting data and are letting the company control their planting in unprecedented detail.</div>
<div>
<br /></div>
<div>
I suppose I trust Google enough with my personal life - perhaps I'll learn to regret that - but if I were a farmer, would I trust Monsanto?</div>
<div>
<br /></div>
<div>
I was going to search for some juicy stories about their unethical tactics, but found that just searching for "<a href="https://www.google.com/search?q=Monsanto">Monsanto</a>" alone threw up enough material.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<b>Open & Local Farming</b></div>
<div>
<br /></div>
<div>
Obviously, it would be perfectly possible to do all of this in an "<a href="http://object-network.blogspot.co.uk/2014/01/open-and-local-not-closed-and-cloud.html">open and local, not proprietary and cloud</a>" way.</div>
<div>
<br /></div>
<div>
The farmer would get all the above benefits, plus control and privacy, plus the benefits of being able to work with other local and global farmers and keen technologists to create systems that do much more than whatever Monsanto want.</div>
<div>
<br /></div>
<div>
<br /></div>
<b>Slightly Better AR Microlocation Algorithm</b> (2014-01-28)<br />
<br />
There are two parts to Augmented Reality: knowing which way you're looking around you, and knowing where you are. I've <a href="http://object-network.blogspot.co.uk/2014/01/android-sensors-for-ar.html">started the first one</a>, in one axis at least. I've also <a href="http://object-network.blogspot.co.uk/2014/01/three-lights-in-both-real-and-virtual.html">started the second</a>, by jumping right up to the nearest object.<br />
<br />
As I <a href="http://object-network.blogspot.co.uk/2014/01/three-lights-in-both-real-and-virtual.html">showed in that article</a>, it's a bit rough: if the nearest object is the light, I only see the light, up close, without a surrounding place. Only if no beacon is near do I see the place the light is in, with that light and any other lights.<br />
<br />
While waiting for the mathematical energy and inspiration to tackle proper <a href="https://en.wikipedia.org/wiki/Trilateration">trilateration</a>, what I actually want is for the 3D AR view to <i>always</i> be in the place that the surrounding objects or Things occupy, rather than jumping off to the object alone.<br />
<br />
Then, when I move in towards a beacon on a Thing, I want the user object or avatar in that place - and their 3D view - to move smoothly closer to the 3D representation of that Thing object within the place.<br />
<br />
<br />
<b>Tricky Algorithm</b><br />
<b><br /></b>
Now, it's not tested, but here's the algorithm I worked out on the train today:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;"> If the nearest object is a place:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> If already in that place:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> If looking up close at an object:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> <i>initiate zoom to the middle of the place</i></span><br />
<span style="font-family: Courier New, Courier, monospace;"> Else:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> <i>jump to the middle of the place</i></span><br />
<span style="font-family: Courier New, Courier, monospace;"> Else:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> If already in the place the object is in:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> If not already looking at the object:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><i><span style="font-family: 'Courier New', Courier, monospace;">initiate</span><span style="font-family: Courier New, Courier, monospace;"> zoom up to the object</span></i><br />
<span style="font-family: Courier New, Courier, monospace;"> Else:</span><br />
<span style="font-family: Courier New, Courier, monospace;"> <i>jump to the </i></span><i><span style="font-family: 'Courier New', Courier, monospace;">middle of the</span><span style="font-family: 'Courier New', Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;">place</span></i><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;"><i>initiate</i></span><span style="font-family: Courier New, Courier, monospace;"><i> zoom up to the object</i></span><br />
<br />
Way more complex than I ever imagined! And that's not even close to a trilateration algorithm.<br />
<br />
The action "jump to the middle of the place" is resetting the 3D view to a new place URL and putting the user in the middle, of the room, etc. Arbitrary, but on average not a bad choice, I hope.<br />
<br />
By "initiate zoom..", I mean kick off a thread to move the user's avatar smoothly to a target destination position in the place. Since at this point I only have the URL of the target <i>object</i>, I also need to look up its position coordinates from the place it's in, or the coordinates of the middle of the place.<br />
<br />
On top of that, when zooming up to an object, I need to stop before I hit its actual position. So I need to get its bounding box and turn that into a bounding circle radius, so that I can stop the zoom that far short of its centre.<br />
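<br />
Here's roughly what I have in mind, as a plain Java sketch - the names are hypothetical, not the actual NetMash code:<br />
<br />
<pre style="font-family: Courier New, Courier, monospace;">// Rough Java sketch only - hypothetical names, not the actual NetMash classes.
public class ZoomTarget {

    // Half the bounding box diagonal gives a bounding circle/sphere radius.
    static float boundingRadius(float w, float h, float d){
        return (float)Math.sqrt(w*w + h*h + d*d) / 2f;
    }

    // Where to stop the zoom: on the line from the avatar to the object's centre,
    // stopping the bounding radius plus a small margin short of the centre itself.
    static float[] stopPosition(float[] avatar, float[] centre, float radius){
        float dx = centre[0]-avatar[0], dy = centre[1]-avatar[1], dz = centre[2]-avatar[2];
        float dist = (float)Math.sqrt(dx*dx + dy*dy + dz*dz);
        if(dist == 0f) return avatar.clone();
        float margin = 0.2f;
        float t = Math.max(0f, (dist - radius - margin) / dist);
        return new float[]{ avatar[0]+dx*t, avatar[1]+dy*t, avatar[2]+dz*t };
    }

    // One tick of the smooth zoom, called repeatedly from the zoom thread: move a
    // fixed fraction of the remaining distance towards the stop position each time.
    static void zoomStep(float[] avatar, float[] target, float fraction){
        avatar[0] += (target[0]-avatar[0])*fraction;
        avatar[1] += (target[1]-avatar[1])*fraction;
        avatar[2] += (target[2]-avatar[2])*fraction;
    }
}</pre>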
<br />
Only two weeks to go to the end of this <a href="http://object-network.blogspot.co.uk/2013/12/60-days-of-things.html">ThoughtWorks exercise</a>..<br />
<br />
<br />Duncan Cragghttp://www.blogger.com/profile/03169236130970103361noreply@blogger.com0tag:blogger.com,1999:blog-4333818336952661438.post-54776538260503673542014-01-27T14:19:00.003-08:002014-01-27T14:19:26.212-08:00Simple but Effective Lighting Behaviour via the IoTThis last weekend, I replaced all those <a href="http://object-network.blogspot.co.uk/2014/01/the-economics-of-led-bulbs.html">LED bulbs in the kitchen</a> with dimmable ones, and put a dimmer switch in. They are still super-white bulbs, which is ideal for seeing while doing jobs.<br />
<br />
But as soon as we first dimmed them down, we hit upon an obvious issue: <i>they're still super-white, only dimmer</i>.<br />
<br />
What we would like is for them to go <i>warm-coloured</i> when they're dimmed!<br />
<br />
You don't need pure white light if you're not using them to work by, and if you dim them, you normally want to set a softer mood, or make it suitable for relaxing and chatting. White light isn't the right light for that.<br />
<br />
<br />
<b>Object Net IoT Approach</b><br />
<b><br /></b>
So obviously, that gets me thinking about the Object Network solution.<br />
<br />
The ultimate IoT solution would be to have individual control over each bulb's RGB levels via some decent radio control such as <a href="http://object-network.blogspot.co.uk/2014/01/iot-protocols.html">ZigBee, Z-Wave, Bluetooth or 6LoWPAN</a>. The state of the dimmer switch - off, or on plus the control level it's set to - could be fed into the I/O ports of a Raspberry Pi, which would also terminate all the radio links.<br />
<br />
The practical, DIY solution? You can buy <a href="http://www.ebay.co.uk/sch/i.html?_trksid=p2054897.m570.l1313.TR1.TRC0.A0&_nkw=GU10+RGB+LED&_sacat=11700&_from=R40">GU10 RGB LED</a> units that come with an <a href="http://object-network.blogspot.co.uk/2013/12/cheap-remote-controlled-consumer.html">IR controller</a>. The controller only seems to offer a limited set of colours, but it may be possible to trace the protocol and set any RGB value. You'd need a dimmer box containing an IR LED to control the bulbs across the kitchen (don't stand in the way!), and a variable resistor for the control. These would all be wired into a nearby Raspberry Pi's I/O pins via some adaptor circuitry.<br />
<br />
The R-Pi would be best in or near the dimmer box, so that it can have a BLE beacon advertising the URL of an object representing the state of the control. This object would sit within a place object (the kitchen) that also has the 3D light objects overhead. The lights may or may not be beacons, too, depending on their technology.<br />
<br />
<br />
<b>The Magic Bit</b><br />
<br />
Then a simple Cyrus rule on the R-Pi would set all the bulbs to white at the maximum control setting, and shift them to increasingly warm colours, at decreasing brightness, as the dimmer control is turned down.<br />
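<br />
The gist of that mapping, written out as a plain Java sketch rather than as Cyrus (and with a warm-white colour curve that's just a guess), is something like this:<br />
<br />
<pre style="font-family: Courier New, Courier, monospace;">// Sketch of the dimmer-to-colour mapping only - the real thing would be a Cyrus
// rule, and this particular warm-white curve is just a guess to show the idea.
public class WarmDimmer {
    // level is the dimmer control position, 0.0 (off) up to 1.0 (fully on).
    static int[] warmDim(double level){
        double warmth = 1.0 - level;                    // more warmth as the control comes down
        double r = 255 * level;                         // red dims in step with the control
        double g = 255 * level * (1.0 - 0.35 * warmth); // green falls away a bit faster
        double b = 255 * level * (1.0 - 0.75 * warmth); // blue falls away fastest
        return new int[]{ (int)r, (int)g, (int)b };
    }
}</pre>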
<br />
You could of course override the control level in the app, or directly set any bulb colour as usual. From the coffee shop or down the garden.<br />
<br />
Or you could fiddle with the rules to make the lights go blue and surge like waves instead, when you turn down the control..<br />
<br />Duncan Cragghttp://www.blogger.com/profile/03169236130970103361noreply@blogger.com0tag:blogger.com,1999:blog-4333818336952661438.post-68363712185487374872014-01-26T08:52:00.000-08:002014-01-26T08:52:59.408-08:00Android sensors for ARI've added a simple class to NetMash called <a href="https://github.com/DuncanCragg/Cyrus/blob/master/src/android/cyrus/Sensors.java">Sensors</a>, that I use to pick up the phone orientation, to drive the 3D view. This is only active once I set the "Around" menu item. I currently only use "azimuth" to pan around the room.<br />
<br />
That's something I learned doing this project: "azimuth" (also called "yaw") is panning around you, "pitch" is looking up and down, and "roll" is tipping or rotating the device while still looking at the same thing.<br />
<br />
The <a href="http://developer.android.com/reference/android/hardware/SensorManager.html#getOrientation(float[], float[])">Android documentation</a> appears to have some rather odd axis conventions to the eyes of someone who hasn't taken the trouble to work out all the maths, but working code beats theory every time.<br />
<br />
Obviously, the code was more-or-less copied from multiple StackOverflow posts, but there's not one post or article anywhere I could find that gave me complete code like the <a href="https://github.com/DuncanCragg/Cyrus/blob/master/src/android/cyrus/Sensors.java">simple class</a> I ended up with. The smoothing algorithm I invented is probably far too simplistic, but I can refine it.<br />
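<br />
For anyone attempting the same thing, the core of it boils down to something like this cut-down sketch - not the actual Sensors class, and leaving out the screen-rotation remapping and accuracy handling:<br />
<br />
<pre style="font-family: Courier New, Courier, monospace;">// Cut-down sketch, not the actual NetMash Sensors class: feed accelerometer and
// magnetometer readings into a rotation matrix, pull out azimuth/pitch/roll, and
// low-pass filter the azimuth so the 3D view doesn't jitter.
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class OrientationSketch implements SensorEventListener {

    private float[] gravity, geomagnetic;
    private final float[] rotation    = new float[9];
    private final float[] orientation = new float[3];  // azimuth, pitch, roll in radians
    private float smoothedAzimuth;
    private static final float ALPHA = 0.15f;          // smaller = smoother but laggier

    @Override public void onSensorChanged(SensorEvent event){
        if(event.sensor.getType() == Sensor.TYPE_ACCELEROMETER)  gravity     = event.values.clone();
        if(event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) geomagnetic = event.values.clone();
        if(gravity == null || geomagnetic == null) return;
        if(SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)){
            SensorManager.getOrientation(rotation, orientation);
            float azimuth = (float)Math.toDegrees(orientation[0]);
            // crude low-pass filter - it ignores the wrap-around at 180/-180 degrees
            smoothedAzimuth += ALPHA * (azimuth - smoothedAzimuth);
        }
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy){ }

    public float azimuth(){ return smoothedAzimuth; }
}</pre>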
<br />
<br />
<b>Actual Positions</b><br />
<br />
Now that I can pan around my 3D place, it becomes pretty obvious that the room isn't exactly oriented square to North. But more importantly, the positions of the lights in the 3D view bear no relation to their actual positions in the room around me.<br />
<br />
Setting Thing positions to their actual locations in the room will have to be done manually for now, when I run NetMash on each Pi. They will then be notified to the place object which can pick them up in the <a href="http://object-network.blogspot.co.uk/2014/01/three-lights-in-three-d-place.html">initial discovery rule</a>.<br />
<br />
I'm still intending to video all of this working..Duncan Cragghttp://www.blogger.com/profile/03169236130970103361noreply@blogger.com0tag:blogger.com,1999:blog-4333818336952661438.post-57472474940038066392014-01-25T15:51:00.001-08:002014-01-25T15:51:14.219-08:00Augmented Reality and MagicI was out with my daughter today and we went to a garden centre.<br />
<br />
To be more precise, I was out with both my daughters - one went indoor climbing with her friend, and the elder and I went off for a hot frothy milk and a coffee, respectively, in the garden centre.<br />
<br />
Garden centres are strange places - for a non-gardener like me it's mostly just stuff for other people to buy - mostly, it has to be said, for a generation that arrived before I did.<br />
<br />
But this one, like most, had a book section. Again, mostly gardening and cookery books and books about how the town looked 100 years ago. Aeroplane books and war books.<br />
<br />
And children's books. Which is where I get to the point of all this.<br />
<br />
We picked up this <a href="http://www.waterstones.com/waterstonesweb/products/patricia+moffet/fairyland+magic+28augmented+reality29/7371342/">Augmented Reality Fairy book</a>. Now, the daughter that nagged me to buy it is the <a href="https://en.wikipedia.org/wiki/Williams_syndrome">kind</a> that never gives up and gets very enthusiastic for a while, then moves on to the next opportunity very quickly. So I was a little reluctant at first.<br />
<br />
But it was discounted down to only a fiver, and had marker-based AR, which I always find fun to see, and I guessed the rest of the family would enjoy it too. So I bought it.<br />
<br />
Now I have to say that I was almost as excited to try it out as my daughter, and she and I sat together at the Mac and got it going. She was delighted, of course.<br />
<br />
You point the webcam at the open page and it detects which page it is and creates a superimposed fairy scene. You can activate various things, like getting a fairy to appear and cast fairy dust around.<br />
<br />
The grand finale is to hold a card disc in your palm and entice a fairy by hitting various keys to add fruit and flowers to a cup. So it's like you're holding this fairy in the palm of your hand.<br />
<br />
When the family saw all this, there was plenty of "wow"ing and "ooh"ing.<br />
<br />
But really, it was pretty basic - just a simple animated 3D scene. It was the fact that it keyed itself onto a physical thing - the book page or the disc - that gave it such a compelling edge. So engaging was this illusion that my daughter apologised to the fairy when she turned the page and made her vanish!<br />
<br />
<br />
<b>Augmented Reality and Magic</b><br />
<b><br /></b>
For a long time, computers have lived in an abstract virtual place in our lives - only interacting properly with reality when printing something out on paper, or arranging for a package to arrive the next day from Amazon. Maybe video calling has a bit of that reality-engagement, too.<br />
<br />
Smartphones are better at integrating into our physical lives, with their sensors for orientation and GPS and their cameras.<br />
<br />
But when you combine those sensors and a 3D display within an AR app, you can create magic.<br />
<br />
The kind of magic that is possible when the unlimited creative universe of the virtual can start to invade our physical environment in truly tangible and compelling ways.<br />
<br />Duncan Cragghttp://www.blogger.com/profile/03169236130970103361noreply@blogger.com0tag:blogger.com,1999:blog-4333818336952661438.post-16420585031295773742014-01-24T15:45:00.002-08:002014-01-24T15:45:59.116-08:00The Internet of 3D sensors and actuatorsThe sensors and actuators of the IoT tend to be small-scale: temperature, light, etc.<br />
<br />
The key factor is the merging of real and virtual: the bringing of Things into the Internet, and allowing communication in both directions - real to virtual and virtual back to real.<br />
<br />
On a slightly bigger scale, in the Object Network <a href="http://object-network.blogspot.co.uk/2014/01/the-person-as-first-class-object-or.html">people are first class</a> and bring their own more complex sensors - for gestures, location, orientation, etc. - and actuators - like screens, glasses, vibration and speakers.<br />
<br />
Now the whole thing begins to fill our 3D space - as the Object Network's ability to view the IoT in Augmented Reality indicates.<br />
<br />
<br />
<b>The 3D IoT</b><br />
<br />
Taking this concept of a 3D IoT to its conclusion:<br />
<br />
3D IoT sensors can include full <a href="http://www.primesense.com/">3D</a> <a href="http://structure.io/">sensor</a> <a href="http://www.kickstarter.com/projects/ikegps/spike-laser-accurate-measurement-and-modelling-on">devices</a> that suck in the shape of the world, and sensors like those on the Wii for picking up the motion of your hands and feet, or the <a href="https://www.leapmotion.com/">Leap Motion</a> that can pick up hand gestures.<br />
<br />
3D IoT actuators can include bigger, <a href="https://www.google.com/search?q=wall-sized+screens&safe=active&hs=ZDC&channel=cs&source=lnms&tbm=isch&sa=X">wall-sized screens</a>, <a href="https://www.google.com/search?q=holographic+display&safe=active&client=ubuntu&hs=wwr&channel=cs&source=lnms&tbm=isch&sa=X">holographic displays</a>, <a href="http://www.wired.co.uk/news/archive/2013-11/13/inform-shape-shifting-display">3D solid displays</a> and <a href="https://www.google.com/search?q=www.kickstarter.com+3d+printer&safe=active&source=lnms&tbm=isch&sa=X">3D printers</a>.<br />
<br />
All these technologies enable what were once <a href="https://www.google.com/search?q=minority+report+display&safe=active&client=ubuntu&hs=Xgr&channel=cs&source=lnms&tbm=isch&sa=X">futuristic applications</a> - instead of bending over a tablet and stabbing at a tiny screen, we'll be standing up, and moving our whole bodies around, interacting with surround-vision displays and holographic objects.<br />
<br />
Imagine sculpting a virtual object with your hands, then printing it out. That is a true blend of real and virtual - the "lights and thermostats" Internet of Things will seem rather lame in comparison.<br />
<br />
Of course, the ultimate sensor/actuator combo is the robot - a device that can exist in both real and virtual domains simultaneously. It could have a 3D virtual representation showing more about its state and rules, and that representation could be just as interactive as the physical robot.<br />
<br />
<a href="https://www.google.com/search?q=Telepresence+robots&safe=active&client=ubuntu&hs=zVC&channel=cs&source=lnms&tbm=isch&sa=X">Telepresence robots</a> are a variant of this, merging a real person into a virtual person and back into a nearly-real person again.<br />
<br />
The benefit of the Object Network approach to the IoT is that it <i>starts off </i>seeing your world in 3D, so all this is native to its way of working and interacting.<br />
<br />Duncan Cragghttp://www.blogger.com/profile/03169236130970103361noreply@blogger.com0tag:blogger.com,1999:blog-4333818336952661438.post-11362443088654714402014-01-23T15:59:00.000-08:002014-01-23T16:22:13.102-08:00Programming the IoT = Programming Parallel ComputersIt's well-known that Moore's Law is<a href="https://upload.wikimedia.org/wikipedia/commons/0/00/Transistor_Count_and_Moore%27s_Law_-_2011.svg"> over for single-CPU</a> processing, and that the only way forwards is multi-core and parallel processing.<br />
<br />
So given that, the way that we program these chips may have to change. Obviously, if you can easily split your application up into parallel, independent threads then you're fine to carry on programming in single-threaded Java. That's Web servers, of course. Even if your application has <i>inter</i>-dependent threads, you may still be able to battle on with the corresponding Java code and win.<br />
<br />
But for any interesting interactive application, different programming models and languages are needed to take away the pain of properly exploiting parallel hardware - to make all that threading code go away in the same way as Java made all that memory management code go away.<br />
<br />
<br />
<b>The Internet of Processor Things</b><br />
<br />
Now, at the same time that we're considering putting large numbers of small processors together in the same box, we're also considering scattering large numbers of small processors all around us - the sensors and actuators of the Internet of Things.<br />
<br />
In fact, there could be more of <i>those</i> processors per hectare than was ever planned for in the extrapolation of the Moore's Law line: in your 2014 house you may have 16 processors in Things and mobile devices, but only a couple of quad-core desktop machines. And the same ratio may hold true in the work environment. This ratio of scattered processors to co-located ones will probably only get higher.<br />
<br />
Indeed, why do we need all the processors to be in the same box? Not all the processes need to share the I/O to the user, so it's mainly the need to communicate through a single physical memory and disk.<br />
<br />
Which is the main problem with scattering: the processors have to wait longer to communicate. But we don't even know if that's a problem: it's application dependent. And in a future where your applications are sensors and actuators, multiple mobile devices, Augmented Reality, immersive Virtual Worlds and gestural interactions, it could be that wireless data exchange is perfectly good enough.<br />
<br />
<br />
<b>Declarative Programming</b><br />
<br />
Either way, whether you're programming scattered or co-located processors, you should be able to program without worrying about concurrent thread interactions and process distribution.<br />
<br />
You should be able to program independent active and interactive objects without needing to know yet whether they're microns or miles apart. Only timeouts should change. This isn't <a href="http://eecs.harvard.edu/~waldo/Readings/waldo-94.pdf">RPC</a> [pdf].<br />
<br />
Imperative, threaded programming will have to give way to Declarative programming - where we tell the computer What we want, not How to do it. It will then be up to the implementation of the language to handle the mapping to threads and then to processors, local or remote.<br />
<br />
Since we don't know right now what ratio of scattered to co-located will turn out to be best, in general or in specific applications, a <a href="http://object-network.blogspot.co.uk/2014/01/the-functional-observer-programming-and.html">programming model</a> like that in Cyrus - one that lets us split and recombine our active objects with perhaps only minor code adjustments - will have a massive advantage.<br />
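<br />
To make that concrete, here's a minimal plain-Java sketch of the shape such a rule takes - the interfaces are invented for illustration, not the Cyrus API. The object's new state is declared purely as a function of the state it observes, and nothing is said about threads or about where the observed object lives:<br />
<br />
<pre style="font-family: Courier New, Courier, monospace;">// Illustration only - these interfaces are invented, not the Cyrus API. The point is
// that the rule is a pure function from observed state to the object's own new state:
// it never mentions threads, sockets or machine boundaries, so the language runtime
// is free to run the two objects on one core, on many cores, or on separate devices.
interface ObservedState {
    double number(String property);           // latest known value of an observed property
}

interface NewState {
    void set(String property, double value);  // the new state this object declares for itself
}

interface Rule {
    void apply(ObservedState observed, NewState self);
}

public class LightFollowsDimmer implements Rule {
    // This light's colour is simply a function of the observed dimmer's level,
    // whether the dimmer object is microns or miles away. Only timeouts differ.
    public void apply(ObservedState dimmer, NewState light){
        double level = dimmer.number("level");   // 0.0 .. 1.0
        light.set("red",   255 * level);
        light.set("green", 255 * level);
        light.set("blue",  200 * level);         // slightly warm when dimmed
    }
}</pre>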
<br />
<br />
<br />Duncan Cragghttp://www.blogger.com/profile/03169236130970103361noreply@blogger.com2