A presence robot with Chromium, WebRTC, Raspberry Pi 3 and EasyRTC

Update, July 2017 – if you really want to try it, much more complete and up-to-date instructions are on github (more).

Here’s how to make a presence robot with Chromium 51, WebRTC, a Raspberry Pi 3 and EasyRTC. It’s actually very easy, especially now that Chromium 51 comes with Raspbian Jessie, although it’s taken me a long time to find the exact incantation.

If you’re going to use it for real, I’d suggest using the Jabra Speak 410 speaker/mic. I find that audio is always the most important part of a presence robot, and the Jabra provides excellent sound for a meeting of 5–8 people and will work for larger groups too. I’ve had the most reliable results using a separate power supply for the Jabra, via a powered hub. The whole thing still occasionally fails, so this is a work in progress. You’ll need someone at the other end to plug it in for you.

I’ve had fair success with a “portal” type setup with the Raspberry Pi touchscreen, but it’s hard to combine the Jabra and the screen in a useful box.


As you can see from the photo, the current container needs work.


Next things for me will be some sort of expressivity and / or movement. Tristan suggests emoji. Tim suggests pipecleaner arms. Henry’s interested more generally in emotion expressed via movement. I want to be able to rotate. All can be done via the WebRTC data channel I think.

You will need

  • Raspberry Pi 3 + SD card + 2.5A power supply
  • Jabra Mic
  • Powered USB hub (I like this one)
  • A pi camera – I’ve only tested it with a V1
  • A screen (e.g. this TFT)
  • A server, e.g. a Linode, running Ubuntu 16.04 LTS. I’ve had trouble with AWS for some reason, possibly a ports issue.


Set up the Pi

(don’t use jessie-lite, use jessie)

diskutil list
diskutil unmountDisk /dev/diskN
sudo dd bs=1m if=~/Downloads/2016-09-23-raspbian-jessie.img of=/dev/rdiskN
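Optionally, before flashing, check the image isn’t corrupted – compare the digest against the one published on the Raspberry Pi downloads page (the filename matches the image used above):

```shell
# Print the SHA-256 of the downloaded image, if present,
# to compare against the published checksum.
img=~/Downloads/2016-09-23-raspbian-jessie.img
if [ -f "$img" ]; then
  shasum -a 256 "$img"
fi
```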

Log in.

sudo raspi-config

Expand the file system, enable the camera (and SPI if using a TFT), and set it to boot to desktop, logged in.

Update everything

sudo apt-get update && sudo apt-get upgrade

Set up wifi

 sudo pico /etc/wpa_supplicant/wpa_supplicant.conf
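What to add: a network block for each wifi network. A minimal example (SSID and passphrase are placeholders; the existing lines at the top of the stock file stay as they are):

```
network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
```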

Add drivers

sudo pico /etc/modules
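The module to add for the camera is the Raspberry Pi V4L2 driver – one line (anything already listed in /etc/modules stays):

```
bcm2835-v4l2
```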

Add the V4L2 video driver options (so Chromium picks up the camera): argh

sudo nano /etc/modprobe.d/bcm2835-v4l2.conf
options bcm2835-v4l2 gst_v4l2src_is_broken=1

Argh: USB audio

sudo pico /boot/config.txt 

#dtparam=audio=on ## comment this out
sudo pico /lib/modprobe.d/aliases.conf
#options snd-usb-audio index=-2 # comment this out
pico ~/.asoundrc
defaults.pcm.card 1;
defaults.ctl.card 0;
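If sound goes to or comes from the wrong device, `aplay -l` (playback) and `arecord -l` (capture) show which ALSA card number each device got – those are the numbers used in ~/.asoundrc above. Here’s a small helper (my own sketch, not part of the original setup) to pull out just the card numbers:

```shell
# card_numbers: read `aplay -l` / `arecord -l` style output on stdin
# and print the distinct ALSA card numbers mentioned.
card_numbers() {
  grep '^card' | sed 's/^card \([0-9][0-9]*\):.*/\1/' | sort -u
}

# On the Pi:  aplay -l | card_numbers
```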

Add mini tft screen (see http://www.spotpear.com/learn/EN/raspberry-pi/Raspberry-Pi-LCD/Drive-the-LCD.html )

curl -O http://www.spotpear.com/download/diver24-5/LCD-show-160811.tar.gz
tar -zxvf LCD-show-160811.tar.gz
cd LCD-show/
sudo ./LCD35-show

Rename the bot

sudo pico /etc/hostname
sudo pico /etc/hosts

You may need to enable camera again via sudo raspi-config

Add autostart

pico ~/.config/lxsession/LXDE-pi/autostart
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@xscreensaver -no-splash
@xset s off
@xset -dpms
@xset s noblank
#@v4l2-ctl --set-ctrl=rotate=270 # if you need to rotate the camera picture
@/bin/bash /home/pi/start_chromium.sh
pico start_chromium.sh
#@rm -rf /home/pi/.config/chromium/
/usr/bin/chromium-browser --kiosk --disable-infobars --disable-session-crashed-bubble --no-first-run https://your-server:8443/bot.html#$myrandom &
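For reference, a sketch of the whole start_chromium.sh. The random URL fragment is an assumption about what $myrandom is for (a fresh room id per boot), and your-server is a placeholder:

```shell
#!/bin/bash
# Sketch of start_chromium.sh ("your-server" is a placeholder).
# Uncomment the next line to clear Chromium's state on each boot:
# rm -rf /home/pi/.config/chromium/

# A random URL fragment per boot (assumption: this is what $myrandom
# in the original line provides).
myrandom=$(od -An -N4 -tx4 /dev/urandom | tr -d ' \n')

/usr/bin/chromium-browser --kiosk --disable-infobars \
  --disable-session-crashed-bubble --no-first-run \
  "https://your-server:8443/bot.html#${myrandom}" &
```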

Assemble everything:

  • Connect the USB hub to the Raspberry Pi
  • Connect the Jabra to the USB hub
  • Attach the camera and TFT screen

On the server

Add keys for login

mkdir ~/.ssh
chmod 700 ~/.ssh
pico ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Install and configure Apache (I used this guide for letsencrypt)

sudo apt-get install apache2
sudo mkdir -p /var/www/your-server/public_html
sudo chown -R $USER:$USER /var/www/your-server/public_html
sudo chmod -R 755 /var/www
nano /var/www/your-server/public_html/index.html
sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/your-server.conf
sudo nano /etc/apache2/sites-available/your-server.conf
<VirtualHost *:80>
        ServerAdmin webmaster@localhost
        ServerName your-server
        ServerAlias your-server
        ErrorLog ${APACHE_LOG_DIR}/your-server_error.log
        CustomLog ${APACHE_LOG_DIR}/your-server_access.log combined
        RewriteEngine on
        RewriteCond %{SERVER_NAME} = your-server
        RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>
sudo a2enmod rewrite # needed for the RewriteRule above
sudo a2ensite your-server.conf
sudo service apache2 restart

Add certs

You can’t skip this part – Chrome and Chromium won’t work without https

sudo apt-get install git
sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
cd /opt/letsencrypt
sudo ./letsencrypt-auto --apache -d your-server
sudo mkdir -p /var/log/lets-encrypt

Auto-renew certs

sudo /opt/letsencrypt/letsencrypt-auto renew >> /var/log/lets-encrypt/le-renew.log
crontab -e
# m h  dom mon dow   command
30 2 * * 1 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/lets-encrypt/le-renew.log

Get and install the EasyRTC code

Install node

curl -sL https://deb.nodesource.com/setup | sudo bash -

sudo apt-get install -y nodejs

Install the easyrtc api

cd /var/www/your-server/
git clone https://github.com/priologic/easyrtc

Replace the server part with my version

cd easyrtc/server
rm -r *
git clone https://github.com/libbymiller/libbybot.git
cd ..
sudo npm install

Run the node server

nohup node server.js &
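nohup works, but the server won’t survive a reboot. An alternative (my suggestion, not part of the original instructions) is a small systemd unit – the paths assume the install location used above:

```
[Unit]
Description=EasyRTC presence robot server
After=network.target

[Service]
WorkingDirectory=/var/www/your-server/easyrtc
ExecStart=/usr/bin/node server.js
Restart=always

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/easyrtc.service, then sudo systemctl enable easyrtc && sudo systemctl start easyrtc.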


Boot up the Pi, and on your other machine go to your server in Chrome.

When the Pi boots up it should go into full-screen Chromium at https://your-server:8443/bot.html. On the Pi there should be a prompt to accept audio and video – you need to accept that once, and then it’ll work.


Troubleshooting

Camera light doesn’t go on

Re-enable the camera using

sudo raspi-config

No video

WebRTC needs a lot of ports open. With this config we’re just using some default STUN and TURN ports. On most wifi networks it should work, but on some restricted or corporate networks you may have trouble. I’ve not tried running my own TURN servers, which in theory would help with this.

No audio

I find linux audio incredibly confusing. The config above is based around this answer. YMMV especially if you have other devices attached.

Working from home

A colleague asked me about my experiences working from home so I’ve made a few notes here.

I’m unusual in my department in that I work from home three or four days a week, and one or two in London, or very occasionally Salford. I started off in this job on an EU-funded project where everyone was remote, and so it made little difference where I was physically as long as we synced up regularly. Since then I’ve worked on multiple other projects where the other participants are mostly in one place and I’m elsewhere. That’s made it more difficult, but also, sometimes, better.

A buddy

Where everyone else is in one place, the main thing I need to function well is one or more buddies who are physically there, who remember to call me in for meetings and let me know anything significant that’s happening that I’m missing because I’m not physically there. The first of these is the most important. Being remote you are easily forgettable. Without Andrew, Dan, Joanne, Tristan, and now Henry and Tim, I’d sometimes be left out.

IRC or slack

I’ve used IRC for years for various remote things (we used to do “scheduled topic chats” 15 years ago on freenode for various Semantic Web topics), and I like the various bots that keep you informed and help you share information easily – loggers and @Edd’s “chump” in particular, but also #swhack bots of many interesting kinds. I learned a huge amount from friends in W3C who are mostly remote from each other and who have made lots of tools and bots to help them manage conference calls over many years.

Recently our team have started using Slack as well as IRC, so now I’m on both. Slack means that a much more diverse set of people are happy to participate, which is great. It can be very boring working on your own, and these channels make for a sense of community, as well as being useful for specific timely exchanges of information.

Lots of time on organisation

I spend a lot of time figuring out where I need to be and making decisions about what’s most important, and what needs to be face to face and what can be a call. Also: trying to figure out how annoying I’m going to be to the other people in a meeting, and whether I’m going to be able to contribute successfully, or whether it’s best to skip it. I’ve had to learn to ignore the fomo.

I have a text based todo list, which can get a little out of control, but in general has high level goals for this week and next, goals for the day, as well as specific tasks that need to be done on any particular day or a particular time. I spend a little time each morning figuring these out, and making sure I have a good sense of my calendar (Dan Connolly taught me to do this!). In general, juggling urgent and project-managery and less-urgent exploratory work is difficult and I probably don’t do enough of the latter (and I probably don’t look far enough ahead, either). I sometimes schedule my day quite concretely with tasks at specific times to make sure I devote thinking time for specific problems, or when I have a ton to do, or a lot of task switching.

Making an effort not to work

Working at home means I could work any time, and having an interesting job means that I’d probably quite enjoy it, too. There’s a temptation to do the boring admin stuff in work and leave the fun stuff until things are quieter in the evenings or at the weekend. But I make an effort not to do this, and it helps that the team I work in don’t work late or at weekends. This is a good thing. We need downtime or we’ll get depleted (I did in my last job, a startup, where I also worked at home most of the time, and where we were across multiple timezones).

Weekends are fairly easy to not work in, evenings are harder, so I schedule other things to do where possible (Bristol Hackspace, cinema, watching something specific on TV, other technical personal projects).

Sometimes you just have to be there

I’m pretty good at doing meetings remotely, but we do a lot of workshops which involve getting up and doing things, writing on whiteboards etc. I also chair a regular meeting that I feel works better if I’m there. When I need to be there a few days, I’m lucky enough to be able to stay with some lovely friends, which means it’s a pleasure rather than being annoying and boring to not be at home.

What I miss and downsides

What I miss is the unscheduled time working or just hanging out with people. When I’m in London my time is usually completely scheduled, which is pretty knackering. Socialising gets crammed into short trips to the pub. The commute means I lose my evening at least once a week and sometimes arrive at work filled with train-rage (I guess the latter is normal for anyone who commutes by rail).

Not being in the same place as everyone else day to day means that I miss some of the upsides and downsides of being physically there, which are mostly about spontaneity: I never get included in ad-hoc meetings, so I have more time to concentrate but also miss some interesting things; and I don’t get distracted by fun or not-fun things – bad moods in the organisation and gossip, but also impromptu games, fun trips out etc.

And finally…

For me, working from home in various capacities has given me opportunities I’d never have had, and I’m very lucky to be able to do it in my current role.

Wifi-connect – quick wifi access point to tell a Raspberry Pi about a wifi network

This is all Andrew Nicolaou‘s work. I’m just making a note of it here so others can have a go.

An important part of Radiodan is the way it simplifies connecting a device to a wifi network. The pattern is more common now for screenless devices – Chromecast uses it and ESPs have code patterns for it.

The idea is that if it can’t find a known wifi network, the device creates its own access point, you connect to it on a different device such as a phone or laptop, it pops up a web page for you and you add in the wifi details of the network nearby that you want it to connect to.

Andrew, Dan Nuttall and Chris Lowis wrote the original code – which I wrote up here – and then recently Andrew investigated Resin’s approach, which seems to be more reliable. Resin uses their own platform and Docker images which we’re not using, so Andrew un-dockerised it, and has recently rolled it into the new iteration of Radiodan that we’re working on.

If you want to use it in your own project without Radiodan, here are some instructions. It uses a branch of the Radiodan provisioning code, but just installs the relevant pieces and doesn’t delete anything.

First make sure you have a wifi card with the right chipset – or a Pi3 (scroll down for special Pi3 instructions). Then:

Provision an SD card (this is on Mac OS X)

diskutil list
diskutil unmountDisk /dev/disk2
sudo dd bs=1m if=~/Downloads/2016-02-09-raspbian-jessie.img of=/dev/rdisk2

Put it in the Pi, login, expand the filesystem, reboot and login again.

Checkout the Radiodan code and provision the relevant parts.

sudo apt-get update -y && sudo apt-get upgrade -y
git clone https://github.com/radiodan/provision.git
cd provision
git fetch origin
git checkout -b minimal origin/minimal
sudo ./provision iptables node wifi-connect

Reboot. Wait a minute or two and you’ll see a wifi access point called “radiodan-configuration”. Connect to it and a browser window will pop up. Select the wifi network you want to connect the Pi to, add the password and save. Connect back to the wifi network you selected for the Pi and you should be able to ssh to it at pi@raspberrypi.local.

For a Raspberry Pi 3, you’ll need a couple of tweaks to make it work with the built-in wifi:

sudo apt-get install raspi-config
sudo BRANCH=next rpi-update

IoT Semantic Interoperability IAB workshop summaries

Danbri drew my attention to this IoT Semantic Interoperability IAB workshop and I thought I’d spend a couple of hours skimming the papers to pick out themes as I’ve done before.

These are all my own opinions, reflecting my own interests; and the summaries of individual papers are very short.


As far as I can see, the issues are basically these:

  • Implicitly from these papers, consumers don’t want to buy into a particular ecosystem. So interop is necessary between devices (or is there another reason for interop?)
  • This raises a whole load of interoperability questions (interop at which level? protocol or data model? manufacturer implementation built-in interop or gateway model?)
  • Security becomes a huge problem with interop [draft-farrell-iotsi-00.txt, IoT-Security-SI.pdf]
  • Various established standards orgs want to be involved and contribute
  • A couple of older protocols with widely deployed standards and certification want to know how to adapt to an IP world

A few things I noticed:

Perhaps there are some learnings from previous IoT-type implementations.

A few standards keep coming up, all being looked at in the IETF I think –

  • YANG modelling language – CORE WG in IETF
  • CoAP – REST for small devices – IETF again
  • HATEOAS – “Hypermedia as the Engine of Application State” – links for actions

and some concepts / technologies:

  • “method” “signal” “property”
  • Events, Actions, and Properties
  • streams of data
  • object-based modelling
  • XML
  • json / json-ld

What’s the business model here? Since standardisation at some level increases commodification, where does the money come from?

A few lines on each paper

I looked at the accepted papers in the zip file so there were a few missing from the complete list.

1. Gadgets and Protocols Come and Go, Data Is Forever

J. Arkko, Ericsson

A brief summary of current standards in various organisations and then a discussion of some of the architectural and security issues.

“Security models that enable users to secure their data in appropriate ways, while granting rights for specific parties to access parts of the data. Or for a specific duration.”

“Also, it would be useful to change the focus of standards efforts to look more at the data than transport, for instance in current IETF working groups.”

2. Noise in specifications hurts

C. Bormann, Universitaet Bremen TZI

An argument for using a particular format, based on CBOR (http://cbor.io), for human-readable schemas.

3. YANG as the Data Modelling Language in the IoT space

Benoit Claise, Cisco Systems

“I hope that the industry will standardize on a single data modeling language for a particular technology (like IoT), so that no mapping between data models would be required.”

“Summary: you should really use YANG as the data modeling language in the IoT space”

4. The ZigBee Cluster Library over IP

Robert Cragie

This is a manufacturer-oriented library of commands for specific types of radio-controlled devices; the commands relevant to a device can be profiled, extended, or have new commands added.

“a ‘cluster’ is a group of functionally related attributes and commands”

“The ZigBee Alliance formed a working group with a working name of “ZCL-over-IP” to undertake the work of mapping the ZCL to an equivalent protocol, which can be used effectively with the IP suite.”

They considered UDP and REST, but the preferred option is “Use transaction and data representations typically in use in conjunction with IP and map the existing ZCL to these transactions and data representations”.

As an example:

“ZigBee Smart Energy clusters were modeled in UML and then a completely new protocol based on HTTP and XML was produced”

5. Fairhair: interoperable IoT services for major Building Automation and Lighting Control ecosystems

Dee Denteneer, Michael Verschoor, Teresa Zotti. Philips Lighting.

Building Automation and Lighting Control – the goal is a standard: “Fairhair is built on the belief that the established BA&LC ecosystems have a major asset in their mature data model; an asset which can be largely maintained when these ecosystems transition to the IP and IoT domain while the opportunity for differentiation at the networking layer will gradually disappear.”

The Fairhair framework is expected to specify at least the following services and concepts:

  • a generic model description of a domain model in terms of web resources and a mapping of elements in this data model (e.g., objects) to URIs
  • mapping of the existing methods to interact with elements in the data model (e.g., to write a property value) to RESTful interaction methods as defined by CoAP.
  • “IoT friendly” data encoding formats, such as JSON or CBOR, instead of ecosystem specific encoding formats
  • a scalable mechanism for device and service discovery, independent on the ecosystem’s specific semantics
  • other orthogonal application services, related to the security model (e.g., supporting authorization, secure unicast and multicast communication) and network management.

6. Object Oriented Approach to IoT Interoperability


Device updates:

“When discussing runtime binding, the main question is: do devices really need to have á priori knowledge of the types/classes of other devices with which they attempt to interoperate with? Or does it suffice for a device only to discover a set of permissible actions and properties supported by the other device?”

High level, object-orientated approach:

“Objects: Objects/Subjects in a sentence or phrase and corresponding to nouns Properties: Nouns (they could be other objects) or adjectives Behaviors: Verbs; actions that an object can do or allows to be done to it. In Service Oriented Architecture, these are called Services Adverbs: Parameters for Behaviors”

An implementation: XML to define NMM

7. Interoperability and the OpenDOF Project

Bryant Eastham, President and Technical Committee Chair, OpenDOF Project 

“OpenDOF” – “The OpenDOF Project considers interoperability a core principle. It has had a huge impact on its design and implementation. We provide an open repository where semantic definitions of all kinds can be shared and provide the system for others to do the same, all to increase interoperability”

“The definition of interoperability that we presented in the Introduction referred to a common understanding of an action between a requestor and potential provider. In secure systems this common understanding must extend to the security configuration of the system(s) involved.”

8. It’s Often True: Security’s Ignored (IOTSI) – and Privacy too.

S. Farrell, Trinity College Dublin; A. Cooper Cisco

“1. Don’t forget that the user owns the device and, arguably, the data produced related to that device.

2. Don’t forget that the device needs to be updated and that the vendor will end-of-life the device, but the above still needs to be remembered.

3. Don’t forget that while we can secure information elements in transit and in storage, that will always be imperfect and information will leak out.

It is worth noting that the IOTSI call for submissions itself did ignore all of these issues.”


9. Overview of IoT semantics landscape

Christian Groves (Christian.Groves@nteczone.com) Lui Yan (scarlett.liuyan@huawei.com) Yang Weiwei (tommy@huawei.com) Huawei Technologies

Survey of existing ontologies suitable for this purpose and some recommendations – that they be open, recommend which ones to use. Appendix is a list of ontologies that might be relevant.

10. Loci of Interoperability for the Internet of Things

Ted Hardie Google

“In the absence of a common systems engineering approach to specify where different types of rule are applied, nodes cannot know where the data they supply will be interpreted, or even that it will be interpreted only once. Contextualizing the data they send will increase the likelihood that it can be interpreted correctly. That contextualization should reference the most primitive possible schema or data model that results in a correct understanding, in order to increase further the chance of correct interpretation and to avoid leakage of unnecessary data about the system to observers.”

– Interesting that it references the local and remote aspects. I’m not sure about the argument that it reduces the information leakage of a node, though.

11. IPSO Smart Objects

Jaime Jimenez, Michael Koster, Hannes Tschofenig, Ericsson

“The data model for IPSO Smart Objects consists of four parts: 1) Object Representation 2) Data Types 3) Operations 4) Content Formats”

“Objects and resources are implicitly mapped into the URI path hierarchy by following the OMA LWM2M object model, in which each URI path component sequentially represents the Object Type ID, the Object Instance ID and the Resource Type ID”

Contains an XML example that for me, brings back unpleasant memories of SOAP.

12. IOTDB ­ Interoperability through Semantic Metastandards

David Janes

A lot of assertions, not much argument (separating out into a semantic model and state, all json-ld and REST). Implementation and schemas.

15. SenML: simple building block for IoT semantic interoperability

Ari Keränen ari.keranen@ericsson.com Cullen Jennings fluffy@cisco.com

“SenML provides a simple model for retrieving data from sensors and controlling actuators. It provides minimal semantics for the data inline and allows for more metadata with in-line extensions and links.”

  • designed for low power, low capacity and processor devices
  • being standardised in IETF CORE


[{ "n": "urn:dev:ow:10e2073a01080063", "v":23.1, "u":"Cel" }]

17. SmartThings

M. Koster

“[W3C-WoT] model: a connected Thing is defined by and interacted with through its Events, Actions, and Properties.”

“Semantic interoperability may be achieved through common definitions of cross domain meta models [e.g. schema.org instances] and domain specific models [i.e. vendor defined programming models], and shared vocabulary to describe the events, actions, and properties of connected things.”

18. Semantic Interoperability Requires Self­describing Interaction Models HATEOAS for the Internet of Things

Matthias Kovatsch Siemens AG / ETH Zurich; Yassin N. Hassan ETH Zurich; Klaus Hartke Universität Bremen TZI

Proposes semantic links and semantic forms so you get some of the nice characteristics of links (e.g. bookmarkability). The focus is on the developer as user.

19. A Pragmatic Approach to Interoperability in the Internet of Things

Kai Kreuzer, Deutsche Telekom AG Kai Kreuzer

  • open source, consideration of end-user usecases and so a functional perspective
  • semantics using tags based on ontologies

20. AllJoyn / AllSeen standards org

Marcello Lioy

Just consists of 3 links:
data model guidelines

XML-based. Usecases are IFTTT, but mostly code generation for devs.
“method”, “signal”, “property” (with access parameters read/write).
Without looking too closely, it looks something like Java basics in XML.

21. Modeling RESTful APIs with JSON Hyper-Schema

K. Lynn, L. Dornin, Verizon Labs

An actual usecase!

“The central problem in an IoT domain such as home control might be characterized as “translating intention into configuration”. The challenge is to translate a high level goal such as “turn off all the lights on the first floor”, expressed in a natural language, into

The remainder of the document is some examples of JSON and JSON-LD defining REST interactions: “JSON Hyper-Schema”.

22. OGC SensorThings API: Communicating “Where” in the Web of Things

Open Geospatial Consortium

“a standardized open data model and application programming interface for accessing sensors in the WoT and IoT”

– an ontology for sensors.

Streams of data, sensors and their properties – datamodel is here.

23. IoT Information Model Interoperability An Open, Crowd-Sourced Approach in Three Parallel Parts

Jean Paoli, Taqi Jaffri, Microsoft

  • argues for separated protocols and schemas
  • thinks crowd-sourcing from schemas to devices would be a way to build bridges

24. OMA Lightweight M2M Resource Model

Author: Joaquin Prador – OMA Technical Director

“This paper gives an introduction to standard developed at the Open Mobile Alliance (OMA), Lightweight Machine to Machine (LWM2M). LWM2M provides several interfaces built on top of Constrained Application Protocol (CoAP) to perform management of a wide range of remote embedded devices and connected appliances in the emerging Internet of Things, to perform remote service enablement and remote application management.”

1) Bootstrap 2) Device Discovery and Registration 3) Device Management and Service Enablement 4) Information Reporting

Uses CoAP and DTLS (the latter for security)

Datamodel: id, name, operations [read / write / execute], instances, type, range or enumeration, units, description

26. Semantic Overlays Over Immutable Data to Facilitate Time and Context Specific Interoperability

Pete Rai – Principal Engineer – Cisco Stephen Tallamy – Engineering Architect – Cisco


“apply semantic interoperability layers over-the-top, as and when they are needed. This approach is specifically designed to leave the source data elements untouched and effectively immutable.”

27. Towards semantic interoperability in the IoT using the Smart Appliances REFerence ontology (SAREF) and its extensions

Jasper Roes & Laura Daniele

“SAREF is not about the actual communication with devices and has not been set up to replace existing communication protocols, but it lays the base for enabling the translation of information coming from existing (and future) protocols to and from all other protocols that are referenced to SAREF.”

Took a survey of existing models, transformed them to RDF/OWL, and created a reference ontology.

The idea is to map existing data models and protocols together.

29. Implementation Experiences of Semantic Interoperability for RESTful Gateway Management

Bill Silverajan Tampere University of Technology; Mert Ocak, Ericsson; Jaime Jiménez, Ericsson

“a gateway needs to be introduced into the communication architecture that bridges between especially IP network with semantic data models and non­IP short range radio technologies with proprietary data models” [BLE, ZigBee]

“Integrating such proprietary data models to the network requires the gateway to translate between the data models. This translation is done using proprietary methods in most of the current gateway implementations and hence, creates silos between different gateway manufacturers.”

“Surprisingly, many of the organizations are creating similar application semantics that, in practice, only differ on the vocabulary used”

Uses “Hypermedia As The Engine of Application State (HATEOAS)”

30. Key Semantic Interoperability Gaps in the Internet-of-Things Meta-Models

Ned Smith Intel; Jeff Sedayao Intel; Claire Vishik Intel

“Semantic interoperability of IoT depends heavily on a flexible, simple yet effective meta-model. A tag- value model such as that proposed by Project-Haystack appears to satisfy these criteria, but not fully.”

“Security management interoperability appears to be the most significant set of functionality that should be common across all IoT networks”

suggests use of blockchain for authorities.

"<tag> <ontology> <authority> <blockchain>"

Can’t say I understood everything in this one – I guess I’m missing a lot of context.

31. Open Connectivity Foundation oneIoTa Tool

J. Clarke Stevens

Interesting collaborative tool for creating interop between IoT systems.

32. Derived Models for Interoperability Between IoT Ecosystems

J. Clarke Stevens, Piper Merriam

OCF – Derived Models for Interoperability Between IoT Ecosystems_v2-examples.pdf

“The Open Connectivity Foundation’s (OCF) oneIoTa tool is essentially a web-based, Integrated Development Environment (IDE) for crowd-sourcing data models for the Internet of Things”

Some examples of simple and complex conversions using rules.

33. Semantic Interoperability in Open Connectivity Foundation (OCF)

Ravi Subramaniam, Open Connectivity Foundation (OCF)

“The OCF approach is Resource-oriented with a peer to peer RESTful architecture. The approach also follows a declarative paradigm which requires the explicit definition of information, data, semantics and objectives – these declarative statements are bound to imperative actions in a late-binding manner.”

– standards org with various working groups for interop, large companies.

34. IoT Security in the context of Semantic Interoperability

Darshak Thakore, CableLabs

“can the semantic information about a model also include its security characteristics as a first class member?”

35. IoT Bridge Taxonomy

Dave Thaler, Microsoft

Assuming heterogeneity in protocols and schemas how can we achieve interoperability? (protocol and schema bridges)

“In general, we believe that bridges should use specific schema bridges for known data models (which we call “static schema bridges”), and fall back to using a dynamic schema bridge when no specific schema bridge is found for a discovered resource.”

36. Summary of AllSeen Alliance Work Relevant to Semantic Interoperability

Summary written by Dave Thaler, Microsoft

An explanation of the alljoyn standards, some taken from the website.

37. Internet of things: Toward smart networked systems and societies The Ontology Summit 2015

Mark Underwood (Krypton Brothers, Port Washington, NY, USA), Michael Gruninger (University of Toronto, Canada), Leo Obrst (The MITRE Corporation, McLean, VA, USA), Ken Baclawski (Northeastern University, Boston, MA, USA), Mike Bennett (Hypercube Ltd, London, UK), Gary Berg-Cross (Knowledge Strategies, Washington, DC, USA), Torsten Hahmann (University of Maine, Orono, ME, USA), Ram Sriram (NIST, Gaithersburg, MD, USA)

“Communiqué of the Ontology Summit 2015” – what ontologies could do for IoT interop.

“A critical obstacle in the widespread adoption/application of ontologies to earth science and sensor systems is the lack of tools that address concrete use cases. Developers will need to focus on those tools and techniques that support the deployment of ontologies in IoT applications.”

38. YANG-Based Constrained Management Interface (CoMI)

Peter van der Stok, vanderstok.org; Andy Bierman, yumaworks.com

Proposal for adapting the YANG data modelling language for low-power, low-connectivity connected devices. The model is shared by client and server before deployment. YANG is flexible enough to express the data models required. Being adapted by IETF CORE WG for use with CoAP.

?? Submission for IAB IoT Semantic Interoperability workshop 2016


“Our goal is to work with IOT vendors and schema.org to create interoperable schemas that can be absorbed by a range of intelligent cloud services and local apps.”

Beacons, Brillo, a hub / gateway and cloud platforms.

“We are still at an early stage of identifying commonalities (requirements) and thinking about interoperability between efforts by Google/Alphabet teams”

“how can we find a good balance between usability and flexibility (complexity) e.g. in terms of nesting common elements vs. precision and size of schema without nesting”

Olimex ESP 8266 dev with Arduino IDE

Bits and pieces of this are everywhere but I’ve not found it all in one place. The ESP Thing docs are excellent and mostly apply to the Olimex too, but there are some subtleties in the setup. I’ve put what I did here so I remember how to do it again.

I had a few ESPs I’d bought in May and not used – these ones from Olimex. I read recently that you can now use ESPs from the Arduino IDE, so I thought I’d give it a go.

The goal is to be able to put some code that uses wifi on the ESP using the Arduino IDE. The basic flow is:

  • Buy ESP and an FTDI USB to Serial cable, plus a small breadboard and some male-male jumper wires, and a cheap voltage regulator
  • Solder the ESP’s legs on
  • Put it in the breadboard
  • Connect it up to the FTDI cable
  • Put it in FTDI mode(?)
  • Plug it in to USB on a laptop and test it using the serial monitor on the Arduino
  • Download the Arduino ESP environment and load it into the IDE
  • Put test code on the ESP and check it worked

Buy ESP, FTDI and other bits

I think if I were buying them now I'd get some of these ESP Things, as they have a bunch of nice features (though not many GPIOs), but their excellent docs seem to apply to the ones I have too.

Edit – I was under the misapprehension that FTDI cables could supply 3.3V. No idea why, unless I misread the labels for some of the other outputs. But anyway: the one I bought doesn't. You can't use a voltage divider to get the 3.3V you want either, because a divider only holds its output voltage for loads that draw a tiny amount of current, and the ESP draws far more than that. But you can get a voltage regulator (e.g. for 99p at Maplin) which gives you 3.3V from 5V happily enough.
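To see why the divider fails, here's a quick back-of-the-envelope calculation (Python; the resistor values are illustrative ones I picked to give ~3.3V unloaded, and the ~80mA transmit current is a rough figure – the exact numbers don't matter, only the collapse):

```python
# A resistive divider collapses under a real load.
VIN = 5.0
R1, R2 = 1000.0, 1940.0          # ohms: divides 5 V down to ~3.3 V unloaded

v_unloaded = VIN * R2 / (R1 + R2)            # ~3.3 V with nothing attached

# An ESP8266 can draw roughly 80 mA at 3.3 V when transmitting,
# i.e. it looks like a ~41 ohm load -- far smaller than R2.
r_load = 3.3 / 0.080

# The load sits in parallel with R2, dragging the output down.
r2_eff = R2 * r_load / (R2 + r_load)
v_loaded = VIN * r2_eff / (R1 + r2_eff)      # collapses to well under 1 V

print(round(v_unloaded, 2), round(v_loaded, 2))
```

A regulator, by contrast, actively holds 3.3V regardless of (reasonable) load current, which is why it's the right tool here.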

5V will fry your ESP (though actually it didn’t fry mine), so do this or use a separate power supply (e.g. 2 AA batteries will work).

Solder the ESP’s legs on

I think I soldered my ESP's legs on upside down, because the names of the pins ended up on the underside; conveniently, though, it's the same way up as the Fritzing component. Here's the pinout document I eventually found. Here are the pin names as written on the board from the perspective I needed (some of them differ from their internal names, no idea why):


Put it in the breadboard

Put it across the middle, like this:


Connect it up to the FTDI cable

Like this:


The ESP will be powered from the FTDI (red jumper). Ground across the board to GPIO0 (21) puts it in FTDI mode (black jumpers). Yellow is RX, orange is TX.

Put it in FTDI mode

The Olimex docs suggest that you need to solder and desolder the DIP switches, the three little fellas between pins 3 and 20 in the diagram above. What worked for me was to leave them as they came (which was as in the diagram above, which I think is 0 0 1). I'm fairly confused about why this worked, as the Olimex site says it comes in Flash mode, so maybe I have it upside down or something. But it did work.

Plug it in to USB and test it using the serial monitor on the Arduino

You can apparently use screen for this, but I couldn't get it to respond with “OK”.

screen /dev/tty.usbserial-A402O05B 115200

(Your serial port name will differ).

But, opening up the Arduino IDE and then the serial monitor (selecting the correct port) did work. The baud rate is 115200 – follow the link above for the commands and process.
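For reference, the sanity checks I mean are a few of the standard ESP8266 AT commands (this assumes your board shipped with the stock Espressif AT firmware; in the Arduino serial monitor, set the line-ending dropdown to “Both NL & CR” or the commands will be ignored):

```text
AT          should answer OK
AT+GMR      prints the AT firmware version
AT+CWLAP    lists nearby access points
```

If you get garbage instead, the baud rate is usually the culprit; some older AT firmwares default to 9600 rather than 115200.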

Download the Arduino ESP environment and load it into the IDE

These instructions worked very well for me. Make sure you actually select the board, or else you'll get strange compile errors and end up down a rathole of installing libraries manually (there's no need, and they'll conflict).
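In case that link rots: the route that worked for me matches the standard ESP8266 Arduino core setup – paste the boards manager URL below into Preferences → Additional Boards Manager URLs, then install “esp8266” from Tools → Board → Boards Manager (check the esp8266/Arduino project's README if the URL has moved):

```text
http://arduino.esp8266.com/stable/package_esp8266com_index.json
```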

Then connect up the ground to GPIO0 (21) if you haven’t already, else you’ll get errors uploading like this:

warning: espcomm_send_command: wrong direction/command: 0x00 0x08, expected 0x01 0x08

Put test code on the ESP and check it worked

I used this code example, with a small edit to the name so I could actually find my board on phant. It worked!
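If you want something even more minimal than the phant example, a bare wifi-connection sketch looks roughly like this (the SSID and password are placeholders; it assumes the esp8266 board package from the step above, so it only compiles in the Arduino IDE with the board selected):

```cpp
// Minimal connectivity test for the ESP8266 Arduino core.
#include <ESP8266WiFi.h>

const char* ssid     = "your-ssid";      // placeholder: your network name
const char* password = "your-password";  // placeholder: your network password

void setup() {
  Serial.begin(115200);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {  // wait for the join to complete
    delay(500);
    Serial.print(".");
  }
  Serial.println();
  Serial.print("Connected, IP: ");
  Serial.println(WiFi.localIP());
}

void loop() {
  // nothing to do; the IP printed once in setup() is the whole test
}
```

Remember to disconnect GPIO0 from ground and reset after flashing, otherwise the board boots straight back into the bootloader instead of running your sketch.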