
Real-Time Device Communication, Part 1

by Exosite

The Problem

For its entire existence, the Internet has been built around the notion of clients and servers. Clients request a resource from a server, and the server responds to the client. This works great for traditional browsing of websites, because the servers have all the information that the clients want and the clients know when the user wants it.

Unfortunately, the Internet of Things (IoT) doesn't fit into this model very well. There is still a client (the Internet-connected product) and a server (the cloud service), but with embedded devices there isn't a user pushing a refresh button when they expect something to change. This means the device has to constantly ask the server if anything has changed, a task known as polling. The problem with polling is that it wastes a lot of data and adds a large amount of load to the servers.
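To make the pattern concrete, here is a minimal sketch of a naive polling loop in Python. Everything here is illustrative, not part of any Exosite API: `fetch` stands in for one request to the server, `on_change` for whatever the device does with a new value.

```python
import time

def poll(fetch, on_change, interval_s=30, max_polls=None):
    """Naive polling loop: call fetch() every interval_s seconds and invoke
    on_change whenever the returned state differs from the last one seen.
    fetch is any zero-argument callable returning the current state;
    max_polls bounds the loop for testing (None = poll forever)."""
    last = None
    polls = 0
    while max_polls is None or polls < max_polls:
        state = fetch()          # one full request to the server, every time
        if state != last:        # the vast majority of responses are "no change"
            on_change(state)
            last = state
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(interval_s)
    return polls
```

Note that every iteration costs a full request/response round trip whether or not anything changed; that waste is exactly the problem described above.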

The most obvious solution would be to swap the roles so that the cloud service is the client and the device is the server. The cloud service could then make a request to the device-server to tell it that a resource changed. This would be a great solution if we didn't need to consider security. Having an open server on a low-powered device that, let's be honest, probably won't be getting timely security updates when a vulnerability is found is a great way to let an attacker into your network.

Not to mention that in this day and age, it's actually pretty hard to get a persistent open socket on the heterogeneous networks of the world. Maybe your device is getting deployed on a network behind a NAT router or firewall that will let you make a UPnP request to open a port. Maybe your device needs to use PMP or even PCP. Maybe you're on a network that won't let you open a port at all, and you'll need to rely on punching a hole using something like ICE or STUN and TURN with the help of an external server. Then what happens if there is more than one layer, and you have to support whatever protocol that layer speaks as well?

[Image: how-standards-proliferate-iot]

What makes this even harder is that the client in these situations is likely to be a very small embedded device with only a couple of kilobytes of RAM, a couple dozen kilobytes of flash, and a single thread of execution. If you want a solution that will work on nearly every network in the world and still fit the firmware into a small device, you're probably going to need to stick to the standard client-server relationship, in which the client polls the server for changes to resources.

Long-Polling

We have implemented several solutions to work around these issues, the first of which adds something called "long polling" to both our HTTP Data API and our Remote Procedure Call (RPC) API. You still have the standard client-server relationship, in which the client asks the server if any of the resources have changed, but we've introduced a little cheating. When the server sees that you want to make a long-polling request, it will simply wait to respond until something has changed instead of telling you that nothing has changed. This model should work with any network, because to the network it just looks like a client making a request to a server that happens to be really slow.

For this explanation I'm going to stick to the HTTP Data API, but all the same concepts apply to the wait procedure on the RPC API. In this example, we're going to mock up a really simple smart plug into which our user has plugged a lamp. The only functionality we have is to turn it on and turn it off.

To start, let's find out if we currently have the power switched on or off. We'll have our client make a request to the Exosite platform to read the contents of the power dataport. This is what a standard read request looks like on the HTTP Data API:

GET /api:v1/stack/alias?power HTTP/1.1
Host: m2.exosite.com
X-Exosite-CIK: 05a4f6b4cb7cd1ff0399915a5e8bfa96ad4e0a51
Accept: application/x-www-form-urlencoded; charset=utf-8

The server responds indicating that the power dataport has the value "off". This is the standard response:

HTTP/1.1 200 OK
Server: nginx/1.2.1
Date: Mon, 15 Dec 2014 20:51:18 GMT
Content-Length: 9
Connection: keep-alive

power=off
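For readers who prefer code to raw HTTP, here is a small Python sketch that assembles the read request shown above as bytes, ready to write to a socket. The helper name and its parameters are illustrative, and the CIK is the placeholder value from the example, not a real key.

```python
def build_read_request(host, cik, alias):
    """Assemble the raw HTTP/1.1 read request shown above.
    host, cik, and alias are your own account's values."""
    lines = [
        f"GET /api:v1/stack/alias?{alias} HTTP/1.1",
        f"Host: {host}",
        f"X-Exosite-CIK: {cik}",
        "Accept: application/x-www-form-urlencoded; charset=utf-8",
        "",  # blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_read_request(
    "m2.exosite.com",
    "05a4f6b4cb7cd1ff0399915a5e8bfa96ad4e0a51",  # placeholder CIK
    "power",
)
```

On a tiny device you would typically write these bytes straight to a TCP socket rather than pull in a full HTTP client library.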

At some point in the future, our user will want to turn the lamp on, so the smart plug will have to constantly make requests to the platform inquiring about the current state of the power dataport. Polling too often is a massive drain of resources; this simple transaction is at least 412 bytes. If you were to make that request once per second, that would add up to over 35 MB per day or 13 GB per year and that's just for a single device in the simplest application possible.

We could cut that down by only checking once every thirty seconds, but that's still at least 400 MB per year. That may not sound like a lot, but if someone eventually had one of these in every outlet in their house, that still adds up to dozens of GB a month. And is it really feasible for a light to take thirty seconds to turn on after you flip the switch?
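The arithmetic above is easy to check. A quick Python sketch, assuming the roughly 412-byte transaction estimated earlier:

```python
# Back-of-the-envelope data usage for plain polling, assuming ~412 bytes
# per request/response transaction (the estimate used in the text).
BYTES_PER_TRANSACTION = 412
SECONDS_PER_DAY = 24 * 60 * 60

def yearly_megabytes(poll_interval_s):
    """Megabytes per year for one device polling at the given interval."""
    polls_per_day = SECONDS_PER_DAY / poll_interval_s
    return polls_per_day * 365 * BYTES_PER_TRANSACTION / 1e6

# Once per second works out to roughly 13,000 MB (13 GB) per year;
# once every thirty seconds is still over 400 MB per year.
```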

This is where long polling comes to save us. To make it work, we make the following request:

GET /api:v1/stack/alias?power HTTP/1.1
Host: m2.exosite.com
X-Exosite-CIK: 05a4f6b4cb7cd1ff0399915a5e8bfa96ad4e0a51
Accept: application/x-www-form-urlencoded; charset=utf-8
Request-Timeout: 300000

The only change here is the addition of the Request-Timeout header. This header tells the platform that we want this request to use long polling and that the maximum timeout on the request should be 300,000 milliseconds. The platform will receive this request, but it won't respond right away. Instead, it will wait, either until the value in the dataport changes or until the timeout is reached.

If a timeout is reached without the dataport being written to, the platform will return the following:

HTTP/1.1 304 Not Modified
Server: nginx/1.2.1
Date: Mon, 15 Dec 2014 21:28:23 GMT
Content-Length: 27
Connection: keep-alive

HTTP/1.1 304 Not Modified

This just tells us that nothing has changed. If, instead, something does get written to the dataport before the timeout is reached, the platform will return the following:

HTTP/1.1 200 OK
Server: nginx/1.2.1
Date: Mon, 15 Dec 2014 20:51:18 GMT
Content-Length: 8
Connection: keep-alive

power=on

This should look familiar; it's exactly the same as the response to the standard read request. But unlike a standard read request, the client is getting a notification of the change in real time. By making sure that you always have a long-polling request waiting, you will be notified of any changes to that dataport almost instantly. The total delay depends on your network type and the current network load, but on most standard wired networks you should see response times under a few hundred milliseconds, which is basically imperceptible to most humans. This is much better than the thirty seconds we might have been waiting before.
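Putting it together, the client-side loop looks something like this sketch. Here `request` stands in for one HTTP read carrying a Request-Timeout header and returns a (status, body) pair; none of these names come from an Exosite library.

```python
def long_poll(request, on_change, max_requests=None):
    """Long-polling loop sketch. request() performs one read with a
    Request-Timeout header and returns (status, body). A 200 means the
    dataport changed and body carries the new value; a 304 means the
    server's timeout elapsed with no change, so we just ask again.
    max_requests bounds the loop for testing (None = loop forever)."""
    n = 0
    while max_requests is None or n < max_requests:
        status, body = request()   # blocks server-side, up to the timeout
        if status == 200:
            on_change(body)        # e.g. "power=on"
        # on 304 there is nothing to do; re-issue the request right away
        n += 1
    return n
```

The key difference from the naive polling loop is that the waiting happens on the server, so there is no sleep between requests and no flood of "nothing changed" responses.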

How about the data usage? Well, if we assume that the vast majority of the time the value isn't changing, we can say that each transaction takes somewhere around 402 bytes. And if we use the maximum timeout of five minutes, we end up somewhere around 43 MB per year. Not bad.
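That figure is easy to verify in Python, assuming roughly 402 bytes per transaction and, in the worst case of no changes at all, one request per five-minute timeout:

```python
# Long polling with the maximum five-minute timeout: the idle device
# makes one ~402-byte transaction every five minutes.
BYTES_PER_TRANSACTION = 402
REQUESTS_PER_DAY = 24 * 60 / 5  # one request per 5-minute window

yearly_mb = REQUESTS_PER_DAY * 365 * BYTES_PER_TRANSACTION / 1e6
# Works out to just over 42 MB per year.
```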

Use it Today

This long-polling feature is available in our API today. It is supported on the RPC API with a separate wait procedure (documentation) and on the HTTP Data API by including a Request-Timeout header on a read request (documentation).

The Future

This is only the tip of the iceberg when it comes to real-time communication. In our "Real-time Device Communication, Part 2" post, we'll talk about some upcoming features on our CoAP API that will let you take advantage of the data conservation of CoAP and still get the real-time communication that we discussed in this post. After that, we'll show some real-world examples and address some of the gotchas that you might run into. If there's anything specific that you'd like to see in the next post, drop me a note and I'll see what I can do.
