Hubs Become Central to the IoT
February 01, 2017
Even before real systems are widely deployed, the Internet of Things (IoT) is rushing into a period of rapid evolution. Early — and, frankly, simplistic — ideas about IoT architecture are giving way to more nuanced views, often based on analysis of data flows and on hard questions of why the IoT really matters. The result will be new architectures, leading to new silicon. We will illustrate this trend with snapshots of three new IC deployments described at last year’s Hot Chips conference.
Let’s begin with today’s concepts. Many systems designers’ first impressions of the IoT fit into one of two camps: conservatives or idealists (Figure 1). The conservatives remain focused on conventional embedded design and see the IoT as an additional layer of requirements to be slathered over their existing designs. The idealists see the IoT as an opportunity to virtualize nearly everything, drawing all tasks except physical sensing and actuating back into the cloud. Often the best solutions turn out to be linear combinations of the extremes. But these compromises will bring about the emergence of whole new categories of computing near the network edge.
Two simple ideas
Perhaps the most frequent perception of the IoT among designers of industrial, infrastructure, and aerospace systems — the heartland of embedded computing — is simply that it means more requirements. They see the IoT in terms of new functions, such as passive data-logging, remote update, or perhaps remote command capability, that require Internet connectivity.
So the first question is, obviously, how to physically connect to the Internet. If the embedded controller is at least a modest-sized board already connected to an industrial network or Ethernet, this isn’t much of a problem. But if the controller is either small — only a microcontroller unit (MCU), for example — or physically isolated, getting to the Internet can mean additional hardware: a Wi-Fi port, a Bluetooth interface, or some combination of the myriad short-range wireless links the IoT has spawned in recent years. And of course any new connection will require a wireless hub to connect to, and a protocol stack on your system.
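As a deliberately simplified sketch of that added layer, the fragment below shows a small controller formatting a sensor reading as JSON and pushing it through an ordinary HTTPS stack to a hub or cloud endpoint. The endpoint URL, field names, and the reading function are hypothetical stand-ins, not any particular product's API.

```python
# Minimal sketch of the connectivity layer an embedded controller might add.
# The endpoint URL, payload fields, and read_temperature() are hypothetical.
import json
import time
import urllib.request

ENDPOINT = "https://hub.example.local/telemetry"  # placeholder address

def read_temperature():
    # Stand-in for a real sensor driver.
    return 21.7

def post_reading():
    payload = {
        "device_id": "controller-17",      # hypothetical identifier
        "timestamp": time.time(),
        "temperature_c": read_temperature(),
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # A short timeout keeps a dead link from hanging the caller.
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

if __name__ == "__main__":
    print(post_reading())
```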
But there is another critical — and often underappreciated — layer in this incremental approach to the IoT: security. Connecting an embedded controller to the Internet, however indirectly, connects the controller to every hacker in the world, and raises a bright banner announcing, “I’m here; come probe me!” If the controller has any conceivable ability to harm persons or property, it must take responsibility for authentication, data protection, and functional safety. Even if the controller is doing nothing of importance, it still must be guarded against malware. A recent massive denial-of-service attack appears to have been launched from an enormous botnet composed at least partly of IoT-connected devices.
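What “taking responsibility for authentication” can mean in its most modest form is sketched below: the controller tags each message with an HMAC computed over a pre-shared key, and the receiving side rejects anything that fails to verify. This is only one simple scheme among many; the key handling is deliberately naive and the field names are illustrative.

```python
# Minimal message-authentication sketch using a pre-shared key and HMAC.
# The key handling here is deliberately naive; a real design would keep the
# key in protected storage and rotate it.
import hashlib
import hmac
import json

PRESHARED_KEY = b"not-a-real-key"  # placeholder; never hard-code keys

def sign(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    tag = hmac.new(PRESHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "tag": tag}

def verify(message: dict) -> bool:
    body = json.dumps(message["body"], sort_keys=True).encode("utf-8")
    expected = hmac.new(PRESHARED_KEY, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(expected, message["tag"])

msg = sign({"device_id": "controller-17", "temperature_c": 21.7})
assert verify(msg)
msg["body"]["temperature_c"] = 99.9   # tampering...
assert not verify(msg)                # ...is detected
```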
This protection is more easily prescribed than accomplished. As the international news relates nearly every week, even government agencies and global enterprises have failed to secure their systems. IoT developers are impaled on the dilemma of having to do better, but with far fewer physical resources. A hardware security module (HSM) inside an MCU seems barely adequate, yet for most small, low-cost devices it is practically unattainable today.
Difficulties notwithstanding, the great advantage of this conservative view is what it conserves. The latencies and bandwidths of data flows in the embedded system remain intact — or at least they should, if connectivity and security tasks don’t introduce new uncertainties into the system. So real-time tasks continue to meet deadlines and the transfer functions of control loops remain the same. This is an obvious benefit for a multi-axis motor controller. But it can even be valuable in a system as apparently plodding as a building’s lighting management system.
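One common way to keep those latencies intact is to make sure the new connectivity work can never block the control loop. The sketch below, with invented timings and stand-in task bodies, hands telemetry to a bounded queue that a background thread drains toward the network; if the network stalls, readings are dropped rather than deadlines missed.

```python
# Sketch: keep the control loop's timing independent of network behavior.
# Loop period, task bodies, and the send_to_cloud() stub are illustrative only.
import queue
import threading
import time

telemetry_q = queue.Queue(maxsize=64)      # bounded: overflow drops data, not deadlines

def send_to_cloud(sample):
    time.sleep(0.5)                        # stand-in for a slow or stalled network path

def uplink_worker():
    while True:
        send_to_cloud(telemetry_q.get())

threading.Thread(target=uplink_worker, daemon=True).start()

def control_step():
    return 0.0                             # stand-in for the real control computation

next_deadline = time.monotonic()
for _ in range(100):                       # 10 ms control period
    command = control_step()
    try:
        telemetry_q.put_nowait(command)    # never block the loop on the network
    except queue.Full:
        pass                               # drop telemetry, keep the deadline
    next_deadline += 0.010
    time.sleep(max(0.0, next_deadline - time.monotonic()))
```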
An ideal, lost
The idealist’s approach to the IoT is entirely different. Start with a clean sheet of paper. Draw in all the necessary sensors and actuators. Now put an Internet connection on each one, and create a cloud application to read the sensors and command the actuators. In effect, this is a completely virtual system. You can change not only operating parameters, but algorithms and even the purpose of the system simply by changing software. For industrial applications the phrase “software-defined machine” has been suggested.
But those devils in the details are legion. And most of them relate to the presence of the Internet at the heart of the system. Internet messages are subject to unpredictable delays over a wide range — including, at the extreme, forever. So a system using this ideal architecture must tolerate delayed or lost messages. This requirement is so constraining that it leads many experienced designers to reject the idealized architecture out of hand, no matter how theoretically flexible it might be.
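In practice, tolerating delayed or lost messages usually comes down to putting a deadline on every round trip and keeping a locally computed fallback, roughly as in the sketch below. The function names, timeout value, and cached setpoint are assumptions made purely for illustration.

```python
# Sketch: every cloud round trip gets a deadline and a local fallback.
# fetch_setpoint_from_cloud() and the values shown are hypothetical.
import random
import time

LAST_GOOD_SETPOINT = 20.0   # cached value from the last successful exchange

def fetch_setpoint_from_cloud(timeout_s: float) -> float:
    # Stand-in for a real request; it randomly stalls to mimic the Internet.
    delay = random.uniform(0.0, 2.0)
    if delay > timeout_s:
        raise TimeoutError("cloud did not answer in time")
    time.sleep(delay)
    return 22.5

def current_setpoint() -> float:
    global LAST_GOOD_SETPOINT
    try:
        LAST_GOOD_SETPOINT = fetch_setpoint_from_cloud(timeout_s=0.5)
    except TimeoutError:
        pass                  # fall back to the last value we trusted
    return LAST_GOOD_SETPOINT

for _ in range(5):
    print(current_setpoint())
```

And there is another issue.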
The same connectivity and security requirements that descended on our conservative embedded system still apply. The sensors and actuators still must talk with the Internet, and must still defend themselves against it. But now we are adding these demands not to a board-level computer, but to tiny sensors, solid-state relays, or motor-controller MCUs. The relative overhead is huge, and the likelihood is high that an attack will overwhelm the simple security measures a tiny, battery-powered or energy-scavenging device can mount. So what to do?
These questions have led many architects to seek a middle path, neither conservative nor idealistic. They are moving critical computing functions to an intermediate location, between the sensors and the Internet. Often this intermediate site also acts as a wireless hub.
Intermediators
The idea of moving computing to an intermediate point, often between a short-range wireless network and an Internet connection, raises many new questions. Which tasks should go where? Just how much computing power and adaptability does this smart hub require? And does this arrangement require new algorithms, or is it really just a repartitioning of a conventional embedded system?
The answers to these questions come from first finding the weakest link in the system. In this case, that link would be the public, non-deterministic, occasionally absent Internet. The object becomes to distribute tasks among local sites, the hub, and the cloud so that no latency-sensitive data flows have to traverse the Internet, and secondarily, so that computations are as close as possible to the data they consume.
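Stated as code rather than prose, the guideline might look something like the toy placement function below; the latency figures, tier names, and thresholds are invented purely to make the rule concrete.

```python
# Toy illustration of the placement rule: latency-sensitive flows never
# cross the Internet, and work sits as close to its data as possible.
# The latency estimates and tier names are invented for illustration.
WORST_CASE_LINK_MS = {"local": 1, "hub": 10, "cloud": 500}  # "cloud" includes the Internet hop

def place_task(deadline_ms: float, num_data_sources: int) -> str:
    # Rule 1: anything with a deadline tighter than the Internet's worst case
    # must stay on the near side of the Internet.
    if deadline_ms < WORST_CASE_LINK_MS["cloud"]:
        # Rule 2: stay as close to the data as possible --
        # one source fits on the node itself, several meet at the hub.
        return "local" if num_data_sources == 1 else "hub"
    # Latency-tolerant work (logging, analytics, fleet-wide optimization)
    # can cross the Internet and run in the cloud.
    return "cloud"

print(place_task(deadline_ms=5, num_data_sources=1))         # local control loop
print(place_task(deadline_ms=50, num_data_sources=6))        # coordinated control at the hub
print(place_task(deadline_ms=60_000, num_data_sources=200))  # cloud analytics
```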
If we try to follow these guidelines in practice, we will see that in some applications the conservatives are exactly right: the best solution is to keep the computing resources local, and to simply layer on connectivity and a degree of security. But we can identify at least three other interesting cases.
Enter the smartphone
One interesting case arises when there is a functional advantage to combining the operations of several nearby controllers. This situation might come up, for example, when several controllers are working on different parts of the same process, but all of them would benefit from the sensor data their neighbors are collecting. Moving the control algorithms to a wireless hub that gathers all the sensor data and controls all the actuators can allow superior control optimization.
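A hub-resident version of that control might look, in skeleton form, like the sketch below: readings from several nodes land in one table, and each actuator command is computed with its neighbors' data in view. The zone names, gains, and coupling term are all invented for illustration.

```python
# Skeleton of hub-side coordinated control: every command is computed with
# the neighbors' sensor data in view. Zones, gains, and the coupling term
# are invented for illustration.
latest = {}   # most recent reading per zone, filled by the radio/link layer

def on_sensor_reading(zone: str, temperature_c: float):
    latest[zone] = temperature_c

def compute_commands(setpoint_c: float) -> dict:
    if not latest:
        return {}
    neighborhood_avg = sum(latest.values()) / len(latest)
    commands = {}
    for zone, temp in latest.items():
        # Each zone corrects its own error, plus a small term that accounts
        # for what the neighboring zones are seeing.
        own_error = setpoint_c - temp
        coupling = setpoint_c - neighborhood_avg
        commands[zone] = 0.8 * own_error + 0.2 * coupling
    return commands

# Example: three zones report in, the hub issues coordinated commands.
for zone, t in [("zone_a", 19.0), ("zone_b", 22.5), ("zone_c", 21.0)]:
    on_sensor_reading(zone, t)
print(compute_commands(setpoint_c=21.0))
```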
Today such systems will typically be implemented using short-range wireless links, from local wireless MCUs on the sensors and actuators to a proprietary wireless hub. If the area to be covered gets too large for a low-power wireless link, the system can escalate to an industrial-strength wireless network, Wi-Fi, or even a cellular connection to bridge longer distances.
The eventual deployment of 5G service — probably sometime after 2020 — could simplify this picture further, offering a single medium for local links, longer-range connections, and the pipe back to the Internet. But mentioning cellular service brings up an interesting point that may prove valuable well before 5G is in place.
If we look at the implementation of the hub, we see an increasingly complex system. There are provisions for connectivity, both upward to the Internet and outward to sensors and actuators. The latter wireless connections must be flexible in RF front end, baseband, and protocol to cope with the mass confusion of wireless-network quasi-standards. Software-defined radio would be a reasonable response to the current mess.
Then there is the actual controller, where the algorithms are executed. This too must provide considerable headroom, as access to all that sensor data will probably lead to a call for more elaborate and demanding algorithms, perhaps requiring hardware acceleration on real-time tasks. And there are security needs, since the hub will bear most of the authentication and encryption responsibility for the system. These needs may dictate a hardware crypto accelerator and a secure key store.
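To make the security half of that workload concrete, the sketch below uses the Python cryptography package's AES-GCM primitive to encrypt and authenticate a payload before it leaves the hub; on silicon, this is the kind of operation a hardware crypto block and secure key store would take over. The key handling shown here is for illustration only and would never appear in a production design.

```python
# Sketch of the hub's encrypt-and-authenticate step using AES-GCM
# (python 'cryptography' package). In a real hub this work -- and the key --
# would live in a hardware crypto engine and secure key store.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # illustration only: keep real keys in secure storage
aesgcm = AESGCM(key)

def protect(payload: dict, device_id: str) -> dict:
    nonce = os.urandom(12)                  # 96-bit nonce, never reused with the same key
    plaintext = json.dumps(payload).encode("utf-8")
    aad = device_id.encode("utf-8")         # authenticated but not encrypted
    return {
        "device_id": device_id,
        "nonce": nonce.hex(),
        "ciphertext": aesgcm.encrypt(nonce, plaintext, aad).hex(),
    }

def unprotect(message: dict) -> dict:
    plaintext = aesgcm.decrypt(
        bytes.fromhex(message["nonce"]),
        bytes.fromhex(message["ciphertext"]),
        message["device_id"].encode("utf-8"),
    )
    return json.loads(plaintext)

wrapped = protect({"temperature_c": 21.7}, device_id="controller-17")
print(unprotect(wrapped))
```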
About Ron Wilson
Ron Wilson, a long-time technology editor, follows emerging system design issues and creates, edits, and curates technical content for Intel PSG.