How the Root of Trust underpins device-level IoT security

The pervasiveness of Internet of Things (IoT) networks shows up even in Tasmania’s oyster industry.
The business was nearly wiped out by a viral infection in 2016, with oyster mortality rates reaching 90% in some areas. To recover from this disaster, an IoT network of Bosch sensors connected to the Microsoft Azure platform was deployed to help farmers monitor and predict water conditions.
The sensor network led to increased yields, improved management of labor and better food safety. By 2019, Tasmanian oyster farming operations were described by Australian Seafood Industries General Manager Matt Cunningham as essentially back to normal.
While all this is great news, does the oyster industry now face another invisible threat common to all IoT deployments, this time from the technology that it has exploited to such good effect?
IoT cyberattacks pose a new threat
As noted by PSA Certified, a collaborative that established an IoT security framework, more than 5,400 attacks occur each month on IoT devices. These risks are growing, and the average cost of an attack is $33,000, according to PSA Certified. Some attacks cost victims millions of dollars in immediate losses and long-term reputational damage.
PSA Certified security goals

A Root of Trust based on unique identification is the foundation of device security.
To help mitigate the risks to IoT deployments posed by these attacks, PSA Certified developed a framework that outlines stages needed to develop and certify an IoT product. The organization also defined 10 security goals.
It's no coincidence that at the top of the framework diagram sits “Unique Identification,” meaning the unique identity of each IoT device. “Attestation,” which includes the ability of such a device to prove its identity, is another core requirement.
The most common IoT devices are sensors, and each is a potential window through which hackers can gain access to an IoT network and beyond. As more sensors are deployed, the potential attack surface becomes greater, and the risk grows. Usually, sensor modules are based around a semiconductor chip, typically a microcontroller (MCU) but sometimes an application-specific integrated circuit (ASIC). The chips convert analog measurements into digital data. In most IoT deployments, including the Tasmanian oyster farms, the data is then sent to cloud servers for processing so that actionable insights can be derived from it.
Chip security underpins sensor security. Each semiconductor must have, or be given, a unique identity and must be able to prove that identity to the server-hosted application or service. A secure communications link can only be created after the sensor’s identity has been established beyond doubt. To protect the data while in transit, data transmissions are encrypted using cryptographic keys.
The sensor’s identity and cryptographic keys together form a Root of Trust — the fundamental building block of IoT security. The identities and keys are random numbers and to be secure, they need to be kept secret and away from the prying eyes of malicious actors.
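To make the idea concrete, here is a minimal sketch of how a single secret seed can be expanded into a distinct identity and a transport key, using an HKDF (RFC 5869) built from the Python standard library. The seed value and the label strings are illustrative stand-ins; a real Root of Trust generates and protects these values in hardware.

```python
# Sketch: turning a device's random seed into an identity and an
# encryption key via HKDF-SHA256 (RFC 5869), stdlib only.
# The seed below is a stand-in for a securely generated random number.
import hashlib
import hmac

def hkdf_sha256(seed: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (extract-then-expand) per RFC 5869."""
    prk = hmac.new(b"\x00" * 32, seed, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                     # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

seed = bytes.fromhex("9f8e7d6c5b4a39281706f5e4d3c2b1a0" * 2)  # 32-byte stand-in
device_id = hkdf_sha256(seed, b"device-identity")  # proves who the sensor is
enc_key   = hkdf_sha256(seed, b"transport-key")    # protects data in transit
assert device_id != enc_key                        # distinct roles, one seed
```

Because the derivation is deterministic, the same seed always reproduces the same identity and keys, which is what allows keys to be regenerated on demand rather than stored.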
How chips acquire a Root of Trust
The most common way for a chip to get a Root of Trust is to inject random numbers into its memory from a dedicated secure computer called a Hardware Security Module (HSM) via a programming interface. The HSM generates random numbers and manages the keys. Despite widespread use, the technique has downsides. The links to and from the programming interfaces are often not encrypted, potentially exposing keys to outside scrutiny. What’s more, HSMs are expensive, and OEMs usually need to work with an external programming company. That adds a third party to the supply chain, and therefore further risk, at a time when security experts recommend a zero-trust approach to IoT security as best practice.
There’s one other important consideration. Injected keys must be stored in the chip’s non-volatile memory. The memory then needs to be protected by some form of hardware security technology, adding cost and complexity to the chip, and hence to the sensor in which it’s used. Without such protection, side-channel attacks are possible. Side-channel attacks exploit key-dependent physical characteristics, such as power consumption, to determine whether a memory cell is in a “zero” or “one” state. For example, a cell in the “one” state might draw slightly more current than one in the “zero” state, and this difference could be probed in preparation for a cyberattack.
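A small simulation shows why this kind of leakage is dangerous. Below, each stored key bit adds a tiny, key-dependent offset to a noisy current reading; averaging many readings cancels the noise and exposes the bit. All the numbers here are illustrative, not real silicon measurements.

```python
# Sketch: recovering stored key bits from key-dependent current draw.
# "One" cells draw ~2% more current; per-reading noise is much larger
# than that difference, but averaging many traces removes the noise.
import random

random.seed(42)
secret_key = [1, 0, 1, 1, 0, 0, 1, 0]          # bits an attacker wants

def measure_current(bit: int) -> float:
    base, delta = 1.00, 0.02                   # "one" draws slightly more
    return base + delta * bit + random.gauss(0, 0.05)  # noisy reading

recovered = []
for bit in secret_key:
    traces = [measure_current(bit) for _ in range(2000)]
    avg = sum(traces) / len(traces)            # noise averages out
    recovered.append(1 if avg > 1.01 else 0)   # threshold between states

assert recovered == secret_key                 # key leaked via current draw
```

Real differential power analysis is more sophisticated, but the principle is the same: stored secrets influence measurable physical quantities, so secrets that are never stored are inherently harder to extract.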
Eliminating key injection with physical unclonable functions (PUFs)
The SRAM PUF

SRAM PUFs exploit the random pattern of zeros and ones produced when SRAM is powered up.
An alternative to key injection is to have the chip itself generate unique values and convert these into identities and cryptographic keys. Silicon chips exhibit random physical variations during manufacturing, and these variations form the basis of physical unclonable functions (PUFs). A PUF derives random numbers, called seeds, that serve as the device’s digital fingerprint. The seeds are then converted into identities and keys by a peripheral circuit, a key generation accelerator, integrated into the MCU chip.
Intrinsic ID’s SRAM PUF is a good example of PUF technology. SRAM is embedded in most microcontrollers and microprocessors. On power-up, each SRAM cell takes on a “zero” or “one” state that is determined by the microscopic physical variations of the silicon wafer. The SRAM, therefore, creates a unique fingerprint for the chip, giving it a unique identity. Because SRAM is already present in an MCU, the PUF only needs the addition of software to drive it. Compared with key injection, it offers enhanced security because identities are much harder to clone or steal. For example, keys are generated on demand and not stored, providing a higher level of protection against side-channel attacks.
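The following sketch models the idea: each cell has a manufacturing-fixed bias, a few flaky cells flip between reads, and majority voting over several power-ups recovers a stable fingerprint that is hashed into a key only when needed. This is an illustrative model, not Intrinsic ID’s actual error-correction algorithm.

```python
# Sketch: stabilizing a noisy SRAM power-up pattern into a repeatable key.
import hashlib
import random

def power_up(bias, flip_prob=0.03, rng=random):
    """One noisy SRAM read: each cell follows its bias, rarely flipping."""
    return [b ^ (rng.random() < flip_prob) for b in bias]

def fingerprint(bias, reads=11, rng=random):
    """Majority-vote across several power-ups to cancel read noise."""
    votes = [sum(col) for col in zip(*(power_up(bias, rng=rng)
                                       for _ in range(reads)))]
    return [1 if v > reads // 2 else 0 for v in votes]

rng = random.Random(7)
chip_bias = [rng.randint(0, 1) for _ in range(256)]  # set by silicon variation

fp1 = fingerprint(chip_bias, rng=rng)
fp2 = fingerprint(chip_bias, rng=rng)
assert fp1 == fp2                          # stable fingerprint across reads

key = hashlib.sha256(bytes(fp1)).digest()  # key derived on demand, not stored
```

Because the key is recomputed from the fingerprint at each use and never written to non-volatile memory, there is no stored secret for a side-channel probe to target.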
Today, SRAM PUF technology is found in devices from several semiconductor manufacturers, including Intel, Microsemi, NXP and Xilinx. An NXP application note describes the use of SRAM PUF technology in the LPC54S0xx family of ARM Cortex-M4 based MCUs, and Microsemi uses an SRAM PUF as the main security element in its PolarFire FPGAs.
What’s next?
Although SRAM PUFs have significantly improved chip security, and consequently sensor security, second-generation PUF technology could appear in semiconductors next year, if not sooner. In this new approach, a dedicated 64 x 64 array of cells is created on MCUs (or other semiconductor chips) during manufacture. Each cell comprises two transistors, and the PUF produces a 64 x 64 array of bits that is independent of any memory structure on the chip. Like SRAM PUFs, the technology exploits the manufacturing variability discussed earlier, but in a different way: it measures the extent of quantum tunneling, a phenomenon whereby electrons propagate through barriers, which varies with the thickness and atomic structure of the oxide layer on the CMOS wafer at each transistor.
The currents involved are on the order of femtoamps (10⁻¹⁵ amps), and the essence of the technology is an analog circuit that reads these tiny currents coming out of the array’s cells. The PUF can then generate multiple, uncorrelated random numbers from which cryptographic keys are produced inside the chips. The ability of the PUF to create multiple, uncorrelated keys on demand means that sensors based on the chips can serve multiple applications, and the need for expensive and risky key injection is eliminated.
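One way to picture the multi-key property is domain separation: each application derives its own key from the same on-chip randomness by hashing it together with a distinct label. The sketch below uses a random byte string as a stand-in for a 64 x 64-bit PUF array; the labels and the construction are illustrative, not the actual QDID key-generation scheme.

```python
# Sketch: deriving multiple, uncorrelated keys from one PUF reading
# using domain-separation labels, so one sensor serves many applications.
import hashlib
import secrets

puf_array = secrets.token_bytes(64 * 64 // 8)   # 4096 PUF bits (stand-in)

def derive_key(puf_bits: bytes, label: bytes) -> bytes:
    """Each label yields an independent-looking 256-bit key."""
    return hashlib.sha256(label + b"\x00" + puf_bits).digest()

cloud_key  = derive_key(puf_array, b"cloud-telemetry")
update_key = derive_key(puf_array, b"firmware-update")
attest_key = derive_key(puf_array, b"attestation")

assert len({cloud_key, update_key, attest_key}) == 3   # all distinct
```

Compromising one application’s key reveals nothing about the others, since each is the output of a one-way function over a different label.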
Proponents of second-generation PUFs claim that they provide the highest possible security because they measure a probabilistic quantum effect with higher entropy (randomness) than other PUFs produce. Some evidence exists for this claim. Independent testing of Crypto Quantique’s QDID second-generation PUF confirmed that it is secure against all known attack methods, and the company’s PUF is also PSA Certified Level 2 Ready.
Improved Roots of Trust form bright future for IoT security
Even in the short time since Tasmanian oyster growers deployed IoT networks to great effect in 2017, IoT security has improved dramatically. On-chip Root of Trust technologies are reducing or eliminating the cost, complexity and risks of key injection. The need for security is becoming better understood, organizations are collaborating to create common security standards and methodologies, and legislators are forcing those reluctant to embrace IoT security to do so.
This may mean that oysters will only ever be shucked, not hacked.

