FPGA Security Features


Again, we are under attack, and our FPGA designs are no exception. When someone talks about hostile environments, we usually think of wars, a nuclear reactor, or Jupiter; however, a hostile environment is everywhere outside our office or factory. Let me explain.

When we are working on an FPGA design, especially a connected one, we usually need to use keys or secrets that are safe on our test bench. Once the design leaves that safe place, for example for a customer installation, all these secrets must be protected as if we were at war. Customer facilities will be visited by good actors, and also bad ones; by bad ones I mean, for example, a competitor who gains physical access to our board.

In this article I want to show you what FPGAs offer to protect our designs. I will leave complete, board-level code examples for a future article.

I have divided the article into four different kinds of protections.

1. Secure Storage with PUF

PUF stands for Physically Unclonable Function. Although the name may not immediately sound related to security, it is actually one of the most important hardware features that makes FPGAs a strong candidate for integration into Hardware Security Modules (HSMs).

Physically Unclonable Function (PUF) is a hardware primitive that exploits tiny, random manufacturing variations in each FPGA to produce a device-unique, repeatable response. Rather than storing a fixed secret in non-volatile memory, the chip measures its own intrinsic behavior and derives a cryptographic value that is effectively impossible to copy, even across devices from the same wafer. During an initial enrollment step the device generates helper data so that on every boot it can reliably reconstruct the same value despite temperature, voltage, or aging.

The main role of a PUF is to serve as a hardware root of trust for keys. The device deterministically regenerates a Key-Encryption Key (KEK) from the PUF response and uses it inside the silicon to unwrap “black keys” or to load working keys into the AES engine, without ever exposing the secret to software. This binds bitstreams and application keys to a specific FPGA, prevents cloning, and removes the need to store long-term keys in readable memory.

In AMD devices (Zynq UltraScale+, Versal, Spartan UltraScale+), the PUF leverages intrinsic delay variations in the fabric; after a one-time enrollment the chip deterministically reconstructs the same KEK at boot and feeds it internally to key-management/AES hardware without ever exposing it to software.

In Microchip families (PolarFire, PolarFire SoC, SmartFusion2), the PUF is SRAM-based: it derives a fingerprint from the SRAM power-up pattern and, with helper data, reliably regenerates the same KEK to unwrap black keys and bind bitstreams to that specific device, with optional tamper-triggered zeroization.
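As a rough illustration of the enrollment and reconstruction flow described above, here is a toy, software-only sketch of an SRAM-style PUF with a repetition-code fuzzy extractor. Everything here (noise rate, code length, key size) is an invented assumption for illustration; real devices do all of this in silicon, with much stronger error correction, and the key never leaves the hardware.

```python
import hashlib
import random

REP = 5        # repetition-code length per key bit (toy value)
KEY_BITS = 64  # toy key size; real KEKs are 256 bits
NOISE = 0.005  # assumed per-bit flip probability between power-ups

def sram_powerup(reference, noise=NOISE, rng=random):
    # Simulate one SRAM power-up: mostly equal to the chip's intrinsic
    # reference pattern, with a few noisy bit flips.
    return [b ^ (rng.random() < noise) for b in reference]

def enroll(response):
    # One-time enrollment: pick random key bits and publish helper data.
    # The helper data alone reveals nothing about the key without the PUF.
    key = [random.randrange(2) for _ in range(KEY_BITS)]
    helper = []
    for i, kb in enumerate(key):
        block = response[i * REP:(i + 1) * REP]
        helper.extend(kb ^ b for b in block)  # repetition codeword XOR response
    return key, helper

def reconstruct(response, helper):
    # At every boot: XOR a fresh (noisy) response with the helper data,
    # then majority-vote each repetition block to correct the errors.
    key = []
    for i in range(KEY_BITS):
        block = response[i * REP:(i + 1) * REP]
        votes = sum(helper[i * REP + j] ^ block[j] for j in range(REP))
        key.append(1 if votes > REP // 2 else 0)
    return key

def derive_kek(key_bits):
    # Condition the raw bits into a usable key with a hash.
    return hashlib.sha256(bytes(key_bits)).hexdigest()

random.seed(1)
reference = [random.randrange(2) for _ in range(KEY_BITS * REP)]
key, helper = enroll(sram_powerup(reference))       # enrollment measurement
rekey = reconstruct(sram_powerup(reference), helper)  # later boot, new noise
assert rekey == key
print("KEK:", derive_kek(rekey))
```

The point of the sketch is the division of labor: the secret is never stored anywhere; only the non-sensitive helper data is, and the same KEK is deterministically rebuilt from the device's noisy fingerprint at every boot.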

2. Memory and Bitstream Encryption

A security issue to which some FPGAs are exposed is that their design is stored in an external QSPI flash memory. This can lead to a leak if an attacker sniffs the QSPI interface during configuration, enabling them to reproduce our design on other FPGAs or study it to extract our know-how.

While PUF is relatively new, bitstream protection has been with us for many years (some Spartan-6 parts already support it). From my point of view, though, its purpose has shifted over time. At the beginning, the goal was mainly to keep our industrial knowledge safe from competitors; today it means much more. In the bitstream we not only store our design, but also potentially keys or secrets that allow our systems to access servers or other online resources.

The strength of the encryption depends on the algorithm and the length of the key used. Here, AMD, Microchip, and other vendors all use 256-bit AES keys. This length not only protects our systems today; a 256-bit key is also considered resistant to quantum attacks, since Grover's algorithm would at most halve its effective strength to 128 bits, so it should remain safe for many years to come.
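To make the structure of bitstream protection concrete, here is a minimal software sketch of encrypting and authenticating a bitstream image under a device KEK. Python's standard library has no AES, so I use SHA-256 in counter mode plus HMAC purely as a structural stand-in for the AES-256-GCM that real configuration engines implement in hardware; do not use this construction in production.

```python
import hashlib
import hmac
import os

def keystream(key, nonce, length):
    # Toy stream cipher: SHA-256 in counter mode. A real device would use
    # its hardware AES-256-GCM engine; this only mirrors the structure.
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt_bitstream(kek, bitstream):
    # Derive separate encryption and authentication keys from the KEK.
    enc_key = hashlib.sha256(kek + b"enc").digest()
    mac_key = hashlib.sha256(kek + b"mac").digest()
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in
               zip(bitstream, keystream(enc_key, nonce, len(bitstream))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def decrypt_bitstream(kek, nonce, ct, tag):
    enc_key = hashlib.sha256(kek + b"enc").digest()
    mac_key = hashlib.sha256(kek + b"mac").digest()
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered bitstream rejected")  # authenticity check
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

kek = os.urandom(32)                 # in hardware this comes from the PUF/eFUSEs
bitstream = b"\x00\x09\x0f\xf0" * 8  # made-up stand-in for a real .bit image
nonce, ct, tag = encrypt_bitstream(kek, bitstream)
assert decrypt_bitstream(kek, nonce, ct, tag) == bitstream
print("ciphertext differs from plaintext:", ct != bitstream)
```

Note the two properties the flow gives us: whatever travels over QSPI is unintelligible without the KEK, and any modification of the image is detected before it is ever loaded.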

3. Side-Channel Attack Countermeasures

How about recovering an AES key just by measuring the power consumption of our FPGA? This is known as Differential Power Analysis (DPA), and although it may sound a bit crazy, it is a real attack that works.

Newer FPGA families like AMD Spartan UltraScale+ integrate countermeasures against these types of attacks. PolarFire, for its part, uses licensed IP from Rambus that adds protection against these kinds of attacks.

It’s important to know that these countermeasures are focused on critical parts of the FPGA, such as the hardware AES core.

There are other side-channel attacks, like measuring the electromagnetic emissions of the FPGA; however, although manufacturers generally do not include specific measures against them, remember that we control the hardware inside the FPGA, so we can mitigate these attacks ourselves, for example by adding certain types of oscillators or decoy circuits to confuse the attacker.
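To show that power analysis is not magic, here is a toy difference-of-means DPA in pure Python. We simulate traces whose single "power sample" is the Hamming weight of an S-box output plus Gaussian noise, then recover the key byte by partitioning the traces on a predicted output bit. The S-box, noise level, and trace count are all invented for the demo; real attacks work the same way on oscilloscope captures of a real device.

```python
import random

random.seed(42)
SBOX = list(range(256))
random.shuffle(SBOX)   # made-up nonlinear S-box standing in for AES's
SECRET_KEY = 0x3A      # the byte the "attacker" does not know
N_TRACES = 3000

def hamming_weight(x):
    return bin(x).count("1")

def leak(pt):
    # One simulated power sample: Hamming weight of the S-box output
    # of (plaintext XOR key), plus measurement noise.
    return hamming_weight(SBOX[pt ^ SECRET_KEY]) + random.gauss(0, 1.0)

plaintexts = [random.randrange(256) for _ in range(N_TRACES)]
traces = [leak(pt) for pt in plaintexts]

def dom_score(guess):
    # Partition traces by the predicted S-box output LSB under this key
    # guess; the difference of means peaks only for the correct key.
    g0, g1 = [], []
    for pt, t in zip(plaintexts, traces):
        (g1 if SBOX[pt ^ guess] & 1 else g0).append(t)
    return abs(sum(g1) / len(g1) - sum(g0) / len(g0))

best = max(range(256), key=dom_score)
print(f"recovered key byte: {best:#04x}")
```

With these parameters the difference of means for the correct guess sits around 1.0 Hamming-weight units, well above the "ghost peaks" of the 255 wrong guesses, which is exactly why masking and noise-injection countermeasures in silicon matter.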

If you don’t know what a side-channel attack is, this video from Samy Kamkar shows some of them.

4. Anti-Tampering Features

What happens if an attacker removes the FPGA from the PCB and connects JTAG to read the eFUSEs or device DNA, or dumps the external flash with an SPI reader? All of these attempts to modify or access our design are known as tampering, and fortunately FPGA manufacturers also provide anti-tampering functions.

If I were trying to access an FPGA, the first thing I would look for is the JTAG pins. Ideally, these pins should not be exposed on the PCB; however, if we need a way to configure the FPGA for debugging, we have to expose them, which also gives attackers access to them. A very simple protection is to disable the JTAG interface before the board leaves our facilities; that way it remains inaccessible even with the pins exposed.

Another form of tampering is extracting the bitstream from one FPGA and loading it into another to study how it works. This can be prevented in two ways: we can encrypt our bitstream, and we can also use the DNA of the FPGA, a unique per-device identifier, adding to the design a module that only operates if the FPGA DNA matches the expected value.
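Here is a minimal sketch of that DNA-binding idea, with the hardware read modeled in software. On real silicon the check would be an HDL module reading the DNA port; the DNA values below are made-up examples, not real device codes, and `read_device_dna` is a hypothetical stand-in for the hardware primitive.

```python
import hashlib

# Digest of the authorized device's DNA, baked into the design at build
# time. Storing a digest rather than the raw value keeps the comparison
# logic simple to update per device. The DNA value is a made-up example.
AUTHORIZED_DNA = 0x400200A1B2C3D4E5
EXPECTED_DIGEST = hashlib.sha256(AUTHORIZED_DNA.to_bytes(8, "big")).hexdigest()

def read_device_dna(simulated_dna):
    # Stand-in for reading the device's unique DNA register on real hardware.
    return simulated_dna

def design_enabled(dna):
    # Gate the whole design: it only runs on the one device it was built for.
    digest = hashlib.sha256(dna.to_bytes(8, "big")).hexdigest()
    return digest == EXPECTED_DIGEST

print(design_enabled(read_device_dna(AUTHORIZED_DNA)))      # the right chip
print(design_enabled(read_device_dna(0x4002000000000000)))  # a cloned board
```

A cloned bitstream loaded into a different device reads a different DNA, the check fails, and the design simply refuses to run.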

Conclusions

Until now, cybersecurity has often been seen as something unrelated to embedded systems; however, as we saw at Embedded World 2025, it is a trending topic today, especially with the EU Cyber Resilience Act (CRA).

I am 100% sure that FPGAs will play a decisive role in this area, and manufacturers like AMD and Microchip are adding more cybersecurity capabilities to every new family, particularly in critical fields where FPGAs are key elements, such as defense.

Do you want me to go deeper into cybersecurity on specific FPGA families in a future article? Let me know on LinkedIn or X — I’d love to hear your feedback!
