LRR-DPUF: Learning Resilient and Reliable Digital Physical Unclonable Function
Jin Miao1 Meng Li2 Subhendu Roy1 Bei Yu3
1Cadence Design Systems 2University of Texas at Austin 3The Chinese University of Hong Kong
1 / 26
Outline
2 / 26
3 / 26
Conventional analog PUF:
◮ Transistor analog intrinsic randomness
◮ Vulnerable to environmental and operational variations
◮ Needs error correction

Digital PUF:
◮ Boolean-type randomness source
◮ Immune to environmental and operational variations
◮ Little to no error correction needed
◮ Strong resilience to attacks
4 / 26
◮ Hybrid FPGA digital PUFs, however, need an analog PUF to start up [FPL
◮ First digital PUF exploiting interconnection uncertainty, yet only conceptual and
◮ Quantitative justification of the use of interconnect randomness
◮ Strongly skewed latches to ensure deterministic transistor behavior
◮ Novel, highly non-linear logic network to ensure strong security
5 / 26
6 / 26
Interconnect under lithography variation. Left: mask split of 20nm for top, 28nm for bottom; Right: shapes on wafer (3nm and 5nm gaps).
7 / 26
◮ Systematic: dose, focus, etc.
◮ Local: mask, line edge roughness (LER), etc.
◮ Position two interconnect layout line-ends close to each other
◮ An electron-beam mask-writing system can easily introduce large mask variations
◮ Mask variation further maps to different connectivity on wafer
8 / 26
◮ The existence and control of configurations to:
◮ Augment the local variation
◮ Suppress the systematic variation
Interconnect connectivity rate under lithography variations. Left: sweeping layout split distance (mask error stdv. 4nm); Center: sweeping mask error stdv. (split distance 46nm); Right: sweeping dose value.
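The connectivity trend can be reproduced with a toy Monte Carlo model. This is only a sketch, not the authors' lithography simulation: the split distance, mask-error stdv., and printability threshold below are illustrative assumptions.

```python
import random

# Toy model (assumed, not the paper's litho simulator): two interconnect
# line-ends are drawn SPLIT_NM apart on the mask; each facing edge shifts
# by an independent Gaussian mask error.  If the resulting gap falls below
# PRINT_THRESHOLD_NM, lithography bridges the ends and the pair prints
# "connected" on wafer.
SPLIT_NM = 46.0            # layout split distance (value taken from the sweep)
MASK_ERR_STDV_NM = 4.0     # local mask error standard deviation
PRINT_THRESHOLD_NM = 46.0  # gap below which the ends merge (assumption)

def connectivity_rate(trials=10_000, seed=0):
    rng = random.Random(seed)
    connected = 0
    for _ in range(trials):
        # both facing edges move independently
        gap = SPLIT_NM + rng.gauss(0, MASK_ERR_STDV_NM) + rng.gauss(0, MASK_ERR_STDV_NM)
        if gap < PRINT_THRESHOLD_NM:
            connected += 1
    return connected / trials

rate = connectivity_rate()
print(f"connectivity rate ~= {rate:.2f}")
```

With the threshold set at the split distance, the model lands near a 50% connectivity rate, which is the desired maximum-entropy operating point; shifting the split or the error stdv. moves the rate, matching the sweeps in the figure.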
9 / 26
10 / 26
◮ Short-circuit: direct current from Vdd to Gnd, uncertain region, etc.
◮ Open-circuit: floating gate, etc.
Handling a dangling poly-gate with a strongly skewed latch. Left: inverter-pair based skewed latch; Right: VTC of a strongly skewed latch.
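The deterministic resolution of a floating node can be illustrated with a minimal fixed-point sketch of a cross-coupled, skewed inverter pair. The smooth VTC model and the 0.3 V / 0.7 V switching thresholds are assumptions for illustration, not the paper's transistor-level design.

```python
import math

def inverter(v_in, v_th, gain=10.0):
    # Smooth inverter VTC (assumed): output swings high when the input is
    # below the switching threshold v_th (supply normalized to 1 V).
    return 1.0 / (1.0 + math.exp(gain * (v_in - v_th)))

def settle(v_float, steps=200):
    # Cross-coupled pair with deliberately skewed thresholds: the forward
    # inverter switches early (0.3 V), the feedback inverter late (0.7 V),
    # so the latch strongly prefers one state.
    a = b = v_float
    for _ in range(steps):
        a = inverter(b, 0.3)   # skewed-0 inverter (assumed threshold)
        b = inverter(a, 0.7)   # skewed-1 inverter (assumed threshold)
    return round(a), round(b)

# A dangling gate can leave the node anywhere between the rails, yet the
# strongly skewed latch resolves every starting voltage to one logic state.
states = {settle(v / 10) for v in range(1, 10)}
print(states)
```

All nine starting voltages settle to the same (a, b) pair, which is the point of the strong skew: the latch output is deterministic regardless of where the floating gate happens to sit.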
11 / 26
◮ Linearly non-separable
◮ Equal output probability

Linearly non-separable nature of XOR logic.
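The non-separability claim can be checked exhaustively. The brute-force sketch below (not from the paper) searches a grid of integer weights and biases for a linear threshold gate reproducing XOR; no combination matches all four input/output pairs.

```python
# Exhaustive check that no linear threshold gate reproduces XOR: for every
# integer weight/bias combination in a small grid, count how many of the
# four XOR input/output pairs sign(w1*a + w2*b + bias) matches.
points = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]

def agreement(w1, w2, bias):
    return sum(
        label == (1 if w1 * a + w2 * b + bias > 0 else 0)
        for a, b, label in points
    )

best = max(
    agreement(w1, w2, bias)
    for w1 in range(-5, 6)
    for w2 in range(-5, 6)
    for bias in range(-5, 6)
)
print(best)  # 3 of 4: XOR is not linearly separable
```

The classical argument explains why: summing the inequalities required for (0,0) and (1,1) contradicts those required for (1,0) and (0,1), so 3 of 4 is the ceiling for any linear separator.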
12 / 26
Left: the complete unit-cell logic structure; Right: its simplified symbolic representation.
13 / 26
14 / 26
An N-row by M-column LRR-DPUF architecture, with inputs in0 … inN-1 feeding the N rows. Boundary virtual connections marked by “Z” indicate dangling status.
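A minimal behavioral sketch of such a grid follows. It assumes a simplified XOR-style unit cell, a seeded coin flip in place of lithography-decided connectivity, and a dangling input pinned to 0 by the skewed latch; the structure and parameters are illustrative, not the paper's exact cell design.

```python
import random

def make_dpuf(n_rows=8, n_cols=8, connect_rate=0.5, seed=1):
    # Each cell has one "virtual connection" to a neighbouring row;
    # lithography decides (here: a seeded coin flip) whether the wire
    # printed.  A dangling connection is resolved to a constant 0 by the
    # strongly skewed latch.  All values here are illustrative.
    rng = random.Random(seed)
    return [[rng.random() < connect_rate for _ in range(n_cols)]
            for _ in range(n_rows)]

def evaluate(dpuf, challenge):
    n_rows, n_cols = len(dpuf), len(dpuf[0])
    col = list(challenge)                     # column 0 sees the challenge
    for j in range(n_cols):
        nxt = []
        for i in range(n_rows):
            # the neighbour above (wrap-around) contributes only if the
            # virtual connection actually printed; a dangling input is
            # pinned to 0 (the "Z" boundary behaviour)
            neighbour = col[(i - 1) % n_rows] if dpuf[i][j] else 0
            nxt.append(col[i] ^ neighbour)    # XOR-style unit cell
        col = nxt
    return col                                # response = last column

dpuf = make_dpuf()
response = evaluate(dpuf, [1, 0, 1, 1, 0, 0, 1, 0])
print(response)
```

Each fabricated instance corresponds to one random connection pattern, so distinct chips realize distinct challenge-response mappings without any stored secret.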
15 / 26
16 / 26
Logic cone of out2 (inputs in0 … in7) highlighted in red.
17 / 26
◮ The non-linearity of LRR-DPUF increases with a higher connectivity rate.
◮ There is a sufficiently large space of unique LRR-DPUFs even if the connectivity rate is biased.
◮ Increasing the number of columns strengthens the resilience to learning attacks.
◮ Any subtle change in the virtual connections is reflected in multiple outputs.
18 / 26
19 / 26
20 / 26
Avalanche effect of 8×8 LRR-DPUF over each input.
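The avalanche measurement itself is easy to reproduce: flip one input bit at a time and count how many output bits change. The stand-in network below is a toy (a seeded XOR mixing grid, not the actual LRR-DPUF cell), so the exact flip fractions are illustrative.

```python
import random

def response(challenge, conn):
    # Toy stand-in for an N x M LRR-DPUF: in each column, every bit is
    # XORed with its upper neighbour when that virtual connection printed.
    bits = list(challenge)
    n = len(bits)
    for column in conn:
        bits = [bits[i] ^ (bits[(i - 1) % n] if column[i] else 0)
                for i in range(n)]
    return bits

rng = random.Random(7)
n_rows, n_cols = 8, 8
conn = [[rng.random() < 0.5 for _ in range(n_rows)] for _ in range(n_cols)]
base = [rng.randrange(2) for _ in range(n_rows)]
ref = response(base, conn)

# Avalanche: flip one input bit at a time, count output bits that flip.
flips = []
for i in range(n_rows):
    flipped = base[:]
    flipped[i] ^= 1
    out = response(flipped, conn)
    flips.append(sum(a != b for a, b in zip(ref, out)) / n_rows)
print([f"{f:.2f}" for f in flips])
```

An ideal avalanche drives every fraction toward 50%, which is what the figure reports for the 8×8 LRR-DPUF.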
21 / 26
SVM attacks on 8-row LRR-DPUFs over different configurations. Left: connectivity rate 0.2, sweeping column count (8, 16, 32, 128) and training-set ratio; Right: connectivity rate 0.9, sweeping column count (8, 16, 32, 128) and training-set ratio.
22 / 26
Left: SVM attacks over different connectivity rates and training sizes; Right: additional learning-model attacks, including i) an artificial neural network (ANN) with 10 hidden layers using the Sigmoid function, and ii) a random forest (RF) with 15 trees, each at connectivity rates of 20% and 90%.
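The intuition behind the near-50% prediction error can be demonstrated on a parity target, the canonical function no linear threshold model can fit. This is a stand-in, not the actual DPUF, and a plain perceptron rather than the SVM/ANN/RF attacks from the experiments.

```python
# A linear "machine-learning attack" against a parity-like target: the
# attacker's perceptron sees every 6-bit challenge with its response and
# still cannot do much better than guessing, because no linear threshold
# function fits parity.
from itertools import product

n = 6
crps = [(bits, sum(bits) % 2) for bits in product((0, 1), repeat=n)]

w = [0.0] * n
bias = 0.0
for _ in range(50):                      # perceptron training epochs
    for bits, label in crps:
        pred = 1 if sum(wi * b for wi, b in zip(w, bits)) + bias > 0 else 0
        if pred != label:                # standard perceptron update
            step = label - pred          # +1 or -1
            w = [wi + step * b for wi, b in zip(w, bits)]
            bias += step

errors = sum(
    label != (1 if sum(wi * b for wi, b in zip(w, bits)) + bias > 0 else 0)
    for bits, label in crps
)
error_rate = errors / len(crps)
print(f"prediction error: {error_rate:.2f}")
```

A prediction error pinned near 50% is the best outcome a PUF can hope for: the attacker's model is no better than a coin flip, which is the behaviour the plots show for high-column LRR-DPUF configurations.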
23 / 26
24 / 26
◮ A novel learning-resilient and reliable digital PUF
◮ Justification for the use of interconnect randomness
◮ Strongly skewed latches for CMOS compatibility
◮ A highly non-linear logic architecture
25 / 26
26 / 26