Attacking a Joint Protection Scheme for Deep Neural Network Hardware Accelerators and Models. Simon Wilhelmstätter; Joschua Conrad; Devanshi Upadhyaya; Ilia Polian and Maurits Ortmanns. In
2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS), 2024, pp. 144–148. DOI:
https://doi.org/10.1109/AICAS59952.2024.10595935
Abstract
The tremendous success of artificial neural networks (NNs) in recent years, paired with the proliferation of embedded, low-power devices (e.g. IoT, wearables and smart sensors), gave rise to specialized NN accelerators that enable the inference of NNs in power-constrained environments. However, manufacturing or operating such accelerators in untrusted environments poses risks of undesired model theft and hardware counterfeiting. One way to protect NN hardware against those threats is to lock both the model and the accelerator with secret keys that can only be supplied by authorized entities (e.g. the chip designer or distributor). However, current locking mechanisms have severe drawbacks, such as required model retraining and vulnerability to the powerful Boolean satisfiability (SAT) attack. Recently, an approach for jointly protecting the model and the accelerator was proposed. Compared to previous locking mechanisms, it promises to avoid model retraining, not to leak useful model information, and to resist the SAT attack, thereby securing the NN accelerator against counterfeiting and the model against intellectual property infringement. In this paper, those claims are thoroughly evaluated and severe issues in the technical evidence are identified. Furthermore, an attack is developed that does not require an expanded threat model but is still able to completely circumvent all of the proposed protection schemes. It allows the reconstruction of all NN model parameters (i.e. model theft) and enables hardware counterfeiting.
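The abstract does not spell out the analyzed protection scheme, so the following is only a minimal, hypothetical sketch of the general idea behind key-based weight locking: quantized model weights are stored in masked form and become usable only when the correct secret key is supplied. The XOR-based key format, the layer shape and all sizes are invented for illustration and do not describe the scheme attacked in the paper.

```python
# Hypothetical sketch of key-based weight locking (NOT the analyzed scheme).
import numpy as np

rng = np.random.default_rng(0)

# Toy 8-bit quantized weights of one small fully connected layer.
weights = rng.integers(-128, 128, size=(4, 8)).astype(np.int8)

# Assumed key format: one 8-bit mask per output neuron.
key = rng.integers(0, 256, size=(4, 1)).astype(np.uint8)

# "Locking": the accelerator stores only the XOR-masked weights.
locked = weights.view(np.uint8) ^ key

def unlock(masked, k):
    """Recover usable weights; only meaningful with the correct key."""
    return (masked ^ k).view(np.int8)

x = rng.integers(-128, 128, size=(8,)).astype(np.int32)

correct = unlock(locked, key).astype(np.int32) @ x
wrong_key = rng.integers(0, 256, size=(4, 1)).astype(np.uint8)
wrong = unlock(locked, wrong_key).astype(np.int32) @ x

print("output with correct key:", correct)
print("output with a wrong key:", wrong)  # useless activations without the key
```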
Enabling Power Side-Channel Attack Simulation on Mixed-Signal Neural Network Accelerators. Simon Wilhelmstätter; Joschua Conrad; Devanshi Upadhyaya; Ilia Polian and Maurits Ortmanns. In
2024 IEEE International Conference on Omni-layer Intelligent Systems (COINS), 2024, pp. 1–5. DOI:
https://doi.org/10.1109/COINS61597.2024.10622156
Abstract
The tremendous success of deep learning with neural networks (NNs) in recent years, together with the simultaneous proliferation of embedded, low-power devices (e.g. wearables, smartphones, IoT, and smart sensors), gave rise to specialized NN accelerators that enable the inference of those NNs in power-constrained environments. One paradigm followed by many of those accelerators is the transition from digital-domain computing towards performing operations in the analog domain, turning them from digital into mixed-signal NN accelerators. While power efficiency and inference accuracy have been researched with increasing interest, security and protection against side-channel attacks (SCAs) have received little attention. However, side channels pose a major security concern by allowing an attacker to steal valuable knowledge about proprietary NNs deployed on accelerators. To evaluate the SCA robustness of mixed-signal NN accelerators, their tendency to leak information through side channels needs to be investigated. In this work, we propose a methodology for enabling side-channel analysis of mixed-signal NN accelerators that shows reasonable accuracy at an early development stage. The approach enables the reuse of large portions of design sources for simulation and production while providing flexibility and fast development cycles for changes to the analog design.
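As a generic illustration of why simulated power traces enable side-channel analysis (this is not the paper's mixed-signal simulation flow), the sketch below "simulates" Hamming-weight leakage of an 8-bit multiplier in a multiply-accumulate unit and recovers its secret weight with correlation power analysis. The trace count, noise level and leakage model are assumptions made only for this example.

```python
# Generic CPA demo on simulated Hamming-weight traces (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def hamming_weight(v):
    # Hamming weight of the low 8 bits of each value.
    return np.unpackbits(v.astype(np.uint8)[:, None], axis=1).sum(axis=1)

secret_weight = 173                       # value an attacker wants to learn
inputs = rng.integers(0, 256, size=2000)  # known activations fed to the MAC

# "Simulated" power traces: HW of the product's low byte plus Gaussian noise.
leak = hamming_weight((inputs * secret_weight) & 0xFF)
traces = leak + rng.normal(0.0, 1.0, size=leak.size)

def corr(a, b):
    # Pearson correlation with a guard against constant predictions.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

# CPA: correlate the traces against the leakage predicted for every guess.
scores = [corr(traces, hamming_weight((inputs * g) & 0xFF)) for g in range(256)]
print("recovered weight:", int(np.argmax(scores)), "true weight:", secret_weight)
```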
Locking-Enabled Security Analysis of Cryptographic Circuits. Devanshi Upadhyaya; Maël Gay and Ilia Polian. Cryptography 8, 1 (2024).
Optimizing Waveform Accurate Fault Attacks Using Formal Methods. Devanshi Upadhyaya and Ilia Polian. In
2024 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT), 2024, pp. 1–6. DOI:
https://doi.org/10.1109/DFT63277.2024.10753549
Abstract
State-of-the-art fault attacks demand either a large number of low-precision fault injections (statistical attacks) or very few injections using sophisticated equipment (algebraic attacks) to break modern cryptosystems. For example, a popular attack breaks AES-128 with a single injection, but the fault effects must be restricted to one 4-bit nibble of its state. This paper combines the advantages of the two by optimizing the probability of achieving the desired failing state bit patterns, and thus the attack's success rate, during conventional, low-cost clock manipulation. The problem bears similarities with small-delay fault (SDF) test generation, and we extend formal (Boolean satisfiability, or SAT) models that were initially developed for waveform-accurate SDF automatic test pattern generation (ATPG) procedures. A fundamental distinction of our analysis is the presence of fixed-yet-unknown secret bits, which influence the failing state bit patterns. For this reason, we use a model-counting (#SAT) approach to estimate the success probability as an average across secret bit combinations.
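The following toy sketch illustrates the core idea, with a brute-force enumeration standing in for the paper's #SAT model counting: for a shortened clock period, count over all fixed-yet-unknown key values how often the resulting fault matches a desired failing bit pattern. The 4-bit S-box stage, the per-bit path delays and the fault model are invented for illustration.

```python
# Toy estimate of fault-attack success probability averaged over unknown keys.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box (4-bit)

# Invented per-output-bit path delays (arbitrary units): bit 3 is the slowest.
BIT_DELAY = {0: 6, 1: 7, 2: 8, 3: 10}

def golden(x, k):
    # Fault-free round function: key addition followed by the S-box.
    return SBOX[x ^ k]

def glitched(x, k, prev_state, clk_period):
    """Assumed fault model: output bits whose path delay exceeds the shortened
    clock period capture the previous register value instead of the new one."""
    y = golden(x, k)
    out = 0
    for bit in range(4):
        src = y if BIT_DELAY[bit] <= clk_period else prev_state
        out |= ((src >> bit) & 1) << bit
    return out

def success_rate(x, prev_state, clk_period, target_mask=0b1000):
    """Fraction of secret keys for which the fault hits exactly the target bits
    (the 'desired failing state bit pattern')."""
    hits = sum(
        (golden(x, k) ^ glitched(x, k, prev_state, clk_period)) == target_mask
        for k in range(16)
    )
    return hits / 16

# Sweep the glitch period to find the one maximizing the expected success rate.
for period in range(6, 11):
    print(period, success_rate(x=0x7, prev_state=0x0, clk_period=period))
```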