12.02.19

GPU-accelerated Time Simulation with Parameterizable Delay Modeling

Category: Open Seminar - Rechnerarchitektur

13:00 - 13:45, external venue, Dipl.-Inf. Eric Schneider, Institut für Technische Informatik


Modern nano-electronic CMOS designs often adapt system parameters, for
example through adaptive voltage and frequency scaling (AVFS), to adjust
the performance and power consumption of the circuit to the operating
conditions.
Since parameter variations influence the circuit delay, accurate
validation of the circuit timing plays an important role in today's
design and test validation as well as in design exploration.
For investigating timing-related issues, time simulation at the logic
level is considered timing-accurate and is widely used.
Yet conventional logic-level time simulation already lacks scalability,
even before parameter variations are taken into account.
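
To illustrate what such a logic-level time simulation computes, the
following minimal sketch propagates signal arrival times through a gate
netlist with static delays; the netlist representation, function names,
and numbers are hypothetical and serve only as an illustration.

def simulate_arrival_times(netlist, input_times, gate_delay):
    """netlist: list of (gate_id, [fanin_ids]) in topological order.
    input_times: dict of primary input id -> arrival time (ns).
    gate_delay: dict of gate_id -> static gate delay (ns)."""
    arrival = dict(input_times)
    for gate_id, fanins in netlist:
        # The output switches once the latest input has arrived,
        # plus the static delay of the gate.
        arrival[gate_id] = max(arrival[f] for f in fanins) + gate_delay[gate_id]
    return arrival

# Example: a two-gate path with static delays.
netlist = [("g1", ["a", "b"]), ("g2", ["g1", "c"])]
times = simulate_arrival_times(
    netlist,
    input_times={"a": 0.0, "b": 0.1, "c": 0.0},
    gate_delay={"g1": 0.05, "g2": 0.07})
print(times["g2"])  # arrival time at the circuit output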

In this talk, a highly parallel approach for timing-accurate logic-level
simulation with parameter-variation-aware delay modeling on graphics
processing units (GPUs) is presented.
The modeling utilizes regression analysis over offline electrical-level
simulation data to approximate the delay behavior under realistic
parameter variations.
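
As a rough illustration of such a regression-based delay model, the
following sketch fits a per-gate delay as a polynomial in supply voltage
and temperature from offline characterization samples; the quadratic
feature set, the chosen parameters, and all numbers are assumptions made
for illustration, not the exact model of the presented approach.

import numpy as np

def fit_delay_model(V, T, delay):
    """Least-squares fit of delay(V, T) with a quadratic polynomial."""
    X = np.column_stack([np.ones_like(V), V, T, V * T, V**2, T**2])
    coeffs, *_ = np.linalg.lstsq(X, delay, rcond=None)
    return coeffs

def eval_delay_model(coeffs, V, T):
    """Evaluate the fitted delay model at new parameter points."""
    X = np.column_stack([np.ones_like(V), V, T, V * T, V**2, T**2])
    return X @ coeffs

# Offline characterization samples for one gate (made-up numbers, ps).
V = np.array([0.7, 0.8, 0.9, 1.0, 0.7, 0.8, 0.9, 1.0])
T = np.array([25.0, 25.0, 25.0, 25.0, 85.0, 85.0, 85.0, 85.0])
d = np.array([42.0, 31.0, 25.0, 21.0, 47.0, 35.0, 28.0, 24.0])
coeffs = fit_delay_model(V, T, d)
print(eval_delay_model(coeffs, np.array([0.85]), np.array([50.0])))
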
During simulation, gate delays are calculated in parallel, allowing for
an efficient parallel simulation of individually parameterized circuit
instances.
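
The parallel delay computation can be pictured as follows: each pair of
circuit instance and gate obtains its own delay from the fitted
coefficients and the instance's parameter assignment. On the GPU this
corresponds to one evaluation per thread; in the sketch below, NumPy
broadcasting stands in for that parallelism, and the quadratic model and
all names are illustrative assumptions.

import numpy as np

def gate_delays(coeffs, V_inst, T_inst):
    """coeffs: (num_gates, 6) per-gate regression coefficients.
    V_inst, T_inst: (num_instances,) parameters of each circuit instance.
    Returns a (num_instances, num_gates) matrix of gate delays."""
    ones = np.ones_like(V_inst)
    # Feature matrix per instance, matching the fitted quadratic model.
    X = np.column_stack([ones, V_inst, T_inst,
                         V_inst * T_inst, V_inst**2, T_inst**2])
    # Each instance is evaluated against every gate's delay model.
    return X @ coeffs.T

coeffs = np.random.rand(1000, 6)           # 1000 gates, illustrative coefficients
V_inst = np.random.uniform(0.7, 1.0, 64)   # 64 individually parameterized instances
T_inst = np.random.uniform(25.0, 85.0, 64)
delays = gate_delays(coeffs, V_inst, T_inst)
print(delays.shape)  # (64, 1000)
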
Experimental results for a 15nm FinFET technology demonstrate the
efficiency of the presented approach, showing speedups of three orders
of magnitude over conventional time simulation with static delays.
Furthermore, extensions towards switch-level and fault simulation are
discussed, and first results are presented.
