Michael Imhof

Dipl.-Inf. Michael Imhof

Name:

Dipl.-Inf. Michael Imhof

Address:

Universität Stuttgart

Institut für Technische Informatik

Pfaffenwaldring 47

D-70569 Stuttgart

Germany

Room:

3.170

Phone:

(+49) (0)711 / 685 88 393

Fax:

(+49) (0)711 / 685 88 288

E-Mail:

michael.imhof@iti.uni-stuttgart.de

 


 

Research

Projects

OTERA: Online Test Strategies for Reliable Reconfigurable Architectures

Project page: Online Test Strategies for Reliable Reconfigurable Architectures

Dynamically reconfigurable architectures enable a significant acceleration of various applications by adapting and optimizing the structure of the system at runtime. Permanent and transient faults threaten the reliable operation of such an architecture. This project aims to increase the reliability of runtime-reconfigurable systems through a novel system-level strategy for online testing and online adaptation to faults. This is achieved by (a) scheduling, so that tests for reconfigurable resources are executed with minimal impact on performance, (b) resource management, so that partially defective resources are used for components that do not use the faulty part, and (c) online monitoring and error checking. To guarantee reliable reconfiguration at runtime, every reconfiguration process is thoroughly tested by a novel and efficient combination of online structural and functional tests. Compared to previous fault-tolerance concepts, this approach avoids the high hardware cost of structural redundancy. The saved resources can be used to further accelerate the applications. Nevertheless, the proposed method covers faults in the reconfigurable resources, in the application logic, and faults in the reconfiguration process.

since 10/2010, DFG project WU 245/10-1, 10-2
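The scheduling idea in point (a) above can be illustrated with a minimal sketch: tests for reconfigurable regions are dispatched only to regions that are currently idle, so running application modules are never delayed. All names here (`Region`, `schedule_tests`) are illustrative assumptions, not taken from the project.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """One reconfigurable region of the fabric (illustrative model)."""
    name: str
    busy: bool = False    # currently occupied by an application module
    tested: bool = False  # already tested in the current test period

def schedule_tests(regions, test_slots):
    """Greedily assign pending tests to idle regions, one per slot,
    so application modules are never displaced by a test."""
    plan = []
    for slot in range(test_slots):
        idle = [r for r in regions if not r.busy and not r.tested]
        if not idle:
            break  # nothing left to test without disturbing the application
        region = idle[0]
        region.tested = True
        plan.append((slot, region.name))
    return plan

regions = [Region("R0", busy=True), Region("R1"), Region("R2")]
print(schedule_tests(regions, test_slots=4))  # → [(0, 'R1'), (1, 'R2')]
```

A real runtime system would also prioritize regions by how long ago they were last tested; this sketch only shows the "test in idle slots" principle.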

ROCK: Robust On-Chip Communication by Hierarchical Online Diagnosis and Reconfiguration

Project page: Robust On-Chip Communication by Hierarchical Online Diagnosis and Reconfiguration

The goal of the ROCK project is to investigate and prototype robust architectures and associated design methods for Networks-on-Chip (NoC), in order to counter the growing susceptibility of the on-chip communication infrastructure to ambient radiation, crosstalk, manufacturing variability, and aging effects that comes with increasing integration density. The approach performs fault diagnosis and targeted reconfiguration for fault recovery online, hierarchically across the network layers, and selects an optimal cross-layer combination of measures. Optimality here means meeting guarantees on the network's performability at minimal energy; performability, incorporating communication performance and fault statistics, has yet to be defined for the NoC research field. Further requirements are a fault-tolerant design of the diagnosis and reconfiguration control, and its transparency to the application processes communicating over the NoC. The NoC architectures and methods are to be evaluated with respect to optimality and constraints, including under faults. This evaluation relies on functional fault models, still to be developed, which are integrated with network models into a NoC fault simulation.

since 08/2011, DFG project WU 245/12-1

RM-BIST: Reliability Monitoring and Managing Built-In Self Test

Project page: Reliability Monitoring and Managing Built-In Self Test

The main goal of the RM-BIST project is to extend the test infrastructure (design for test, DFT), which is primarily used for manufacturing test, into a reliability infrastructure (design for reliability, DFR). Existing built-in self-test (BIST) infrastructure is reused, with suitable adaptations, during the lifetime of a VLSI system to enable system monitoring, identification of critical system states, and reliability prediction. In addition, the modified infrastructure is used to systematically increase reliability. The approach to be developed shall identify and monitor faults that affect system reliability on different time scales, and mitigate these faults through prediction. Different reliability-reducing effects are addressed, such as radiation-induced soft errors, intermittent faults due to process and runtime variations, transistor aging, and electromigration. The goal is to provide runtime support for monitoring and improving reliability by modifying and reusing existing built-in self-test infrastructure at minimal cost.

since 07/2012, DFG project WU 245/13-1

The DFX Project

Project page: DFX

DFX is a logic synthesis tool and gate-level simulator for circuit descriptions in VHDL and other hardware description languages. In addition, DFX contains modern fault simulators and automatic test pattern generators for computer-aided testing of integrated circuits.

Completed Projects

DAAD Project VIGONI: Combining Fault Tolerance and Offline Test Strategies for Nanoscaled Electronics

Project page: Combining Fault Tolerance and Offline Test Strategies for Nanoscaled Electronics

Project partner: Dipartimento di Automatica e Informatica, Politecnico di Torino

01/2007 - 12/2009, DAAD/Vigoni project

IBM CAS Project: Improved Testing of VLSI Chips with Power Constraints



Project page: Improved Testing of VLSI Chips with Power Constraints

During test, the switching activity and thus the power consumption of a circuit are significantly increased, and their influence on test time, test reliability, and product reliability must be taken into account. In this project, new test planning methods for use with clock gating and power gating are investigated.

Project partners: IBM Deutschland Entwicklung, IBM CAS

10/2005 - 12/2009, IBM CAS project

Publications

My publications at www.meimhof.de (bibtex/pdf), Google Scholar Citations, DBLP

Journal and Conference Papers

29. On Covering Structural Defects in NoCs by Functional Tests
Dalirsani, A., Hatami, N., Imhof, M.E., Eggenberger, M., Schley, G., Radetzki, M. and Wunderlich, H.-J.
to appear in Proc. 23rd IEEE Asian Test Symposium (ATS'14), Hangzhou, China, Nov 16-19
2014
 
BibTeX:
@inproceedings{DalirHIESRW2014,
  author = {Dalirsani, Atefe and Hatami, Nadereh and Imhof, Michael E. and Eggenberger, Marcus and Schley, Gert and Radetzki, Martin and Wunderlich, Hans-Joachim},
  title = {On Covering Structural Defects in NoCs by Functional Tests},
  booktitle = {to appear in Proc. 23rd IEEE Asian Test Symposium (ATS'14)},
  year = {2014}
}
28. GUARD: GUAranteed Reliability in Dynamically Reconfigurable Systems
Zhang, H., Kochte, M.A., Imhof, M.E., Bauer, L., Wunderlich, H.-J. and Henkel, J.
Proc. 51st ACM/EDAC/IEEE Design Automation Conference (DAC'14), San Francisco, CA, USA, Jun 1-5, pp. 1-6
2014
DOI PDF 
Abstract: Soft errors are a reliability threat for reconfigurable systems implemented with SRAM-based FPGAs. They can be handled through fault tolerance techniques like scrubbing and modular redundancy. However, selecting these techniques statically at design or compile time tends to be pessimistic and prohibits optimal adaptation to changing soft error rate at runtime.
We present the GUARD method which allows for autonomous runtime reliability management in reconfigurable architectures: Based on the error rate observed during runtime, the runtime system dynamically determines whether a computation should be executed by a hardened processor, or whether it should be accelerated by inherently less reliable reconfigurable hardware which can trade off performance and reliability. GUARD is the first runtime system for reconfigurable architectures that guarantees a target reliability while optimizing the performance. This allows applications to dynamically choose the desired degree of reliability. Compared to related work with statically optimized fault tolerance techniques, GUARD provides up to 68.3% higher performance at the same target reliability.
BibTeX:
@inproceedings{ZhangKIBWH2014,
  author = {Zhang, Hongyan and Kochte, Michael A. and Imhof, Michael E. and Bauer, Lars and Wunderlich, Hans-Joachim and Henkel, Jörg},
  title = {GUARD: GUAranteed Reliability in Dynamically Reconfigurable Systems},
  booktitle = {Proc. 51st ACM/EDAC/IEEE Design Automation Conference (DAC'14)},
  year = {2014},
  pages = {1--6},
  abstract = {Soft errors are a reliability threat for reconfigurable systems implemented with SRAM-based FPGAs. They can be handled through fault tolerance techniques like scrubbing and modular redundancy. However, selecting these techniques statically at design or compile time tends to be pessimistic and prohibits optimal adaptation to changing soft error rate at runtime.
We present the GUARD method which allows for autonomous runtime reliability management in reconfigurable architectures: Based on the error rate observed during runtime, the runtime system dynamically determines whether a computation should be executed by a hardened processor, or whether it should be accelerated by inherently less reliable reconfigurable hardware which can trade off performance and reliability. GUARD is the first runtime system for reconfigurable architectures that guarantees a target reliability while optimizing the performance. This allows applications to dynamically choose the desired degree of reliability. Compared to related work with statically optimized fault tolerance techniques, GUARD provides up to 68.3% higher performance at the same target reliability.},
  doi = {http://dx.doi.org/10.1145/2593069.2593146},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/DAC_ZhangKIBWH2014.pdf}
}
27. Variation-Aware Deterministic ATPG
Sauer, M., Polian, I., Imhof, M.E., Mumtaz, A., Schneider, E., Czutro, A., Wunderlich, H.-J. and Becker, B.
Proc. 19th IEEE European Test Symposium (ETS'14), Paderborn, Germany, May 26-30, pp. 87-92
Best paper award
2014
DOI URL PDF 
Keywords: Variation-aware test, fault efficiency, ATPG
Abstract: In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.
BibTeX:
@inproceedings{SauerPIMSCWB2014,
  author = {Sauer, Matthias and Polian, Ilia and Imhof, Michael E. and Mumtaz, Abdullah and Schneider, Eric and Czutro, Alexander and Wunderlich, Hans-Joachim and Becker, Bernd},
  title = {Variation-Aware Deterministic ATPG},
  booktitle = {Proc. 19th IEEE European Test Symposium (ETS'14)},
  year = {2014},
  pages = {87--92},
  keywords = {Variation-aware test, fault efficiency, ATPG},
  abstract = {In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6847806},
  doi = {http://dx.doi.org/10.1109/ETS.2014.6847806},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/ETS_SauerPIMSCWB2014.pdf}
}
26. Structural Software-Based Self-Test of Network-on-Chip
Dalirsani, A., Imhof, M.E. and Wunderlich, H.-J.
Proc. 32nd IEEE VLSI Test Symposium (VTS'14), Napa, CA, USA, Apr 13-17
2014
DOI URL PDF 
Keywords: Network-on-Chip (NoC), Software-Based Self-Test (SBST), Automatic Test Pattern Generation (ATPG), Boolean Satisfiability (SAT)
Abstract: Software-Based Self-Test (SBST) is extended to the switches of complex Network-on-Chips (NoC). Test patterns for structural faults are turned into valid packets by using satisfiability (SAT) solvers. The test technique provides a high fault coverage for both manufacturing test and online test.
BibTeX:
@inproceedings{DalirIW2014,
  author = {Dalirsani, Atefe and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Structural Software-Based Self-Test of Network-on-Chip},
  booktitle = {Proc. 32nd IEEE VLSI Test Symposium (VTS'14)},
  year = {2014},
  keywords = {Network-on-Chip (NoC), Software-Based Self-Test (SBST), Automatic Test Pattern Generation (ATPG), Boolean Satisfiability (SAT)},
  abstract = {Software-Based Self-Test (SBST) is extended to the switches of complex Network-on-Chips (NoC). Test patterns for structural faults are turned into valid packets by using satisfiability (SAT) solvers. The test technique provides a high fault coverage for both manufacturing test and online test.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6818754},
  doi = {http://dx.doi.org/10.1109/VTS.2014.6818754},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/VTS_DalirIW2014.pdf}
}
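The core idea of the paper above, turning structural test patterns into valid packets, can be imitated in a few lines, with a brute-force search standing in for the SAT solver used in the actual work. The protocol rule and the care-bit encoding below are invented for this sketch only.

```python
from itertools import product

def is_valid_packet(bits):
    """Invented protocol rule: the two header bits must be '10'."""
    return bits[0] == 1 and bits[1] == 0

def applies_stimulus(bits, care_bits):
    """care_bits maps bit positions to values required by an ATPG pattern."""
    return all(bits[pos] == val for pos, val in care_bits.items())

def find_test_packet(care_bits, width=8):
    """Search the packet space for one that is protocol-valid AND applies
    the test stimulus; a SAT solver would do this symbolically instead
    of enumerating all 2^width candidates."""
    for bits in product((0, 1), repeat=width):
        if is_valid_packet(bits) and applies_stimulus(bits, care_bits):
            return bits
    return None  # fault not testable through valid packets

print(find_test_packet({3: 1, 5: 0}))  # → (1, 0, 0, 1, 0, 0, 0, 0)
print(find_test_packet({0: 0}))        # conflicts with the header → None
```

The `None` case mirrors the observation that some structural faults cannot be excited by any protocol-compliant traffic; the paper's SAT formulation proves this rather than exhausting the search space.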
25. Bit-Flipping Scan - A Unified Architecture for Fault Tolerance and Offline Test
Imhof, M.E. and Wunderlich, H.-J.
Proc. Design, Automation and Test in Europe (DATE'14), Dresden, Germany, Mar 24-28
2014
DOI URL PDF 
Keywords: Bit-Flipping Scan, Fault Tolerance, Test, Compaction, ATPG, Satisfiability
Abstract: Test is an essential task since the early days of digital circuits. Every produced chip undergoes at least a production test supported by on-chip test infrastructure to reduce test cost. Throughout the technology evolution fault tolerance gained importance and is now necessary in many applications to mitigate soft errors threatening consistent operation. While a variety of effective solutions exists to tackle both areas, test and fault tolerance are often implemented orthogonally, and hence do not exploit the potential synergies of a combined solution.
The unified architecture presented here facilitates fault tolerance and test by combining a checksum of the sequential state with the ability to flip arbitrary bits. Experimental results confirm a reduced area overhead compared to an orthogonal combination of classical test and fault tolerance schemes. In combination with heuristically generated test sequences, the test application time and test data volume are reduced significantly.
BibTeX:
@inproceedings{ImhofW2014,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Bit-Flipping Scan - A Unified Architecture for Fault Tolerance and Offline Test},
  booktitle = {Proc. Design, Automation and Test in Europe (DATE'14)},
  year = {2014},
  keywords = {Bit-Flipping Scan, Fault Tolerance, Test, Compaction, ATPG, Satisfiability},
  abstract = {Test is an essential task since the early days of digital circuits. Every produced chip undergoes at least a production test supported by on-chip test infrastructure to reduce test cost. Throughout the technology evolution fault tolerance gained importance and is now necessary in many applications to mitigate soft errors threatening consistent operation. While a variety of effective solutions exists to tackle both areas, test and fault tolerance are often implemented orthogonally, and hence do not exploit the potential synergies of a combined solution.
The unified architecture presented here facilitates fault tolerance and test by combining a checksum of the sequential state with the ability to flip arbitrary bits. Experimental results confirm a reduced area overhead compared to an orthogonal combination of classical test and fault tolerance schemes. In combination with heuristically generated test sequences, the test application time and test data volume are reduced significantly.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6800407},
  doi = {http://dx.doi.org/10.7873/DATE2014.206},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/DATE_ImhofW2014.pdf}
}
24. Synthesis of Workload Monitors for On-Line Stress Prediction
Baranowski, R., Cook, A., Imhof, M.E., Liu, C. and Wunderlich, H.-J.
Proc. 16th IEEE Symp. Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS), New York City, NY, USA, Oct 2-4, pp. 137-142
2013
DOI URL PDF 
Keywords: Reliability estimation, workload monitoring, aging prediction, NBTI
Abstract: Stringent reliability requirements call for monitoring mechanisms to account for circuit degradation throughout the complete system lifetime. In this work, we efficiently monitor the stress experienced by the system as a result of its current workload. To achieve this goal, we construct workload monitors that observe the most relevant subset of the circuit’s primary and pseudo-primary inputs and produce an accurate stress approximation. The proposed approach enables the timely adoption of suitable countermeasures to reduce or prevent any deviation from the intended circuit behavior. The relation between monitoring accuracy and hardware cost can be adjusted according to design requirements. Experimental results show the efficiency of the proposed approach for the prediction of stress induced by Negative Bias Temperature Instability (NBTI) in critical and near-critical paths of a digital circuit.
BibTeX:
@inproceedings{BaranCILW2013,
  author = {Baranowski, Rafal and Cook, Alejandro and Imhof, Michael E. and Liu, Chang and Wunderlich, Hans-Joachim},
  title = {Synthesis of Workload Monitors for On-Line Stress Prediction},
  booktitle = {Proc. 16th IEEE Symp. Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS)},
  year = {2013},
  pages = {137--142},
  keywords = {Reliability estimation, workload monitoring, aging prediction, NBTI},
  abstract = {Stringent reliability requirements call for monitoring mechanisms to account for circuit degradation throughout the complete system lifetime. In this work, we efficiently monitor the stress experienced by the system as a result of its current workload. To achieve this goal, we construct workload monitors that observe the most relevant subset of the circuit’s primary and pseudo-primary inputs and produce an accurate stress approximation. The proposed approach enables the timely adoption of suitable countermeasures to reduce or prevent any deviation from the intended circuit behavior. The relation between monitoring accuracy and hardware cost can be adjusted according to design requirements. Experimental results show the efficiency of the proposed approach for the prediction of stress induced by Negative Bias Temperature Instability (NBTI) in critical and near-critical paths of a digital circuit.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6653596},
  doi = {http://dx.doi.org/10.1109/DFT.2013.6653596},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2013/DFTS_BaranCILW2013.pdf}
}
23. Module Diversification: Fault Tolerance and Aging Mitigation for Runtime Reconfigurable Architectures
Zhang, H., Bauer, L., Kochte, M.A., Schneider, E., Braun, C., Imhof, M.E., Wunderlich, H.-J. and Henkel, J.
Proc. IEEE International Test Conference (ITC'13), Anaheim, CA, USA, Sep 10-12
2013
DOI URL PDF 
Keywords: Reliability, online test, fault-tolerance, aging mitigation, partial runtime reconfiguration, FPGA
Abstract: Runtime reconfigurable architectures based on Field-Programmable Gate Arrays (FPGAs) are attractive for realizing complex applications. However, being manufactured in latest semiconductor process technologies, FPGAs are increasingly prone to aging effects, which reduce the reliability of such systems and must be tackled by aging mitigation and application of fault tolerance techniques. This paper presents module diversification, a novel design method that creates different configurations for runtime reconfigurable modules. Our method provides fault tolerance by creating the minimal number of configurations such that for any faulty Configurable Logic Block (CLB) there is at least one configuration that does not use that CLB. Additionally, we determine the fraction of time that each configuration should be used to balance the stress and to mitigate the aging process in FPGA-based runtime reconfigurable systems. The generated configurations significantly improve reliability by fault-tolerance and aging mitigation.
BibTeX:
@inproceedings{ZhangBKSBIWH2013,
  author = {Zhang, Hongyan and Bauer, Lars and Kochte, Michael A. and Schneider, Eric and Braun, Claus and Imhof, Michael E. and Wunderlich, Hans-Joachim and Henkel, Jörg},
  title = {Module Diversification: Fault Tolerance and Aging Mitigation for Runtime Reconfigurable Architectures},
  booktitle = {Proc. IEEE International Test Conference (ITC'13)},
  year = {2013},
  keywords = {Reliability, online test, fault-tolerance, aging mitigation, partial runtime reconfiguration, FPGA},
  abstract = {Runtime reconfigurable architectures based on Field-Programmable Gate Arrays (FPGAs) are attractive for realizing complex applications. However, being manufactured in latest semiconductor process technologies, FPGAs are increasingly prone to aging effects, which reduce the reliability of such systems and must be tackled by aging mitigation and application of fault tolerance techniques. This paper presents module diversification, a novel design method that creates different configurations for runtime reconfigurable modules. Our method provides fault tolerance by creating the minimal number of configurations such that for any faulty Configurable Logic Block (CLB) there is at least one configuration that does not use that CLB. Additionally, we determine the fraction of time that each configuration should be used to balance the stress and to mitigate the aging process in FPGA-based runtime reconfigurable systems. The generated configurations significantly improve reliability by fault-tolerance and aging mitigation.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6651926},
  doi = {http://dx.doi.org/10.1109/TEST.2013.6651926},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2013/ITC_ZhangBKSBIWH2013.pdf}
}
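The covering property at the heart of module diversification above, that every CLB must be left unused by at least one configuration, can be sketched greedily. This construction is a simplified stand-in under the assumption that a module occupies `k_used` of `n_clbs` CLBs, with `k_used < n_clbs`; it is not the paper's algorithm.

```python
def diversify(n_clbs, k_used):
    """Return configurations (sets of used CLBs, k_used each) such that
    every CLB is left unused by at least one configuration."""
    assert k_used < n_clbs, "a configuration must leave at least one CLB free"
    configs = []
    uncovered = set(range(n_clbs))  # CLBs not yet avoided by any config
    while uncovered:
        # leave as many still-uncovered CLBs as possible unused in one config
        avoid = sorted(uncovered)[: n_clbs - k_used]
        used = set(sorted(set(range(n_clbs)) - set(avoid))[:k_used])
        configs.append(used)
        uncovered -= set(range(n_clbs)) - used  # these CLBs are now avoided once
    return configs

cfgs = diversify(n_clbs=6, k_used=4)
# every CLB is unused in at least one generated configuration
assert all(any(c not in cfg for cfg in cfgs) for c in range(6))
print(len(cfgs), cfgs)  # 3 configurations suffice here
```

Each pass avoids `n_clbs - k_used` not-yet-avoided CLBs, so roughly `ceil(n_clbs / (n_clbs - k_used))` configurations are produced; any faulty CLB then has a configuration that does not use it, and rotating among configurations also spreads stress for aging mitigation.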
22. Test Strategies for Reliable Runtime Reconfigurable Architectures
Bauer, L., Braun, C., Imhof, M.E., Kochte, M.A., Schneider, E., Zhang, H., Henkel, J. and Wunderlich, H.-J.
IEEE Transactions on Computers
Vol. 62(8), Los Alamitos, CA, USA, Aug, pp. 1494-1507
2013
DOI URL PDF 
Keywords: FPGA, Reconfigurable Architectures, Online Test
Abstract: FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. The reliability of FPGAs, being manufactured in latest technologies, is threatened by soft errors, as well as aging effects and latent defects. To ensure reliable reconfiguration, it is mandatory to guarantee the correct operation of the reconfigurable fabric. This can be achieved by periodic or on-demand online testing. This paper presents a reliable system architecture for runtime-reconfigurable systems, which integrates two non-concurrent online test strategies: Pre-configuration online tests (PRET) and post-configuration online tests (PORT). The PRET checks that the reconfigurable hardware is free of faults by periodic or on-demand tests. The PORT has two objectives: It tests reconfigured hardware units after reconfiguration to check that the configuration process completed correctly and it validates the expected functionality. During operation, PORT is used to periodically check the reconfigured hardware units for malfunctions in the programmable logic. Altogether, this paper presents PRET, PORT, and the system integration of such test schemes into a runtime-reconfigurable system, including the resource management and test scheduling. Experimental results show that the integration of online testing in reconfigurable systems incurs only minimal impact on performance while delivering high fault coverage and low test latency.
BibTeX:
@article{BauerBIKSZHW2013,
  author = {Bauer, Lars and Braun, Claus and Imhof, Michael E. and Kochte, Michael A. and Schneider, Eric and Zhang, Hongyan and Henkel, Jörg and Wunderlich, Hans-Joachim},
  title = {Test Strategies for Reliable Runtime Reconfigurable Architectures},
  journal = {IEEE Transactions on Computers},
  publisher = {IEEE Computer Society},
  year = {2013},
  volume = {62},
  number = {8},
  pages = {1494--1507},
  keywords = {FPGA, Reconfigurable Architectures, Online Test},
  abstract = {FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. The reliability of FPGAs, being manufactured in latest technologies, is threatened by soft errors, as well as aging effects and latent defects. To ensure reliable reconfiguration, it is mandatory to guarantee the correct operation of the reconfigurable fabric. This can be achieved by periodic or on-demand online testing. This paper presents a reliable system architecture for runtime-reconfigurable systems, which integrates two non-concurrent online test strategies: Pre-configuration online tests (PRET) and post-configuration online tests (PORT). The PRET checks that the reconfigurable hardware is free of faults by periodic or on-demand tests. The PORT has two objectives: It tests reconfigured hardware units after reconfiguration to check that the configuration process completed correctly and it validates the expected functionality. During operation, PORT is used to periodically check the reconfigured hardware units for malfunctions in the programmable logic. Altogether, this paper presents PRET, PORT, and the system integration of such test schemes into a runtime-reconfigurable system, including the resource management and test scheduling. Experimental results show that the integration of online testing in reconfigurable systems incurs only minimal impact on performance while delivering high fault coverage and low test latency.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6475939},
  doi = {http://dx.doi.org/10.1109/TC.2013.53},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2013/TC_BauerBIKSZHW2013.pdf}
}
21. Variation-Aware Fault Grading
Czutro, A., Imhof, M.E., Jiang, J., Mumtaz, A., Sauer, M., Becker, B., Polian, I. and Wunderlich, H.-J.
Proceedings of the 21st IEEE Asian Test Symposium (ATS'12), Niigata, Japan, November 19-22, pp. 344-349
2012
DOI PDF 
Keywords: process variations, fault grading, Monte-Carlo, fault simulation, SAT-based, ATPG, GPGPU
Abstract: An iterative flow to generate test sets providing high fault coverage under extreme parameter variations is presented. The generation is guided by the novel metric of circuit coverage, calculated by massively parallel statistical fault simulation on GPGPUs. Experiments show that the statistical fault coverage of the generated test sets exceeds by far that achieved by standard approaches.
BibTeX:
@inproceedings{CzutrIJMSBPW2012,
  author = {Czutro, A. and Imhof, Michael E. and Jiang, J. and Mumtaz, Abdullah and Sauer, M. and Becker, Bernd and Polian, Ilia and Wunderlich, Hans-Joachim},
  title = {Variation-Aware Fault Grading},
  booktitle = {Proceedings of the 21st IEEE Asian Test Symposium (ATS'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {344--349},
  keywords = {process variations, fault grading, Monte-Carlo, fault simulation, SAT-based, ATPG, GPGPU},
  abstract = {An iterative flow to generate test sets providing high fault coverage under extreme parameter variations is presented. The generation is guided by the novel metric of circuit coverage, calculated by massively parallel statistical fault simulation on GPGPUs. Experiments show that the statistical fault coverage of the generated test sets exceeds by far that achieved by standard approaches.},
  doi = {http://dx.doi.org/10.1109/ATS.2012.14},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/ATS_CzutrIJMSBPW2012.pdf}
}
20. Transparent Structural Online Test for Reconfigurable Systems
Abdelfattah, M.S., Bauer, L., Braun, C., Imhof, M.E., Kochte, M.A., Zhang, H., Henkel, J. and Wunderlich, H.-J.
Proceedings of the 18th IEEE International On-Line Testing Symposium (IOLTS'12), Sitges, Spain, June 27-29, pp. 37-42
2012
DOI PDF 
Keywords: FPGA; Reconfigurable Architectures; Online Test
Abstract: FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. However, the reliability of modern FPGAs is threatened by latent defects and aging effects. Hence, it is mandatory to ensure the reliable operation of the FPGA’s reconfigurable fabric. This can be achieved by periodic or on-demand online testing. In this paper, a system-integrated, transparent structural online test method for runtime reconfigurable systems is proposed. The required tests are scheduled like functional workloads, and thorough optimizations of the test overhead reduce the performance impact. The proposed scheme has been implemented on a reconfigurable system. The results demonstrate that thorough testing of the reconfigurable fabric can be achieved at negligible performance impact on the application.
BibTeX:
@inproceedings{AbdelBBIKZHW2012,
  author = {Abdelfattah, Mohamed S. and Bauer, Lars and Braun, Claus and Imhof, Michael E. and Kochte, Michael A. and Zhang, Hongyan and Henkel, Jörg and Wunderlich, Hans-Joachim},
  title = {Transparent Structural Online Test for Reconfigurable Systems},
  booktitle = {Proceedings of the 18th IEEE International On-Line Testing Symposium (IOLTS'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {37--42},
  keywords = {FPGA; Reconfigurable Architectures; Online Test},
  abstract = {FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. However, the reliability of modern FPGAs is threatened by latent defects and aging effects. Hence, it is mandatory to ensure the reliable operation of the FPGA’s reconfigurable fabric. This can be achieved by periodic or on-demand online testing. In this paper, a system-integrated, transparent structural online test method for runtime reconfigurable systems is proposed. The required tests are scheduled like functional workloads, and thorough optimizations of the test overhead reduce the performance impact. The proposed scheme has been implemented on a reconfigurable system. The results demonstrate that thorough testing of the reconfigurable fabric can be achieved at negligible performance impact on the application.},
  doi = {http://dx.doi.org/10.1109/IOLTS.2012.6313838},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/IOLTS_AbdelBBIKZHW2012.pdf}
}
19. OTERA: Online Test Strategies for Reliable Reconfigurable Architectures
Bauer, L., Braun, C., Imhof, M.E., Kochte, M.A., Zhang, H., Wunderlich, H.-J. and Henkel, J.
Proceedings of the NASA/ESA Conference on Adaptive Hardware and Systems (AHS'12), Erlangen, Germany, June 25-28, pp. 38-45
2012
DOI PDF 
Abstract: FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. However, the reliability of FPGAs, which are manufactured in the latest technologies, is threatened not only by soft errors, but also by aging effects and latent defects. To ensure reliable reconfiguration, it is mandatory to guarantee the correct operation of the underlying reconfigurable fabric. This can be achieved by periodic or on-demand online testing. The OTERA project develops and evaluates components and strategies for reconfigurable systems that feature reliable reconfiguration. The research focus ranges from structural online tests for the FPGA infrastructure and functional online tests for the configured functionality up to the resource management and test scheduling. This paper gives an overview of the project tasks and presents first results.
BibTeX:
@inproceedings{BauerBIKZWH2012,
  author = {Bauer, Lars and Braun, Claus and Imhof, Michael E. and Kochte, Michael A. and Zhang, Hongyan and Wunderlich, Hans-Joachim and Henkel, Jörg},
  title = {OTERA: Online Test Strategies for Reliable Reconfigurable Architectures},
  booktitle = {Proceedings of the NASA/ESA Conference on Adaptive Hardware and Systems (AHS'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {38--45},
  abstract = {FPGA-based reconfigurable systems allow the online adaptation to dynamically changing runtime requirements. However, the reliability of FPGAs, which are manufactured in the latest technologies, is threatened not only by soft errors, but also by aging effects and latent defects. To ensure reliable reconfiguration, it is mandatory to guarantee the correct operation of the underlying reconfigurable fabric. This can be achieved by periodic or on-demand online testing. The OTERA project develops and evaluates components and strategies for reconfigurable systems that feature reliable reconfiguration. The research focus ranges from structural online tests for the FPGA infrastructure and functional online tests for the configured functionality up to the resource management and test scheduling. This paper gives an overview of the project tasks and presents first results.},
  doi = {http://dx.doi.org/10.1109/AHS.2012.6268667},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/AHS_BauerBIKZWH2012.pdf}
}
18. Built-in Self-Diagnosis Targeting Arbitrary Defects with Partial Pseudo-Exhaustive Test
Cook, A., Hellebrand, S., Imhof, M.E., Mumtaz, A. and Wunderlich, H.-J.
Proceedings of the 13th IEEE Latin-American Test Workshop (LATW'12), Quito, Ecuador, April 10-13, pp. 1-4
2012
DOI PDF 
Keywords: Built-in Self-Test; Pseudo-Exhaustive Test; Built-in Self-Diagnosis
Abstract: Pseudo-exhaustive test completely verifies all output functions of a combinational circuit, which provides a high coverage of non-target faults and allows an efficient on-chip implementation. To avoid long test times caused by large output cones, partial pseudo-exhaustive test (P-PET) has been proposed recently. Here only cones with a limited number of inputs are tested exhaustively, and the remaining faults are targeted with deterministic patterns. Using P-PET patterns for built-in diagnosis, however, is challenging because of the large amount of associated response data. This paper presents a built-in diagnosis scheme which only relies on sparsely distributed data in the response sequence, but still preserves the benefits of P-PET.
BibTeX:
@inproceedings{CookHIMW2012,
  author = {Cook, Alejandro and Hellebrand, Sybille and Imhof, Michael E. and Mumtaz, Abdullah and Wunderlich, Hans-Joachim},
  title = {Built-in Self-Diagnosis Targeting Arbitrary Defects with Partial Pseudo-Exhaustive Test},
  booktitle = {Proceedings of the 13th IEEE Latin-American Test Workshop (LATW'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {1--4},
  keywords = {Built-in Self-Test; Pseudo-Exhaustive Test; Built-in Self-Diagnosis},
  abstract = {Pseudo-exhaustive test completely verifies all output functions of a combinational circuit, which provides a high coverage of non-target faults and allows an efficient on-chip implementation. To avoid long test times caused by large output cones, partial pseudo-exhaustive test (P-PET) has been proposed recently. Here only cones with a limited number of inputs are tested exhaustively, and the remaining faults are targeted with deterministic patterns. Using P-PET patterns for built-in diagnosis, however, is challenging because of the large amount of associated response data. This paper presents a built-in diagnosis scheme which only relies on sparsely distributed data in the response sequence, but still preserves the benefits of P-PET.},
  doi = {http://dx.doi.org/10.1109/LATW.2012.6261229},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/LATW_CookHIMW2012.pdf}
}
17. A Pseudo-Dynamic Comparator for Error Detection in Fault Tolerant Architectures
Tran, D.A., Virazel, A., Bosio, A., Dilillo, L., Girard, P., Todri, A., Imhof, M.E. and Wunderlich, H.-J.
Proceedings of the 30th IEEE VLSI Test Symposium (VTS'12), Hyatt Maui, Hawaii, USA, April 23-26, pp. 50-55
2012
DOI PDF 
Keywords: Soft error; Timing error; Fault tolerance; Duplication; Comparison; Power consumption
Abstract: Although CMOS technology scaling offers many advantages, it suffers from robustness problems caused by hard, soft and timing errors. The robustness of future CMOS technology nodes must be improved and the use of fault tolerant architectures is probably the most viable solution. In this context, the Duplication/Comparison scheme is widely used for error detection. Traditionally, this scheme uses a static comparator structure that detects hard errors. However, it is not effective for soft and timing error detection due to the possible masking of glitches by the comparator itself. To solve this problem, we propose a pseudo-dynamic comparator architecture that combines a dynamic CMOS transition detector and a static comparator. Experimental results show that the proposed comparator detects not only hard errors but also small glitches related to soft and timing errors. Moreover, its dynamic characteristics allow reducing the power consumption while keeping an equivalent silicon area compared to a static comparator. This study is the first step towards a full fault tolerant approach targeting robustness improvement of CMOS logic circuits.
BibTeX:
@inproceedings{TranVBDGTIW2012,
  author = {Tran, Duc Anh and Virazel, Arnaud and Bosio, Alberto and Dilillo, Luigi and Girard, Patrick and Todri, Aida and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {A Pseudo-Dynamic Comparator for Error Detection in Fault Tolerant Architectures},
  booktitle = {Proceedings of the 30th IEEE VLSI Test Symposium (VTS'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {50--55},
  keywords = {Soft error; Timing error; Fault tolerance; Duplication; Comparison; Power consumption},
  abstract = {Although CMOS technology scaling offers many advantages, it suffers from robustness problems caused by hard, soft and timing errors. The robustness of future CMOS technology nodes must be improved and the use of fault tolerant architectures is probably the most viable solution. In this context, the Duplication/Comparison scheme is widely used for error detection. Traditionally, this scheme uses a static comparator structure that detects hard errors. However, it is not effective for soft and timing error detection due to the possible masking of glitches by the comparator itself. To solve this problem, we propose a pseudo-dynamic comparator architecture that combines a dynamic CMOS transition detector and a static comparator. Experimental results show that the proposed comparator detects not only hard errors but also small glitches related to soft and timing errors. Moreover, its dynamic characteristics allow reducing the power consumption while keeping an equivalent silicon area compared to a static comparator. This study is the first step towards a full fault tolerant approach targeting robustness improvement of CMOS logic circuits.},
  doi = {http://dx.doi.org/10.1109/VTS.2012.6231079},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/VTS_TranVBDGTIW2012.pdf}
}
16. Embedded Test for Highly Accurate Defect Localization
Mumtaz, A., Imhof, M.E., Holst, S. and Wunderlich, H.-J.
Proceedings of the 20th IEEE Asian Test Symposium (ATS'11), New Delhi, India, November 21-23, pp. 213-218
2011
DOI PDF 
Keywords: BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug
Abstract: Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random (PR) patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.
BibTeX:
@inproceedings{MumtaIHW2011,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Holst, Stefan and Wunderlich, Hans-Joachim},
  title = {Embedded Test for Highly Accurate Defect Localization},
  booktitle = {Proceedings of the 20th IEEE Asian Test Symposium (ATS'11)},
  publisher = {IEEE Computer Society},
  year = {2011},
  pages = {213--218},
  keywords = {BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug},
  abstract = {Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random (PR) patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.},
  doi = {http://dx.doi.org/10.1109/ATS.2011.60},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2011/ATS_MumtaIHW2011.pdf}
}
15. Efficient Multi-level Fault Simulation of HW/SW Systems for Structural Faults
Baranowski, R., Di Carlo, S., Hatami, N., Imhof, M.E., Kochte, M.A., Prinetto, P., Wunderlich, H.-J. and Zöllin, C.G.
SCIENCE CHINA Information Sciences
Vol. 54(9), September, pp. 1784-1796
2011
DOI PDF 
Keywords: fault simulation; multi-level; transaction-level modeling
Abstract: In recent technology nodes, reliability is increasingly considered a part of the standard design flow to be taken into account at all levels of embedded systems design. While traditional fault simulation techniques based on low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to properly cope with the complexity of modern embedded systems. Moreover, they do not allow for early exploration of design alternatives when a detailed model of the whole system is not yet available, which is highly required to increase the efficiency and quality of the design flow. Multi-level models that combine the simulation efficiency of high abstraction models with the accuracy of low-level models are therefore essential to efficiently evaluate the impact of physical defects on the system. This paper proposes a methodology to efficiently implement concurrent multi-level fault simulation across gate- and transaction-level models in an integrated simulation environment. It leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This combination of different models allows accurate evaluation of the impact of faults on the entire hardware/software system while keeping the computational effort low. Moreover, since only selected portions of the system require low-level models, early exploration of different design alternatives is efficiently supported. Experimental results obtained from three case studies are presented to demonstrate the high accuracy of the proposed method when compared with a standard gate/RT mixed-level approach and the strong improvement of simulation time, which is reduced by four orders of magnitude on average.
BibTeX:
@article{BaranDHIKPWZ2011,
  author = {Baranowski, Rafal and Di Carlo, Stefano and Hatami, Nadereh and Imhof, Michael E. and Kochte, Michael A. and Prinetto, Paolo and Wunderlich, Hans-Joachim and Zöllin, Christian G.},
  title = {Efficient Multi-level Fault Simulation of HW/SW Systems for Structural Faults},
  journal = {SCIENCE CHINA Information Sciences},
  publisher = {Science China Press, co-published with Springer-Verlag},
  year = {2011},
  volume = {54},
  number = {9},
  pages = {1784--1796},
  keywords = {fault simulation; multi-level; transaction-level modeling},
  abstract = {In recent technology nodes, reliability is increasingly considered a part of the standard design flow to be taken into account at all levels of embedded systems design. While traditional fault simulation techniques based on low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to properly cope with the complexity of modern embedded systems. Moreover, they do not allow for early exploration of design alternatives when a detailed model of the whole system is not yet available, which is highly required to increase the efficiency and quality of the design flow. Multi-level models that combine the simulation efficiency of high abstraction models with the accuracy of low-level models are therefore essential to efficiently evaluate the impact of physical defects on the system. This paper proposes a methodology to efficiently implement concurrent multi-level fault simulation across gate- and transaction-level models in an integrated simulation environment. It leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This combination of different models allows accurate evaluation of the impact of faults on the entire hardware/software system while keeping the computational effort low. Moreover, since only selected portions of the system require low-level models, early exploration of different design alternatives is efficiently supported. Experimental results obtained from three case studies are presented to demonstrate the high accuracy of the proposed method when compared with a standard gate/RT mixed-level approach and the strong improvement of simulation time, which is reduced by four orders of magnitude on average.},
  doi = {http://dx.doi.org/10.1007/s11432-011-4366-9},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/SCIS_BaranDHIKPWZ2011.pdf}
}
14. Korrektur transienter Fehler in eingebetteten Speicherelementen
Imhof, M.E. and Wunderlich, H.-J.
5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11), Hamburg-Harburg, Germany, September 27-29, pp. 76-83
2011
URL PDF 
Keywords: Transiente Fehler; Soft Error; Single Event Upset (SEU); Erkennung; Lokalisierung; Korrektur; Latch; Register; Single Event Effect; Detection; Localization; Correction
Abstract: In der vorliegenden Arbeit wird ein Schema zur Korrektur von transienten Fehlern in eingebetteten, pegelgesteuerten Speicherelementen vorgestellt. Das Schema verwendet Struktur- und Informationsredundanz, um Single Event Upsets (SEUs) in Registern zu erkennen und zu korrigieren. Mit geringem Mehraufwand kann ein betroffenes Bit lokalisiert und mit einem hier vorgestellten Bit-Flipping-Latch (BFL) rückgesetzt werden, so dass die Zahl zusätzlicher Taktzyklen im Fehlerfall minimiert wird. Ein Vergleich mit anderen Erkennungs- und Korrekturschemata zeigt einen deutlich reduzierten Hardwaremehraufwand.

In this paper a soft error correction scheme for embedded level sensitive storage elements is presented. The scheme employs structural- and information-redundancy to detect and correct Single Event Upsets (SEUs) in registers. With low additional hardware overhead the affected bit can be localized and reset with the presented Bit-Flipping-Latch (BFL), thereby minimizing the amount of additional clock cycles in the faulty case. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.

BibTeX:
@inproceedings{ImhofW2011,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Korrektur transienter Fehler in eingebetteten Speicherelementen},
  booktitle = {5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)},
  publisher = {VDE VERLAG GMBH},
  year = {2011},
  volume = {231},
  pages = {76--83},
  keywords = {Transiente Fehler; Soft Error; Single Event Upset (SEU); Erkennung; Lokalisierung; Korrektur; Latch; Register; Single Event Effect; Detection; Localization; Correction},
  abstract = {In der vorliegenden Arbeit wird ein Schema zur Korrektur von transienten Fehlern in eingebetteten, pegelgesteuerten Speicherelementen vorgestellt. Das Schema verwendet Struktur- und Informationsredundanz, um Single Event Upsets (SEUs) in Registern zu erkennen und zu korrigieren. Mit geringem Mehraufwand kann ein betroffenes Bit lokalisiert und mit einem hier vorgestellten Bit-Flipping-Latch (BFL) rückgesetzt werden, so dass die Zahl zusätzlicher Taktzyklen im Fehlerfall minimiert wird. Ein Vergleich mit anderen Erkennungs- und Korrekturschemata zeigt einen deutlich reduzierten Hardwaremehraufwand.

In this paper a soft error correction scheme for embedded level sensitive storage elements is presented. The scheme employs structural- and information-redundancy to detect and correct Single Event Upsets (SEUs) in registers. With low additional hardware overhead the affected bit can be localized and reset with the presented Bit-Flipping-Latch (BFL), thereby minimizing the amount of additional clock cycles in the faulty case. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.},
  url = {http://www.vde-verlag.de/proceedings-de/453357010.html},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/ZuE_ImhofW2011.pdf}
}

13. Eingebetteter Test zur hochgenauen Defekt-Lokalisierung
Mumtaz, A., Imhof, M.E., Holst, S. and Wunderlich, H.-J.
5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11), Hamburg-Harburg, Germany, September 27-29, pp. 43-47
2011
URL PDF 
Keywords: Eingebetteter Selbsttest; Pseudoerschöpfender Test; Diagnose; Debug; BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug
Abstract: Moderne Diagnosealgorithmen können aus den vorhandenen Fehlerdaten direkt die defekte Schaltungsstruktur identifizieren, ohne sich auf spezialisierte Fehlermodelle zu beschränken. Solche Algorithmen benötigen jedoch Testmuster mit einer hohen Defekterfassung. Dies ist insbesondere im eingebetteten Test eine große Herausforderung. Der Partielle Pseudo-Erschöpfende Test (P-PET) ist eine Methode, um die Defekterfassung im Vergleich zu einem Zufallstest oder einem deterministischen Test für das Haftfehlermodell zu erhöhen. Wird die im eingebetteten Test übliche Phase der vorgeschalteten Erzeugung von Pseudozufallsmustern durch die Erzeugung partieller pseudo-erschöpfender Muster ersetzt, kann bei vergleichbarem Hardware-Aufwand und gleicher Testzeit eine optimale Defekterfassung für den größten Schaltungsteil erreicht werden. Diese Arbeit kombiniert zum ersten Mal P-PET mit einem fehlermodell-unabhängigen Diagnosealgorithmus und zeigt, dass sich beliebige Defekte im Mittel wesentlich präziser diagnostizieren lassen als mit Zufallsmustern oder einem deterministischen Test für Haftfehler.

Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.

BibTeX:
@inproceedings{MumtaIHW2011a,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Holst, Stefan and Wunderlich, Hans-Joachim},
  title = {Eingebetteter Test zur hochgenauen Defekt-Lokalisierung},
  booktitle = {5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)},
  publisher = {VDE VERLAG GMBH},
  year = {2011},
  volume = {231},
  pages = {43--47},
  keywords = {Eingebetteter Selbsttest; Pseudoerschöpfender Test; Diagnose; Debug; BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug},
  abstract = {Moderne Diagnosealgorithmen können aus den vorhandenen Fehlerdaten direkt die defekte Schaltungsstruktur identifizieren, ohne sich auf spezialisierte Fehlermodelle zu beschränken. Solche Algorithmen benötigen jedoch Testmuster mit einer hohen Defekterfassung. Dies ist insbesondere im eingebetteten Test eine große Herausforderung. Der Partielle Pseudo-Erschöpfende Test (P-PET) ist eine Methode, um die Defekterfassung im Vergleich zu einem Zufallstest oder einem deterministischen Test für das Haftfehlermodell zu erhöhen. Wird die im eingebetteten Test übliche Phase der vorgeschalteten Erzeugung von Pseudozufallsmustern durch die Erzeugung partieller pseudo-erschöpfender Muster ersetzt, kann bei vergleichbarem Hardware-Aufwand und gleicher Testzeit eine optimale Defekterfassung für den größten Schaltungsteil erreicht werden. Diese Arbeit kombiniert zum ersten Mal P-PET mit einem fehlermodell-unabhängigen Diagnosealgorithmus und zeigt, dass sich beliebige Defekte im Mittel wesentlich präziser diagnostizieren lassen als mit Zufallsmustern oder einem deterministischen Test für Haftfehler.

Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.},
  url = {http://www.vde-verlag.de/proceedings-de/453357010.html},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/ZuE_MumtaIHW2011a.pdf}
}

12. P-PET: Partial Pseudo-Exhaustive Test for High Defect Coverage
Mumtaz, A., Imhof, M.E. and Wunderlich, H.-J.
Proceedings of the IEEE International Test Conference (ITC'11), Anaheim, California, USA, September 18-23
2011
PDF 
Keywords: BIST; Pseudo-Exhaustive Testing; Defect Coverage; N-Detect
Abstract: Pattern generation for embedded testing often consists of a phase generating random patterns and a second phase where deterministic patterns are applied. This paper presents a method which optimizes the first phase significantly and increases the defect coverage, while reducing the number of deterministic patterns required in the second phase.
The method is based on the concept of pseudo-exhaustive testing (PET), which was proposed as a method for fault model independent testing with high defect coverage. As its test length can grow exponentially with the circuit size, an application to larger circuits is usually impractical.
In this paper, partial pseudo-exhaustive testing (P-PET) is presented as a synthesis technique for multiple polynomial feedback shift registers. It scales with current technology and is comparable to the usual pseudo-random (PR) pattern testing regarding test costs and test application time. The advantages with respect to the defect coverage, N-detectability for stuck-at faults and the reduction of deterministic test lengths are shown using state-of-the-art industrial circuits.
BibTeX:
@inproceedings{MumtaIW2011,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {P-PET: Partial Pseudo-Exhaustive Test for High Defect Coverage},
  booktitle = {Proceedings of the IEEE International Test Conference (ITC'11)},
  publisher = {IEEE Computer Society},
  year = {2011},
  keywords = {BIST; Pseudo-Exhaustive Testing; Defect Coverage; N-Detect},
  abstract = {Pattern generation for embedded testing often consists of a phase generating random patterns and a second phase where deterministic patterns are applied. This paper presents a method which optimizes the first phase significantly and increases the defect coverage, while reducing the number of deterministic patterns required in the second phase.
The method is based on the concept of pseudo-exhaustive testing (PET), which was proposed as a method for fault model independent testing with high defect coverage. As its test length can grow exponentially with the circuit size, an application to larger circuits is usually impractical.
In this paper, partial pseudo-exhaustive testing (P-PET) is presented as a synthesis technique for multiple polynomial feedback shift registers. It scales with current technology and is comparable to the usual pseudo-random (PR) pattern testing regarding test costs and test application time. The advantages with respect to the defect coverage, N-detectability for stuck-at faults and the reduction of deterministic test lengths are shown using state-of-the-art industrial circuits.},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/ITC_MumtazIW2011.pdf}
}
11. Soft Error Correction in Embedded Storage Elements
Imhof, M.E. and Wunderlich, H.-J.
Proceedings of the 17th IEEE International On-Line Testing Symposium (IOLTS'11), Athens, Greece, July 13-15, pp. 169-174
2011
DOI PDF 
Keywords: Single Event Effect; Correction; Latch; Register
Abstract: In this paper a soft error correction scheme for embedded storage elements in level sensitive designs is presented. It employs space redundancy to detect and locate Single Event Upsets (SEUs). It is able to detect SEUs in registers and employ architectural replay to perform correction with low additional hardware overhead. Together with the proposed bit flipping latch an online correction can be implemented on bit level with a minimal loss of clock cycles. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.
BibTeX:
@inproceedings{ImhofW2011a,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Soft Error Correction in Embedded Storage Elements},
  booktitle = {Proceedings of the 17th IEEE International On-Line Testing Symposium (IOLTS'11)},
  publisher = {IEEE Computer Society},
  year = {2011},
  pages = {169--174},
  keywords = {Single Event Effect; Correction; Latch; Register},
  abstract = {In this paper a soft error correction scheme for embedded storage elements in level sensitive designs is presented. It employs space redundancy to detect and locate Single Event Upsets (SEUs). It is able to detect SEUs in registers and employ architectural replay to perform correction with low additional hardware overhead. Together with the proposed bit flipping latch an online correction can be implemented on bit level with a minimal loss of clock cycles. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.},
  doi = {http://dx.doi.org/10.1109/IOLTS.2011.5993832},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/IOLTS_ImhofW2011.pdf}
}
10. Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level
Kochte, M.A., Zöllin, C.G., Baranowski, R., Imhof, M.E., Wunderlich, H.-J., Hatami, N., Di Carlo, S. and Prinetto, P.
Proceedings of the IEEE 19th Asian Test Symposium (ATS'10), Shanghai, China, December 1-4, pp. 3-8
2010
DOI URL PDF 
Keywords: Fault simulation; multi-level; transaction-level modeling
Abstract: In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented and the method is compared to a standard gate/RT mixed-level approach.
BibTeX:
@inproceedings{KochtZBIWHDP2010b,
  author = {Kochte, Michael A. and Zöllin, Christian G. and Baranowski, Rafal and Imhof, Michael E. and Wunderlich, Hans-Joachim and Hatami, Nadereh and Di Carlo, Stefano and Prinetto, Paolo},
  title = {Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level},
  booktitle = {Proceedings of the IEEE 19th Asian Test Symposium (ATS'10)},
  publisher = {IEEE Computer Society},
  year = {2010},
  pages = {3--8},
  keywords = {Fault simulation; multi-level; transaction-level modeling},
  abstract = {In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented and the method is compared to a standard gate/RT mixed-level approach.},
  url = {http://www.computer.org/csdl/proceedings/ats/2010/4248/00/4248a003-abs.html},
  doi = {http://dx.doi.org/10.1109/ATS.2010.10},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2010/ATS_KochtZBIWHDP2010.pdf}
}
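The multi-level flow above builds on conventional structural fault simulation at gate level. As a toy illustration of that building block (circuit, nets, and names are invented here, not taken from the paper), a stuck-at fault simulator compares the good and faulty responses of a small NAND netlist for every input pattern:

```python
from itertools import product

# Toy stuck-at fault simulator (illustrative only). A net is faulty if it is
# forced to a constant value; a pattern detects the fault when the good and
# faulty circuit responses differ at the output.

NETLIST = {                     # net -> (gate, operand nets), topological order
    "n1": ("NAND", ("a", "b")),
    "n2": ("NAND", ("b", "c")),
    "out": ("NAND", ("n1", "n2")),
}
INPUTS = ("a", "b", "c")

def nand(x: int, y: int) -> int:
    return 1 - (x & y)

def simulate(pattern, fault=None):
    """Evaluate the netlist; fault is (net, stuck_at_value) or None."""
    vals = dict(zip(INPUTS, pattern))
    if fault and fault[0] in vals:          # fault on a primary input
        vals[fault[0]] = fault[1]
    for net, (gate, ops) in NETLIST.items():
        v = nand(*(vals[o] for o in ops))
        if fault and net == fault[0]:       # fault on an internal net
            v = fault[1]
        vals[net] = v
    return vals["out"]

def detected_by(fault):
    """All input patterns whose response differs under the fault."""
    return [p for p in product((0, 1), repeat=len(INPUTS))
            if simulate(p) != simulate(p, fault)]
```

In the paper's setting, such gate-level simulation of one module is coupled with a transaction-level model of the rest of the system instead of being run exhaustively as here.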
9. System reliability evaluation using concurrent multi-level simulation of structural faults
Kochte, M.A., Zöllin, C.G., Baranowski, R., Imhof, M.E., Wunderlich, H.-J., Hatami, N., Di Carlo, S. and Prinetto, P.
IEEE International Test Conference (ITC'10), Austin, Texas, USA, October 31-November 5
2010
DOI PDF 
Abstract: This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system.
BibTeX:
@inproceedings{KochtZBIWHDP2010,
  author = {Kochte, Michael A. and Zöllin, Christian G. and Baranowski, Rafal and Imhof, Michael E. and Wunderlich, Hans-Joachim and Hatami, Nadereh and Di Carlo, Stefano and Prinetto, Paolo},
  title = {System reliability evaluation using concurrent multi-level simulation of structural faults},
  booktitle = {IEEE International Test Conference (ITC'10)},
  publisher = {IEEE Computer Society},
  year = {2010},
  abstract = {This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system.},
  doi = {http://dx.doi.org/10.1109/TEST.2010.5699309},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2010/ITC_KochtZBIWHDP2010.pdf}
}
8. Effiziente Simulation von strukturellen Fehlern für die Zuverlässigkeitsanalyse auf Systemebene
Kochte, M.A., Zöllin, C.G., Baranowski, R., Imhof, M.E., Wunderlich, H.-J., Hatami, N., Di Carlo, S. and Prinetto, P.
4. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'10), Wildbad Kreuth, Germany, September 13-15, pp. 25-32
2010
URL PDF 
Keywords: Transaction-level modeling; cross-level fault simulation
Abstract: In current process technology, reliability has to be considered in all design steps of embedded systems. Methods that use only models at low abstraction levels, such as gate or register-transfer level, offer high accuracy but are too inefficient to analyze complex hardware/software systems. Cross-level methods that also support high abstraction are needed to evaluate the impact of defects on the system efficiently. This work presents a method that combines state-of-the-art techniques for the efficient simulation of structural faults with system modeling at transaction level. In this way, a precise assessment of the fault impact on the entire hardware/software system is possible. The results of a case study of a hardware/software system for data encryption and image compression are discussed, and the method is compared with a standard fault-injection approach.
BibTeX:
@inproceedings{KochtZBIWHDP2010a,
  author = {Kochte, Michael A. and Zöllin, Christian G. and Baranowski, Rafal and Imhof, Michael E. and Wunderlich, Hans-Joachim and Hatami, Nadereh and Di Carlo, Stefano and Prinetto, Paolo},
  title = {Effiziente Simulation von strukturellen Fehlern für die Zuverlässigkeitsanalyse auf Systemebene},
  booktitle = {4. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'10)},
  publisher = {VDE VERLAG GMBH},
  year = {2010},
  volume = {66},
  pages = {25--32},
  keywords = {Transaktionsebenen-Modellierung; Ebenenübergreifende Fehlersimulation},
  abstract = {In aktueller Prozesstechnologie muss die Zuverlässigkeit in allen Entwurfsschritten von eingebetteten Systemen betrachtet werden. Methoden, die nur Modelle auf unteren Abstraktionsebenen, wie Gatter- oder Registertransferebene, verwenden, bieten zwar eine hohe Genauigkeit, sind aber zu ineffizient, um komplexe Hardware/Software-Systeme zu analysieren. Hier werden ebenenübergreifende Verfahren benötigt, die auch hohe Abstraktion unterstützen, um effizient die Auswirkungen von Defekten im System bewerten zu können. Diese Arbeit stellt eine Methode vor, die aktuelle Techniken für die effiziente Simulation von strukturellen Fehlern mit Systemmodellierung auf Transaktionsebene kombiniert. Auf diese Weise ist es möglich, eine präzise Bewertung der Fehlerauswirkung auf das gesamte Hardware/Software-System durchzuführen. Die Ergebnisse einer Fallstudie eines Hardware/Software-Systems zur Datenverschlüsselung und Bildkompression werden diskutiert und die Methode wird mit einem Standard-Fehlerinjektionsverfahren verglichen.},
  url = {http://www.vde-verlag.de/proceedings-de/453299003.html},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2010/ZuE_KochtZBIWHCP2010.pdf}
}
7. Test Exploration and Validation Using Transaction Level Models
Kochte, M.A., Zöllin, C.G., Imhof, M.E., Salimi Khaligh, R., Radetzki, M., Wunderlich, H.-J., Di Carlo, S. and Prinetto, P.
Proceedings of the Conference on Design, Automation and Test in Europe (DATE'09), Nice, France, April 20-24, pp. 1250-1253
2009
URL PDF 
Keywords: Test of systems-on-chip; design-for-test, transaction level modeling
Abstract: The complexity of the test infrastructure and test strategies in systems-on-chip approaches the complexity of the functional design space. This paper presents test design space exploration and validation of test strategies and schedules using transaction level models (TLMs). All aspects of the test infrastructure such as test access mechanisms, test wrappers, test data compression and test controllers are modeled at transaction level. Since many aspects of testing involve the transfer of a significant amount of test stimuli and responses, the communication-centric view of TLMs suits this purpose exceptionally well. A case study shows how TLMs can be used to efficiently evaluate DfT decisions in early design steps and how to evaluate test scheduling and resource partitioning during test planning. The presented approach has significantly higher simulation efficiency than RTL and gate level approaches.
BibTeX:
@inproceedings{KochtZISRWDP2009,
  author = {Kochte, Michael A. and Zöllin, Christian G. and Imhof, Michael E. and Salimi Khaligh, Rauf and Radetzki, Martin and Wunderlich, Hans-Joachim and Di Carlo, Stefano and Prinetto, Paolo},
  title = {Test Exploration and Validation Using Transaction Level Models},
  booktitle = {Proceedings of the Conference on Design, Automation and Test in Europe (DATE'09)},
  publisher = {IEEE Computer Society},
  year = {2009},
  pages = {1250--1253},
  keywords = {Test of systems-on-chip; design-for-test, transaction level modeling},
  abstract = {The complexity of the test infrastructure and test strategies in systems-on-chip approaches the complexity of the functional design space. This paper presents test design space exploration and validation of test strategies and schedules using transaction level models (TLMs). All aspects of the test infrastructure such as test access mechanisms, test wrappers, test data compression and test controllers are modeled at transaction level. Since many aspects of testing involve the transfer of a significant amount of test stimuli and responses, the communication-centric view of TLMs suits this purpose exceptionally well. A case study shows how TLMs can be used to efficiently evaluate DfT decisions in early design steps and how to evaluate test scheduling and resource partitioning during test planning. The presented approach has significantly higher simulation efficiency than RTL and gate level approaches.},
  url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5090856},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2009/DATE_KochtZISRWDP2009.pdf}
}
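The abstract above argues that the communication-centric view of TLMs suits test modeling because testing mostly moves large blocks of stimuli and responses. A deliberately tiny sketch of that view (all names and numbers invented for illustration; the paper models TAMs, wrappers, compression, and controllers in far more detail): a whole stimulus transfer is one transaction, and only cycle counts are tracked, which is what makes TLM simulation so much faster than RTL or gate-level simulation.

```python
import math

# Toy transaction-level model of a test access mechanism (TAM), illustrative
# only: one transaction moves a whole stimulus/response block over a TAM of
# a given bit width, and the model tracks nothing but the cycle budget.

class TamChannel:
    def __init__(self, width_bits: int):
        self.width = width_bits
        self.busy_cycles = 0

    def transfer(self, payload_bits: int) -> int:
        """One transaction: move one block, return the cycles it took."""
        cycles = math.ceil(payload_bits / self.width)
        self.busy_cycles += cycles
        return cycles

def schedule_length(core_payloads, tam_width: int) -> int:
    """Serial test schedule: cores are tested one after another on one TAM."""
    tam = TamChannel(tam_width)
    for bits in core_payloads:
        tam.transfer(bits)
    return tam.busy_cycles
```

Such a model lets DfT decisions (here: TAM width) be compared in early design steps: doubling the width roughly halves the serial schedule length.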
6. Erkennung von transienten Fehlern in Schaltungen mit reduzierter Verlustleistung;
Detection of transient faults in circuits with reduced power dissipation

Imhof, M.E., Wunderlich, H.-J. and Zöllin, C.G.
2. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'08), Ingolstadt, Germany, September 29-October 1, pp. 107-114
2008
URL PDF 
Keywords: Robustes Design; Fehlertoleranz; Verlustleistung; Latch; Register; Single Event Effect; Robust design; fault tolerance; power dissipation; latch; register; single event effects
Abstract: Für Speicherfelder sind fehlerkorrigierende Codes die vorherrschende Methode, um akzeptable Fehlerraten zu erreichen. In vielen aktuellen Schaltungen erreicht die Zahl der Speicherelemente in freier Logik die Größenordnung der Zahl von SRAM-Zellen vor wenigen Jahren. Zur Reduktion der Verlustleistung wird häufig der Takt der pegelgesteuerten Speicherelemente unterdrückt und die Speicherelemente müssen ihren Zustand über lange Zeitintervalle halten. Die Notwendigkeit Speicherzellen abzusichern wird zusätzlich durch die Miniaturisierung verstärkt, die zu einer erhöhten Empfindlichkeit der Speicherelemente geführt hat. Dieser Artikel stellt eine Methode zur fehlertoleranten Anordnung von pegelgesteuerten Speicherelementen vor, die bei unterdrücktem Takt Einfachfehler lokalisieren und Mehrfachfehler erkennen kann. Bei aktiviertem Takt können Einfach- und Mehrfachfehler erkannt werden. Die Register können ähnlich wie Prüfpfade effizient in den Entwurfsgang integriert werden. Die Diagnoseinformation kann auf Modulebene leicht berechnet und genutzt werden.

For memories, error correcting codes are the method of choice to guarantee acceptable error rates. In many current designs, the number of storage elements in random logic reaches the number of SRAM cells found on chips only a few years ago. Clock-gating is often employed to reduce the power dissipation of level-sensitive storage elements, while the elements have to retain their state over long periods of time. The necessity to protect storage elements is amplified by miniaturization, which leads to an increased susceptibility of the storage elements.
This article proposes a method for the fault-tolerant arrangement of level-sensitive storage elements, which can locate single faults and detect multiple faults while being clock-gated. With active clock single and multiple faults can be detected. The registers can be efficiently integrated similar to the scan design flow. The diagnostic information can be easily computed and used at module level.

BibTeX:
@inproceedings{ImhofWZ2008a,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zöllin, Christian G.},
  title = {Erkennung von transienten Fehlern in Schaltungen mit reduzierter Verlustleistung;
Detection of transient faults in circuits with reduced power dissipation},
  booktitle = {2. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'08)},
  publisher = {VDE VERLAG GMBH},
  year = {2008},
  volume = {57},
  pages = {107--114},
  keywords = {Robustes Design; Fehlertoleranz; Verlustleistung; Latch; Register; Single Event Effect; Robust design; fault tolerance; power dissipation; latch; register; single event effects},
  abstract = {Für Speicherfelder sind fehlerkorrigierende Codes die vorherrschende Methode, um akzeptable Fehlerraten zu erreichen. In vielen aktuellen Schaltungen erreicht die Zahl der Speicherelemente in freier Logik die Größenordnung der Zahl von SRAM-Zellen vor wenigen Jahren. Zur Reduktion der Verlustleistung wird häufig der Takt der pegelgesteuerten Speicherelemente unterdrückt und die Speicherelemente müssen ihren Zustand über lange Zeitintervalle halten. Die Notwendigkeit Speicherzellen abzusichern wird zusätzlich durch die Miniaturisierung verstärkt, die zu einer erhöhten Empfindlichkeit der Speicherelemente geführt hat. Dieser Artikel stellt eine Methode zur fehlertoleranten Anordnung von pegelgesteuerten Speicherelementen vor, die bei unterdrücktem Takt Einfachfehler lokalisieren und Mehrfachfehler erkennen kann. Bei aktiviertem Takt können Einfach- und Mehrfachfehler erkannt werden. Die Register können ähnlich wie Prüfpfade effizient in den Entwurfsgang integriert werden. Die Diagnoseinformation kann auf Modulebene leicht berechnet und genutzt werden.

For memories error correcting codes are the method of choice to guarantee acceptable error rates. In many current designs the number of storage elements in random logic reaches the number of SRAM-cells some years ago. Clock-gating is often employed to reduce the power dissipation of level-sensitive storage elements while the elements have to retain their state over long periods of time. The necessity to protect storage elements is amplified by the miniaturization, which leads to an increased susceptibility of the storage elements.
This article proposes a method for the fault-tolerant arrangement of level-sensitive storage elements, which can locate single faults and detect multiple faults while being clock-gated. With active clock single and multiple faults can be detected. The registers can be efficiently integrated similar to the scan design flow. The diagnostic information can be easily computed and used at module level.},
  url = {http://www.vde-verlag.de/proceedings-de/453119017.html},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/ZuE_ImhofWZ2008a.pdf}
}

5. Integrating Scan Design and Soft Error Correction in Low-Power Applications
Imhof, M.E., Wunderlich, H.-J. and Zöllin, C.G.
Proceedings of the 14th IEEE International On-Line Testing Symposium (IOLTS'08), Rhodes, Greece, July 7-9, pp. 59-64
2008
DOI URL PDF 
Keywords: Robust design; fault tolerance; latch; low power; register; single event effects
Abstract: Error correcting coding is the dominant technique to achieve acceptable soft-error rates in memory arrays. In many modern circuits, the number of memory elements in the random logic is on the order of the number of SRAM cells on chips only a few years ago. Often latches are clock gated and have to retain their states during longer periods. Moreover, miniaturization has led to elevated susceptibility of the memory elements and further increases the need for protection.
This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With active clock, single and multiple errors are detected. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.
BibTeX:
@inproceedings{ImhofWZ2008,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zöllin, Christian G.},
  title = {Integrating Scan Design and Soft Error Correction in Low-Power Applications},
  booktitle = {Proceedings of the 14th IEEE International On-Line Testing Symposium (IOLTS'08)},
  publisher = {IEEE Computer Society},
  year = {2008},
  pages = {59--64},
  keywords = {Robust design; fault tolerance; latch; low power; register; single event effects},
  abstract = {Error correcting coding is the dominant technique to achieve acceptable soft-error rates in memory arrays. In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. Often latches are clock gated and have to retain their states during longer periods. Moreover, miniaturization has led to elevated susceptibility of the memory elements and further increases the need for protection. 
This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With active clock, single and multiple errors are detected. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.},
  url = {http://www.computer.org/csdl/proceedings/iolts/2008/3264/00/3264a059-abs.html},
  doi = {http://dx.doi.org/10.1109/IOLTS.2008.31},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/IOLTS_ImhofWZ2008.pdf}
}
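The register organization above detects single-bit errors under clock gating and collects locating information at module level. One classic way to obtain such locating information, shown here only as an illustrative sketch (not the paper's circuit), is cross parity: one parity bit per register word plus one parity word across the register group; a single flipped bit is then pinpointed by the failing row and a one-hot column syndrome.

```python
# Illustrative cross-parity sketch. Each word keeps a row parity bit, and one
# column-parity word is kept across the whole register group. A single upset
# produces exactly one failing row parity and a one-hot column syndrome,
# which together locate the flipped bit.

def parity(word: int) -> int:
    p = 0
    while word:
        p ^= word & 1
        word >>= 1
    return p

def xor_all(words):
    acc = 0
    for w in words:
        acc ^= w
    return acc

class CrossParityGroup:
    def __init__(self, words):
        self.words = list(words)
        self.row = [parity(w) for w in self.words]   # one bit per word
        self.col = xor_all(self.words)               # one word per group

    def locate_single_upset(self):
        """Return (word_index, bit_index) of a single flipped bit, else None."""
        bad_rows = [i for i, w in enumerate(self.words)
                    if parity(w) != self.row[i]]
        syndrome = xor_all(self.words) ^ self.col
        if len(bad_rows) == 1 and bin(syndrome).count("1") == 1:
            return bad_rows[0], syndrome.bit_length() - 1
        return None
```

With the located position, correction is a single XOR on the affected word; multiple upsets leave a non-one-hot syndrome and are only detected, mirroring the detect-vs-locate distinction in the abstract.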
4. Scan Chain Clustering for Test Power Reduction
Elm, M., Wunderlich, H.-J., Imhof, M.E., Zöllin, C.G., Leenstra, J. and Mäding, N.
Proceedings of the 45th ACM/IEEE Design Automation Conference (DAC'08), Anaheim, California, USA, June 8-13, pp. 828-833
2008
DOI PDF 
Keywords: Test; Design for Test; Low Power; Scan Design
Abstract: An effective technique to save power during scan based test is to switch off unused scan chains. The results obtained with this method strongly depend on the mapping of scan flip-flops into scan chains, which determines how many chains can be deactivated per pattern.
In this paper, a new method to cluster flip-flops into scan chains is presented, which minimizes the power consumption during test. The approach does not specify any ordering inside the chains and fits seamlessly to any standard tool for scan chain integration.

The application of known test power reduction techniques to the optimized scan chain configurations shows significant improvements for large industrial circuits.

BibTeX:
@inproceedings{ElmWIZLM2008,
  author = {Elm, Melanie and Wunderlich, Hans-Joachim and Imhof, Michael E. and Zöllin, Christian G. and Leenstra, Jens and Mäding, Nicolas},
  title = {Scan Chain Clustering for Test Power Reduction},
  booktitle = {Proceedings of the 45th ACM/IEEE Design Automation Conference (DAC'08)},
  publisher = {ACM},
  year = {2008},
  pages = {828--833},
  keywords = {Test; Design for Test; Low Power; Scan Design},
  abstract = {An effective technique to save power during scan based test is to switch off unused scan chains. The results obtained with this method strongly depend on the mapping of scan flip-flops into scan chains, which determines how many chains can be deactivated per pattern.
In this paper, a new method to cluster flip-flops into scan chains is presented, which minimizes the power consumption during test. The approach does not specify any ordering inside the chains and fits seamlessly to any standard tool for scan chain integration.

The application of known test power reduction techniques to the optimized scan chain configurations shows significant improvements for large industrial circuits.},
  doi = {http://dx.doi.org/10.1145/1391469.1391680},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/DAC_ElmWIZLM2008.pdf}
}
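The clustering idea can be illustrated with a deliberately naive sketch (data and heuristic invented here; the paper's algorithm and cost model are more elaborate): flip-flops whose per-pattern care vectors are similar go into the same chain, so for many patterns entire chains carry no care bits and can be switched off.

```python
# Naive illustration of scan chain clustering for test power.
# usage[p][f] == 1 iff flip-flop f carries a care bit in pattern p.
# Grouping flip-flops with identical/similar usage columns into one chain
# maximizes the number of (pattern, chain) pairs where the chain is unused
# and can therefore be disabled during shift.

def cluster(usage, chain_len):
    """Sort flip-flops by their per-pattern care vector, then chunk them."""
    n_ffs = len(usage[0])
    order = sorted(range(n_ffs), key=lambda f: tuple(p[f] for p in usage))
    return [order[i:i + chain_len] for i in range(0, n_ffs, chain_len)]

def disabled_pairs(usage, chains):
    """Count (pattern, chain) pairs where every flip-flop is don't-care."""
    return sum(all(p[f] == 0 for f in chain)
               for p in usage for chain in chains)
```

For usage = [[1,1,0,0],[0,0,1,1]] and chains of length 2, the similarity-based grouping allows one chain to be disabled per pattern, while an interleaved grouping such as [[0,2],[1,3]] disables none; as in the paper, no ordering inside a chain is prescribed.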

3. Test Set Stripping Limiting the Maximum Number of Specified Bits
Kochte, M.A., Zöllin, C.G., Imhof, M.E. and Wunderlich, H.-J.
Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA'08), Hong Kong, China, January 23-25, pp. 581-586
Best paper award
2008
DOI URL PDF 
Keywords: test relaxation; test generation; tailored ATPG
Abstract: This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, results for commercial ATPG test sets show that even the overall number of specified bits is reduced substantially.
BibTeX:
@inproceedings{KochtZIW2008,
  author = {Kochte, Michael A. and Zöllin, Christian G. and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Test Set Stripping Limiting the Maximum Number of Specified Bits},
  booktitle = {Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA'08)},
  publisher = {IEEE Computer Society},
  year = {2008},
  pages = {581--586},
  keywords = {test relaxation; test generation; tailored ATPG},
  abstract = {This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, results for commercial ATPG test sets show that even the overall number of specified bits is reduced substantially},
  url = {http://www.computer.org/csdl/proceedings/delta/2008/3110/00/3110a581-abs.html},
  doi = {http://dx.doi.org/10.1109/DELTA.2008.64},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/DELTA_KochtZIW2008.pdf}
}
2. Scan Test Planning for Power Reduction
Imhof, M.E., Zöllin, C.G., Wunderlich, H.-J., Mäding, N. and Leenstra, J.
Proceedings of the 44th ACM/IEEE Design Automation Conference (DAC'07), San Diego, California, USA, June 4-8, pp. 521-526
2007
DOI URL PDF 
Keywords: Reliability, Testing, and Fault-Tolerance (CR B.8.1); Algorithms; Reliability; test planning; power during test
Abstract: Many STUMPS architectures found in current chip designs allow disabling of individual scan chains for debug and diagnosis. In a recent paper it has been shown that this feature can be used for reducing the power consumption during test. Here, we present an efficient algorithm for the automated generation of a test plan that keeps fault coverage as well as test time, while significantly reducing the amount of wasted energy. A fault isolation table, which is usually used for diagnosis and debug, is employed to accurately determine scan chains that can be disabled. The algorithm was successfully applied to large industrial circuits and identifies a very large amount of excess pattern shift activity.
BibTeX:
@inproceedings{ImhofZWML2007a,
  author = {Imhof, Michael E. and Zöllin, Christian G. and Wunderlich, Hans-Joachim and Mäding, Nicolas and Leenstra, Jens},
  title = {Scan Test Planning for Power Reduction},
  booktitle = {Proceedings of the 44th ACM/IEEE Design Automation Conference (DAC'07)},
  publisher = {ACM},
  year = {2007},
  pages = {521--526},
  keywords = {Reliability, Testing, and Fault-Tolerance (CR B.8.1); Algorithms; Reliability; test planning; power during test},
  abstract = {Many STUMPS architectures found in current chip designs allow disabling of individual scan chains for debug and diagnosis. In a recent paper it has been shown that this feature can be used for reducing the power consumption during test. Here, we present an efficient algorithm for the automated generation of a test plan that keeps fault coverage as well as test time, while significantly reducing the amount of wasted energy. A fault isolation table, which is usually used for diagnosis and debug, is employed to accurately determine scan chains that can be disabled. The algorithm was successfully applied to large industrial circuits and identifies a very large amount of excess pattern shift activity.},
  url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4261239},
  doi = {http://dx.doi.org/10.1145/1278480.1278614},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/DAC_ImhofZWML2007a.pdf}
}
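The companion German paper (entry 1 below) states that this test planning task is mapped onto a covering problem solved heuristically. A generic greedy set-cover sketch conveys the flavor (data and names invented for illustration; the paper's heuristic and its use of the fault isolation table are more involved): each candidate configuration of enabled scan chains detects a set of faults, and configurations are picked until all faults are covered.

```python
# Generic greedy set-cover heuristic, illustrative only: per step, pick the
# scan-chain configuration that detects the most still-uncovered faults.
# Disabling all other chains during those patterns avoids their shift power.

def greedy_cover(faults, detects):
    """faults: iterable of fault ids; detects: config -> set of fault ids.
    Returns the chosen plan and any faults no configuration detects."""
    uncovered = set(faults)
    plan = []
    while uncovered:
        best = max(detects, key=lambda cfg: len(detects[cfg] & uncovered))
        if not detects[best] & uncovered:
            break                      # remaining faults are undetectable
        plan.append(best)
        uncovered -= detects[best]
    return plan, uncovered
```

Greedy set cover is the standard heuristic for such NP-hard covering problems and keeps the approach tractable for large industrial circuits, matching the scalability claim in the abstract.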
1. Verlustleistungsoptimierende Testplanung zur Steigerung von Zuverlässigkeit und Ausbeute
Imhof, M.E., Zöllin, C.G., Wunderlich, H.-J., Maeding, N. and Leenstra, J.
1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07), Munich, Germany, March 26-28, pp. 69-76
2007
URL PDF 
Abstract: The strongly increased average and peak power dissipation during the test of integrated circuits can impair production yield as well as reliability in later operation. We present a test planning approach for circuits with parallel scan chains that reduces power dissipation during test. Test planning is mapped onto a covering problem that can be solved efficiently, even for large circuits, with a heuristic. The efficiency of the presented method is demonstrated both for the well-known benchmark circuits and for large industrial circuits.
BibTeX:
@inproceedings{ImhofZWML2007,
  author = {Imhof, Michael E. and Zöllin, Christian G. and Wunderlich, Hans-Joachim and Maeding, Nicolas and Leenstra, Jens},
  title = {Verlustleistungsoptimierende Testplanung zur Steigerung von Zuverlässigkeit und Ausbeute},
  booktitle = {1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07)},
  publisher = {VDE VERLAG GMBH},
  year = {2007},
  volume = {52},
  pages = {69--76},
  abstract = {Die stark erhöhte durchschnittliche und maximale Verlustleistung während des Tests integrierter Schaltungen kann zu einer Beeinträchtigung der Ausbeute bei der Produktion sowie der Zuverlässigkeit im späteren Betrieb führen. Wir stellen eine Testplanung für Schaltungen mit parallelen Prüfpfaden vor, welche die Verlustleistung während des Tests reduziert. Die Testplanung wird auf ein Überdeckungsproblem abgebildet, das mit einem heuristischen Lösungsverfahren effizient auch für große Schaltungen gelöst werden kann. Die Effizienz des vorgestellten Verfahrens wird sowohl für die bekannten Benchmarkschaltungen als auch für große industrielle Schaltungen demonstriert.},
  url = {http://www.vde-verlag.de/proceedings-de/463023008.html},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/ZuE_ImhofZWML2007.pdf}
}
Created by JabRef on 25/08/2014.
4. Mixed-Mode-Mustererzeugung für hohe Defekterfassung beim Eingebetteten Test
Mumtaz, A., Imhof, M.E. and Wunderlich, H.-J.
23rd GI/GMM/ITG Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'11), pp. 55-58
2011
 
Keywords: BIST; pseudo-exhaustive testing; defect coverage; N-detect
Abstract: Pattern generation for embedded test typically consists of a random pattern phase followed by a phase in which deterministic patterns are applied. This contribution presents a method to optimize the first phase significantly, increasing defect coverage and at the same time reducing the number of deterministic patterns required in the second phase.
The method is based on pseudo-exhaustive testing (PET), which was proposed as a fault-model-independent test method with high defect coverage. Since its test time can grow exponentially with circuit size, its application to large circuits is usually ruled out. This work proposes built-in test registers for partial pseudo-exhaustive testing (P-PET), which scales with current technology and is comparable to the usual pseudo-random test (PRT) in test cost and test time. The advantages regarding defect coverage, N-detectability of stuck-at faults and the reduction of deterministic test lengths are demonstrated on current industrial circuits.
BibTeX:
@inproceedings{MumtaIW2011,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {Mixed-Mode-Mustererzeugung für hohe Defekterfassung beim Eingebetteten Test},
  booktitle = {23rd GI/GMM/ITG Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'11)},
  year = {2011},
  pages = {55--58},
  keywords = {BIST, Pseudo-Erschöpfender Test, Defekterfassung, N-Detect},
  abstract = {Die Mustererzeugung für den eingebetteten Test besteht häufig aus einer Phase zur Erzeugung von Zufallsmustern und einer Phase, in der deterministische Muster angelegt werden. Der vorliegende Beitrag stellt eine Methode vor, die erste Phase signifikant zu optimieren, um dadurch die Defekterfassung zu vergrößern und zugleich die Zahl der erforderlichen deterministischen Muster in der zweiten Phase zu reduzieren. 
Die Methode beruht auf dem pseudo-erschöpfenden Test (PET), der als Verfahren zum fehlermodellunabhängigen Test mit hoher Defekterfassung vorgeschlagen wurde. Da seine Testzeit exponentiell mit der Schaltungsgröße wachsen kann, ist die Anwendung auf große Schaltungen in der Regel ausgeschlossen. In der vorliegenden Arbeit werden eingebaute Testregister für den partiellen pseudo-erschöpfenden Test (P-PET) vorgeschlagen, der mit aktueller Technologie skaliert und hinsichtlich Testkosten und Testzeit mit dem üblichen pseudo-zufälligen Test (PZT) vergleichbar ist. Die Vorteile bezüglich der Defekterfassung, N-Detektierbarkeit für Haftfehler und der Reduktion deterministischer Testlängen werden anhand aktueller industrieller Schaltungen nachgewiesen.}
}
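Pseudo-exhaustive testing, on which the P-PET approach above builds, exercises every output cone with all value combinations of its few inputs instead of applying all 2^n primary input patterns. A minimal sketch under the simplifying assumption of disjoint input cones (circuit data invented here; the paper's built-in test registers handle the general case):

```python
from itertools import product

# Sketch of pseudo-exhaustive pattern generation, illustrative only and
# assuming disjoint input cones: every output depends on at most k inputs,
# so broadcasting one k-bit counter to each cone applies all 2^k local
# combinations to every cone using only 2^k patterns instead of 2^n.

def pseudo_exhaustive_patterns(cones, n_inputs):
    """cones: one tuple of input indices per output; returns the patterns."""
    k = max(len(c) for c in cones)
    patterns = []
    for counter in product((0, 1), repeat=k):
        pat = [0] * n_inputs               # don't-care inputs default to 0
        for cone in cones:
            for bit, inp in zip(counter, cone):
                pat[inp] = bit
        patterns.append(tuple(pat))
    return patterns
```

For cones (0, 1) and (2, 3) over four inputs, 2^2 = 4 patterns suffice instead of 2^4 = 16, while each cone still sees all four of its local input combinations; this is the fault-model-independent coverage argument behind PET.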
3. Modellierung der Testinfrastruktur auf der Transaktionsebene
Kochte, M.A., Zöllin, C., Imhof, M.E., Salimi Khaligh, R., Radetzki, M., Wunderlich, H.-J., Di Carlo, S. and Prinetto, P.
21st ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'09), pp. 61-66
2009
 
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1
Abstract: This article presents a method to explore the design space of design-for-test (DfT) and to validate test strategies and test schedules. All parts of the test infrastructure, such as the test access mechanisms, the test wrappers, test data compression and the corresponding controllers, are mapped onto transaction-level models (TLMs). The communication-centric view of TLMs is particularly well suited, since many aspects of testing require the transfer of large amounts of test stimuli and responses. A case study illustrates the use of TLMs in early design phases. The presented approach has a substantially higher simulation efficiency than approaches at register-transfer and gate level.
BibTeX:
@inproceedings{KochtZISRWDP2009,
  author = {Kochte, Michael A. and Zöllin, Christian and Imhof, Michael E. and Salimi Khaligh, Rauf and Radetzki, Martin and Wunderlich, Hans-Joachim and Di Carlo, Stefano and Prinetto, Paolo},
  title = {Modellierung der Testinfrastruktur auf der Transaktionsebene},
  booktitle = {21st ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'09)},
  year = {2009},
  pages = {61--66},
  keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1},
  abstract = {Dieser Artikel stellt eine Methode vor, den Entwurfsraum beim prüfgerechten Entwurf (engl. Design-for-Test, DfT) zu untersuchen und Teststrategien und Testschedules zu validieren. Alle Teile der Testinfrastruktur, wie etwa die Testeranbindung (Test Access Mechanisms), die Testwrapper, die Testdatenkompression sowie die entsprechenden Steuerwerke werden auf Transaktionsebenenmodelle (TLMs) abgebildet. Die kommunikationsbezogene Sicht der TLMs eignet sich besonders, da viele Aspekte des Tests die Übertragung großer Mengen an Teststimuli und -antworten erfordern. An einer Fallstudie wird der Einsatz von TLMs in frühen Entwurfsphasen erläutert. Der vorgestellte Ansatz hat wesentlich höhere Simulationseffizienz als Ansätze auf Register-Transfer- und Gatterebene.}
}
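The transaction-level view described in the abstract above can be illustrated with a small Python sketch. The classes and numbers are hypothetical (not the paper's SystemC-style models): stimulus transport over a test access mechanism is modelled as whole-block transactions whose cycle cost is computed, rather than simulated bit by bit:

```python
from dataclasses import dataclass, field

# Hypothetical transaction-level sketch: a Test Access Mechanism (TAM)
# delivers whole stimulus blocks to core wrappers as single transactions.
@dataclass
class Transaction:
    core: str
    payload_bits: int

@dataclass
class TestAccessMechanism:
    width: int                       # TAM bus width in bits (assumed)
    log: list = field(default_factory=list)

    def transfer(self, tx: Transaction) -> int:
        """Deliver one stimulus block; return the cycle cost it abstracts."""
        cycles = -(-tx.payload_bits // self.width)  # ceiling division
        self.log.append((tx.core, cycles))
        return cycles

tam = TestAccessMechanism(width=8)
total = sum(tam.transfer(Transaction(core, bits))
            for core, bits in [("coreA", 120), ("coreB", 64)])
# Two transactions stand in for 23 bit-level shift-cycle steps.
```

This is the source of the simulation-efficiency gain the abstract claims: one method call per stimulus block replaces thousands of simulated shift cycles at register-transfer or gate level.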
2. Integrating Scan Design and Soft Error Correction in Low-Power Applications
Imhof, M.E., Wunderlich, H.-J. and Zöllin, C.
1st International Workshop on the Impact of Low-Power Design on Test and Reliability (LPonTR'08)
2008
 
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1; Robust design; fault tolerance; latch; low power; register; single event effects
Abstract: In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. In arrays, error correcting coding is the dominant technique to achieve acceptable soft-error rates. For low power applications, often latches are clock gated and have to retain their states during longer periods while miniaturization has led to elevated susceptibility and further increases the need for protection.
This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With a small addition, single and multiple errors are also detected in the clocked mode. The registers can be efficiently integrated similarly to the scan design flow, and error-detecting or error-locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.
BibTeX:
@inproceedings{ImhofWZ2008,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zöllin, Christian},
  title = {Integrating Scan Design and Soft Error Correction in Low-Power Applications},
  booktitle = {1st International Workshop on the Impact of Low-Power Design on Test and Reliability (LPonTR'08)},
  year = {2008},
  keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1; Robust design; fault tolerance; latch; low power; register; single event effects},
  abstract = {In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. In arrays, error correcting coding is the dominant technique to achieve acceptable soft-error rates. For low power applications, often latches are clock gated and have to retain their states during longer periods while miniaturization has led to elevated susceptibility and further increases the need for protection.
This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With small addition, single and multiple errors are detected in the clocked mode, too. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.} }
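A minimal Python sketch of the parity idea behind such a latch organization (an illustrative software model with invented names, assuming even parity and a single-event upset; not the paper's actual circuit):

```python
# Software model of a parity-protected register: parity is captured
# together with the data, so a soft error that flips one retained bit
# is detectable even while the clock is gated and no update occurs.
class ParityProtectedRegister:
    def __init__(self, width):
        self.width = width
        self.bits = [0] * width
        self.parity = 0

    def capture(self, value_bits):
        """Clocked update: store the data and its even parity."""
        self.bits = list(value_bits)
        self.parity = sum(self.bits) % 2

    def error_detected(self):
        """Compare retained state against stored parity (works while gated)."""
        return sum(self.bits) % 2 != self.parity

reg = ParityProtectedRegister(8)
reg.capture([1, 0, 1, 1, 0, 0, 1, 0])
ok_before = reg.error_detected()   # False: state is consistent
reg.bits[3] ^= 1                   # model a single-event upset while gated
ok_after = reg.error_detected()    # True: single-bit error detected
```

As with any single parity bit, an even number of flipped bits would go undetected; the paper's clocked-mode extension for multiple errors is not modelled here.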
1. Reduktion der Verlustleistung beim Selbsttest durch Verwendung testmengenspezifischer Information
Imhof, M.E., Wunderlich, H.-J., Zöllin, C., Leenstra, J. and Maeding, N.
20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08), pp. 137-141
2008
 
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1
Abstract: The test schedule used during self-test of circuits with deactivatable scan chains determines the power consumption during test. Existing methods for generating the test schedule mostly use topological information, for example the output cone of a fault. Because the test schedule and the pattern set are implicitly linked, exploiting pattern-set-dependent information yields far-reaching synergies. Using test-set-specific information, the presented algorithm achieves considerable savings in power consumption at unchanged fault coverage and test time. The method is compared with existing, mostly topology-based approaches on industrial and benchmark circuits.
BibTeX:
@inproceedings{ImhofWZLM2008,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zöllin, Christian and Leenstra, Jens and Maeding, Nicolas},
  title = {Reduktion der Verlustleistung beim Selbsttest durch Verwendung testmengenspezifischer Information},
  booktitle = {20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08)},
  year = {2008},
  pages = {137--141},
  keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1},
  abstract = {Der während des Selbsttests von Schaltungen mit deaktivierbaren Prüfpfaden verwendete Testplan entscheidet über die Verlustleistung während des Tests. Bestehende Verfahren zur Erzeugung des Testplans verwenden überwiegend topologische Information, zum Beispiel den Ausgangskegel eines Fehlers. Aufgrund der implizit gegebenen Verknüpfung zwischen Testplan und Mustermenge ergeben sich weitreichende Synergieeffekte durch die Ausschöpfung mustermengenabhängiger Informationen. Die Verwendung von testmengenspezifischer Information im vorgestellten Algorithmus zeigt bei gleichbleibender Fehlererfassungsrate und Testdauer deutliche Einsparungen in der benötigten Verlustleistung. Das Verfahren wird an industriellen und Benchmark-Schaltungen mit bestehenden, überwiegend topologisch arbeitenden Verfahren verglichen.}
}
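The test-set-specific idea can be illustrated with a minimal Python sketch (hypothetical data layout and a naive rule, not the paper's algorithm): scan chains that carry no care bits for a given pattern are deactivated for that pattern, saving shift power without affecting fault coverage, since all care bits are still applied:

```python
# Naive test-set-aware scheduling sketch: per pattern, keep only the
# scan chains whose stimulus contains care bits; all other chains are
# clock-gated for that pattern to reduce shift switching activity.
def schedule(patterns):
    """patterns: list of dicts mapping chain name -> care-bit positions."""
    plan = []
    for care_bits in patterns:
        active = {chain for chain, bits in care_bits.items() if bits}
        plan.append(active)
    return plan

# Invented example test set with three scan chains and two patterns.
patterns = [
    {"chain0": [2, 5], "chain1": [],  "chain2": [0]},
    {"chain0": [],     "chain1": [7], "chain2": []},
]
plan = schedule(patterns)
# Pattern 0 clocks only chain0 and chain2; pattern 1 clocks only chain1.
```

A topological scheduler, by contrast, would activate every chain feeding a targeted fault's cone regardless of whether the current pattern actually uses it; that gap is where the paper's test-set-specific savings come from.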

Teaching

Lectures, Exercises, Lab Courses

(Advanced) Seminars

Master's, Diploma and Student Theses, Project Work

WS2013

Delay Characterization in FPGA-based Reconfigurable Systems
Master Thesis No. 3505, S. Zhang, 03.06.2013 - 03.12.2013

Accelerated Computation Using Runtime Partial Reconfiguration
Master Thesis No. 3491, N. Nayak, 27.05.2013 - 26.11.2013

SS2013

Micro Architecture for Fault Tolerant NoCs
Diploma Thesis No. 3451, S. Zimmermann, 21.01.2013 - 23.07.2013

Online Self-Test Wrapper for Runtime-Reconfigurable Systems
Master Thesis No. 3439, J. Wang, 03.12.2012 - 04.06.2013

Embedding Deterministic Patterns in Partial Pseudo-Exhaustive Test
Master Thesis No. 3447, A. Sannikova, 15.11.2012 - 17.05.2013

WS2012

Entwicklung einer FPGA-basierten Konsolidierungseinheit für Fließkomma- und Ganzzahldaten im Einsatzbereich der zivilen Luftfahrt
Diploma Thesis, M. Blocherer, 19.07.2012 - 18.01.2013

SS2011

Evaluation of Advanced Techniques for Structural FPGA Self-Test
Master Thesis No. 3161, M. Abdelfattah, 01.03.2011 - 31.08.2011

WS2010

DFX-Webinterface
Software Lab Project, D. Butsch and M. Mikusz, 15.05.2010 - 15.11.2010

WS2009

Algorithmen-basierte Fehlertoleranz in Many-Core Systemen
Software Lab Project, D. Pfander and S. Kanis, 01.08.2009 - 28.02.2010

WS2008

High Precision Encoder System Optimized for Speed Applications
Master Thesis, J. C. G. Fernandez, 07.05.2008 - 06.11.2008

SS2008

pop2pc: power of peer2peer computing
Software Lab Project, R. Netzel and B. Reitschuster, 01.01.2008 - 30.06.2008

WS2007

Partial Scan Design for Generation of Minimal Size, Balanced ATPG Models
Master Thesis No. 2589, S. Parajuli, 12.02.2007 - 14.11.2007


Comparison of Asynchronous Design Styles on the Basis of a Network-on-a-Chip Switch
Student Thesis No. 2109, M. Kaufmann, 01.05.2007 - 01.11.2007

SS2007

Survey and Defect-Analysis of Power Gating Structures
Student Thesis No. 2111, S. S. Wahl, 03.05.2007 - 02.10.2007


Fehlersimulation von kleinen Gatterverzögerungsfehlern unter der Annahme von Parametervariationen
Diploma Thesis No. 2588, C. H. Gellner, 08.02.2007 - 10.08.2007


Erzeugung pseudoerschöpfender Testmuster für große Schaltnetze
Diploma Thesis No. 2577, D. Taut, 22.01.2007 - 03.09.2007

WS2006

Graphenalgorithmen zur Optimierung von Scanketten im Selbsttest
Diploma Thesis No. 2527, N. Hoerr, 09.08.2006 - 12.02.2007


Author: Michael Imhof

(Disclaimer: the respective users themselves are responsible for the contents of the material presented in their pages. Statements or opinions on these pages are by no means expressed on behalf of the University or of its departments!)