Project Partner

Website of the University of Stuttgart

REALTEST: Test and Reliability of Nanoelectronic Systems

01/2006 - 07/2013, DFG project WU 245/5-1, 5-2

Project Description

The project aims at unified methods for a robust design and a test strategy matched to it. The production test then determines both functionality and the remaining robustness (quality binning). Periodic maintenance tests detect faults during the product lifetime, and online monitoring protects against soft errors during operation.


Robust Systems

It is a so-far unbroken trend that the share of flip-flops in random logic is steadily increasing. This development results, among other things, from massive pipelining and from the growth of the register sets required, for example, to support speculation, hyper-threading, and instruction scheduling. Fault-tolerance techniques also increase the number of storage elements in random logic, and circuits with millions of flip-flops can already be found today [Kupp04]. These observations hold not only for data paths but also for control-dominated modules, in which regularity and speed increasingly come to the fore.

The flip-flops of a circuit are particularly susceptible to environmental influences and require protection mechanisms such as those already common today for regular memory arrays [Dodd03]. Techniques currently used in industry include repair and reconfiguration, error detection and correction by coding, periodic refreshing of the data (scrubbing) against error accumulation, and built-in self-test with redundancy analysis and self-repair.
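As an illustration of error correction by coding combined with scrubbing, the following sketch uses a Hamming(7,4) code; the code choice and all names are illustrative, the project text does not prescribe a particular scheme. A single bit flip in a stored word is located via the syndrome and corrected, and periodic scrubbing rewrites every word in corrected form so that single-bit upsets cannot accumulate into uncorrectable multi-bit errors:

```python
# Illustrative sketch (not the project's scheme): single-error correction
# with a Hamming(7,4) code plus periodic scrubbing against error accumulation.

def hamming74_encode(d):
    """Encode 4 data bits d[0..3] into a 7-bit codeword with 3 parity bits."""
    d0, d1, d2, d3 = d
    p0 = d0 ^ d1 ^ d3          # parity over codeword positions 1, 3, 5, 7
    p1 = d0 ^ d2 ^ d3          # parity over codeword positions 2, 3, 6, 7
    p2 = d1 ^ d2 ^ d3          # parity over codeword positions 4, 5, 6, 7
    # codeword layout, positions 1..7: p0 p1 d0 p2 d1 d2 d3
    return [p0, p1, d0, p2, d1, d2, d3]

def hamming74_correct(c):
    """Recompute the parities; the syndrome is the 1-based position of a
    single flipped bit (0 means no error). Return the corrected word."""
    c = list(c)
    s0 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s1 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s2 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s0 | (s1 << 1) | (s2 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

def scrub(words):
    """Periodic scrubbing: rewrite all stored codewords in corrected form,
    so single-bit upsets cannot accumulate into double errors."""
    return [hamming74_correct(w) for w in words]
```

A word hit by one upset is restored on the next scrub pass, which is exactly the refresh-against-accumulation idea described above.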

Particularly critical is the fact that, to reduce power dissipation, the number of switching flip-flops is kept as small as possible (clock gating). As a consequence, a considerable number of flip-flops must hold their values over extended periods. Much like a regular dynamic memory array, these storage elements are then exposed to external influences for a long time, and transient faults can accumulate. Periodic refreshing of the stored information is just as necessary here as it already is in regular memory arrays [Hell02].

Moreover, due to the rising soft error rates (SER) of combinational elements and the continuing reduction of logic depth, faults are increasingly propagated from the combinational logic into the storage elements [Shiv02]. These effects must also be compensated by suitable monitoring of the storage elements and appropriate fault-tolerance techniques. In addition, combinational elements and latches can be hardened against transient faults [Koma04].


Soft Errors

At the same time, the growing number of storage elements and the additional circuitry required for higher reliability complicate the production test, which is already a dominant cost factor today. For random logic, scan-based test strategies are the most widespread: the test data are shifted serially into the circuit and read out again. To shorten test time, several scan chains are usually operated in parallel, the patterns are generated on chip by a self-test, or compressed test data are supplied from outside and decompressed on chip. Correspondingly, the test responses are compacted before being shifted out.
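The effect of parallel scan chains on shift time can be illustrated with a small simulation; `shift_pattern` is a hypothetical helper for illustration only, not part of any tool mentioned here:

```python
# Illustrative sketch: serial scan shifting. Loading a pattern into a single
# chain of L flip-flops takes L shift cycles; splitting the flip-flops into
# k parallel chains cuts this to ceil(L/k) cycles per pattern.

def shift_pattern(chain, pattern):
    """One scan load: 'pattern' is shifted in bit by bit while the previous
    contents (e.g. the captured test response) fall out at the scan output.
    Returns (new chain state, shifted-out bits)."""
    state = list(chain)
    shifted_out = []
    for bit in pattern:                  # one clock cycle per shifted bit
        shifted_out.append(state.pop())  # last flip-flop drives scan-out
        state.insert(0, bit)             # scan-in feeds the first flip-flop
    return state, shifted_out
```

With two parallel chains of length 4 instead of one chain of length 8, the same 8 pattern bits are loaded in 4 cycles, since both chains shift simultaneously; this is the test-time reduction referred to above.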

These compression methods address the acute problem that the bandwidth between chips and automatic test equipment grows considerably more slowly than the volume of test data [Mitr05; Rajs05]. The growing share of flip-flops and the substantial redundancy added to improve reliability aggravate this test problem even further.

The goal of the project is to develop a unified design methodology for storage elements that addresses reliability and fault tolerance, offline test, and online test together. To this end, the individual scan chains are partitioned into segments of suitable size and augmented with redundancy for masking or repairing permanent faults, in such a way that the resulting structure is still tolerant of transient faults.

A scan chain can be interpreted as a one-dimensional memory, and the corresponding test procedures for regular memory arrays, such as the periodic test, the online test, and the transparent test, can be applied to it. Repeated reading and writing back, however, would restrict access to the flip-flops and burden system operation. Instead, it is advisable to employ a transparent periodic self-test [Hell02; Nico96]. With simple logic, a residue characteristic can be computed that makes it possible to keep the contents of a scan chain consistent and to monitor them continuously and periodically.
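The idea of a transparent periodic check can be sketched as follows. A small CRC-style signature stands in here for the residue characteristic (whose actual form is not specified in this text): recomputing it is purely a read operation, so the chain contents remain untouched, while any single bit flip changes the signature.

```python
# Illustrative sketch: transparent periodic monitoring of a scan chain.
# A compact signature over the chain contents is kept as a reference;
# periodic recomputation detects bit flips without rewriting the data.

POLY = 0b1011  # toy generator polynomial x^3 + x + 1; purely illustrative

def characteristic(bits):
    """Fold the chain contents through polynomial division modulo POLY.
    Any single-bit difference yields a different 3-bit remainder."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg & 0b1000:   # degree reached: reduce modulo the polynomial
            reg ^= POLY
    return reg

class ScanChainMonitor:
    def __init__(self, chain):
        self.chain = list(chain)
        self.ref = characteristic(self.chain)  # reference signature

    def system_write(self, pos, value):
        """Legal system updates also refresh the reference signature,
        keeping the characteristic consistent with the contents."""
        self.chain[pos] = value
        self.ref = characteristic(self.chain)

    def periodic_check(self):
        """Transparent test: read-only recomputation, contents untouched."""
        return characteristic(self.chain) == self.ref
```

In hardware, such a divider is a few XOR gates and a small register; the sketch only mirrors its behavior.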


Error Correction in Memory Arrays


Error Correction in the Scan Path

The additional hardware integrated into the circuit for the online test of the storage elements can also be used to compact the test responses. Then only the computed characteristic has to be evaluated, from which faulty circuit responses can be inferred. With this solution, completely reading out the partially redundant scan-chain contents is no longer necessary for the offline test, and the test time is reduced dramatically at no extra cost.
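A minimal sketch of this reuse for offline response compaction, with per-segment parities standing in for the characteristic logic (segment size and all names are illustrative assumptions):

```python
# Illustrative sketch: instead of shifting out every response bit, only a
# compacted signature is read out and compared against the golden reference.

def compact(response_bits):
    """Toy compactor: one parity bit per 4-bit segment, standing in for the
    on-chip characteristic logic. 16 response bits shrink to 4 signature bits."""
    return [sum(response_bits[i:i + 4]) % 2
            for i in range(0, len(response_bits), 4)]

def offline_pass(response_bits, golden_signature):
    """Offline test verdict from the compacted signature alone: no full
    readout of the (partially redundant) scan-chain contents is needed."""
    return compact(response_bits) == golden_signature
```

Any single faulty response bit flips the parity of its segment, so it is still caught even though only a quarter of the data leaves the chip.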

For the input data of the scan chain, the currently known test data compression techniques can be applied without substantial changes.

 

References:

[Dodd03]

P. E. Dodd and L. W. Massengill, "Basic mechanisms and modeling of single-event upset in digital microelectronics", IEEE Transactions on Nuclear Science, 50 (3), pp. 583-602, June 2003

[Hell02]

S. Hellebrand, H.-J. Wunderlich, A. A. Ivaniuk, Y. V. Klimets, and V. N. Yarmolik, "Efficient online and offline testing of embedded DRAMs", IEEE Transactions on Computers, 51 (7), pp. 801-809, 2002

[SIA]

Semiconductor Industry Association, "International technology roadmap for semiconductors", Technical Report, 2003, available at: http://public.itrs.net

[Kupp04]

R. Kuppuswamy, P. DesRosier, D. Feltham, R. Sheikh, and P. Thadikaran, "Full hold-scan systems in microprocessors: Cost/benefit analysis", Intel Technology Journal, 8 (1), pp. 63-72, Feb. 2004

[Mitr05]

S. Mitra, S. S. Lumetta, M. Mitzenmacher, and N. Patil, "X-Tolerant Test Response Compaction", IEEE Design & Test of Computers, 22 (6), pp. 566-574, 2005

[Rajs05]

J. Rajski, J. Tyszer, C. Wang, and S. M. Reddy, "Finite memory test response compactors for embedded test applications", IEEE Trans. on CAD of Integrated Circuits and Systems, 24 (4), pp. 622-634, 2005

[Nico96]

M. Nicolaidis, "Theory of Transparent BIST for RAMs", IEEE Trans. on Computers, 45 (10), pp. 1141-1156, 1996

[Koma04]

Y. Komatsu, Y. Arima, T. Fujimoto, T. Yamashita, and K. Ishibashi, "A soft-error hardened latch scheme for SoC in a 90nm technology and beyond", Proceedings IEEE Custom Integrated Circuits Conference (CICC'04), pp. 329-332, Orlando, FL, USA, Sep. 2004

[Shiv02]

P. Shivakumar, M. Kistler, S. W. Keckler, D. Burger, and L. Alvisi, "Modeling the effect of technology trends on the soft error rate of combinational logic", Proceedings International Conference on Dependable Systems and Networks (DSN'02), Bethesda, MD, USA, pp. 389-398, June 2002

 


Publications

Journals and Conference Proceedings
30. SAT-Based ATPG beyond Stuck-at Fault Testing
Hellebrand, S. and Wunderlich, H.-J.
it - Information Technology
Vol. 56(4), 21 July 2014, pp. 165-172
2014
Keywords: ACM CCS→Hardware→Hardware test, SAT-based ATPG, Fault Tolerance, Self-Checking Circuits, Synthesis
Abstract: To cope with the problems of technology scaling, a robust design has become desirable. Self-checking circuits combined with rollback or repair strategies can provide a low cost solution for many applications. However, standard synthesis procedures may violate design constraints or lead to sub-optimal designs. The SAT-based strategies for the verification and synthesis of self-checking circuits presented in this paper can provide efficient solutions.
BibTeX:
@article{HelleW2014,
  author = {Hellebrand, Sybille and Wunderlich, Hans-Joachim},
  title = {{SAT-Based ATPG beyond Stuck-at Fault Testing}},
  journal = {it - Information Technology},
  year = {2014},
  volume = {56},
  number = {4},
  pages = {165--172},
  keywords = {ACM CCS→Hardware→Hardware test, SAT-based ATPG, Fault Tolerance, Self-Checking Circuits, Synthesis},
  abstract = {To cope with the problems of technology scaling, a robust design has become desirable. Self-checking circuits combined with rollback or repair strategies can provide a low cost solution for many applications. However, standard synthesis procedures may violate design constraints or lead to sub-optimal designs. The SAT-based strategies for the verification and synthesis of self-checking circuits presented in this paper can provide efficient solutions.},
  doi = {http://dx.doi.org/10.1515/itit-2013-1043},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/ITIT_HelleW2014.pdf}
}
29. Variation-Aware Deterministic ATPG
Sauer, M., Polian, I., Imhof, M.E., Mumtaz, A., Schneider, E., Czutro, A., Wunderlich, H.-J. and Becker, B.
Proceedings of the 19th IEEE European Test Symposium (ETS'14), Paderborn, Germany, 26-30 May 2014, pp. 87-92
Best paper award
2014
Keywords: Variation-aware test, fault efficiency, ATPG
Abstract: In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.
BibTeX:
@inproceedings{SauerPIMSCWB2014,
  author = {Sauer, Matthias and Polian, Ilia and Imhof, Michael E. and Mumtaz, Abdullah and Schneider, Eric and Czutro, Alexander and Wunderlich, Hans-Joachim and Becker, Bernd},
  title = {{Variation-Aware Deterministic ATPG}},
  booktitle = {Proceedings of the 19th IEEE European Test Symposium (ETS'14)},
  year = {2014},
  pages = {87--92},
  keywords = {Variation-aware test, fault efficiency, ATPG},
  abstract = {In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with an user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveformaccurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6847806},
  doi = {http://dx.doi.org/10.1109/ETS.2014.6847806},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/ETS_SauerPIMSCWB2014.pdf}
}
28. Accurate QBF-based Test Pattern Generation in Presence of Unknown Values
Hillebrecht, S., Kochte, M.A., Erb, D., Wunderlich, H.-J. and Becker, B.
Proceedings of the Conference on Design, Automation and Test in Europe (DATE'13), Grenoble, France, 18-22 March 2013, pp. 436-441
2013
Keywords: Unknown values, test generation, ATPG, QBF
Abstract: Unknown (X) values may emerge during the design process as well as during system operation and test application. Sources of X-values are for example black boxes, clockdomain boundaries, analog-to-digital converters, or uncontrolled or uninitialized sequential elements. To compute a detecting pattern for a given stuck-at fault, well defined logic values are required both for fault activation as well as for fault effect propagation to observing outputs. In presence of X-values, classical test generation algorithms, based on topological algorithms or formal Boolean satisfiability (SAT) or BDD-based reasoning, may fail to generate testing patterns or to prove faults untestable. This work proposes the first efficient stuck-at fault ATPG algorithm able to prove testability or untestability of faults in presence of X-values. It overcomes the principal inaccuracy and pessimism of classical algorithms when X-values are considered. This accuracy is achieved by mapping the test generation problem to an instance of quantified Boolean formula (QBF) satisfiability. The resulting fault coverage improvement is shown by experimental results on ISCAS benchmark and larger industrial circuits.
BibTeX:
@inproceedings{HilleKEWB2013,
  author = {Hillebrecht, Stefan and Kochte, Michael A. and Erb, Dominik and Wunderlich, Hans-Joachim and Becker, Bernd},
  title = {{Accurate QBF-based Test Pattern Generation in Presence of Unknown Values}},
  booktitle = {Proceedings of the Conference on Design, Automation and Test in Europe (DATE'13)},
  publisher = {IEEE Computer Society},
  year = {2013},
  pages = {436--441},
  keywords = {Unknown values, test generation, ATPG, QBF},
  abstract = {Unknown (X) values may emerge during the design process as well as during system operation and test application. Sources of X-values are for example black boxes, clockdomain boundaries, analog-to-digital converters, or uncontrolled or uninitialized sequential elements. To compute a detecting pattern for a given stuck-at fault, well defined logic values are required both for fault activation as well as for fault effect propagation to observing outputs. In presence of X-values, classical test generation algorithms, based on topological algorithms or formal Boolean satisfiability (SAT) or BDD-based reasoning, may fail to generate testing patterns or to prove faults untestable. This work proposes the first efficient stuck-at fault ATPG algorithm able to prove testability or untestability of faults in presence of X-values. It overcomes the principal inaccuracy and pessimism of classical algorithms when X-values are considered. This accuracy is achieved by mapping the test generation problem to an instance of quantified Boolean formula (QBF) satisfiability. The resulting fault coverage improvement is shown by experimental results on ISCAS benchmark and larger industrial circuits.},
  doi = {http://dx.doi.org/10.7873/DATE.2013.098},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2013/DATE_HilleKEWB2013.pdf}
}
27. Efficient Variation-Aware Statistical Dynamic Timing Analysis for Delay Test Applications
Wagner, M. and Wunderlich, H.-J.
Proceedings of the Conference on Design, Automation and Test in Europe (DATE'13), Grenoble, France, 18-22 March 2013, pp. 276-281
2013
Abstract: Increasing parameter variations, caused by variations in process, temperature, power supply, and wear-out, have emerged as one of the most important challenges in semiconductor manufacturing and test. As a consequence for gate delay testing, a single test vector pair is no longer sufficient to provide the required low test escape probabilities for a single delay fault. Recently proposed statistical test generation methods are therefore guided by a metric, which defines the probability of detecting a delay fault with a given test set. However, since run time and accuracy are dominated by the large number of required metric evaluations, more efficient approximation methods are mandatory for any practical application. In this work, a new statistical dynamic timing analysis algorithm is introduced to tackle this problem. The associated approximation error is very small and predominantly caused by the impact of delay variations on path sensitization and hazards. The experimental results show a large speedup compared to classical Monte Carlo simulations.
BibTeX:
@inproceedings{WagneW2013,
  author = {Wagner, Marcus and Wunderlich, Hans-Joachim},
  title = {{Efficient Variation-Aware Statistical Dynamic Timing Analysis for Delay Test Applications}},
  booktitle = {Proceedings of the Conference on Design, Automation and Test in Europe (DATE'13)},
  year = {2013},
  pages = {276--281},
  abstract = {Increasing parameter variations, caused by variations in process, temperature, power supply, and wear-out, have emerged as one of the most important challenges in semiconductor manufacturing and test. As a consequence for gate delay testing, a single test vector pair is no longer sufficient to provide the required low test escape probabilities for a single delay fault. Recently proposed statistical test generation methods are therefore guided by a metric, which defines the probability of detecting a delay fault with a given test set. However, since run time and accuracy are dominated by the large number of required metric evaluations, more efficient approximation methods are mandatory for any practical application. In this work, a new statistical dynamic timing analysis algorithm is introduced to tackle this problem. The associated approximation error is very small and predominantly caused by the impact of delay variations on path sensitization and hazards. The experimental results show a large speedup compared to classical Monte Carlo simulations.},
  doi = {http://dx.doi.org/10.7873/DATE.2013.069},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2013/DATE_WagneW2013.pdf}
}
26. Accurate X-Propagation for Test Applications by SAT-Based Reasoning
Kochte, M.A., Elm, M. and Wunderlich, H.-J.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD)
Vol. 31(12), December 2012, pp. 1908-1919
2012
Keywords: Unknown values; stuck-at fault coverage; accurate fault simulation; simulation pessimism
Abstract: Unknown or X-values during test application may originate from uncontrolled sequential cells or macros, from clock or A/D boundaries or from tri-state logic. The exact identification of X-value propagation paths in logic circuits is crucial in logic simulation and fault simulation. In the first case, it enables the proper assessment of expected responses and the effective and efficient handling of X-values during test response compaction. In the second case, it is important for a proper assessment of fault coverage of a given test set and consequently influences the efficiency of test pattern generation. The commonly employed n-valued logic simulation evaluates the propagation of X-values only pessimistically, i.e. the X-propagation paths found by n- valued logic simulation are a superset of the actual propagation paths. This paper presents an efficient method to overcome this pessimism and to determine accurately the set of signals which carry an X-value for an input pattern. As examples, it investigates the influence of this pessimism on the two applications X-masking and stuck-at fault coverage assessment. The experimental results on benchmark and industrial circuits assess the pessimism of classic algorithms and show that these algorithms significantly overestimate the signals with X-values. The experiments show that overmasking of test data during test compression can be reduced by an accurate analysis. In stuck-at fault simulation, the coverage of the test set is increased by the proposed algorithm without incurring any overhead.
BibTeX:
@article{KochtEW2012,
  author = {Kochte, Michael A. and Elm, Melanie and Wunderlich, Hans-Joachim},
  title = {{Accurate X-Propagation for Test Applications by SAT-Based Reasoning}},
  journal = {IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD)},
  publisher = {IEEE Computer Society},
  year = {2012},
  volume = {31},
  number = {12},
  pages = {1908--1919},
  keywords = {Unknown values; stuck-at fault coverage; accurate fault simulation; simulation pessimism},
  abstract = {Unknown or X-values during test application may originate from uncontrolled sequential cells or macros, from clock or A/D boundaries or from tri-state logic. The exact identification of X-value propagation paths in logic circuits is crucial in logic simulation and fault simulation. In the first case, it enables the proper assessment of expected responses and the effective and efficient handling of X-values during test response compaction. In the second case, it is important for a proper assessment of fault coverage of a given test set and consequently influences the efficiency of test pattern generation. The commonly employed n-valued logic simulation evaluates the propagation of X-values only pessimistically, i.e. the X-propagation paths found by n- valued logic simulation are a superset of the actual propagation paths. This paper presents an efficient method to overcome this pessimism and to determine accurately the set of signals which carry an X-value for an input pattern. As examples, it investigates the influence of this pessimism on the two applications X-masking and stuck-at fault coverage assessment. The experimental results on benchmark and industrial circuits assess the pessimism of classic algorithms and show that these algorithms significantly overestimate the signals with X-values. The experiments show that overmasking of test data during test compression can be reduced by an accurate analysis. In stuck-at fault simulation, the coverage of the test set is increased by the proposed algorithm without incurring any overhead.},
  doi = {http://dx.doi.org/10.1109/TCAD.2012.2210422},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/TCAD_KochtEW2012.pdf}
}
25. Variation-Aware Fault Grading
Czutro, A., Imhof, M.E., Jiang, J., Mumtaz, A., Sauer, M., Becker, B., Polian, I. and Wunderlich, H.-J.
Proceedings of the 21st IEEE Asian Test Symposium (ATS'12), Niigata, Japan, 19-22 November 2012, pp. 344-349
2012
Keywords: process variations, fault grading, Monte-Carlo, fault simulation, SAT-based, ATPG, GPGPU
Abstract: An iterative flow to generate test sets providing high fault coverage under extreme parameter variations is presented. The generation is guided by the novel metric of circuit coverage, calculated by massively parallel statistical fault simulation on GPGPUs. Experiments show that the statistical fault coverage of the generated test sets exceeds by far that achieved by standard approaches.
BibTeX:
@inproceedings{CzutrIJMSBPW2012,
  author = {Czutro, A. and Imhof, Michael E. and Jiang, J. and Mumtaz, Abdullah and Sauer, M. and Becker, Bernd and Polian, Ilia and Wunderlich, Hans-Joachim},
  title = {{Variation-Aware Fault Grading}},
  booktitle = {Proceedings of the 21st IEEE Asian Test Symposium (ATS'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {344--349},
  keywords = {process variations, fault grading, Monte-Carlo, fault simulation, SAT-based, ATPG, GPGPU},
  abstract = {An iterative flow to generate test sets providing high fault coverage under extreme parameter variations is presented. The generation is guided by the novel metric of circuit coverage, calculated by massively parallel statistical fault simulation on GPGPUs. Experiments show that the statistical fault coverage of the generated test sets exceeds by far that achieved by standard approaches.},
  doi = {http://dx.doi.org/10.1109/ATS.2012.14},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/ATS_CzutrIJMSBPW2012.pdf}
}
24. Built-in Self-Diagnosis Exploiting Strong Diagnostic Windows in Mixed-Mode Test
Cook, A., Hellebrand, S. and Wunderlich, H.-J.
Proceedings of the 17th IEEE European Test Symposium (ETS'12), Annecy, France, 28 May-1 June 2012, pp. 146-151
2012
Keywords: Built-in Diagnosis; Design for Diagnosis
Abstract: Efficient diagnosis procedures are crucial both for volume and for in-field diagnosis. In either case the underlying test strategy should provide a high coverage of realistic fault mechanisms and support a low-cost implementation. Built-in self-diagnosis (BISD) is a promising solution, if the diagnosis procedure is fully in line with the test flow. However, most known BISD schemes require multiple test runs or modifications of the standard scan-based test infrastructure. Some recent schemes circumvent these problems, but they focus on deterministic patterns to limit the storage requirements for diagnostic data. Thus, they cannot exploit the benefits of a mixed-mode test such as high coverage of non-target faults and reduced test data storage. This paper proposes a BISD scheme using mixed-mode patterns and partitioning the test sequence into “weak” and “strong” diagnostic windows, which are treated differently during diagnosis. As the experimental results show, this improves the coverage of non-target faults and enhances the diagnostic resolution compared to state-of-the-art approaches. At the same time the overall storage overhead for input and response data is considerably reduced.
BibTeX:
@inproceedings{CookHW2012,
  author = {Cook, Alejandro and Hellebrand, Sybille and Wunderlich, Hans-Joachim},
  title = {{Built-in Self-Diagnosis Exploiting Strong Diagnostic Windows in Mixed-Mode Test}},
  booktitle = {Proceedings of the 17th IEEE European Test Symposium (ETS'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {146--151},
  keywords = {Built-in Diagnosis; Design for Diagnosis},
  abstract = {Efficient diagnosis procedures are crucial both for volume and for in-field diagnosis. In either case the underlying test strategy should provide a high coverage of realistic fault mechanisms and support a low-cost implementation. Built-in self-diagnosis (BISD) is a promising solution, if the diagnosis procedure is fully in line with the test flow. However, most known BISD schemes require multiple test runs or modifications of the standard scan-based test infrastructure. Some recent schemes circumvent these problems, but they focus on deterministic patterns to limit the storage requirements for diagnostic data. Thus, they cannot exploit the benefits of a mixed-mode test such as high coverage of non-target faults and reduced test data storage. This paper proposes a BISD scheme using mixed-mode patterns and partitioning the test sequence into “weak” and “strong” diagnostic windows, which are treated differently during diagnosis. As the experimental results show, this improves the coverage of non-target faults and enhances the diagnostic resolution compared to state-of-the-art approaches. At the same time the overall storage overhead for input and response data is considerably reduced.},
  doi = {http://dx.doi.org/10.1109/ETS.2012.6233025},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/ETS_CookHW2012.pdf}
}
23. Exact Stuck-at Fault Classification in Presence of Unknowns
Hillebrecht, S., Kochte, M.A., Wunderlich, H.-J. and Becker, B.
Proceedings of the 17th IEEE European Test Symposium (ETS'12), Annecy, France, 28 May-1 June 2012, pp. 98-103
2012
Keywords: Unknown values; simulation pessimism; exact fault simulation; SAT
Abstract: Fault simulation is an essential tool in electronic design automation. The accuracy of the computation of fault coverage in classic n-valued simulation algorithms is compromised by unknown (X) values. This results in a pessimistic underestimation of the coverage, and overestimation of unknown (X) values at the primary and pseudo-primary outputs. This work proposes the first stuck-at fault simulation algorithm free of any simulation pessimism in presence of unknowns. The SAT-based algorithm exactly classifies any fault and distinguishes between definite and possible detects. The pessimism w. r. t. unknowns present in classic algorithms is discussed in the experimental results on ISCAS benchmark and industrial circuits. The applicability of our algorithm to large industrial circuits is demonstrated.
BibTeX:
@inproceedings{HilleKWB2012,
  author = {Hillebrecht, Stefan and Kochte, Michael A. and Wunderlich, Hans-Joachim and Becker, Bernd},
  title = {{Exact Stuck-at Fault Classification in Presence of Unknowns}},
  booktitle = {Proceedings of the 17th IEEE European Test Symposium (ETS'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {98--103},
  keywords = {Unknown values; simulation pessimism; exact fault simulation; SAT},
  abstract = {Fault simulation is an essential tool in electronic design automation. The accuracy of the computation of fault coverage in classic n-valued simulation algorithms is compromised by unknown (X) values. This results in a pessimistic underestimation of the coverage, and overestimation of unknown (X) values at the primary and pseudo-primary outputs. This work proposes the first stuck-at fault simulation algorithm free of any simulation pessimism in presence of unknowns. The SAT-based algorithm exactly classifies any fault and distinguishes between definite and possible detects. The pessimism w. r. t. unknowns present in classic algorithms is discussed in the experimental results on ISCAS benchmark and industrial circuits. The applicability of our algorithm to large industrial circuits is demonstrated.},
  doi = {http://dx.doi.org/10.1109/ETS.2012.6233017},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/ETS_HilleKWB2012.pdf}
}
22. A Pseudo-Dynamic Comparator for Error Detection in Fault Tolerant Architectures
Tran, D.A., Virazel, A., Bosio, A., Dilillo, L., Girard, P., Todri, A., Imhof, M.E. and Wunderlich, H.-J.
Proceedings of the 30th IEEE VLSI Test Symposium (VTS'12), Hyatt Maui, Hawaii, USA, 23-25 April 2012, pp. 50-55
2012
Keywords: Robustness; Soft error; Timing error; Fault tolerance; Duplication; Comparison; Power consumption
Abstract: Although CMOS technology scaling offers many advantages, it suffers from robustness problem caused by hard, soft and timing errors. The robustness of future CMOS technology nodes must be improved and the use of fault tolerant architectures is probably the most viable solution. In this context, Duplication/Comparison scheme is widely used for error detection. Traditionally, this scheme uses a static comparator structure that detects hard error. However, it is not effective for soft and timing errors detection due to the possible masking of glitches by the comparator itself. To solve this problem, we propose a pseudo-dynamic comparator architecture that combines a dynamic CMOS transition detector and a static comparator. Experimental results show that the proposed comparator detects not only hard errors but also small glitches related to soft and timing errors. Moreover, its dynamic characteristics allow reducing the power consumption while keeping an equivalent silicon area compared to a static comparator. This study is the first step towards a full fault tolerant approach targeting robustness improvement of CMOS logic circuits.
BibTeX:
@inproceedings{TranVBDGTIW2012,
  author = {Tran, Duc Anh and Virazel, Arnaud and Bosio, Alberto and Dilillo, Luigi and Girard, Patrick and Todri, Aida and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {{A Pseudo-Dynamic Comparator for Error Detection in Fault Tolerant Architectures}},
  booktitle = {Proceedings of the 30th IEEE VLSI Test Symposium (VTS'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {50--55},
  keywords = {Robustness; Soft error; Timing error; Fault tolerance; Duplication; Comparison; Power consumption},
  abstract = {Although CMOS technology scaling offers many advantages, it suffers from robustness problems caused by hard, soft and timing errors. The robustness of future CMOS technology nodes must be improved and the use of fault tolerant architectures is probably the most viable solution. In this context, the Duplication/Comparison scheme is widely used for error detection. Traditionally, this scheme uses a static comparator structure that detects hard errors. However, it is not effective for the detection of soft and timing errors due to the possible masking of glitches by the comparator itself. To solve this problem, we propose a pseudo-dynamic comparator architecture that combines a dynamic CMOS transition detector and a static comparator. Experimental results show that the proposed comparator detects not only hard errors but also small glitches related to soft and timing errors. Moreover, its dynamic characteristics allow reducing the power consumption while keeping an equivalent silicon area compared to a static comparator. This study is the first step towards a full fault tolerant approach targeting robustness improvement of CMOS logic circuits.},
  doi = {http://dx.doi.org/10.1109/VTS.2012.6231079},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/VTS_TranVBDGTIW2012.pdf}
}
21. Built-in Self-Diagnosis Targeting Arbitrary Defects with Partial Pseudo-Exhaustive Test
Cook, A., Hellebrand, S., Imhof, M.E., Mumtaz, A. and Wunderlich, H.-J.
Proceedings of the 13th IEEE Latin-American Test Workshop (LATW'12), Quito, Ecuador, 10-13 April 2012, pp. 1-4
2012
Keywords: Built-in Self-Test; Pseudo-Exhaustive Test; Built-in Self-Diagnosis
Abstract: Pseudo-exhaustive test completely verifies all output functions of a combinational circuit, which provides a high coverage of non-target faults and allows an efficient on-chip implementation. To avoid long test times caused by large output cones, partial pseudo-exhaustive test (P-PET) has been proposed recently. Here only cones with a limited number of inputs are tested exhaustively, and the remaining faults are targeted with deterministic patterns. Using P-PET patterns for built-in diagnosis, however, is challenging because of the large amount of associated response data. This paper presents a built-in diagnosis scheme which only relies on sparsely distributed data in the response sequence, but still preserves the benefits of P-PET.
BibTeX:
@inproceedings{CookHIMW2012,
  author = {Cook, Alejandro and Hellebrand, Sybille and Imhof, Michael E. and Mumtaz, Abdullah and Wunderlich, Hans-Joachim},
  title = {{Built-in Self-Diagnosis Targeting Arbitrary Defects with Partial Pseudo-Exhaustive Test}},
  booktitle = {Proceedings of the 13th IEEE Latin-American Test Workshop (LATW'12)},
  publisher = {IEEE Computer Society},
  year = {2012},
  pages = {1--4},
  keywords = {Built-in Self-Test; Pseudo-Exhaustive Test; Built-in Self-Diagnosis},
  abstract = {Pseudo-exhaustive test completely verifies all output functions of a combinational circuit, which provides a high coverage of non-target faults and allows an efficient on-chip implementation. To avoid long test times caused by large output cones, partial pseudo-exhaustive test (P-PET) has been proposed recently. Here only cones with a limited number of inputs are tested exhaustively, and the remaining faults are targeted with deterministic patterns. Using P-PET patterns for built-in diagnosis, however, is challenging because of the large amount of associated response data. This paper presents a built-in diagnosis scheme which only relies on sparsely distributed data in the response sequence, but still preserves the benefits of P-PET.},
  doi = {http://dx.doi.org/10.1109/LATW.2012.6261229},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2012/LATW_CookHIMW2012.pdf}
}
20. Diagnostic Test of Robust Circuits
Cook, A., Hellebrand, S., Indlekofer, T. and Wunderlich, H.-J.
Proceedings of the 20th IEEE Asian Test Symposium (ATS'11), New Delhi, India, 20-23 November 2011, pp. 285-290
2011
Keywords: Robust Circuits; Built-in Self-Test; Built-in Self-Diagnosis; Time Redundancy
Abstract: Robust circuits are able to tolerate certain faults, but also pose additional challenges for test and diagnosis. To improve yield, the test must distinguish between critical faults and faults that could be compensated during system operation; in addition, efficient diagnosis procedures are needed to support yield ramp-up in the case of critical faults. Previous work on circuits with time redundancy has shown that “signature rollback” can distinguish critical permanent faults from uncritical transient faults. The test is partitioned into shorter sessions, and a rollback is triggered immediately after a faulty session. If the repeated session shows the correct result, then a transient fault is assumed. The reference values for the sessions are represented in a very compact format. Storing only a few bits characterizing the MISR state over time can provide the same quality as storing the complete signature. In this work the signature rollback scheme is extended to an integrated test and diagnosis procedure. It is shown that a single test run with highly compacted reference data is sufficient to reach a comparable diagnostic resolution to that of a diagnostic session without any data compaction.
BibTeX:
@inproceedings{CookHIW2011,
  author = {Cook, Alejandro and Hellebrand, Sybille and Indlekofer, Thomas and Wunderlich, Hans-Joachim},
  title = {{Diagnostic Test of Robust Circuits}},
  booktitle = {Proceedings of the 20th IEEE Asian Test Symposium (ATS'11)},
  publisher = {IEEE Computer Society},
  year = {2011},
  pages = {285--290},
  keywords = {Robust Circuits; Built-in Self-Test; Built-in Self-Diagnosis; Time Redundancy},
  abstract = {Robust circuits are able to tolerate certain faults, but also pose additional challenges for test and diagnosis. To improve yield, the test must distinguish between critical faults and faults that could be compensated during system operation; in addition, efficient diagnosis procedures are needed to support yield ramp-up in the case of critical faults. Previous work on circuits with time redundancy has shown that “signature rollback” can distinguish critical permanent faults from uncritical transient faults. The test is partitioned into shorter sessions, and a rollback is triggered immediately after a faulty session. If the repeated session shows the correct result, then a transient fault is assumed. The reference values for the sessions are represented in a very compact format. Storing only a few bits characterizing the MISR state over time can provide the same quality as storing the complete signature. In this work the signature rollback scheme is extended to an integrated test and diagnosis procedure. It is shown that a single test run with highly compacted reference data is sufficient to reach a comparable diagnostic resolution to that of a diagnostic session without any data compaction.},
  doi = {http://dx.doi.org/10.1109/ATS.2011.55},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2011/ATS_CookHIW2011.pdf}
}
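The signature-rollback scheme in the entry above compacts test responses with a MISR (multiple-input signature register) and compares per-session signatures against stored reference values. As a minimal illustration of the compaction step only, here is a toy MISR in Python; the register width and feedback taps are arbitrary choices for this sketch, not those used in the paper:

```python
# Toy MISR sketch: each clock, the register shifts with linear feedback
# and XORs in one word of the circuit's test response. After a session,
# the register state is the signature compared against the reference.

def misr_step(state, response, taps, width):
    """One MISR clock: compute tap feedback, shift, XOR in the response word."""
    feedback = 0
    for t in taps:
        feedback ^= (state >> t) & 1
    state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state ^ response

def misr_signature(responses, width=16, taps=(15, 13, 12, 10)):
    """Compact a whole response stream into one width-bit signature."""
    state = 0
    for r in responses:
        state = misr_step(state, r & ((1 << width) - 1), taps, width)
    return state

# A single flipped response bit (a fault effect) changes the signature:
good = misr_signature([0xABCD, 0x1234, 0xF00F])
bad = misr_signature([0xABCD, 0x1235, 0xF00F])
assert good != bad
```

Because the MISR is linear over XOR, any single-bit difference in the response stream that is not shifted out propagates into a differing signature, which is what makes the per-session comparison meaningful.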
19. Embedded Test for Highly Accurate Defect Localization
Mumtaz, A., Imhof, M.E., Holst, S. and Wunderlich, H.-J.
Proceedings of the 20th IEEE Asian Test Symposium (ATS'11), New Delhi, India, 20-23 November 2011, pp. 213-218
2011
Keywords: BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug
Abstract: Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random (PR) patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.
BibTeX:
@inproceedings{MumtaIHW2011,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Holst, Stefan and Wunderlich, Hans-Joachim},
  title = {{Embedded Test for Highly Accurate Defect Localization}},
  booktitle = {Proceedings of the 20th IEEE Asian Test Symposium (ATS'11)},
  publisher = {IEEE Computer Society},
  year = {2011},
  pages = {213--218},
  keywords = {BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug},
  abstract = {Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random (PR) patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.},
  doi = {http://dx.doi.org/10.1109/ATS.2011.60},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2011/ATS_MumtaIHW2011.pdf}
}
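P-PET, as summarized in the entry above, tests every output cone with at most k inputs exhaustively and leaves larger cones to deterministic patterns. The core pattern-generation idea can be sketched as follows; cone extraction from the netlist is assumed to have happened already, and this toy generator (its name and the choice to hold off-cone inputs at 0) is illustrative, not the paper's implementation, which merges patterns of several cones into shared test vectors:

```python
# Sketch: exhaustively exercise one output cone by applying all 2^k
# combinations to the cone's inputs, independent of any fault model.
from itertools import product

def pseudo_exhaustive_patterns(cone_inputs, all_inputs, k):
    """Yield input patterns (dict name -> 0/1) testing one cone exhaustively.

    cone_inputs: the inputs feeding this cone (must be <= k of them).
    all_inputs:  every primary/pseudo-primary input of the circuit.
    Inputs outside the cone are simply held at 0 in this sketch.
    """
    if len(cone_inputs) > k:
        raise ValueError("cone too large for pseudo-exhaustive test")
    for values in product((0, 1), repeat=len(cone_inputs)):
        pattern = {name: 0 for name in all_inputs}
        pattern.update(zip(cone_inputs, values))
        yield pattern

# A 2-input cone in a 3-input circuit needs 2^2 = 4 patterns:
patterns = list(pseudo_exhaustive_patterns(["a", "b"], ["a", "b", "c"], k=3))
assert len(patterns) == 4
```

Since every Boolean function of the cone is fully verified by its 2^k patterns, coverage of arbitrary (non-modeled) defects inside the cone follows without any fault model.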
18. Robuster Selbsttest mit Diagnose
Cook, A., Hellebrand, S., Indlekofer, T. and Wunderlich, H.-J.
5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)
Vol. 231, Hamburg-Harburg, Germany, 27-29 September 2011, pp. 48-53
2011
Abstract: Robuste Schaltungen können bestimmte Fehler tolerieren, stellen aber auch besonders hohe Anforderungen an Test und Diagnose. Um Ausbeuteverluste zu vermeiden, muss der Test kritische Fehler von unkritischen Fehlern unterscheiden, die sich während des Systembetriebs nicht auswirken. Zur Verbesserung des Produktionsprozesses muss außerdem eine effiziente Diagnose für erkannte kritische Fehler unterstützt werden. Bisherige Arbeiten für Schaltungen mit Zeitredundanz haben gezeigt, dass ein Selbsttest mit Rücksetzpunkten kostengünstig kritische permanente Fehler von unkritischen transienten Fehlern unterscheiden kann. Hier wird der Selbsttest in N Sitzungen unterteilt, die bei einem Fehler sofort wiederholt werden. Tritt beim zweiten Durchlauf einer Sitzung kein Fehler mehr auf, geht man von einem transienten Fehler aus. Dabei genügt es, die Referenzantworten für die einzelnen Sitzungen in stark kompaktierter Form abzulegen. Statt einer vollständigen Signatur wird nur eine kurze Bitfolge gespeichert, welche die Signaturberechnung über mehrere Zeitpunkte hinweg charakterisiert. Die vorliegende Arbeit erweitert das Testen mit Rücksetzpunkten zu einem integrierten Test- und Diagnoseprozess. Es wird gezeigt, dass ein einziger Testdurchlauf mit stark kompaktierten Referenzwerten genügt, um eine vergleichbare diagnostische Auflösung zu erreichen wie bei einem Test ohne Antwortkompaktierung.

Robust circuits can tolerate certain faults, but they also place particularly high demands on test and diagnosis. To avoid yield loss, the test must distinguish critical faults from uncritical faults that have no effect during system operation. In addition, efficient diagnosis of detected critical faults must be supported to improve the production process. Previous work on circuits with time redundancy has shown that a self-test with rollback points can distinguish critical permanent faults from uncritical transient faults at low cost. Here the self-test is divided into N sessions, which are repeated immediately when a fault occurs. If no fault occurs in the second run of a session, a transient fault is assumed. It is sufficient to store the reference responses for the individual sessions in highly compacted form: instead of a complete signature, only a short bit sequence is stored which characterizes the signature computation over several points in time. This work extends testing with rollback points to an integrated test and diagnosis process. It is shown that a single test run with highly compacted reference values suffices to achieve a diagnostic resolution comparable to that of a test without response compaction.
BibTeX:
@inproceedings{CookHIW2011a,
  author = {Cook, Alejandro and Hellebrand, Sybille and Indlekofer, Thomas and Wunderlich, Hans-Joachim},
  title = {{Robuster Selbsttest mit Diagnose}},
  booktitle = {5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)},
  publisher = {VDE VERLAG GMBH},
  year = {2011},
  volume = {231},
  pages = {48--53},
  abstract = {Robuste Schaltungen können bestimmte Fehler tolerieren, stellen aber auch besonders hohe Anforderungen an Test und Diagnose. Um Ausbeuteverluste zu vermeiden, muss der Test kritische Fehler von unkritischen Fehlern unterscheiden, die sich während des Systembetriebs nicht auswirken. Zur Verbesserung des Produktionsprozesses muss außerdem eine effiziente Diagnose für erkannte kritische Fehler unterstützt werden. Bisherige Arbeiten für Schaltungen mit Zeitredundanz haben gezeigt, dass ein Selbsttest mit Rücksetzpunkten kostengünstig kritische permanente Fehler von unkritischen transienten Fehlern unterscheiden kann. Hier wird der Selbsttest in N Sitzungen unterteilt, die bei einem Fehler sofort wiederholt werden. Tritt beim zweiten Durchlauf einer Sitzung kein Fehler mehr auf, geht man von einem transienten Fehler aus. Dabei genügt es, die Referenzantworten für die einzelnen Sitzungen in stark kompaktierter Form abzulegen. Statt einer vollständigen Signatur wird nur eine kurze Bitfolge gespeichert, welche die Signaturberechnung über mehrere Zeitpunkte hinweg charakterisiert. Die vorliegende Arbeit erweitert das Testen mit Rücksetzpunkten zu einem integrierten Test- und Diagnoseprozess. Es wird gezeigt, dass ein einziger Testdurchlauf mit stark kompaktierten Referenzwerten genügt, um eine vergleichbare diagnostische Auflösung zu erreichen wie bei einem Test ohne Antwortkompaktierung.},
  url = {http://www.vde-verlag.de/proceedings-en/453357011.html},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2011/ZUE_CookHIW2011.pdf}
}
17. Korrektur transienter Fehler in eingebetteten Speicherelementen
Imhof, M.E. and Wunderlich, H.-J.
5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)
Vol. 231, Hamburg-Harburg, Germany, 27-29 September 2011, pp. 76-83
2011
Keywords: Transiente Fehler; Soft Error; Single Event Upset (SEU); Erkennung; Lokalisierung; Korrektur; Latch; Register; Single Event Effect; Detection; Localization; Correction
Abstract: In der vorliegenden Arbeit wird ein Schema zur Korrektur von transienten Fehlern in eingebetteten, pegelgesteuerten Speicherelementen vorgestellt. Das Schema verwendet Struktur- und Informationsredundanz, um Single Event Upsets (SEUs) in Registern zu erkennen und zu korrigieren. Mit geringem Mehraufwand kann ein betroffenes Bit lokalisiert und mit einem hier vorgestellten Bit-Flipping-Latch (BFL) rückgesetzt werden, so dass die Zahl zusätzlicher Taktzyklen im Fehlerfall minimiert wird. Ein Vergleich mit anderen Erkennungs- und Korrekturschemata zeigt einen deutlich reduzierten Hardwaremehraufwand.

In this paper a soft error correction scheme for embedded level sensitive storage elements is presented. The scheme employs structural- and information-redundancy to detect and correct Single Event Upsets (SEUs) in registers. With low additional hardware overhead the affected bit can be localized and reset with the presented Bit-Flipping-Latch (BFL), thereby minimizing the amount of additional clock cycles in the faulty case. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.

BibTeX:
@inproceedings{ImhofW2011,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {{Korrektur transienter Fehler in eingebetteten Speicherelementen}},
  booktitle = {5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)},
  publisher = {VDE VERLAG GMBH},
  year = {2011},
  volume = {231},
  pages = {76--83},
  keywords = {Transiente Fehler; Soft Error; Single Event Upset (SEU); Erkennung; Lokalisierung; Korrektur; Latch; Register; Single Event Effect; Detection; Localization; Correction},
  abstract = {In der vorliegenden Arbeit wird ein Schema zur Korrektur von transienten Fehlern in eingebetteten, pegelgesteuerten Speicherelementen vorgestellt. Das Schema verwendet Struktur- und Informationsredundanz, um Single Event Upsets (SEUs) in Registern zu erkennen und zu korrigieren. Mit geringem Mehraufwand kann ein betroffenes Bit lokalisiert und mit einem hier vorgestellten Bit-Flipping-Latch (BFL) rückgesetzt werden, so dass die Zahl zusätzlicher Taktzyklen im Fehlerfall minimiert wird. Ein Vergleich mit anderen Erkennungs- und Korrekturschemata zeigt einen deutlich reduzierten Hardwaremehraufwand.

In this paper a soft error correction scheme for embedded level sensitive storage elements is presented. The scheme employs structural- and information-redundancy to detect and correct Single Event Upsets (SEUs) in registers. With low additional hardware overhead the affected bit can be localized and reset with the presented Bit-Flipping-Latch (BFL), thereby minimizing the amount of additional clock cycles in the faulty case. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.},
  url = {http://www.vde-verlag.de/proceedings-de/453357010.html},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2011/ZuE_ImhofW2011.pdf}
}

16. Eingebetteter Test zur hochgenauen Defekt-Lokalisierung
Mumtaz, A., Imhof, M.E., Holst, S. and Wunderlich, H.-J.
5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)
Vol. 231, Hamburg-Harburg, Germany, 27-29 September 2011, pp. 43-47
2011
Keywords: Eingebetteter Selbsttest; Pseudoerschöpfender Test; Diagnose; Debug; BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug
Abstract: Moderne Diagnosealgorithmen können aus den vorhandenen Fehlerdaten direkt die defekte Schaltungsstruktur identifizieren, ohne sich auf spezialisierte Fehlermodelle zu beschränken. Solche Algorithmen benötigen jedoch Testmuster mit einer hohen Defekterfassung. Dies ist insbesondere im eingebetteten Test eine große Herausforderung. Der Partielle Pseudo-Erschöpfende Test (P-PET) ist eine Methode, um die Defekterfassung im Vergleich zu einem Zufallstest oder einem deterministischen Test für das Haftfehlermodell zu erhöhen. Wird die im eingebetteten Test übliche Phase der vorgeschalteten Erzeugung von Pseudozufallsmustern durch die Erzeugung partieller pseudo-erschöpfender Muster ersetzt, kann bei vergleichbarem Hardware-Aufwand und gleicher Testzeit eine optimale Defekterfassung für den größten Schaltungsteil erreicht werden. Diese Arbeit kombiniert zum ersten Mal P-PET mit einem fehlermodell-unabhängigen Diagnosealgorithmus und zeigt, dass sich beliebige Defekte im Mittel wesentlich präziser diagnostizieren lassen als mit Zufallsmustern oder einem deterministischen Test für Haftfehler.

Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.

BibTeX:
@inproceedings{MumtaIHW2011a,
  author = {Mumtaz, Abdullah and Imhof, Michael E. and Holst, Stefan and Wunderlich, Hans-Joachim},
  title = {{Eingebetteter Test zur hochgenauen Defekt-Lokalisierung}},
  booktitle = {5. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'11)},
  publisher = {VDE VERLAG GMBH},
  year = {2011},
  volume = {231},
  pages = {43--47},
  keywords = {Eingebetteter Selbsttest; Pseudoerschöpfender Test; Diagnose; Debug; BIST; Pseudo-Exhaustive Testing; Diagnosis; Debug},
  abstract = {Moderne Diagnosealgorithmen können aus den vorhandenen Fehlerdaten direkt die defekte Schaltungsstruktur identifizieren, ohne sich auf spezialisierte Fehlermodelle zu beschränken. Solche Algorithmen benötigen jedoch Testmuster mit einer hohen Defekterfassung. Dies ist insbesondere im eingebetteten Test eine große Herausforderung. Der Partielle Pseudo-Erschöpfende Test (P-PET) ist eine Methode, um die Defekterfassung im Vergleich zu einem Zufallstest oder einem deterministischen Test für das Haftfehlermodell zu erhöhen. Wird die im eingebetteten Test übliche Phase der vorgeschalteten Erzeugung von Pseudozufallsmustern durch die Erzeugung partieller pseudo-erschöpfender Muster ersetzt, kann bei vergleichbarem Hardware-Aufwand und gleicher Testzeit eine optimale Defekterfassung für den größten Schaltungsteil erreicht werden. Diese Arbeit kombiniert zum ersten Mal P-PET mit einem fehlermodell-unabhängigen Diagnosealgorithmus und zeigt, dass sich beliebige Defekte im Mittel wesentlich präziser diagnostizieren lassen als mit Zufallsmustern oder einem deterministischen Test für Haftfehler.

Modern diagnosis algorithms are able to identify the defective circuit structure directly from existing fail data without being limited to any specialized fault models. Such algorithms however require test patterns with a high defect coverage, posing a major challenge particularly for embedded testing.
In mixed-mode embedded test, a large number of pseudo-random patterns are applied prior to deterministic test patterns. Partial Pseudo-Exhaustive Testing (P-PET) replaces these pseudo-random patterns during embedded testing by partial pseudo-exhaustive patterns to test a large portion of a circuit fault-model independently. The overall defect coverage is optimized compared to random testing or deterministic tests using the stuck-at fault model while maintaining a comparable hardware overhead and the same test application time.
This work for the first time combines P-PET with a fault model independent diagnosis algorithm and shows that arbitrary defects can be diagnosed on average much more precisely than with standard embedded testing. The results are compared to random pattern testing and deterministic testing targeting stuck-at faults.},
  url = {http://www.vde-verlag.de/proceedings-de/453357010.html},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2011/ZuE_MumtaIHW2011a.pdf}
}

15. Variation-Aware Fault Modeling
Hopsch, F., Becker, B., Hellebrand, S., Polian, I., Straube, B., Vermeiren, W. and Wunderlich, H.-J.
SCIENCE CHINA Information Sciences
Vol. 54(9), September 2011, pp. 1813-1826
2011
Keywords: process variations; test methods; statistical test; histogram data base
Abstract: To achieve a high product quality for nano-scale systems, both realistic defect mechanisms and process variations must be taken into account. While existing approaches for variation-aware digital testing either restrict themselves to special classes of defects or assume given probability distributions to model variabilities, the proposed approach combines defect-oriented testing with statistical library characterization. It uses Monte Carlo simulations at electrical level to extract delay distributions of cells in the presence of defects and for the defect-free case. This allows distinguishing the effects of process variations on the cell delay from defect-induced cell delays under process variations. To provide a suitable interface for test algorithms at higher levels of abstraction, the distributions are represented as histograms and stored in a histogram data base (HDB). Thus, the computationally expensive defect analysis needs to be performed only once as a preprocessing step for library characterization, and statistical test algorithms do not require any low level information beyond the HDB. The generation of the HDB is demonstrated for primitive cells in 45 nm technology.
BibTeX:
@article{HopscBHPSVW2011,
  author = {Hopsch, Fabian and Becker, Bernd and Hellebrand, Sybille and Polian, Ilia and Straube, Bernd and Vermeiren, Wolfgang and Wunderlich, Hans-Joachim},
  title = {{Variation-Aware Fault Modeling}},
  journal = {SCIENCE CHINA Information Sciences},
  publisher = {Science China Press, co-published with Springer-Verlag},
  year = {2011},
  volume = {54},
  number = {9},
  pages = {1813--1826},
  keywords = {process variations; test methods; statistical test; histogram data base},
  abstract = {To achieve a high product quality for nano-scale systems, both realistic defect mechanisms and process variations must be taken into account. While existing approaches for variation-aware digital testing either restrict themselves to special classes of defects or assume given probability distributions to model variabilities, the proposed approach combines defect-oriented testing with statistical library characterization. It uses Monte Carlo simulations at electrical level to extract delay distributions of cells in the presence of defects and for the defect-free case. This allows distinguishing the effects of process variations on the cell delay from defect-induced cell delays under process variations. To provide a suitable interface for test algorithms at higher levels of abstraction, the distributions are represented as histograms and stored in a histogram data base (HDB). Thus, the computationally expensive defect analysis needs to be performed only once as a preprocessing step for library characterization, and statistical test algorithms do not require any low level information beyond the HDB. The generation of the HDB is demonstrated for primitive cells in 45 nm technology.},
  doi = {http://dx.doi.org/10.1007/s11432-011-4367-8},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/SCIS_HopscBHPSVW2011.pdf}
}
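The characterization flow in the entry above reduces Monte Carlo delay samples per cell to histograms stored in the histogram data base (HDB). The reduction step can be sketched as follows; the Gaussian delay model stands in for the electrical-level simulation, and all names, numbers and the bin layout here are assumptions for illustration, not values from the paper:

```python
# Sketch: Monte Carlo sampling of a cell delay, reduced to a fixed-bin
# normalized histogram -- the representation an HDB would store per cell
# and per defect, so higher-level test algorithms never touch SPICE data.
import random

def delay_histogram(sample_delay, n_samples=10000, n_bins=32,
                    lo=0.0, hi=2.0, seed=42):
    """Sample a delay model n_samples times; return relative bin frequencies."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    width = (hi - lo) / n_bins
    for _ in range(n_samples):
        d = sample_delay(rng)
        idx = min(max(int((d - lo) / width), 0), n_bins - 1)  # clamp outliers
        bins[idx] += 1
    return [b / n_samples for b in bins]

# Defect-free vs. defect-induced delay, both under process variation:
fault_free = delay_histogram(lambda r: r.gauss(0.8, 0.05))
defective = delay_histogram(lambda r: r.gauss(1.1, 0.08))
```

Comparing the two histograms is what lets a statistical test algorithm separate variation-induced slowdown from a defect-induced delay shift.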
14. Soft Error Correction in Embedded Storage Elements
Imhof, M.E. and Wunderlich, H.-J.
Proceedings of the 17th IEEE International On-Line Testing Symposium (IOLTS'11), Athens, Greece, 13-15 July 2011, pp. 169-174
2011
Keywords: Single Event Effect; Correction; Latch; Register
Abstract: In this paper a soft error correction scheme for embedded storage elements in level sensitive designs is presented. It employs space redundancy to detect and locate Single Event Upsets (SEUs). It is able to detect SEUs in registers and employ architectural replay to perform correction with low additional hardware overhead. Together with the proposed bit flipping latch an online correction can be implemented on bit level with a minimal loss of clock cycles. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.
BibTeX:
@inproceedings{ImhofW2011a,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {{Soft Error Correction in Embedded Storage Elements}},
  booktitle = {Proceedings of the 17th IEEE International On-Line Testing Symposium (IOLTS'11)},
  publisher = {IEEE Computer Society},
  year = {2011},
  pages = {169--174},
  keywords = {Single Event Effect; Correction; Latch; Register},
  abstract = {In this paper a soft error correction scheme for embedded storage elements in level sensitive designs is presented. It employs space redundancy to detect and locate Single Event Upsets (SEUs). It is able to detect SEUs in registers and employ architectural replay to perform correction with low additional hardware overhead. Together with the proposed bit flipping latch an online correction can be implemented on bit level with a minimal loss of clock cycles. A comparison with other detection and correction schemes shows a significantly lower hardware overhead.},
  doi = {http://dx.doi.org/10.1109/IOLTS.2011.5993832},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2011/IOLTS_ImhofW2011.pdf}
}
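The entry above uses space redundancy to detect and locate SEUs in registers so the affected bit can be corrected. One classic way to locate a single flipped bit in a register group, shown here purely as an illustration and not as the paper's circuit, is cross parity: one parity bit per register (row parity) plus one XOR parity word across the registers (column parity); a single upset then flags exactly one row and one column:

```python
# Cross-parity sketch for SEU localization in a group of registers.
# All function names and widths are assumptions made for this example.

def parity(word):
    """Even parity (0/1) of an integer word."""
    return bin(word).count("1") & 1

def xor_all(words):
    acc = 0
    for w in words:
        acc ^= w
    return acc

def encode(registers):
    """Check bits: one row-parity bit per register, one column-parity word."""
    return [parity(r) for r in registers], xor_all(registers)

def locate_seu(registers, row_par, col_par):
    """Return (register index, bit position) of a single upset, else None."""
    bad_rows = [i for i, r in enumerate(registers) if parity(r) != row_par[i]]
    col_diff = col_par ^ xor_all(registers)   # set bits = flipped columns
    if len(bad_rows) == 1 and col_diff:
        return bad_rows[0], col_diff.bit_length() - 1
    return None

regs = [0b1010, 0b0111, 0b1100]
row_par, col_par = encode(regs)
regs[1] ^= 0b0100                             # inject a single-bit upset
assert locate_seu(regs, row_par, col_par) == (1, 2)
```

Once the bit is located, a correction mechanism along the lines of the paper's bit flipping latch would simply reset that one bit instead of restoring the whole state.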
13. Erkennung von transienten Fehlern in Schaltungen mit reduzierter Verlustleistung;
Detection of transient faults in circuits with reduced power dissipation

Imhof, M.E., Wunderlich, H.-J. and Zoellin, C.G.
2. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'08)
Vol. 57, Ingolstadt, Germany, 29 September-1 October 2008, pp. 107-114
2008
Keywords: Robustes Design; Fehlertoleranz; Verlustleistung; Latch; Register; Single Event Effect; Robust design; fault tolerance; power dissipation; latch; register; single event effects
Abstract: Für Speicherfelder sind fehlerkorrigierende Codes die vorherrschende Methode, um akzeptable Fehlerraten zu erreichen. In vielen aktuellen Schaltungen erreicht die Zahl der Speicherelemente in freier Logik die Größenordnung der Zahl von SRAM-Zellen vor wenigen Jahren. Zur Reduktion der Verlustleistung wird häufig der Takt der pegelgesteuerten Speicherelemente unterdrückt und die Speicherelemente müssen ihren Zustand über lange Zeitintervalle halten. Die Notwendigkeit Speicherzellen abzusichern wird zusätzlich durch die Miniaturisierung verstärkt, die zu einer erhöhten Empfindlichkeit der Speicherelemente geführt hat. Dieser Artikel stellt eine Methode zur fehlertoleranten Anordnung von pegelgesteuerten Speicherelementen vor, die bei unterdrücktem Takt Einfachfehler lokalisieren und Mehrfachfehler erkennen kann. Bei aktiviertem Takt können Einfach- und Mehrfachfehler erkannt werden. Die Register können ähnlich wie Prüfpfade effizient in den Entwurfsgang integriert werden. Die Diagnoseinformation kann auf Modulebene leicht berechnet und genutzt werden.

For memories error correcting codes are the method of choice to guarantee acceptable error rates. In many current designs the number of storage elements in random logic reaches the number of SRAM-cells some years ago. Clock-gating is often employed to reduce the power dissipation of level-sensitive storage elements while the elements have to retain their state over long periods of time. The necessity to protect storage elements is amplified by the miniaturization, which leads to an increased susceptibility of the storage elements.
This article proposes a method for the fault-tolerant arrangement of level-sensitive storage elements, which can locate single faults and detect multiple faults while being clock-gated. With active clock single and multiple faults can be detected. The registers can be efficiently integrated similar to the scan design flow. The diagnostic information can be easily computed and used at module level.

BibTeX:
@inproceedings{ImhofWZ2008a,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zoellin, Christian G.},
  title = {{Erkennung von transienten Fehlern in Schaltungen mit reduzierter Verlustleistung; Detection of transient faults in circuits with reduced power dissipation}},
  booktitle = {2. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'08)},
  publisher = {VDE VERLAG GMBH},
  year = {2008},
  volume = {57},
  pages = {107--114},
  keywords = {Robustes Design; Fehlertoleranz; Verlustleistung; Latch; Register; Single Event Effect; Robust design; fault tolerance; power dissipation; latch; register; single event effects},
  abstract = {Für Speicherfelder sind fehlerkorrigierende Codes die vorherrschende Methode, um akzeptable Fehlerraten zu erreichen. In vielen aktuellen Schaltungen erreicht die Zahl der Speicherelemente in freier Logik die Größenordnung der Zahl von SRAM-Zellen vor wenigen Jahren. Zur Reduktion der Verlustleistung wird häufig der Takt der pegelgesteuerten Speicherelemente unterdrückt und die Speicherelemente müssen ihren Zustand über lange Zeitintervalle halten. Die Notwendigkeit Speicherzellen abzusichern wird zusätzlich durch die Miniaturisierung verstärkt, die zu einer erhöhten Empfindlichkeit der Speicherelemente geführt hat. Dieser Artikel stellt eine Methode zur fehlertoleranten Anordnung von pegelgesteuerten Speicherelementen vor, die bei unterdrücktem Takt Einfachfehler lokalisieren und Mehrfachfehler erkennen kann. Bei aktiviertem Takt können Einfach- und Mehrfachfehler erkannt werden. Die Register können ähnlich wie Prüfpfade effizient in den Entwurfsgang integriert werden. Die Diagnoseinformation kann auf Modulebene leicht berechnet und genutzt werden.

For memories error correcting codes are the method of choice to guarantee acceptable error rates. In many current designs the number of storage elements in random logic reaches the number of SRAM-cells some years ago. Clock-gating is often employed to reduce the power dissipation of level-sensitive storage elements while the elements have to retain their state over long periods of time. The necessity to protect storage elements is amplified by the miniaturization, which leads to an increased susceptibility of the storage elements.
This article proposes a method for the fault-tolerant arrangement of level-sensitive storage elements, which can locate single faults and detect multiple faults while being clock-gated. With active clock single and multiple faults can be detected. The registers can be efficiently integrated similar to the scan design flow. The diagnostic information can be easily computed and used at module level.},
  url = {http://www.vde-verlag.de/proceedings-de/453119017.html},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2008/ZuE_ImhofWZ2008a.pdf}
}

12. Integrating Scan Design and Soft Error Correction in Low-Power Applications
Imhof, M.E., Wunderlich, H.-J. and Zoellin, C.G.
Proceedings of the 14th IEEE International On-Line Testing Symposium (IOLTS'08), Rhodes, Greece, 7-9 July 2008, pp. 59-64
2008
DOI URL PDF 
Keywords: Robust design; fault tolerance; latch; low power; register; single event effects
Abstract: Error correcting coding is the dominant technique to achieve acceptable soft-error rates in memory arrays. In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. Often latches are clock gated and have to retain their states during longer periods. Moreover, miniaturization has led to elevated susceptibility of the memory elements and further increases the need for protection.
This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With active clock, single and multiple errors are detected. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.
BibTeX:
@inproceedings{ImhofWZ2008,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zoellin, Christian G.},
  title = {{Integrating Scan Design and Soft Error Correction in Low-Power Applications}},
  booktitle = {Proceedings of the 14th IEEE International On-Line Testing Symposium (IOLTS'08)},
  publisher = {IEEE Computer Society},
  year = {2008},
  pages = {59--64},
  keywords = {Robust design; fault tolerance; latch; low power; register; single event effects},
  abstract = {Error correcting coding is the dominant technique to achieve acceptable soft-error rates in memory arrays. In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. Often latches are clock gated and have to retain their states during longer periods. Moreover, miniaturization has led to elevated susceptibility of the memory elements and further increases the need for protection. 
This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With active clock, single and multiple errors are detected. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.},
  url = {http://www.computer.org/csdl/proceedings/iolts/2008/3264/00/3264a059-abs.html},
  doi = {http://dx.doi.org/10.1109/IOLTS.2008.31},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/IOLTS_ImhofWZ2008.pdf}
}

11. Scan Chain Clustering for Test Power Reduction
Elm, M., Wunderlich, H.-J., Imhof, M.E., Zoellin, C.G., Leenstra, J. and Maeding, N.
Proceedings of the 45th ACM/IEEE Design Automation Conference (DAC'08), Anaheim, California, USA, 8-13 June 2008, pp. 828-833
2008
DOI PDF 
Keywords: Test; Design for Test; Low Power; Scan Design
Abstract: An effective technique to save power during scan based test is to switch off unused scan chains. The results obtained with this method strongly depend on the mapping of scan flip-flops into scan chains, which determines how many chains can be deactivated per pattern.
In this paper, a new method to cluster flip-flops into scan chains is presented, which minimizes the power consumption during test. The approach does not specify any ordering inside the chains and fits seamlessly to any standard tool for scan chain integration.

The application of known test power reduction techniques to the optimized scan chain configurations shows significant improvements for large industrial circuits.

BibTeX:
@inproceedings{ElmWIZLM2008,
  author = {Elm, Melanie and Wunderlich, Hans-Joachim and Imhof, Michael E. and Zoellin, Christian G. and Leenstra, Jens and Maeding, Nicolas},
  title = {{Scan Chain Clustering for Test Power Reduction}},
  booktitle = {Proceedings of the 45th ACM/IEEE Design Automation Conference (DAC'08)},
  publisher = {ACM},
  year = {2008},
  pages = {828--833},
  keywords = {Test; Design for Test; Low Power; Scan Design},
  abstract = {An effective technique to save power during scan based test is to switch off unused scan chains. The results obtained with this method strongly depend on the mapping of scan flip-flops into scan chains, which determines how many chains can be deactivated per pattern.
In this paper, a new method to cluster flip-flops into scan chains is presented, which minimizes the power consumption during test. The approach does not specify any ordering inside the chains and fits seamlessly to any standard tool for scan chain integration.

The application of known test power reduction techniques to the optimized scan chain configurations shows significant improvements for large industrial circuits.},
  doi = {http://dx.doi.org/10.1145/1391469.1391680},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/DAC_ElmWIZLM2008.pdf}
}

10. Selective Hardening in Early Design Steps
Zoellin, C.G., Wunderlich, H.-J., Polian, I. and Becker, B.
Proceedings of the 13th IEEE European Test Symposium (ETS'08), Lago Maggiore, Italy, 25-29 May 2008, pp. 185-190
2008
DOI URL PDF 
Keywords: Soft error mitigation; reliability
Abstract: Hardening a circuit against soft errors should be performed in early design steps before the circuit is laid out. A viable approach to achieve soft error rate (SER) reduction at a reasonable cost is to harden only parts of a circuit. When selecting which locations in the circuit to harden, priority should be given to critical spots for which an error is likely to cause a system malfunction. The criticality of the spots depends on parameters not all available in early design steps. We employ a selection strategy which takes only gate-level information into account and does not use any low-level electrical or timing information.
We validate the quality of the solution using an accurate SER estimator based on the new UGC particle strike model. Although only partial information is utilized for hardening, the exact validation shows that the susceptibility of a circuit to soft errors is reduced significantly. The results of the hardening strategy presented are also superior to known purely topological strategies in terms of both hardware overhead and protection.
BibTeX:
@inproceedings{ZoellWPB2008,
  author = {Zoellin, Christian G. and Wunderlich, Hans-Joachim and Polian, Ilia and Becker, Bernd},
  title = {{Selective Hardening in Early Design Steps}},
  booktitle = {Proceedings of the 13th IEEE European Test Symposium (ETS'08)},
  publisher = {IEEE Computer Society},
  year = {2008},
  pages = {185--190},
  keywords = {Soft error mitigation; reliability},
  abstract = {Hardening a circuit against soft errors should be performed in early design steps before the circuit is laid out. A viable approach to achieve soft error rate (SER) reduction at a reasonable cost is to harden only parts of a circuit. When selecting which locations in the circuit to harden, priority should be given to critical spots for which an error is likely to cause a system malfunction. The criticality of the spots depends on parameters not all available in early design steps. We employ a selection strategy which takes only gate-level information into account and does not use any low-level electrical or timing information. 
We validate the quality of the solution using an accurate SER estimator based on the new UGC particle strike model. Although only partial information is utilized for hardening, the exact validation shows that the susceptibility of a circuit to soft errors is reduced significantly. The results of the hardening strategy presented are also superior to known purely topological strategies in terms of both hardware overhead and protection.},
  url = {http://www.computer.org/csdl/proceedings/ets/2008/3150/00/3150a185-abs.html},
  doi = {http://dx.doi.org/10.1109/ETS.2008.30},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/ETS_ZoellWPB2008.pdf}
}

9. Signature Rollback – A Technique for Testing Robust Circuits
Amgalan, U., Hachmann, C., Hellebrand, S. and Wunderlich, H.-J.
Proceedings of the 26th IEEE VLSI Test Symposium (VTS'08), San Diego, California, USA, 27 April-1 May 2008, pp. 125-130
2008
DOI URL PDF 
Keywords: Embedded Test; Robust Design; Rollback and Recovery; Test Quality and Reliability; Time Redundancy
Abstract: Dealing with static and dynamic parameter variations has become a major challenge for design and test. To avoid unnecessary yield loss and to ensure reliable system operation a robust design has become mandatory. However, standard structural test procedures still address classical fault models and cannot deal with the non-deterministic behavior caused by parameter variations and other reasons. Chips may be rejected, even if the test reveals only non-critical failures that could be compensated during system operation. This paper introduces a scheme for embedded test, which can distinguish critical permanent and noncritical transient failures for circuits with time redundancy. To minimize both yield loss and the overall test time, the scheme relies on partitioning the test into shorter sessions. If a faulty signature is observed at the end of a session, a rollback is triggered, and this particular session is repeated. An analytical model for the expected overall test time provides guidelines to determine the optimal parameters of the scheme.
BibTeX:
@inproceedings{AmgalHHW2008,
  author = {Amgalan, Uranmandakh and Hachmann, Christian and Hellebrand, Sybille and Wunderlich, Hans-Joachim},
  title = {{Signature Rollback – A Technique for Testing Robust Circuits}},
  booktitle = {Proceedings of the 26th IEEE VLSI Test Symposium (VTS'08)},
  publisher = {IEEE Computer Society},
  year = {2008},
  pages = {125--130},
  keywords = {Embedded Test; Robust Design; Rollback and Recovery; Test Quality and Reliability; Time Redundancy},
  abstract = {Dealing with static and dynamic parameter variations has become a major challenge for design and test. To avoid unnecessary yield loss and to ensure reliable system operation a robust design has become mandatory. However, standard structural test procedures still address classical fault models and cannot deal with the non-deterministic behavior caused by parameter variations and other reasons. Chips may be rejected, even if the test reveals only non-critical failures that could be compensated during system operation. This paper introduces a scheme for embedded test, which can distinguish critical permanent and noncritical transient failures for circuits with time redundancy. To minimize both yield loss and the overall test time, the scheme relies on partitioning the test into shorter sessions. If a faulty signature is observed at the end of a session, a rollback is triggered, and this particular session is repeated. An analytical model for the expected overall test time provides guidelines to determine the optimal parameters of the scheme.},
  url = {http://www.computer.org/csdl/proceedings/vts/2008/3123/00/3123a125-abs.html},
  doi = {http://dx.doi.org/10.1109/VTS.2008.34},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/VTS_AmgalHHW2008.pdf}
}

8. Test Set Stripping Limiting the Maximum Number of Specified Bits
Kochte, M.A., Zoellin, C.G., Imhof, M.E. and Wunderlich, H.-J.
Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA'08), Hong Kong, China, 23-25 January 2008, pp. 581-586
Best paper award
2008
DOI URL PDF 
Keywords: test relaxation; test generation; tailored ATPG
Abstract: This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, results for commercial ATPG test sets show that even the overall number of specified bits is reduced substantially.
BibTeX:
@inproceedings{KochtZIW2008,
  author = {Kochte, Michael A. and Zoellin, Christian G. and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {{Test Set Stripping Limiting the Maximum Number of Specified Bits}},
  booktitle = {Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA'08)},
  publisher = {IEEE Computer Society},
  year = {2008},
  pages = {581--586},
  keywords = {test relaxation; test generation; tailored ATPG},
  abstract = {This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, results for commercial ATPG test sets show that even the overall number of specified bits is reduced substantially.},
  url = {http://www.computer.org/csdl/proceedings/delta/2008/3110/00/3110a581-abs.html},
  doi = {http://dx.doi.org/10.1109/DELTA.2008.64},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/DELTA_KochtZIW2008.pdf}
}

7. Programmable Deterministic Built-in Self-test
Hakmi, A.-W., Wunderlich, H.-J., Zoellin, C.G., Glowatz, A., Hapke, F., Schloeffel, J. and Souef, L.
Proceedings of the International Test Conference (ITC'07), Santa Clara, California, USA, 21-25 October 2007, pp. 1-9
2007
DOI PDF 
Keywords: Deterministic BIST, Test data compression
Abstract: In this paper, we propose a new programmable deterministic Built-In Self-Test (BIST) method that requires significantly lower storage for deterministic patterns than existing programmable methods and provides high flexibility for test engineering in both internal and external test.
Theoretical analysis suggests that significantly more care bits can be encoded in the seed of a Linear Feedback Shift Register (LFSR), if a limited number of conflicting equations is ignored in the employed linear equation system. The ignored care bits are separately embedded into the LFSR pattern. In contrast to known deterministic BIST schemes based on test set embedding, the embedding logic function is not hardwired. Instead, this information is stored in memory using a special compression and decompression method. Experiments for benchmark circuits and industrial designs demonstrate that the approach has considerably higher overall coding efficiency than the existing methods.
BibTeX:
@inproceedings{HakmiWZGHSS2007,
  author = {Hakmi, Abdul-Wahid and Wunderlich, Hans-Joachim and Zoellin, Christian G. and Glowatz, Andreas and Hapke, Friedrich and Schloeffel, Juergen and Souef, Laurent},
  title = {{Programmable Deterministic Built-in Self-test}},
  booktitle = {Proceedings of the International Test Conference (ITC'07)},
  publisher = {IEEE Computer Society},
  year = {2007},
  pages = {1--9},
  keywords = {Deterministic BIST, Test data compression},
  abstract = {In this paper, we propose a new programmable deterministic Built-In Self-Test (BIST) method that requires significantly lower storage for deterministic patterns than existing programmable methods and provides high flexibility for test engineering in both internal and external test. 
Theoretical analysis suggests that significantly more care bits can be encoded in the seed of a Linear Feedback Shift Register (LFSR), if a limited number of conflicting equations is ignored in the employed linear equation system. The ignored care bits are separately embedded into the LFSR pattern. In contrast to known deterministic BIST schemes based on test set embedding, the embedding logic function is not hardwired. Instead, this information is stored in memory using a special compression and decompression method. Experiments for benchmark circuits and industrial designs demonstrate that the approach has considerably higher overall coding efficiency than the existing methods.},
  doi = {http://dx.doi.org/10.1109/TEST.2007.4437611},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/ITC_HakmiWZGHSS2007.pdf}
}

6. A Refined Electrical Model for Particle Strikes and its Impact on SEU Prediction
Hellebrand, S., Zoellin, C.G., Wunderlich, H.-J., Ludwig, S., Coym, T. and Straube, B.
Proceedings of the 22nd IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT'07), Rome, Italy, 26-28 September 2007, pp. 50-58
2007
DOI URL PDF 
Abstract: Decreasing feature sizes have led to an increased vulnerability of random logic to soft errors. In combinational logic a particle strike may lead to a glitch at the output of a gate, also referred to as single event transient (SET), which in turn can propagate to a register and cause a single event upset (SEU) there.
Circuit level modeling and analysis of SETs provides an attractive compromise between computationally expensive simulations at device level and less accurate techniques at higher levels. At the circuit level particle strikes crossing a pn-junction are traditionally modeled with the help of a transient current source. However, the common models assume a constant voltage across the pn-junction, which may lead to inaccurate predictions concerning the shape of expected glitches. To overcome this problem, a refined circuit level model for strikes through pn-junctions is investigated and validated in this paper. The refined model yields significantly different results than common models. This has a considerable impact on SEU prediction, which is confirmed by extensive simulations at gate level. In most cases, the refined, more realistic, model reveals an almost doubled risk of a system failure after an SET.
BibTeX:
@inproceedings{HelleZWLCS2007,
  author = {Hellebrand, Sybille and Zoellin, Christian G. and Wunderlich, Hans-Joachim and Ludwig, Stefan and Coym, Torsten and Straube, Bernd},
  title = {{A Refined Electrical Model for Particle Strikes and its Impact on SEU Prediction}},
  booktitle = {Proceedings of the 22nd IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT'07)},
  publisher = {IEEE Computer Society},
  year = {2007},
  pages = {50--58},
  abstract = {Decreasing feature sizes have led to an increased vulnerability of random logic to soft errors. In combinational logic a particle strike may lead to a glitch at the output of a gate, also referred to as single event transient (SET), which in turn can propagate to a register and cause a single event upset (SEU) there.
Circuit level modeling and analysis of SETs provides an attractive compromise between computationally expensive simulations at device level and less accurate techniques at higher levels. At the circuit level particle strikes crossing a pn-junction are traditionally modeled with the help of a transient current source. However, the common models assume a constant voltage across the pn-junction, which may lead to inaccurate predictions concerning the shape of expected glitches. To overcome this problem, a refined circuit level model for strikes through pn-junctions is investigated and validated in this paper. The refined model yields significantly different results than common models. This has a considerable impact on SEU prediction, which is confirmed by extensive simulations at gate level. In most cases, the refined, more realistic, model reveals an almost doubled risk of a system failure after an SET.},
  url = {http://www.computer.org/csdl/proceedings/dft/2007/2885/00/28850050-abs.html},
  doi = {http://dx.doi.org/10.1109/DFT.2007.43},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/DFT_HelleZWLCS2007.pdf}
}

5. Testing and Monitoring Nanoscale Systems - Challenges and Strategies for Advanced Quality Assurance (Invited Paper)
Hellebrand, S., Zoellin, C.G., Wunderlich, H.-J., Ludwig, S., Coym, T. and Straube, B.
Proceedings of 43rd International Conference on Microelectronics, Devices and Material with the Workshop on Electronic Testing (MIDEM'07), Bled, Slovenia, 12-14 September 2007, pp. 3-10
2007
PDF 
Abstract: The increased number of fabrication defects, spatial and temporal variability of parameters, as well as the growing impact of soft errors in nanoelectronic systems require a paradigm shift in design, verification and test. A robust design becomes mandatory to ensure dependable systems and acceptable yields. Design robustness, however, invalidates many traditional approaches for testing and implies enormous challenges. The RealTest Project addresses these problems for nanoscale CMOS and targets unified design and test strategies to support both a robust design and a coordinated quality assurance after manufacturing and during the lifetime of a system. The paper first gives a short overview of the research activities within the project and then focuses on a first result concerning soft errors in combinational logic. It will be shown that common electrical models for particle strikes in random logic have underestimated the effects on the system behavior. The refined model developed within the RealTest Project predicts about twice as many single event upsets (SEUs) caused by particle strikes as traditional models.
BibTeX:
@inproceedings{HelleZWLCS2007a,
  author = {Hellebrand, Sybille and Zoellin, Christian G. and Wunderlich, Hans-Joachim and Ludwig, Stefan and Coym, Torsten and Straube, Bernd},
  title = {{Testing and Monitoring Nanoscale Systems - Challenges and Strategies for Advanced Quality Assurance (Invited Paper)}},
  booktitle = {Proceedings of 43rd International Conference on Microelectronics, Devices and Material with the Workshop on Electronic Testing (MIDEM'07)},
  publisher = {MIDEM},
  year = {2007},
  pages = {3--10},
  abstract = {The increased number of fabrication defects, spatial and temporal variability of parameters, as well as the growing impact of soft errors in nanoelectronic systems require a paradigm shift in design, verification and test. A robust design becomes mandatory to ensure dependable systems and acceptable yields. Design robustness, however, invalidates many traditional approaches for testing and implies enormous challenges. The RealTest Project addresses these problems for nanoscale CMOS and targets unified design and test strategies to support both a robust design and a coordinated quality assurance after manufacturing and during the lifetime of a system. The paper first gives a short overview of the research activities within the project and then focuses on a first result concerning soft errors in combinational logic. It will be shown that common electrical models for particle strikes in random logic have underestimated the effects on the system behavior. The refined model developed within the RealTest Project predicts about twice as many single event upsets (SEUs) caused by particle strikes as traditional models.},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/MIDEM_HelleZWLCS2007a.pdf}
}

4. Scan Test Planning for Power Reduction
Imhof, M.E., Zoellin, C.G., Wunderlich, H.-J., Maeding, N. and Leenstra, J.
Proceedings of the 44th ACM/IEEE Design Automation Conference (DAC'07), San Diego, California, USA, 4-8 June 2007, pp. 521-526
2007
DOI URL PDF 
Keywords: Test planning, power during test
Abstract: Many STUMPS architectures found in current chip designs allow disabling of individual scan chains for debug and diagnosis. In a recent paper it has been shown that this feature can be used for reducing the power consumption during test. Here, we present an efficient algorithm for the automated generation of a test plan that keeps fault coverage as well as test time, while significantly reducing the amount of wasted energy. A fault isolation table, which is usually used for diagnosis and debug, is employed to accurately determine scan chains that can be disabled. The algorithm was successfully applied to large industrial circuits and identifies a very large amount of excess pattern shift activity.
BibTeX:
@inproceedings{ImhofZWML2007a,
  author = {Imhof, Michael E. and Zoellin, Christian G. and Wunderlich, Hans-Joachim and Maeding, Nicolas and Leenstra, Jens},
  title = {{Scan Test Planning for Power Reduction}},
  booktitle = {Proceedings of the 44th ACM/IEEE Design Automation Conference (DAC'07)},
  publisher = {ACM},
  year = {2007},
  pages = {521--526},
  keywords = {Test planning, power during test},
  abstract = {Many STUMPS architectures found in current chip designs allow disabling of individual scan chains for debug and diagnosis. In a recent paper it has been shown that this feature can be used for reducing the power consumption during test. Here, we present an efficient algorithm for the automated generation of a test plan that keeps fault coverage as well as test time, while significantly reducing the amount of wasted energy. A fault isolation table, which is usually used for diagnosis and debug, is employed to accurately determine scan chains that can be disabled. The algorithm was successfully applied to large industrial circuits and identifies a very large amount of excess pattern shift activity.},
  url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4261239},
  doi = {http://dx.doi.org/10.1145/1278480.1278614},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/DAC_ImhofZWML2007a.pdf}
}

3. Test und Zuverlässigkeit nanoelektronischer Systeme
Becker, B., Polian, I., Hellebrand, S., Straube, B. and Wunderlich, H.-J.
1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07)
Vol. 52, Munich, Germany, 26-28 March 2007, pp. 139-140
2007
URL PDF 
Abstract: Neben der zunehmenden Anfälligkeit gegenüber Fertigungsfehlern bereiten insbesondere vermehrte Parameterschwankungen, zeitabhängige Materialveränderungen und eine erhöhte Störanfälligkeit während des Betriebs massive Probleme bei der Qualitätssicherung für nanoelektronische Systeme. Für eine wirtschaftliche Produktion und einen zuverlässigen Systembetrieb wird einerseits ein robuster Entwurf unabdingbar, andererseits ist damit auch ein Paradigmenwechsel beim Test erforderlich. Anstatt lediglich defektbehaftete Systeme zu erkennen und auszusortieren, muss der Test bestimmen, ob ein System trotz einer gewissen Menge von Fehlern funktionsfähig ist, und die verbleibende Robustheit gegenüber Störungen im Betrieb charakterisieren. Im Rahmen des Projekts RealTest werden einheitliche Entwurfs- und Teststrategien entwickelt, die sowohl einen robusten Entwurf als auch eine darauf abgestimmte Qualitätssicherung unterstützen.
BibTeX:
@inproceedings{BeckeHSW2007,
  author = {Becker, Bernd and Polian, Ilia and Hellebrand, Sybille and Straube, Bernd and Wunderlich, Hans-Joachim},
  title = {{Test und Zuverlässigkeit nanoelektronischer Systeme}},
  booktitle = {1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07)},
  publisher = {VDE VERLAG GMBH},
  year = {2007},
  volume = {52},
  pages = {139--140},
  abstract = {Neben der zunehmenden Anfälligkeit gegenüber Fertigungsfehlern bereiten insbesondere vermehrte Parameterschwankungen, zeitabhängige Materialveränderungen und eine erhöhte Störanfälligkeit während des Betriebs massive Probleme bei der Qualitätssicherung für nanoelektronische Systeme. Für eine wirtschaftliche Produktion und einen zuverlässigen Systembetrieb wird einerseits ein robuster Entwurf unabdingbar, andererseits ist damit auch ein Paradigmenwechsel beim Test erforderlich. Anstatt lediglich defektbehaftete Systeme zu erkennen und auszusortieren, muss der Test bestimmen, ob ein System trotz einer gewissen Menge von Fehlern funktionsfähig ist, und die verbleibende Robustheit gegenüber Störungen im Betrieb charakterisieren. Im Rahmen des Projekts RealTest werden einheitliche Entwurfs- und Teststrategien entwickelt, die sowohl einen robusten Entwurf als auch eine darauf abgestimmte Qualitätssicherung unterstützen.},
  url = {http://www.vde-verlag.de/proceedings-de/463023018.html},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/ZuE_BeckeHSW2007.pdf}
}

2. Verlustleistungsoptimierende Testplanung zur Steigerung von Zuverlässigkeit und Ausbeute
Imhof, M.E., Zöllin, C.G., Wunderlich, H.-J., Mäding, N. and Leenstra, J.
1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07)
Vol. 52, Munich, Germany, 26-28 March 2007, pp. 69-76
2007
URL PDF 
Abstract: Die stark erhöhte durchschnittliche und maximale Verlustleistung während des Tests integrierter Schaltungen kann zu einer Beeinträchtigung der Ausbeute bei der Produktion sowie der Zuverlässigkeit im späteren Betrieb führen. Wir stellen eine Testplanung für Schaltungen mit parallelen Prüfpfaden vor, welche die Verlustleistung während des Tests reduziert. Die Testplanung wird auf ein Überdeckungsproblem abgebildet, das mit einem heuristischen Lösungsverfahren effizient auch für große Schaltungen gelöst werden kann. Die Effizienz des vorgestellten Verfahrens wird sowohl für die bekannten Benchmarkschaltungen als auch für große industrielle Schaltungen demonstriert.
BibTeX:
@inproceedings{ImhofZWML2007,
  author = {Imhof, Michael E. and Zöllin, Christian G. and Wunderlich, Hans-Joachim and Mäding, Nicolas and Leenstra, Jens},
  title = {{Verlustleistungsoptimierende Testplanung zur Steigerung von Zuverlässigkeit und Ausbeute}},
  booktitle = {1. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'07)},
  publisher = {VDE VERLAG GMBH},
  year = {2007},
  volume = {52},
  pages = {69--76},
  abstract = {Die stark erhöhte durchschnittliche und maximale Verlustleistung während des Tests integrierter Schaltungen kann zu einer Beeinträchtigung der Ausbeute bei der Produktion sowie der Zuverlässigkeit im späteren Betrieb führen. Wir stellen eine Testplanung für Schaltungen mit parallelen Prüfpfaden vor, welche die Verlustleistung während des Tests reduziert. Die Testplanung wird auf ein Überdeckungsproblem abgebildet, das mit einem heuristischen Lösungsverfahren effizient auch für große Schaltungen gelöst werden kann. Die Effizienz des vorgestellten Verfahrens wird sowohl für die bekannten Benchmarkschaltungen als auch für große industrielle Schaltungen demonstriert.},
  url = {http://www.vde-verlag.de/proceedings-de/463023008.html},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/ZuE_ImhofZWML2007.pdf}
}
1. DFG-Projekt RealTest - Test und Zuverlässigkeit nanoelektronischer Systeme;
DFG-Project – Test and Reliability of Nano-Electronic Systems

Becker, B., Polian, I., Hellebrand, S., Straube, B. and Wunderlich, H.-J.
it - Information Technology
Vol. 48(5), October 2006, pp. 304-311
2006
DOI PDF 
Keywords: Nanoelektronik; Entwurf; Test; Zuverlässigkeit; Fehlertoleranz/Nano-electronics; Design; Test; Dependability; Fault Tolerance
Abstract: Entwurf, Verifikation und Test zuverlässiger nanoelektronischer Systeme erfordern grundlegend neue Methoden und Ansätze. Ein robuster Entwurf wird unabdingbar, um Fertigungsfehler, Parameterschwankungen, zeitabhängige Materialveränderungen und vorübergehende Störungen in gewissem Umfang zu tolerieren. Gleichzeitig verlieren gerade dadurch viele traditionelle Testverfahren ihre Aussagekraft. Im Rahmen des Projekts RealTest werden einheitliche Entwurfs- und Teststrategien entwickelt, die sowohl einen robusten Entwurf als auch eine darauf abgestimmte Qualitätssicherung unterstützen.

The increasing number of fabrication defects, spatial and temporal variability of parameters, as well as the growing impact of soft errors in nanoelectronic systems require a paradigm shift in design, verification and test. A robust design is mandatory to ensure dependable systems and acceptable yields. The quest for design robustness, however, invalidates many traditional approaches for testing and implies enormous challenges. Within the framework of the RealTest project unified design and test strategies are developed to support a robust design and a coordinated quality assurance after the production and during the lifetime of a system.

BibTeX:
@article{BeckePHSW2006,
  author = {Becker, Bernd and Polian, Ilia and Hellebrand, Sybille and Straube, Bernd and Wunderlich, Hans-Joachim},
  title = {{DFG-Projekt RealTest - Test und Zuverlässigkeit nanoelektronischer Systeme; DFG-Project – Test and Reliability of Nano-Electronic Systems}},
  journal = {it - Information Technology},
  publisher = {Oldenbourg Wissenschaftsverlag},
  year = {2006},
  volume = {48},
  number = {5},
  pages = {304--311},
  keywords = {Nanoelektronik; Entwurf; Test; Zuverlässigkeit; Fehlertoleranz/Nano-electronics; Design; Test; Dependability; Fault Tolerance},
  abstract = {Entwurf, Verifikation und Test zuverlässiger nanoelektronischer Systeme erfordern grundlegend neue Methoden und Ansätze. Ein robuster Entwurf wird unabdingbar, um Fertigungsfehler, Parameterschwankungen, zeitabhängige Materialveränderungen und vorübergehende Störungen in gewissem Umfang zu tolerieren. Gleichzeitig verlieren gerade dadurch viele traditionelle Testverfahren ihre Aussagekraft. Im Rahmen des Projekts RealTest werden einheitliche Entwurfs- und Teststrategien entwickelt, die sowohl einen robusten Entwurf als auch eine darauf abgestimmte Qualitätssicherung unterstützen.

The increasing number of fabrication defects, spatial and temporal variability of parameters, as well as the growing impact of soft errors in nanoelectronic systems require a paradigm shift in design, verification and test. A robust design is mandatory to ensure dependable systems and acceptable yields. The quest for design robustness, however, invalidates many traditional approaches for testing and implies enormous challenges. Within the framework of the RealTest project unified design and test strategies are developed to support a robust design and a coordinated quality assurance after the production and during the lifetime of a system.},
  doi = {http://dx.doi.org/10.1524/itit.2006.48.5.304},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2006/IT_BeckePHSW2006.pdf}
}

Created by JabRef on 20/10/2017.
Workshop Contributions
4. Integrating Scan Design and Soft Error Correction in Low-Power Applications
Imhof, M.E., Wunderlich, H.-J. and Zöllin, C.
1st International Workshop on the Impact of Low-Power Design on Test and Reliability (LPonTR'08), Verbania, Italy, 25-29 May 2008
2008
 
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1; Robust design; fault tolerance; latch; low power; register; single event effects
Abstract: In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. In arrays, error correcting coding is the dominant technique to achieve acceptable soft-error rates. For low power applications, often latches are clock gated and have to retain their states during longer periods while miniaturization has led to elevated susceptibility and further increases the need for protection.
This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With small addition, single and multiple errors are detected in the clocked mode, too. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.
BibTeX:
@inproceedings{ImhofWZ2008,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zöllin, Christian},
  title = {{Integrating Scan Design and Soft Error Correction in Low-Power Applications}},
  booktitle = {1st International Workshop on the Impact of Low-Power Design on Test and Reliability (LPonTR'08)},
  year = {2008},
  keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1; Robust design; fault tolerance; latch; low power; register; single event effects},
  abstract = {In many modern circuits, the number of memory elements in the random logic is in the order of the number of SRAM cells on chips only a few years ago. In arrays, error correcting coding is the dominant technique to achieve acceptable soft-error rates. For low power applications, often latches are clock gated and have to retain their states during longer periods while miniaturization has led to elevated susceptibility and further increases the need for protection.
This paper presents a fault-tolerant register latch organization that is able to detect single-bit errors while it is clock gated. With small addition, single and multiple errors are detected in the clocked mode, too. The registers can be efficiently integrated similar to the scan design flow, and error detecting or locating information can be collected at module level. The resulting structure can be efficiently reused for offline and general online testing.} }
3. Ein verfeinertes elektrisches Modell für Teilchentreffer und dessen Auswirkung auf die Bewertung der Schaltungsempfindlichkeit
Coym, T., Hellebrand, S., Ludwig, S., Straube, B., Wunderlich, H.-J. and Zöllin, C.
20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08), Wien, Austria, 24-26 February 2008, pp. 153-157
2008
 
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1
BibTeX:
@inproceedings{CoymHLSWZ2008,
  author = {Coym, Torsten and Hellebrand, Sybille and Ludwig, Stefan and Straube, Bernd and Wunderlich, Hans-Joachim and Zöllin, Christian},
  title = {{Ein verfeinertes elektrisches Modell für Teilchentreffer und dessen Auswirkung auf die Bewertung der Schaltungsempfindlichkeit}},
  booktitle = {20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08)},
  year = {2008},
  pages = {153--157},
  keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1}
}
2. Reduktion der Verlustleistung beim Selbsttest durch Verwendung testmengenspezifischer Information
Imhof, M.E., Wunderlich, H.-J., Zöllin, C., Leenstra, J. and Maeding, N.
20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08), Wien, Austria, 24-26 February 2008, pp. 137-141
2008
 
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1
Abstract: The test schedule used during the self-test of circuits with deactivatable scan chains determines the power consumption during test. Existing methods for generating the test schedule rely mostly on topological information, for example the output cone of a fault. Because the test schedule and the pattern set are implicitly linked, exploiting pattern-set-dependent information yields far-reaching synergy effects. Using test-set-specific information, the presented algorithm achieves significant power savings while keeping fault coverage and test time unchanged. The method is compared against existing, predominantly topological approaches on industrial and benchmark circuits.
BibTeX:
@inproceedings{ImhofWZLM2008,
  author = {Imhof, Michael E. and Wunderlich, Hans-Joachim and Zöllin, Christian and Leenstra, Jens and Maeding, Nicolas},
  title = {{Reduktion der Verlustleistung beim Selbsttest durch Verwendung testmengenspezifischer Information}},
  booktitle = {20th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'08)},
  year = {2008},
  pages = {137--141},
  keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1},
  abstract = {Der während des Selbsttests von Schaltungen mit deaktivierbaren Prüfpfaden verwendete Testplan entscheidet über die Verlustleistung während des Tests. Bestehende Verfahren zur Erzeugung des Testplans verwenden überwiegend topologische Information, zum Beispiel den Ausgangskegel eines Fehlers. Aufgrund der implizit gegebenen Verknüpfung zwischen Testplan und Mustermenge ergeben sich weitreichende Synergieeffekte durch die Ausschöpfung mustermengenabhängiger Informationen. Die Verwendung von testmengenspezifischer Information im vorgestellten Algorithmus zeigt bei gleichbleibender Fehlererfassungsrate und Testdauer deutliche Einsparungen in der benötigten Verlustleistung. Das Verfahren wird an industriellen und Benchmark-Schaltungen mit bestehenden, überwiegend topologisch arbeitenden Verfahren verglichen.}
}
1. Programmable Deterministic Built-in Self-test
Hakmi, A.-W., Wunderlich, H.-J., Zöllin, C., Glowatz, A., Schlöffel, J. and Hapke, F.
19th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'07), Erlangen, Germany, 11-13 March 2007, pp. 61-65
2007
 
Keywords: Reliability; Testing; Fault-Tolerance; CR B.8.1; Deterministic BIST; test data compression; reseeding
Abstract: In this paper, we propose a new programmable deterministic Built-In Self-Test (BIST) method that requires significantly lower storage for deterministic patterns than existing programmable methods and provides high flexibility for test engineering in both internal and external test. Theoretical analysis suggests that significantly more care bits can be encoded in the seed of a Linear Feedback Shift Register (LFSR) if a limited number of conflicting equations are ignored in the employed linear equation system. The ignored care bits are separately embedded into the LFSR pattern, but in contrast to bit-flipping BIST, the test set is not embedded by a synthesized logic function. Instead, this information is stored in memory using a special compression architecture. Experiments for benchmark circuits and industrial designs demonstrate that the approach has considerably higher overall coding efficiency than the existing methods.
BibTeX:
@inproceedings{HakmiWZGSH2007,
  author = {Hakmi, Abdul-Wahid and Wunderlich, Hans-Joachim and Zöllin, Christian and Glowatz, Andreas and Schlöffel, Jürgen and Hapke, Friedrich},
  title = {{Programmable Deterministic Built-in Self-test}},
  booktitle = {19th ITG/GI/GMM Workshop "Testmethoden und Zuverlässigkeit von Schaltungen und Systemen" (TuZ'07)},
  year = {2007},
  pages = {61--65},
  keywords = {Reliability; Testing; Fault-Tolerance; CR B.8.1; Deterministic BIST; test data compression; reseeding},
  abstract = {In this paper, we propose a new programmable deterministic Built-In Self-Test (BIST) method that requires significantly lower storage for deterministic patterns than existing programmable methods and provides high flexibility for test engineering in both internal and external test. Theoretical analysis suggests that significantly more care bits can be encoded in the seed of a Linear Feedback Shift Register (LFSR) if a limited number of conflicting equations are ignored in the employed linear equation system. The ignored care bits are separately embedded into the LFSR pattern, but in contrast to bit-flipping BIST, the test set is not embedded by a synthesized logic function. Instead, this information is stored in memory using a special compression architecture. Experiments for benchmark circuits and industrial designs demonstrate that the approach has considerably higher overall coding efficiency than the existing methods.}
}