
Best Papers

9. Applying Efficient Fault Tolerance to Enable the Preconditioned Conjugate Gradient Solver on Approximate Computing Hardware
Schöll, A., Braun, C. and Wunderlich, H.-J.
Proceedings of the IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT'16), University of Connecticut, USA, 19-20 September 2016, pp. 21-26
DFT 2016 Best Paper Award
2016
Keywords: Approximate Computing, Fault Tolerance, Sparse Linear System Solving, Preconditioned Conjugate Gradient
Abstract: A new technique is presented that makes it possible to execute the preconditioned conjugate gradient (PCG) solver on approximate hardware while ensuring correct solver results. This technique expands the scope of approximate computing to scientific and engineering applications. The changing error resilience of PCG during the solving process is exploited by different levels of approximation, which trade off numerical accuracy against hardware utilization. These approximation levels are determined at runtime by periodically estimating the error resilience. An efficient fault tolerance technique enables reductions in hardware utilization by ensuring the continued exploitation of the maximum allowed energy-accuracy trade-offs. Experimental results show that hardware utilization is reduced on average by 14.5% and by up to 41.0% compared to executing PCG on accurate hardware.
BibTeX:
@inproceedings{SchoeBW2016,
  author = {Schöll, Alexander and Braun, Claus and Wunderlich, Hans-Joachim},
  title = {{Applying Efficient Fault Tolerance to Enable the Preconditioned Conjugate Gradient Solver on Approximate Computing Hardware}},
  booktitle = {Proceedings of the IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT'16)},
  year = {2016},
  pages = {21-26},
  keywords = {Approximate Computing, Fault Tolerance, Sparse Linear System Solving, Preconditioned Conjugate Gradient},
  abstract = {A new technique is presented that makes it possible to execute the preconditioned conjugate gradient (PCG) solver on approximate hardware while ensuring correct solver results. This technique expands the scope of approximate computing to scientific and engineering applications. The changing error resilience of PCG during the solving process is exploited by different levels of approximation, which trade off numerical accuracy against hardware utilization. These approximation levels are determined at runtime by periodically estimating the error resilience. An efficient fault tolerance technique enables reductions in hardware utilization by ensuring the continued exploitation of the maximum allowed energy-accuracy trade-offs. Experimental results show that hardware utilization is reduced on average by 14.5% and by up to 41.0% compared to executing PCG on accurate hardware.},
  doi = {http://dx.doi.org/10.1109/DFT.2016.7684063},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2016/DFT_SchoeBW2016.pdf}
}
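The entry's central idea — switching approximation levels as the solver's error resilience shrinks — can be sketched in plain Python. The quantization model, the 1e-3 switching threshold, and the residual recomputation on leaving the approximate phase are illustrative assumptions, not the authors' implementation:

```python
import math

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def quantize(v, bits):
    # Emulate an approximate functional unit by keeping `bits` fractional bits.
    s = 2.0 ** bits
    return [round(y * s) / s for y in v]

def approx_cg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient (identity preconditioner) whose matrix-vector
    product is quantized while the residual is still large -- a toy model
    of trading accuracy for cheaper 'approximate hardware'."""
    x = [0.0] * len(b)
    r, p = b[:], b[:]
    rs = dot(r, r)
    approx = True                         # start on "approximate hardware"
    for _ in range(max_iter):
        if approx and math.sqrt(rs) <= 1e-3:
            approx = False                # resilience exhausted: go accurate
            r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # correct drift
            p, rs = r[:], dot(r, r)
        if math.sqrt(rs) < tol:
            break
        Ap = matvec(A, p)
        if approx:
            Ap = quantize(Ap, 12)         # inexact multiply-accumulate
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

For a small SPD system the sketch converges to the exact solution despite the quantized early iterations, because the residual is recomputed accurately once approximation is switched off.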
8. Logic/Clock-Path-Aware At-Speed Scan Test Generation for Avoiding False Capture Failures and Reducing Clock Stretch
Asada, K., Wen, X., Holst, S., Miyase, K., Kajihara, S., Kochte, M.A., Schneider, E., Wunderlich, H.-J. and Qian, J.
Proceedings of the 24th IEEE Asian Test Symposium (ATS'15), Mumbai, India, 22-25 November 2015, pp. 103-108
ATS 2015 Best Paper Award
2015
Keywords: launch switching activity, IR-drop, logic path, clock path, false capture failure, test clock stretch, X-filling
Abstract: IR-drop induced by launch switching activity (LSA) in capture mode during at-speed scan testing increases delay along not only logic paths (LPs) but also clock paths (CPs). Excessive extra delay along LPs compromises test yields due to false capture failures, while excessive extra delay along CPs compromises test quality due to test clock stretch. This paper is the first to mitigate the impact of LSA on both LPs and CPs with a novel LCPA (Logic/Clock-Path-Aware) at-speed scan test generation scheme, featuring (1) a new metric for assessing the risk of false capture failures based on the amount of LSA around both LPs and CPs, (2) a procedure for avoiding false capture failures by reducing LSA around LPs or masking uncertain test responses, and (3) a procedure for reducing test clock stretch by reducing LSA around CPs. Experimental results demonstrate the effectiveness of the LCPA scheme in improving test yields and test quality.
BibTeX:
@inproceedings{AsadaWHMKKSWQ2015,
  author = {Asada, Koji and Wen, Xiaoqing and Holst, Stefan and Miyase, Kohei and Kajihara, Seiji and Kochte, Michael A. and Schneider, Eric and Wunderlich, Hans-Joachim and Qian, Jun},
  title = {{Logic/Clock-Path-Aware At-Speed Scan Test Generation for Avoiding False Capture Failures and Reducing Clock Stretch}},
  booktitle = {Proceedings of the 24th IEEE Asian Test Symposium (ATS'15)},
  year = {2015},
  pages = {103-108},
  keywords = {launch switching activity, IR-drop, logic path, clock path, false capture failure, test clock stretch, X-filling},
  abstract = {IR-drop induced by launch switching activity (LSA) in capture mode during at-speed scan testing increases delay along not only logic paths (LPs) but also clock paths (CPs). Excessive extra delay along LPs compromises test yields due to false capture failures, while excessive extra delay along CPs compromises test quality due to test clock stretch. This paper is the first to mitigate the impact of LSA on both LPs and CPs with a novel LCPA (Logic/Clock-Path-Aware) at-speed scan test generation scheme, featuring (1) a new metric for assessing the risk of false capture failures based on the amount of LSA around both LPs and CPs, (2) a procedure for avoiding false capture failures by reducing LSA around LPs or masking uncertain test responses, and (3) a procedure for reducing test clock stretch by reducing LSA around CPs. Experimental results demonstrate the effectiveness of the LCPA scheme in improving test yields and test quality.},
  doi = {http://dx.doi.org/10.1109/ATS.2015.25},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2015/ATS_AsadaWHMKKSWQ2015.pdf}
}
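One building block of low-LSA test generation can be illustrated with the standard adjacent-fill heuristic for don't-care bits, which reduces scan-in transitions; it is related to, but much simpler than, the LCPA X-filling procedures described above:

```python
def low_power_fill(cube):
    """Adjacent fill: replace each 'X' in a test cube with the last
    specified value, so the scan-in vector has few transitions and thus
    lower launch switching activity. A standard low-power X-filling
    heuristic, not the paper's LCPA scheme."""
    out, last = [], '0'
    for c in cube:
        last = c if c != 'X' else last   # carry the previous care bit forward
        out.append(last)
    return "".join(out)
```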
7. Access Port Protection for Reconfigurable Scan Networks
Baranowski, R., Kochte, M.A. and Wunderlich, H.-J.
Journal of Electronic Testing: Theory and Applications (JETTA)
Vol. 30(6), December 2014, pp. 711-723
2014 JETTA-TTTC Best Paper Award
2014
Keywords: Debug and diagnosis, reconfigurable scan network, IJTAG, IEEE P1687, secure DFT, hardware security
Abstract: Scan infrastructures based on IEEE Std. 1149.1 (JTAG), 1500 (SECT), and P1687 (IJTAG) provide a cost-effective access mechanism for test, reconfiguration, and debugging purposes. The improved accessibility of on-chip instruments, however, poses a serious threat to system safety and security. While state-of-the-art protection methods for scan architectures compliant with JTAG and SECT are very effective, most of these techniques face scalability issues in reconfigurable scan networks allowed by the upcoming IJTAG standard. This paper describes a scalable solution for multilevel access management in reconfigurable scan networks. The access to protected instruments is restricted locally at the interface to the network. The access restriction is realized by a sequence filter that allows only a precomputed set of scan-in access sequences. This approach does not require any modification of the scan architecture and causes no access time penalty. Therefore, it is well suited for core-based designs with hard macros and 3D integrated circuits. Experimental results for complex reconfigurable scan networks show that the area overhead depends primarily on the number of allowed accesses, and is marginal even if this number exceeds the count of registers in the network.
BibTeX:
@article{BaranKW2014a,
  author = {Baranowski, Rafal and Kochte, Michael A. and Wunderlich, Hans-Joachim},
  title = {{Access Port Protection for Reconfigurable Scan Networks}},
  journal = {Journal of Electronic Testing: Theory and Applications (JETTA)},
  publisher = {Springer-Verlag},
  year = {2014},
  volume = {30},
  number = {6},
  pages = {711--723},
  keywords = {Debug and diagnosis, reconfigurable scan network, IJTAG, IEEE P1687, secure DFT, hardware security},
  abstract = {Scan infrastructures based on IEEE Std. 1149.1 (JTAG), 1500 (SECT), and P1687 (IJTAG) provide a cost-effective access mechanism for test, reconfiguration, and debugging purposes. The improved accessibility of on-chip instruments, however, poses a serious threat to system safety and security. While state-of-the-art protection methods for scan architectures compliant with JTAG and SECT are very effective, most of these techniques face scalability issues in reconfigurable scan networks allowed by the upcoming IJTAG standard. This paper describes a scalable solution for multilevel access management in reconfigurable scan networks. The access to protected instruments is restricted locally at the interface to the network. The access restriction is realized by a sequence filter that allows only a precomputed set of scan-in access sequences. This approach does not require any modification of the scan architecture and causes no access time penalty. Therefore, it is well suited for core-based designs with hard macros and 3D integrated circuits. Experimental results for complex reconfigurable scan networks show that the area overhead depends primarily on the number of allowed accesses, and is marginal even if this number exceeds the count of registers in the network.},
  url = {http://link.springer.com/article/10.1007/s10836-014-5484-2},
  doi = {http://dx.doi.org/10.1007/s10836-014-5484-2},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/JETTA_BaranKW2014.pdf}
}
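The core mechanism — a sequence filter at the access port that passes only a precomputed set of scan-in sequences — can be sketched as follows. The class name, the allow-list representation, and the constant-zero blocking behavior are illustrative choices, not the paper's design:

```python
class ScanSequenceFilter:
    """Toy model of a sequence filter at a scan-network access port:
    only scan-in sequences from a precomputed allow-list pass through;
    anything else is replaced by a safe constant stream, so protected
    instruments behind the port cannot be reached."""

    def __init__(self, allowed):
        self.allowed = set(allowed)       # precomputed allowed scan-in words

    def filter(self, seq):
        return seq if seq in self.allowed else "0" * len(seq)

f = ScanSequenceFilter({"1011", "0001"})
```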
6. Adaptive Parallel Simulation of a Two-Timescale-Model for Apoptotic Receptor-Clustering on GPUs
Schöll, A., Braun, C., Daub, M., Schneider, G. and Wunderlich, H.-J.
Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM'14), Belfast, United Kingdom, 2-5 November 2014, pp. 424-431
SimTech Best Paper Award
2014
Keywords: Heterogeneous computing, GPU computing, parallel particle simulation, multi-timescale model, adaptive Euler-Maruyama approximation, ligand-receptor aggregation
Abstract: Computational biology contributes important solutions for major biological challenges. Unfortunately, most applications in computational biology are highly compute-intensive and associated with extensive computing times. Biological problems of interest are often not treatable with traditional simulation models on conventional multi-core CPU systems. This interdisciplinary work introduces a new multi-timescale simulation model for apoptotic receptor-clustering and a new parallel evaluation algorithm that exploits the computational performance of heterogeneous CPU-GPU computing systems. For this purpose, the different dynamics involved in receptor-clustering are separated and simulated on two timescales. Additionally, the time step sizes are adaptively refined on each timescale independently.
This new approach improves the simulation performance significantly and reduces computing times from months to hours for observation times of several seconds.
BibTeX:
@inproceedings{SchoeBDSW2014,
  author = {Schöll, Alexander and Braun, Claus and Daub, Markus and Schneider, Guido and Wunderlich, Hans-Joachim},
  title = {{Adaptive Parallel Simulation of a Two-Timescale-Model for Apoptotic Receptor-Clustering on GPUs}},
  booktitle = {Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM'14)},
  year = {2014},
  pages = {424--431},
  keywords = {Heterogeneous computing, GPU computing, parallel particle simulation, multi-timescale model, adaptive Euler-Maruyama approximation, ligand-receptor aggregation},
  abstract = {Computational biology contributes important solutions for major biological challenges. Unfortunately, most applications in computational biology are highly compute-intensive and associated with extensive computing times. Biological problems of interest are often not treatable with traditional simulation models on conventional multi-core CPU systems. This interdisciplinary work introduces a new multi-timescale simulation model for apoptotic receptor-clustering and a new parallel evaluation algorithm that exploits the computational performance of heterogeneous CPU-GPU computing systems. For this purpose, the different dynamics involved in receptor-clustering are separated and simulated on two timescales. Additionally, the time step sizes are adaptively refined on each timescale independently.
This new approach improves the simulation performance significantly and reduces computing times from months to hours for observation times of several seconds.},
  doi = {http://dx.doi.org/10.1109/BIBM.2014.6999195},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/BIBM_SchoeBDSW2014.pdf}
}
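The adaptive step-size idea can be sketched for a scalar stochastic differential equation dX = a(X)dt + b(X)dW using Euler-Maruyama with step refinement where the drift is fast. The refinement rule and constants are illustrative; the paper's actual two-timescale particle model is far richer:

```python
import math
import random

def euler_maruyama_adaptive(a, b, x0, t_end, dt_max, tol):
    """Euler-Maruyama integration of dX = a(X)dt + b(X)dW with a simple
    adaptive rule: halve the step while |a(x)|*dt exceeds tol, so fast
    dynamics get fine steps and slow phases coarse ones. A generic sketch
    of timescale-adaptive integration, not the paper's scheme."""
    random.seed(0)                        # reproducible noise for the demo
    t, x = 0.0, x0
    while t < t_end:
        dt = dt_max
        while abs(a(x)) * dt > tol and dt > 1e-9:
            dt /= 2                       # refine the step where drift is fast
        dt = min(dt, t_end - t)           # do not overshoot the horizon
        dw = random.gauss(0.0, math.sqrt(dt))
        x += a(x) * dt + b(x) * dw
        t += dt
    return x
```

With zero diffusion (b = 0) the routine reduces to adaptive explicit Euler, which makes its behavior easy to check against the exponential decay dX = -X dt.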
5. Variation-Aware Deterministic ATPG
Sauer, M., Polian, I., Imhof, M.E., Mumtaz, A., Schneider, E., Czutro, A., Wunderlich, H.-J. and Becker, B.
Proceedings of the 19th IEEE European Test Symposium (ETS'14), Paderborn, Germany, 26-30 May 2014, pp. 87-92
Best Paper Award
2014
Keywords: Variation-aware test, fault efficiency, ATPG
Abstract: In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.
BibTeX:
@inproceedings{SauerPIMSCWB2014,
  author = {Sauer, Matthias and Polian, Ilia and Imhof, Michael E. and Mumtaz, Abdullah and Schneider, Eric and Czutro, Alexander and Wunderlich, Hans-Joachim and Becker, Bernd},
  title = {{Variation-Aware Deterministic ATPG}},
  booktitle = {Proceedings of the 19th IEEE European Test Symposium (ETS'14)},
  year = {2014},
  pages = {87--92},
  keywords = {Variation-aware test, fault efficiency, ATPG},
  abstract = {In technologies affected by variability, the detection status of a small-delay fault may vary among manufactured circuit instances. The same fault may be detected, missed or provably undetectable in different circuit instances. We introduce the first complete flow to accurately evaluate and systematically maximize the test quality under variability. As the number of possible circuit instances is infinite, we employ statistical analysis to obtain a test set that achieves a fault-efficiency target with a user-defined confidence level. The algorithm combines a classical path-oriented test-generation procedure with a novel waveform-accurate engine that can formally prove that a small-delay fault is not detectable and does not count towards fault efficiency. Extensive simulation results demonstrate the performance of the generated test sets for industrial circuits affected by uncorrelated and correlated variations.},
  url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6847806},
  doi = {http://dx.doi.org/10.1109/ETS.2014.6847806},
  file = {http://www.iti.uni-stuttgart.de/fileadmin/rami/files/publications/2014/ETS_SauerPIMSCWB2014.pdf}
}
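The statistical backbone — sampling enough circuit instances so that an observed detection rate meets a confidence target — can be sketched with a textbook normal-approximation sample-size bound. This is a generic statistics formula standing in for, not reproducing, the paper's procedure:

```python
import math
from statistics import NormalDist

def required_instances(delta, confidence=0.95, p=0.5):
    """Number of Monte-Carlo circuit instances n such that the observed
    detection proportion lies within +/-delta of the true one at the given
    confidence (normal approximation; p = 0.5 is the worst case)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # two-sided z quantile
    return math.ceil((z / delta) ** 2 * p * (1 - p))
```

For example, a +/-5% margin at 95% confidence requires 385 instances in the worst case, which illustrates why such flows rely on statistical rather than exhaustive evaluation.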
4. XP-SISR: Eingebaute Selbstdiagnose für Schaltungen mit Prüfpfad
Elm, M. and Wunderlich, H.-J.
3. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'09)
Vol. 61, Stuttgart, Germany, 21-23 September 2009, pp. 21-28
Best Paper Award
2009
Keywords: Logic BIST; Diagnosis
Abstract: The advantages of Built-In Self-Test (BIST) are well known, and for embedded memories BIST is already the preferred test method. However, for random logic BIST is employed less often, mainly for two reasons: On the one hand, deterministic patterns may be necessary to achieve reasonable fault coverage, yet they are expensive in built-in tests. On the other hand, the diagnostic information provided by BIST signatures is rather poor. In recent years the first issue has been tackled successfully; this paper deals with the second.
A new method for Built-In Self-Diagnosis (BISD) is presented. Its backbone is a combination of extreme space and time compaction, which for the first time makes it possible to store the expected and the failing test responses on chip with negligible overhead. Consequently, all data relevant to diagnosis can be collected during a single self-test session.
The BISD method additionally comprises a diagnosis algorithm and a test pattern generation scheme, which overcome aliasing and the reduced diagnostic resolution introduced by the extreme compaction. Experiments with recent industrial designs demonstrate that diagnostic resolution is maintained compared to external testing and that the additional hardware needed to implement the BISD scheme is negligibly small.

BibTeX:
@inproceedings{ElmW2009,
  author = {Elm, Melanie and Wunderlich, Hans-Joachim},
  title = {{XP-SISR: Eingebaute Selbstdiagnose für Schaltungen mit Prüfpfad}},
  booktitle = {3. GMM/GI/ITG-Fachtagung Zuverlässigkeit und Entwurf (ZuE'09)},
  publisher = {VDE VERLAG GMBH},
  year = {2009},
  volume = {61},
  pages = {21--28},
  keywords = {Logic BIST; Diagnosis},
  abstract = {Die Vorteile des Eingebauten Selbsttests (BIST --- Built-In Self-Test) sind bekannt, für eingebettete Speicher ist BIST sogar die bevorzugte Teststrategie. Für freie Logik wird BIST deutlich seltener eingesetzt. Grund hierfür ist zum einen, dass deterministische Testmuster für eine hohe Fehlerabdeckung benötigt werden und diese im Selbsttest hohe Kosten verursachen. Zum anderen lassen sich aus den Testantworten, die zu einer einzigen Signatur kompaktiert werden, nur wenige diagnostische Informationen ziehen. In den vergangenen Jahren wurden kontinuierlich Fortschritte zur Lösung des ersten Problems erzielt. Dieser Beitrag befasst sich mit der Lösung des zweiten Problems.
Eine neue Methode für die Eingebaute Selbstdiagnose (BISD --- Built-In Self-Diagnosis) wird vorgeschlagen. Kern der Methode ist eine kombinierte, extreme Raum- und Zeitkompaktierung, die es erstmals ermöglicht, erwartete Antworten und fehlerhafte Antworten mit vernachlässigbarem Aufwand auf dem zu testenden Chip zu speichern. Somit können in einer einzigen Selbsttestsitzung pro Chip alle zur Diagnose notwendigen Daten gesammelt werden.
Das BISD Schema umfasst neben der Kompaktierungshardware einen Diagnosealgorithmus und ein Verfahren zur Testmustererzeugung, die Aliasingeffekte und die durch die starke Kompaktierung verringerte diagnostische Auflösung kompensieren können. Experimente mit aktuellen, industriellen Schaltungen zeigen, dass die diagnostische Auflösung im Vergleich zum externen Test erhalten bleibt und der zusätzliche Hardware-Aufwand zu vernachlässigen ist.

The advantages of Built-In Self-Test (BIST) are well known, and for embedded memories BIST is already the preferred test method. However, for random logic BIST is less often employed. This is mainly due to the following two reasons: On the one hand, deterministic patterns might be necessary to achieve reasonable fault coverage, yet they are expensive in built-in tests. On the other hand, the diagnostic information provided by BIST-signatures is rather poor. During the last years the first issue has been tackled successfully. This paper deals with the second issue.
A new method for Built-In Self-Diagnosis (BISD) is presented. The method's backbone is a combination of extreme space and time compaction, which for the first time allows to store the expected test responses and the failing test responses with negligible overhead on chip. Consequently, all data relevant to diagnosis can be collected during a single self-test session.
The BISD method additionally comprises a diagnosis algorithm and a test pattern generation scheme, which overcome aliasing and the reduced diagnostic resolution introduced by the extreme compaction. Experiments with recent industrial designs demonstrate that diagnostic resolution is maintained compared to external testing and that the additional hardware needed to implement the BISD scheme is negligibly small.},
  url = {http://www.vde-verlag.de/proceedings-de/453178004.html},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2009/ZuE_ElmW2009.pdf}
}
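The combined space-time compaction at the heart of BISD can be sketched in software: a per-cycle XOR across scan chains (space compaction) feeds a multiple-input signature register, MISR (time compaction), so entire test responses collapse into one short signature. The polynomial, width, and single-signature setup are arbitrary sketch choices, not the XP-SISR design:

```python
def space_time_compact(responses, width=8, poly=0x1D):
    """Compact a whole response stream into one `width`-bit signature.
    `responses` is a list of scan-out cycles, each a list of bits, one
    per scan chain. Space compactor: XOR across chains per cycle.
    Time compactor: a software MISR with feedback polynomial `poly`."""
    mask = (1 << width) - 1
    sig = 0
    for chains in responses:             # one scan-out cycle
        bit = 0
        for b in chains:                 # space compaction: XOR the chains
            bit ^= b
        msb = (sig >> (width - 1)) & 1
        sig = ((sig << 1) & mask) ^ (poly if msb else 0) ^ bit
    return sig
```

Any single flipped response bit changes the signature, which is what makes comparing an on-chip signature against an expected one sufficient to flag failing test sessions.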

3. Test Set Stripping Limiting the Maximum Number of Specified Bits
Kochte, M.A., Zoellin, C.G., Imhof, M.E. and Wunderlich, H.-J.
Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA'08), Hong Kong, China, 23-25 January 2008, pp. 581-586
Best Paper Award
2008
Keywords: test relaxation; test generation; tailored ATPG
Abstract: This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, results for commercial ATPG test sets show that even the overall number of specified bits is reduced substantially.
BibTeX:
@inproceedings{KochtZIW2008,
  author = {Kochte, Michael A. and Zoellin, Christian G. and Imhof, Michael E. and Wunderlich, Hans-Joachim},
  title = {{Test Set Stripping Limiting the Maximum Number of Specified Bits}},
  booktitle = {Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications (DELTA'08)},
  publisher = {IEEE Computer Society},
  year = {2008},
  pages = {581--586},
  keywords = {test relaxation; test generation; tailored ATPG},
  abstract = {This paper presents a technique that limits the maximum number of specified bits of any pattern in a given test set. The outlined method uses algorithms similar to ATPG, but exploits the information in the test set to quickly find test patterns with the desired properties. The resulting test sets show a significant reduction in the maximum number of specified bits in the test patterns. Furthermore, results for commercial ATPG test sets show that even the overall number of specified bits is reduced substantially.},
  url = {http://www.computer.org/csdl/proceedings/delta/2008/3110/00/3110a581-abs.html},
  doi = {http://dx.doi.org/10.1109/DELTA.2008.64},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2008/DELTA_KochtZIW2008.pdf}
}
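The stripping idea — turning care bits back into don't-cares as long as detection is preserved — can be sketched greedily. The predicate `still_detects` is a hypothetical stand-in for a fault-simulation check, and the greedy loop is far simpler than the ATPG-like algorithms in the paper:

```python
def specified_bits(cube):
    """Count care bits (non-'X') in a test cube."""
    return sum(c != 'X' for c in cube)

def strip_to_limit(cube, limit, still_detects):
    """Greedily replace care bits by 'X' until the cube has at most
    `limit` specified bits, keeping only changes the predicate accepts.
    `still_detects` stands in for a fault-simulation check."""
    cube = list(cube)
    for i, c in enumerate(cube):
        if specified_bits(cube) <= limit:
            break
        if c != 'X':
            cube[i] = 'X'
            if not still_detects("".join(cube)):
                cube[i] = c              # revert: this care bit was needed
    return "".join(cube)
```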
2. Adaptive Debug and Diagnosis Without Fault Dictionaries
Holst, S. and Wunderlich, H.-J.
Proceedings of the 12th IEEE European Test Symposium (ETS'07), Freiburg, Germany, 20-24 May 2007, pp. 7-12
Best Paper Award
2007
Keywords: Diagnosis; Debug; Test; VLSI
Abstract: Diagnosis is essential in modern chip production to increase yield, and debug constitutes a major part of the pre-silicon development process. For recent process technologies, defect mechanisms are increasingly complex, and continuous efforts are made to model these defects using sophisticated fault models. Traditional static approaches to debug and diagnosis with a simplified fault model are more and more limited.
In this paper, a method is presented that identifies possible faulty regions in a combinational circuit based on its input/output behavior, independent of any fault model. The new adaptive, statistical approach combines a flexible and powerful effect-cause pattern analysis algorithm with high-resolution ATPG. We show the effectiveness of the approach through experiments with benchmark and industrial circuits.
BibTeX:
@inproceedings{HolstW2007,
  author = {Holst, Stefan and Wunderlich, Hans-Joachim},
  title = {{Adaptive Debug and Diagnosis Without Fault Dictionaries}},
  booktitle = {Proceedings of the 12th IEEE European Test Symposium (ETS'07)},
  publisher = {IEEE Computer Society},
  year = {2007},
  pages = {7--12},
  keywords = {Diagnosis; Debug; Test; VLSI},
  abstract = {Diagnosis is essential in modern chip production to increase yield, and debug constitutes a major part of the pre-silicon development process. For recent process technologies, defect mechanisms are increasingly complex, and continuous efforts are made to model these defects using sophisticated fault models. Traditional static approaches to debug and diagnosis with a simplified fault model are more and more limited.
In this paper, a method is presented that identifies possible faulty regions in a combinational circuit based on its input/output behavior, independent of any fault model. The new adaptive, statistical approach combines a flexible and powerful effect-cause pattern analysis algorithm with high-resolution ATPG. We show the effectiveness of the approach through experiments with benchmark and industrial circuits.},
  url = {http://www.computer.org/csdl/proceedings/ets/2007/2827/00/28270007-abs.html},
  doi = {http://dx.doi.org/10.1109/ETS.2007.9},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/ETS_HolstW2007.pdf}
}
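The effect-cause flavor of dictionary-free diagnosis can be sketched as ranking candidate faults by how well their simulated failing outputs agree with the observed ones. The set-overlap score and the candidate names are toy stand-ins for the paper's evidence-based metrics:

```python
def rank_candidates(observed, simulated):
    """Rank candidate faults without a precomputed fault dictionary:
    `observed` is the set of failing outputs seen on the tester,
    `simulated` maps each candidate fault to the failing outputs its
    on-the-fly simulation predicts. Higher agreement ranks first."""
    def score(fails):
        # reward explained failures, penalize mispredictions (toy metric)
        return len(fails & observed) - len(fails ^ observed)
    return sorted(simulated, key=lambda cand: -score(simulated[cand]))
```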
1. Analyzing Test and Repair Times for 2D Integrated Memory Built-in Test and Repair
Öhler, P., Hellebrand, S. and Wunderlich, H.-J.
Proceedings of the 10th IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems (DDECS'07), Krakow, Poland, 11-13 April 2007, pp. 185-190
Best Paper Award
2007
Abstract: An efficient on-chip infrastructure for memory test and repair is crucial to enhance yield and availability of SoCs. A commonly used repair strategy is to equip memories with spare rows and columns (2D redundancy). To avoid the prohibitive storage requirements for failure bitmaps and the complex data structures inherent in most algorithms for offline repair analysis, existing heuristics for built-in repair analysis (BIRA) either use very simple search strategies or restrict the search to smaller local bitmaps. Exact BIRA algorithms work with sub-analyzers for each possible repair combination. While a parallel implementation suffers from high hardware overhead, a serial implementation leads to increased test times. Recently, an integrated built-in test and repair approach has been proposed that interleaves test and repair analysis and supports an exact solution with moderate hardware overhead and reasonable test times. The search is based on a depth-first traversal of a binary tree, which can be efficiently implemented using a stack of limited size. This algorithm can be realized with different repair strategies guiding the selection of spare rows or columns in each step. In this paper, the impact of four different repair strategies on the test and repair time is analyzed.
BibTeX:
@inproceedings{OehleHW2007a,
  author = {Öhler, Phillip and Hellebrand, Sybille and Wunderlich, Hans-Joachim},
  title = {{Analyzing Test and Repair Times for 2D Integrated Memory Built-in Test and Repair}},
  booktitle = {Proceedings of the 10th IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems (DDECS'07)},
  publisher = {IEEE Computer Society},
  year = {2007},
  pages = {185--190},
  abstract = {An efficient on-chip infrastructure for memory test and repair is crucial to enhance yield and availability of SoCs. A commonly used repair strategy is to equip memories with spare rows and columns (2D redundancy). To avoid the prohibitive storage requirements for failure bitmaps and the complex data structures inherent in most algorithms for offline repair analysis, existing heuristics for built-in repair analysis (BIRA) either use very simple search strategies or restrict the search to smaller local bitmaps. Exact BIRA algorithms work with sub-analyzers for each possible repair combination. While a parallel implementation suffers from high hardware overhead, a serial implementation leads to increased test times. Recently, an integrated built-in test and repair approach has been proposed that interleaves test and repair analysis and supports an exact solution with moderate hardware overhead and reasonable test times. The search is based on a depth-first traversal of a binary tree, which can be efficiently implemented using a stack of limited size. This algorithm can be realized with different repair strategies guiding the selection of spare rows or columns in each step. In this paper, the impact of four different repair strategies on the test and repair time is analyzed.},
  url = {http://www.computer.org/csdl/proceedings/ddecs/2007/1161/00/04295278-abs.html},
  doi = {http://dx.doi.org/10.1109/DDECS.2007.4295278},
  file = {http://www.iti.uni-stuttgart.de//fileadmin/rami/files/publications/2007/DDECS_OehleHW2007a.pdf}
}
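The binary repair tree the abstract mentions can be sketched directly: each failing cell is covered either by a spare row or a spare column, and a depth-first search over these two choices decides repairability. This is the textbook exact-analysis recursion, not the paper's stack-based hardware implementation or its repair strategies:

```python
def repairable(fails, spare_rows, spare_cols, rows=frozenset(), cols=frozenset()):
    """Depth-first traversal of the binary repair tree. `fails` is a set
    of failing (row, col) cells; returns True if they can all be covered
    using at most `spare_rows` spare rows and `spare_cols` spare columns."""
    pending = [f for f in fails if f[0] not in rows and f[1] not in cols]
    if not pending:
        return True                      # every failing cell is covered
    r, c = pending[0]                    # branch on one uncovered cell
    if len(rows) < spare_rows and repairable(fails, spare_rows, spare_cols,
                                             rows | {r}, cols):
        return True                      # cover it with a spare row
    if len(cols) < spare_cols and repairable(fails, spare_rows, spare_cols,
                                             rows, cols | {c}):
        return True                      # cover it with a spare column
    return False
```

A cross-shaped failure pattern fits into one spare row plus one spare column, whereas three diagonal failures need three distinct spares, which the search correctly rejects.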
Created by JabRef on 31/07/2017.