Reliability is a paramount challenge for circuit designers in the automotive industry, particularly given increasing concerns about transistor aging and self-heating effects. These issues become more pronounced as technology nodes advance and scaling approaches atomic dimensions, such as in nanosheet transistors. Estimating safety margins and guardbands to manage reliability throughout a chip's intended lifetime is fundamentally difficult, especially when accounting for workload dependencies. In this talk, we will explore how machine learning can pioneer innovative solutions to accelerate reliability analysis from device physics to system-level considerations. By training accurate machine learning models, circuit designers can precisely estimate the impact of aging on the delay and power characteristics of complex circuits. We will also demonstrate how, for the first time, mature commercial sign-off tools can be seamlessly integrated to address reliability challenges across full-chip designs, with a particular focus on the unique challenges posed by AI chips.
Bio: He heads the Chair of AI Processor Design within the Technical University of Munich (TUM). He is, additionally, the head of Brain-inspired Computing at the Munich Institute of Robotics and Machine Intelligence. Further, he is the head of the Semiconductor Test and Reliability department at the University of Stuttgart, Germany. Prior to that, he was a Research Group Leader at the Karlsruhe Institute of Technology (KIT). He received his Ph.D. degree with the highest distinction (summa cum laude) from KIT in 2015. He has more than 270 publications (including over 115 articles in top journals such as Nature Communications) in multidisciplinary research areas covering semiconductor device physics, circuit design, and computer architecture. He is frequently a reviewer for Nature journals and many top IEEE journals. His research interests are the reliability of advanced CMOS technologies, emerging beyond-CMOS technologies, AI acceleration, in-memory computing with a special focus on nonvolatile memories, and cryogenic CMOS circuits for quantum computing. His research in HW security and reliability has been funded by the German Research Foundation, the Federal Ministry of Education and Research, Infineon, Advantest, and the U.S. Office of Naval Research.
The key requirements for IoT devices are ultra-low power, high processing capability, and autonomy at low cost, as well as the reliability and accuracy needed to enable Green AI at the edge. Embedded Artificial Intelligence (AI) models are resource-intensive and face challenges on traditional computing architectures due to the memory-wall problem. Computing-in-Memory (CIM) with emerging resistive memories offers a solution by combining memory blocks and computing units for higher efficiency and lower power consumption. Bayesian Neural Networks implemented as a non-von-Neumann architecture based on spintronic technologies present technical challenges at all levels, not least due to the variability and manufacturing defects of these immature emerging technologies.
We address these challenges through full-stack hardware and software co-design, jointly developing novel algorithmic and circuit design approaches to enhance performance, energy efficiency, and robustness.
Bio: Lorena Anghel received her PhD in 2000 from the Grenoble Institute of Engineering and Management, Grenoble INP-UGA. She is currently a Full Professor in Microelectronics and Embedded Systems Engineering and a member of the research staff of the SPINTEC Laboratory. Her research interests include the design and validation of reliable digital integrated circuits, hardware/software fault-tolerant design, aging-induced reliability issues, and defect and variation tolerance for emerging technologies, with a particular focus on the design of logic and memory circuits based on spintronic devices. Since 2019 she has held an Excellence Chair position at the AI Multi-Disciplinary Institute in Grenoble on the topic of “Non Volatile Emerging based Spiking Neural Network”. She has actively participated in several European projects as well as French national ANR projects, serving as a work-package leader or scientific coordinator. Dr. Anghel has held various positions of responsibility in the organization of numerous major conferences and symposia related to her research domains. She has received several Best Paper Awards and one Outstanding Paper Award. She is currently Vice President for Research and of the Scientific Council of Grenoble INP - University of Grenoble Alpes.
Modern chips have reached an extraordinary scale, with some designs now boasting more components than there are stars in the Milky Way or neurons in the human brain. Achieving such complexity on a chip the size of a fingernail is made possible by advanced Electronic Design Automation tools. This presentation begins with a look at the core methodologies and algorithms that enable chip design at this immense scale, alongside a discussion of the key challenges that remain. Integrated Circuit design involves hundreds of algorithmic steps, progressively transforming a functional blueprint into the precise layout of an operational circuit. Efficiently arranging and connecting billions of components on a single IC is essential to balancing cost, performance, reliability, and power consumption. We’ll examine how these competing objectives are managed to achieve optimal results. In the second part, we’ll explore how these design principles meet the intensive computational demands of AI. The presentation will cover the various hardware styles and explore which ones are suited to self-driving cars or to the latest mobile phones.
Bio: Patrick is Senior Fellow at AMD and adjunct lecturer in Stanford University's Department of Electrical Engineering. With an extensive career in Electronic Design Automation, he has held roles at both Cadence and Synopsys and served as Chief Technologist at Magma Design Automation, where he contributed to the development of a pioneering RTL-to-GDS2 synthesis tool. Patrick has also worked with AI hardware startups and held a Full Professorship in Electrical Engineering at Eindhoven University. He is the Finance Chair on the Executive Committee of the Design Automation Conference. Patrick earned his MSc and PhD degrees from Delft University of Technology in the Netherlands.
Vector-logical design and test computing is a processor-free in-memory mechanism for testing SoC IP-cores, based on read-write transactions on logical vectors and their derivatives. A vector-logic mechanism expresses a trade-off between the redundancy of data structures and the computational complexity of the algorithms that process them: an increase in one leads to a decrease in the other, and vice versa. The redundancy of smart data structures thus always minimizes the algorithms needed to process them. Vector-logical mechanisms are proposed for modeling smart data structures for the simulation and testing of digital designs. The study aims at design and test computing based on vector-logic mechanisms located in memory, to save energy and design time. The research subject is vector-logical in-memory computing, applied to the modeling, testing, and simulation of digital designs based on vector-logical models of SoC IP-cores. Here, all computing components are entirely new, focused on the educational EDA market of cost-effective engineering solutions. There is no powerful CPU here, only read-write transactions and one vector XOR operation, which are easily converted to transactions. Fault-as-address simulation (FAAS) is a mechanism for simulating combinations of circuit line faults represented by the bit addresses of logical vector elements. The advantage of the proposed FAAS mechanism is the predictable complexity of the algorithm and of the memory consumed by the data structures when simulating a test set. Modeling the testing map on a smart data structure solves all the key problems: simulating and diagnosing faults-as-addresses, synthesizing a minimum test, and assessing its quality. All these processes operate on the three components (test, model, errors) of the equation T⨁L⨁F=0, of which any two must be known to compute the third. This is prompt, algorithm-free computing on the logic vector-as-query.
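The three-component relation can be illustrated with a minimal Python sketch (illustrative only; the variable names and bit widths are assumptions, not the talk's actual data structures), showing how any two of the components recover the third via a single vector XOR:

```python
# Hypothetical sketch of the T (+) L (+) F = 0 relation on logic vectors,
# using Python integers as bit vectors.

def solve_third(a: int, b: int) -> int:
    """Given any two of (test T, model L, faults F), XOR yields the third,
    since T ^ L ^ F = 0 implies F = T ^ L, L = T ^ F, and T = L ^ F."""
    return a ^ b

T = 0b1011_0110              # test vector (illustrative)
L = 0b0110_0011              # model (circuit) vector (illustrative)
F = solve_third(T, L)        # fault vector implied by test and model

assert T ^ L ^ F == 0        # the defining equation holds
assert solve_third(T, F) == L  # any pair recovers the third component
assert solve_third(L, F) == T
```

Because XOR is its own inverse, the same read-write-plus-XOR transaction serves simulation (compute F from T and L), diagnosis (compute L from T and F), and test synthesis (compute T from L and F).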
«The method is very interesting and original, promising. Congratulations!» Raimund Ubar.
Bio: Vladimir Hahanov was born in the USSR in 1953. He is a Doctor of Science and a Professor in the Computer Engineering Faculty and the Design Automation Department at Kharkiv National University of Radio Electronics. His research and development fields include vector-logic in-memory computing, design and test of computers, test generation and fault simulation for SoC, quantum memory-driven computing, intelligent computing, cyber-physical and cyber-social computing, pattern recognition and machine learning, digital smart cyber universities, and cloud-driven traffic control. He has supervised 40 Doctor of Science and Ph.D. dissertations. He has been the General Chair of the IEEE East-West Design & Test Symposium for 21 years, since 2003. He is also the author of 650+ publications, 25 textbooks, five patents, and 209 Scopus-indexed papers, with 993 citations by 623 documents and an h-index of 15. Prof. Hahanov has been an IEEE Senior Member since 2010 and is an IEEE Computer Society Golden Core Member and a member of SAE, IFAC, and the IEEE Communications Society.
Artificial Intelligence (AI) reshapes industries by relying on the integration of cloud and edge computing, where cloud systems manage complex model training and edge devices deliver real-time, efficient AI inference. The latter establishes the Edge AI concept and sets new reliability requirements for the backbone hardware chips behind safety- and mission-critical applications. The main research and engineering challenges for edge-AI chip reliability stem from the limited computing and energy resources of edge devices. The talk explores techniques for soft-error and lifetime reliability assessment and enhancement for Deep Learning accelerators. It advocates the role of approximate computing and looks into the specifics of systolic-array-based, data-flow-based, and industry-grade accelerator architectures for ASICs and FPGAs.

Bio: Maksim Jenihhin is a tenured associate professor of computing systems reliability and head of the research group “Trustworthy and Efficient Computing Hardware (TECH)” at Tallinn University of Technology, Estonia. He received his PhD degree in Computer Engineering from the same university in 2008. His research interests include reliable and efficient hardware for AI acceleration; methodologies and EDA tools for hardware design, verification, and security; and nanoelectronics reliability and manufacturing test. He has published more than 180 research papers, supervised several PhD students and postdocs, and served on executive and program committees for numerous IEEE conferences (DATE, ETS, DDECS, EWDTS, VLSI-SoC, LATS, NorCAS, etc.). Prof. Jenihhin coordinates the European collaborative research projects HORIZON MSCA DN “TIRAMISU” (2024) and HORIZON TWINN “TAICHIP” (2024), as well as national projects on the energy efficiency and reliability of edge-AI chips and cross-layer self-health awareness of autonomous systems.
The semiconductor industry is experiencing unprecedented growth, accompanied by significant technology challenges, particularly in Europe, which aspires to capture a 20% share of the global market.
While emerging technologies such as Generative AI, AI-enhanced Electronic Design Automation (EDA), and advanced cloud services offer promising opportunities, the most pressing issue in Europe lies in cultivating a robust talent pipeline, encompassing comprehensive education, strategic acquisition, and continuous development, to bridge the skill gap.
This presentation will delve into Synopsys' role in addressing Europe's semiconductor talent crisis, highlighting our workforce development strategy to attract, develop, and retain top-tier talent. The goal is to drive innovation and position Europe to meet the evolving needs of the semiconductor industry.
By focusing on these key areas, we aim to become the preferred EDA partner for workforce development in Europe's semiconductor ecosystem, ensuring its long-term success and global competitiveness.
Bio: Catherine's experience focuses on worldwide customer success management, encompassing technical support, training, project management, and team leadership. Her expertise spans both industry and academia.
Holding a master's degree in Microelectronics from ESIEE Paris, Catherine started her professional journey as a Field Application Engineer, advancing to roles as a Technical Project Manager and Team Manager at LSI-Logic, Synopsys, and Texas Instruments in France and the United States.
After a stint in the academic world, Catherine returned to industry in 2018.
In 2022, Catherine rejoined Synopsys with a mission to bridge the gap between industry and academia. Her current role focuses on establishing and nurturing education and research collaborations with European universities, capitalizing on her multifaceted background.
Accurate power consumption analysis in the early stages of the chip design process can significantly enhance design efficiency, and machine learning (ML) plays a crucial role in such data-centric applications. Architectural-level power analysis provides essential guidance for subsequent detailed design stages. This report reviews the advancements in the application of ML for power prediction, and introduces a machine learning approach for architectural-level power prediction in SystemC and RTL-level designs. By employing an LSTM-based power model, the prediction error at the architectural level can be maintained within 2.6%. Additionally, the report explores the application of graph neural networks to improve the transferability of power analysis, further enhancing the accuracy and applicability of power prediction. Finally, it is concluded that with the continued development of machine learning technologies, their role in chip verification and power prediction and optimization will become increasingly significant.
Bio: Dr. Kang Li is an Associate Professor in the School of Integrated Circuits at Xidian University. His research interests include digital integrated circuit design, optimization and automation technologies, and device- and circuit-level reliability modeling. He has extensive experience in the design of application-specific processors, PPA (Power, Performance, and Area) optimization, and reliability design techniques. Dr. Li focuses on integrating AI with design automation to develop high-efficiency, high-precision power prediction technologies and has developed power evaluation tools for system-on-chip designs. He has led and participated in more than 10 major projects in related areas, including VLSI key projects, the National Natural Science Foundation of China, the Ministry of Science and Technology's key R&D programs, and industry collaboration projects. He has published over 20 papers indexed by SCI and EI and holds 8 authorized patents.
Scaling test automation and optimization continues to be a challenge as the size and complexity of SoCs and 3D ICs continue to grow. We will show how a breadth of AI techniques can provide better insight into, and solutions to, the problem.
Bio: Fadi Maamari is Vice-President of Engineering at Synopsys where he heads the Software Modernization Group tasked with laying the software foundation for leveraging new compute and AI technology. He was Chief Product Architect at Atrenta prior to its acquisition by Synopsys, and Vice-President of Engineering and COO of LogicVision when it was acquired by Mentor. He has a Ph.D. in Electrical Engineering from McGill University in Montreal and started his career at AT&T Bell Labs working on various design automation algorithms.
During the short, 66-year history of integrated circuit development, its parameters have changed at an unprecedented pace. Transistor sizes have shrunk about 10,000-fold, component counts have increased several billion-fold, absolute power consumption has increased 1,000-fold, performance has increased several tens of millions of times, and so on. As a result, a transition to the era of smart everything has taken place, which gives rise to new challenges for the semiconductor industry: silicon complexity, energy-efficient design, reliability, and more. There are three main ways to overcome these challenges: further scaling of transistors down to angstroms, the transition to multi-die systems, and the intensive use of AI. The report describes the mentioned challenges and the ways to overcome them.
Bio: Vazgen is the Director of the Synopsys Armenia Educational Department (SAED). He is responsible for deploying the Synopsys University Program and overseeing cooperation with universities in Armenia. In this role, Vazgen leads all components of the educational process in partner universities, including curricula development, teaching, internships, and the training of professors. In addition, Vazgen organizes, and himself teaches in, trainings for Synopsys employees worldwide. He is also the Head of Chairs in four partner universities.
Vazgen is the author of 13 monographs; more than 350 scientific and 150 methodological publications; more than 130 courses; and more than 170 reports. 80 Ph.D. dissertations have been defended under Vazgen’s supervision. He contributes to several local and international scientific conferences and contests, serving as President or Member of Program Committees. He is the President of the Program Committee of the Annual International Microelectronics Olympiad. Vazgen is a member of the Presidium and a corresponding member of the National Academy of Sciences of Armenia, as well as a full member of the International Academy of Engineering.
Vazgen has developed curricula for 2 specializations – IC Design and EDA – which are used in about 2000 universities in 75 countries worldwide. Together with his team, he has also developed Educational Design Kits and Process Design Kits for various technology nodes – 90nm, 32/28nm, 14nm and 5nm. They are deployed in hundreds of organizations and companies and thousands of universities in 80 countries worldwide.
Vazgen has received various awards including the title of Honorable Scientist of the Republic of Armenia, “President of the Republic Prize” in “Technical Sciences and Information Technologies,” National Academy of Sciences of Armenia Prize in the field of applied developments in physical-mathematical and technical sciences and “Best Paper” awards at international conferences. He is an Honorable Professor of several universities including National Research University MIET, Xidian University and European University of Armenia.
Vazgen holds a B.S. from Yerevan Polytechnic Institute and a Ph.D. from Moscow Engineering-Physics Institute. He is a Sc.D., Professor and a Corresponding Member of the National Academy of Sciences of Armenia.
Abstract: Physical Unclonable Functions (PUFs) are essential hardware security components that exploit inherent variations in manufacturing to generate unique and secure responses. A common variant, the Ring Oscillator (RO)-based PUF, relies on frequency comparisons between oscillator pairs to create its responses. However, maintaining consistent and reliable outputs remains a key challenge for this technology, especially under varying operating conditions.
In this presentation, we will explore how combining a processor with an FPGA within the same system can provide a flexible platform to address these reliability challenges. By leveraging the reconfigurability of the FPGA and the processing capabilities of the onboard system, it is possible to dynamically adapt and enhance the RO-based PUF’s reliability. This integration paves the way for innovative methods that not only improve the robustness of PUFs but also open up new avenues in adaptive hardware-based security solutions.
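To make the reliability challenge concrete, here is a minimal, hypothetical software model of an RO-based PUF (the frequencies, noise level, and pair count are illustrative assumptions, not measurements from the presented platform): each response bit compares the frequencies of one oscillator pair, and per-read noise tends to flip the bits of closely matched pairs:

```python
# Toy RO-PUF model: static process variation fixes each pair's frequencies
# at "manufacturing" time; per-read noise models changing operating conditions.
import random

random.seed(1)

N_PAIRS = 64
# Nominal 100 MHz oscillators with Gaussian process variation (illustrative).
pairs = [(100 + random.gauss(0, 1.0), 100 + random.gauss(0, 1.0))
         for _ in range(N_PAIRS)]

def read_response(noise_sigma: float) -> list[int]:
    """One PUF evaluation: bit i is 1 iff oscillator A of pair i reads faster,
    with Gaussian measurement/environmental noise added on every read."""
    return [int(fa + random.gauss(0, noise_sigma) >
                fb + random.gauss(0, noise_sigma))
            for fa, fb in pairs]

golden = read_response(noise_sigma=0.0)                 # noiseless reference
reads = [read_response(noise_sigma=0.3) for _ in range(100)]
flips = sum(b != g for r in reads for b, g in zip(r, golden))
bit_error_rate = flips / (100 * N_PAIRS)
print(f"bit error rate under noise: {bit_error_rate:.3f}")
```

Pairs whose frequencies happen to be nearly equal dominate the error rate, which is exactly the instability that a processor-plus-FPGA platform can mitigate, e.g. by reconfiguring or re-pairing the weakest oscillator pairs at run time.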
Bio: Giorgio Di Natale received his PhD in Computer Engineering from the Politecnico di Torino in 2003. He works as a Director of Research for the French National Research Center (CNRS) and has been the director of the TIMA laboratory in Grenoble since 2021. His research interests include hardware security and trust, secure circuit design and test, reliability evaluation and fault tolerance, and VLSI testing. He has published 2 books and 9 book chapters, 60+ journal papers, and more than 150 conference and symposium papers in these domains. He served as chair of the IEEE Computer Society TTTC, and he is a Golden Core member of the Computer Society and a Senior member of the IEEE.
The various levels of abstraction play a crucial role in the design and description of digital hardware systems. These levels, ranging from high-level system modeling to low-level hardware description, help manage the complexity and enhance the efficiency of the design process. RTL (Register Transfer Level) has matured and has been widely adopted in digital design and simulation. However, as systems have become more complex, the limitations of RTL have become apparent, necessitating a higher level of abstraction. Modern systems comprise numerous modules and intricate components, which require a more advanced approach to manage and design them efficiently. As a result, new methodologies and EDA (Electronic Design Automation) tools are required to address these challenges. These tools and methodologies aim to elevate the level of abstraction and simplify the design process, enabling engineers to use high-level modeling techniques, and high-level description languages to design complex systems. This not only reduces design time but also allows for more accurate system verification and simulation. EDA tools that support these methodologies offer enhanced capabilities and features for designing and optimizing digital circuits, helping engineers effectively tackle the challenges posed by increasingly complex designs. This talk initially gives a history of how digital system design has changed in the last 50 years and how significant RTL hardware description languages have helped this evolution. We then turn our attention to the new requirements for the design of electronic components and devices. New languages and design and evaluation platforms will be discussed. The discussion will focus on the need for future tools and methodologies.
Bio: Dr. Zainalabedin Navabi is a professor of ECE at the University of Tehran, and an independent EDA developer and consultant. He is the author of several textbooks and computer-based trainings on VHDL, Verilog, and related tools and environments. Dr. Navabi’s involvement with hardware description languages (HDLs) began in 1976, when he started the formal definition of a register transfer level HDL and the development of a simulator for it. In 1981 he completed the development of a synthesis tool for that same HDL; the tool generated MOS layout from an RTL description. Since 1981, Dr. Navabi has been involved in the design, definition, and implementation of hardware description languages and design methodologies. His work on HDLs has continued into the languages used today for system-level design and modeling and into language-based design space exploration (DSE) methodologies. New domain-specific languages and methodologies for AI and ML are part of his ongoing work.
The keynote presents The Big Game, one of the most significant and effective Italian set of comprehensive and integrated actions aimed at: (a) raising cybersecurity awareness, education & training in the country, (b) tackling the cybersecurity skill shortage, (c) reducing the gender gap, increasing girls’ interests in the topic, and (d) creating and growing both a community of cyber defenders by investing in young talents, and, at the same time, a community of high school teachers more and more involved in cybersecurity issues.
Both the key features of the program and its several component activities are analysed, pointing out its institutional recognition and its alignment with the Italian National Cybersecurity Strategy 2022-2026.
Bio: Paolo Prinetto is the coordinator of the training program The Big Game of the Italian CINI Cybersecurity National Lab. He is a Member of the Scientific Committee of the French CNRS (Centre National de la Recherche Scientifique). Paolo is a former Full Professor of Computer Engineering at Politecnico di Torino (50%) and at IMT - Institute for Advanced Studies Lucca (50%). His research activities mainly focused on Hardware Security, Digital Systems Design & Test, and System Dependability. In 2012 he was honored with the title “Doctor Honoris Causa” by the Technical University of Cluj-Napoca (Romania). From 2013 to 2019 he was the President of CINI (Italian National Inter-University Consortium for Informatics). From 2019 to 2024 Paolo was a Director of the Italian CINI Cybersecurity National Lab. From 2013 to 2019 he was a Vice-chair of the IFIP (International Federation for Information Processing) Technical Committee TC 10 - Computer Systems Technology. From 2000 to 2003 Paolo was a Chair (and Vice-Chair from 1998 to 1999) of the IEEE Computer Society TTTC: Test Technology Technical Council.
Recent presentations from Google, Facebook, and others have reported a subtle new kind of operational failure: significant levels of silent data corruption in their large data centers. These transient/intermittent errors, which can go undetected in operation for extended periods and are extremely difficult to diagnose and root-cause, have been associated with specific processor cores in these large processor networks. This suggests faulty or unstable hardware, not random events related to environmental noise. While infrequently activated defects that escape production test methods could well be contributing to such failures, there is growing evidence that many of these errors are caused by a different failure mechanism: extreme statistical-outlier slow circuit paths that display marginal timing due to the increasing random parameter variations experienced in advanced manufacturing processes. Furthermore, these random delays are greatly accentuated at the low operating voltages commonly employed for power savings during dynamic operational thermal management in advanced processors. Since such worst-case switching delays depend on circuit state and environmental conditions, some of these marginal paths can escape detection during postproduction testing while causing occasional errors in operation. In this talk, we attempt to understand this new test challenge by studying the impact of random process variations on the timing of CMOS gates and circuit paths, through analytical models as well as simulation, and by analyzing published volume-production test data, e.g. from Intel's 14nm FinFET technology published at ITC 2018. In conclusion, we suggest ways of leveraging the voltage and timing conditions of the applied tests to enhance the detection of marginal-timing parts during scan and system-level testing.
The ultimate goal of this research is to reliably screen out these marginal parts during postproduction testing and thereby prevent them from causing errors in operation.
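The intuition behind statistical-outlier slow paths can be sketched with a toy Monte Carlo model (entirely illustrative; the gate counts, delay parameters, and voltage-slowdown factor are assumptions, not the speaker's data): a path's delay is the sum of per-gate delays with random variation, and even a modest low-voltage slowdown pushes far more sampled paths past the clock period:

```python
# Toy Monte Carlo: per-gate delays vary randomly across instances; lowering
# supply voltage scales all delays up, exposing statistical-outlier paths.
import random

random.seed(7)

GATES_PER_PATH = 30
NOMINAL_GATE_DELAY = 10.0   # ps, illustrative
SIGMA = 1.5                 # per-gate random variation, ps
CLOCK_PERIOD = 330.0        # ps

def path_delay(vdd_slowdown: float) -> float:
    """One sampled path instance, scaled by a voltage-dependent slowdown."""
    return vdd_slowdown * sum(random.gauss(NOMINAL_GATE_DELAY, SIGMA)
                              for _ in range(GATES_PER_PATH))

def marginal_fraction(vdd_slowdown: float, trials: int = 20000) -> float:
    """Fraction of sampled path instances exceeding the clock period."""
    return sum(path_delay(vdd_slowdown) > CLOCK_PERIOD
               for _ in range(trials)) / trials

nominal = marginal_fraction(1.00)   # nominal supply voltage
lowered = marginal_fraction(1.06)   # ~6% slower at a reduced Vdd
print(nominal, lowered)
assert lowered > nominal            # low-voltage operation exposes more outliers
```

With a 300 ps mean path delay and roughly 8 ps standard deviation, the nominal case sits several sigma inside the 330 ps budget, while a 6% slowdown moves the mean to 318 ps and turns a rare tail event into a sizeable failure fraction, which is why test voltage and timing conditions matter so much for screening.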
Bio: Adit Singh is the Godbold Endowed Chair Professor of Electrical and Computer Engineering at Auburn University. He earlier served on the faculties of the University of Massachusetts in Amherst and Virginia Tech in Blacksburg, and has held visiting positions at the University of Tokyo, Japan, the Universities of Freiburg and Potsdam in Germany, the Indian Institute of Technology, and, as a Fulbright scholar, at the Polytechnic University of Catalonia in Barcelona, Spain. His technical interests span all aspects of VLSI technology, particularly integrated circuit test and reliability. He has published over three hundred research papers and holds international patents that have been licensed to industry. He has served as a consultant to several semiconductor and EDA companies, including as an expert witness in major patent litigation cases. He has had leadership roles as General Chair, Co-Chair, or Program Chair for dozens of international VLSI design and test conferences. He served two terms (2007-11) as Chair of the IEEE Test Technology Technical Council (TTTC), and (2011-15) on the Board of Governors of the IEEE Council on Electronic Design Automation (CEDA). Singh received his B.Tech. from IIT Kanpur, and his M.S. and Ph.D. from Virginia Tech, all in Electrical Engineering. He is a Life Fellow of the IEEE.
In an era of exponential AI-driven growth and increasing demand for sustainable, scalable computational infrastructure from the data center to the edge, the Open Compute Project Foundation's (OCP) collaborative community, working with academic institutions, research labs, and both established and startup companies, is pioneering innovations in Reliability, Availability, and Serviceability (RAS) to redefine data center efficiency and resiliency. This keynote will explore how RAS principles, combined with design modularity, flexibility, open interfaces, interoperable architectures, and the Chiplet Economy, are transforming hyperscale computing by enhancing system robustness, reducing downtime, and optimizing scale and resource utilization.
Attendees will gain insights into the OCP value proposition; the OCP community's unique collaborative processes and open contributions, which foster open designs for the development and deployment of common architectures using chiplet standards; practical implementations of RAS techniques; and how these innovations can address the critical challenges of future at-scale AI/HPC data centers. We’ll also examine the latest advancements in chiplet technology within the OCP community, where modular components are enabling customizable, energy-efficient designs, from silicon to facilities, that improve serviceability and accelerate repair times. With RAS as a foundational framework, these chiplet-based solutions offer unprecedented flexibility, allowing data center operators to build tailored configurations that adapt dynamically to workload demands, mitigate failures, and streamline maintenance workflows at scale.
Bio: George Tchaparian is a seasoned technology executive with more than 40 years of industry experience across all corporate levels and functions. He currently serves as the Chief Executive Officer of the Open Compute Project Foundation (OCP), a non-profit consortium initiated in 2011 with the mission of bringing the benefits of hyperscale open-source innovation to the broader market and accelerating the IT ecosystem's pace of transformation in, near, and around the data center. OCP's strategies and collaboration model are being applied beyond the data center, helping to advance the telecom and enterprise industries as well as edge infrastructure.
Prior to OCP, George was the longstanding President and CEO of Edgecore Networks Corporation, where he transformed the firm from a small, vertically integrated business into the industry’s Open Networking leader and a globally recognized brand. He led the business end-to-end into a multi-million-dollar portfolio of hyperscale data center, telecom/managed service provider, and enterprise wired and wireless products. Within several worldwide open communities, George established strategic partnerships and Edgecore's software strategy, including forming high-caliber software R&D engineering teams to develop the ecSONiC (hardened and supported SONiC) open-source NOS and value-added software applications, APIs, and custom software distributions. George also served as the General Manager for the entire Accton Group (Accton Technologies and Edgecore Networks Corporation), focusing on transformative next-generation strategies for the group’s Open Disaggregated Networking business.
Prior to leading Edgecore Networks, George served as Senior Vice President of worldwide research & development (R&D) for Accton Technology Corporation. Under his governance and tenure, the Accton Group established a leadership position in the Open Networking industry, developing numerous first-to-market OCP, Telecom Infra Project (TIP), and Open Networking Foundation (ONF) networking products and securing the top position with the industry's largest portfolio of OCP-certified open hardware. George also established the largest open ecosystem in the industry, Trusted Partnerships, with the leading hyperscalers, network operators, open-source and commercial software start-ups, silicon leaders, visionaries, and tier-one OEMs. He also served on the ONF Board of Directors for two years.
In addition, George held senior management positions at Hewlett-Packard (HP) for more than 29 years, leading HP R&D, manufacturing, and business development teams, including establishing HP's overseas R&D design and business/sales centers.
In 2018, George was recognized as one of "The World's First Top 50 Edge Computing Influencers" (#EDGE50) and in 2019, one of "The World's Most Influential Data Economy Leaders" (#POWER200).
George holds an M.B.A. in Management of Technology from Lehigh University, where he also completed the two-year HP-sponsored Executive/GM Leadership program.
Bio: Dr. Ugurdag has been a professor of EE and CS at Ozyegin University, Istanbul, Türkiye, since 2010. He was an ASIC designer in Silicon Valley from 1997 to 2004, most recently with Nvidia. He has 13 years of full-time industry experience in machine vision, EDA tool development, and ASIC and RTL design. During his academic tenure, he has consulted for many companies, including Synopsys. Companies his students have gone on to work at include Google, Tesla, AMD, ARM, Nvidia, and Intel. Dr. Ugurdag is a member of the Steering Committee of the VLSI-SoC Conference; he organized VLSI-SoC in Istanbul in 2013 with a record number of attendees (almost 300). He is also a member of IFIP WG 10.5 and was the chair of the IEEE Türkiye Section from 2020 to 2023. His research interests include computer arithmetic, AI acceleration, digital design aspects of communications, networking, video processing, and high-frequency trading.
Many defects arising during manufacturing, operation, or the wear-out phase result in delay faults, and in particular small delay faults (SDFs), where the additional delay of a component is smaller than the clock period. SDFs are considered a major source of the Silent Data Corruption observed, for instance, in large server farms. They are especially hard to detect in circuits suffering from timing variations. Performance-optimized circuits usually have balanced path lengths, and static timing analysis reports a large number of potentially critical paths that vary from instance to instance. A test pattern that propagates an error signal along one path in one circuit instance may be invalidated in other instances with different critical paths.
Major sources of timing variation are process variations (P), voltage fluctuations (V), and shifts in temperature (T). The process corner, the supply voltage, and the temperature form the conditions under which a circuit is operated and under which a test set must be effective. In the field, circuit aging (A) is added, and the set of operational conditions becomes P×V×T×A. Testing under low voltage amplifies the effect of a defect, increases the size of an SDF, and, in the best case, turns SDFs into easily detectable stuck-at faults. However, the standard deviation of gate delays at minimum voltage is a multiple of that in the nominal case. This increased variation invalidates tests generated under nominal voltage and significantly reduces their fault efficiency. Increased temperature reduces circuit speed in planar field-effect transistors; FinFET and nanowire technologies may additionally exhibit temperature effect inversion, which leads to more complex delay distributions and complicates test generation.
The parameters P, V, T, and A may lead to an intractable number of test conditions. The talk discusses ways out of this curse of dimensionality: finding a moderate number of representative conditions (test time reduction), and finding a few conditions such that all test sets are covered (test memory reduction).
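The covering step above can be viewed as a classic set-cover problem: each candidate operating condition "covers" the faults whose tests remain valid under it, and a small subset of conditions covering all faults is sought. Below is a minimal sketch of the standard greedy heuristic; the (P, V, T, A) tuples and fault IDs are entirely hypothetical illustrations, not data from the talk.

```python
# Greedy set cover over hypothetical (P, V, T, A) test conditions.
# Each condition maps to the set of fault IDs whose tests stay valid under it.

def select_conditions(coverage, universe):
    """Pick conditions greedily until every fault in `universe` is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Take the condition covering the most still-uncovered faults.
        best = max(coverage, key=lambda c: len(coverage[c] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:          # remaining faults cannot be covered at all
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Illustrative data: (process corner, voltage, temperature, aging) -> faults.
coverage = {
    ("slow", 0.6, 125, "aged"):  {1, 2, 3},
    ("fast", 0.6, -40, "fresh"): {3, 4},
    ("slow", 0.8, 25, "fresh"):  {4, 5},
}
chosen, missed = select_conditions(coverage, {1, 2, 3, 4, 5})
# Two conditions suffice here; `missed` is empty.
```

Greedy selection is a natural baseline because the exact problem is NP-hard, and the heuristic comes with a well-known logarithmic approximation guarantee.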
Bio: Hans‐Joachim Wunderlich is a professor emeritus of the University of Stuttgart. He became a full professor in 1991, and from 2002 to 2018 he served as the director of the Institute of Computer Architecture and Computer Engineering at the University of Stuttgart, Germany. He is a Life Fellow of the IEEE. He has been an associate editor of various international journals and a program committee member of numerous IEEE conferences on the design and test of electronic systems. He has published 11 books and book chapters and more than 300 reviewed scientific papers in journals and conference proceedings. His research interests include test, reliability, fault tolerance, and design automation of microelectronic systems.
With increasing system complexity and stringent runtime requirements for AI accelerators, high-performance computing, and autonomous vehicles, the reliable, safe, and secure operation of electronic systems remains a major challenge, particularly with the increased use of third-party chiplets and multi-die systems. This keynote will focus on optimizing silicon health through advanced solutions across the silicon lifecycle stages, from chiplet design to bring-up, volume production, mid-stack and 3D packaging, and in-field operation. The silicon lifecycle management (SLM) solutions to be discussed start with embedding a range of monitoring engines at different levels of the design, continue with access mechanisms and solutions for on-chip and cross-chip networks, and extend to data analytics at the edge and in the cloud for fleet optimization.
Dr. Yervant Zorian is a Chief Architect and Fellow at Synopsys, as well as President of Synopsys Armenia. Formerly, he was Vice President and Chief Scientist of Virage Logic, Chief Technologist at LogicVision, and a Distinguished Member of Technical Staff at AT&T Bell Laboratories. He is currently the President of the IEEE Test Technology Technical Council (TTTC), the founder and chair of the IEEE 1500 Standardization Working Group, the Editor-in-Chief Emeritus of IEEE Design & Test of Computers, and an Adjunct Professor at the University of British Columbia. He served on the Boards of Governors of the IEEE Computer Society and CEDA, was the Vice President of the IEEE Computer Society, and was the General Chair of the 50th Design Automation Conference (DAC) and several other symposia and workshops.
Dr. Zorian holds 35 US patents, has authored four books, published over 350 refereed papers and received numerous best paper awards. A Fellow of the IEEE since 1999, Dr. Zorian was the 2005 recipient of the prestigious Industrial Pioneer Award for his contribution to BIST, and the 2006 recipient of the IEEE Hans Karlsson Award for diplomacy. He received the IEEE Distinguished Services Award for leading the TTTC, the IEEE Meritorious Award for outstanding contributions to EDA, and in 2014, the Republic of Armenia's National Medal of Science.
He received an MS degree in Computer Engineering from the University of Southern California, a PhD in Electrical Engineering from McGill University, and an MBA from the Wharton School of the University of Pennsylvania.