Abstract:
In the processor functional verification field, pre-silicon verification and post-silicon validation have traditionally been divided into separate disciplines. With the growing use of high-speed hardware emulation, there is an opportunity to join a significant portion of each into a continuous workflow [2], [1]. Three elements of functional verification rely on random code generation (RCG) as a primary test stimulus: processor core-level simulation, hardware emulation, and early hardware validation. Each of these environments becomes the primary focus of the functional verification effort at a different phase of the project. With random-code-based test generation as the central feature, and the primary point of commonality between these environments, the advantages of a unified workflow include people versatility, test tooling efficiency, and continuity of test technology across design phases. Related common features include some of the debugging techniques, e.g., software-trace-based debugging and instruction flow analysis, and some of the instrumentation, for example counters built into the final hardware. Three key use cases show the value of a continuous pre-/post-silicon workflow. First, the functional test coverage of a common test can be evaluated in a pre-silicon environment, where more observability is available through simulation/emulation-only tracing capabilities and model instrumentation not built into the actual hardware [3]. Second, the last test program run on the emulator the day before early hardware arrives becomes the first validation test program on the new hardware. This allows processor bring-up to proceed already protected against simple logic bugs and test code issues, leaving only the more subtle logic bugs, circuit bugs, and manufacturing defects to contend with. The last use case is taking an early hardware lab observation and dropping it seamlessly into both the simulation and emulation environments. Essential differences exist among the three environments and create a challenge for a common workflow. These differences fall into three areas. The first is observability and controllability, which touches on checking, instrumentation and coverage evaluation, and debugging facilities and techniques. For observability, a simulator may leverage instruction-by-instruction results checking, bus trace analysis and protocol verification, and many more error-condition detectors in the model than exist in actual hardware. For hardware, a fail scenario must be defined by considering how the behavior would propagate to a checking point, for example, "how do I know if this store wrote the wrong value to memory?" On hardware, an explicit check in code, a load and compare, would be required. The impact of reduced controllability is that early hardware tests require more elaborate test case and test harness code, since fewer simulator crutches are available to help create the desired scenarios. Where a simulator test may specify "let an asynchronous interrupt happen on this instruction," a hardware test may have to run repeatedly with frequent interrupts until the interrupt hits the desired instruction. The second difference is speed of execution, typically a 10,000x-100,000x difference between successive environments.
This affects both the wall-clock time needed to create a condition, whether or not the condition can be observed or debugged, and the scale of software that can be run in a given environment, from 1000-instruction segments up to a full operating system. The final difference is that much larger systems are built than are simulated; this is an issue when moving from the pre-silicon environments to early hardware, especially in testing scenarios that involve large numbers of caches and memories. These methodology issues, both in terms of taking adv
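To make the observability and controllability contrast concrete, the sketch below (not from the paper; all names, values, and the interrupt-handler hookup are hypothetical) shows the style of self-checking a hardware-resident test must carry itself: a store is verified by an explicit load and compare, and an asynchronous-interrupt scenario is approximated by rerunning the test until the interrupt happens to land where desired, rather than being forced on a specific instruction as a simulator could do.

```c
/*
 * Illustrative sketch only: a self-checking store test of the kind the
 * abstract describes for early hardware, where no simulator-side checker
 * observes memory.  RETRY_LIMIT, target, and interrupt_hit are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

#define RETRY_LIMIT 10000            /* hypothetical bound on repeated runs */

static volatile uint64_t target;     /* memory location under test */
static volatile int interrupt_hit;   /* would be set by an interrupt handler */

/* On hardware, correctness of a store must be checked explicitly:
 * store a known value, then load it back and compare. */
static int check_store(uint64_t value)
{
    target = value;                   /* store under test */
    return (target == value) ? 0 : 1; /* load and compare */
}

int main(void)
{
    /* A simulator can force "interrupt on this instruction"; on hardware the
     * test is simply rerun, with frequent interrupts enabled, until one
     * happens to hit the instruction of interest. */
    for (int run = 0; run < RETRY_LIMIT; run++) {
        interrupt_hit = 0;
        if (check_store(0xA5A5A5A5A5A5A5A5ULL)) {
            printf("FAIL: store wrote wrong value on run %d\n", run);
            return 1;
        }
        if (interrupt_hit)            /* interrupt landed where desired */
            break;
    }
    printf("PASS\n");
    return 0;
}
```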
Keywords:
elemental semiconductors; integrated circuit design; integrated circuit testing; logic circuits; logic design; microprocessor chips; random codes; silicon; IBM power server processors; RCG; asynchronous interrupt; bus trace analysis; cache number; central feature; checking; checking point; circuit bugs; continuous workflow; coverage evaluation; debugging facilities; debugging techniques; design phases; early-hardware tests; early-hardware validation; emulation-only tracing capability; error-condition detectors; essential differences; execution speed; full-operating system; functional test coverage; high-speed hardware emulation; innovative practice session; instruction flow analysis; instruction segments; instruction-by-instruction result check; instrumentation; manufacturing defects; memories; observability; people versatility; post-silicon validation; post-silicon workflow; pre-silicon environment; pre-silicon validation; pre-silicon workflow; primary test stimulus; processor; processor core-level simulation; processor functional verification field; protocol verification; random code-based test generation; simulation-emulation model instrumentation; simulation-only tracing capability; simulator crutches; simulator test; software-trace-based debugging; subtle logic bugs; test case code; test code issues; test harness code; test technology continuity; test tooling efficiency; unified workflow; validation test program; wallclock time; Abstracts; Debugging; Emulation; Hardware; Instruments; Observability; System-on-chip;