Abstract:
Test set programming can be tedious and repetitive. A test engineer can easily fall victim to putting blinders on, overlooking errors when reviewing their own work. To avoid this, it makes sense to treat software like a published work, where a reviewer independent of the original programming team checks the software for design, quality, and errors. This paper describes a disciplined and consistent process for reviewing Automatic Test Equipment (ATE) software. This independent review process comprises four major steps: Receiving, Processing, Reporting, and Following-Up. It can be conducted and repeated throughout the development life cycle to improve the quality of the software. Early involvement can influence design changes that lead to simpler and more manageable software. Many errors can be detected prior to release by analyzing the software with tools such as PC-Lint™ or Understand for C++™. Having the discipline to follow this simple process yields software that is easier to manage for future modifications, easier to read, and less prone to errors.
Keywords:
automatic test equipment; error detection; software management; software quality; software reviews; quality improvement; software manageability; documentation; software design; software packages; software testing; software tools; Unified Modeling Language