Guide to Reliability of Electrical/Electronic Equipment and Products--Robust Design Practices (part 5)




The purpose of electrical testing is to detect and remove any ICs or PWAs that fail operational and functional specifications. Integrated circuits and PWAs fail specifications because of defects that may be introduced during the manufacturing process or during subsequent handling operations. Testing an IC or PWA involves applying the voltage, current, timing conditions, and functional patterns it would see in a real system and sequencing it through a series of states, checking its actual against its expected responses.

Testability is concerned with controlling all inputs simultaneously and then trying to observe many outputs simultaneously. It can be defined as a measure of the ease with which comprehensive test programs can be written and executed as well as the ease with which defects can be isolated in defective ICs, PWAs, subassemblies, and systems. A digital circuit with high testability has the following features:

The circuit can be easily initialized.

The internal state of the circuit can be easily controlled by a small input vector sequence.

The internal state of the circuit can be uniquely and easily identified through the primary outputs of the circuit or special test points.

Complicating the problem of testability at the IC level is the use of mixed analog and digital circuitry on the same chip. Table 15 lists some of the testability issues of analog and digital ICs. As can be seen, these issues become very complex and severely impact the ability to test an IC when functional structures incorporating both digital and analog circuits are integrated on the same chip. These same mixed-signal circuit issues are relevant at PWA test as well.

Shrinking product development cycles require predictable design methodologies, including those for test, at both the individual IC and PWA levels. The pace of IC and PWA design and the level of complexity are increasing so rapidly that the cost of developing an IC test program is approaching the cost of developing the IC itself.

These higher levels of IC and PWA complexity and packing density integration result in reduced observability and controllability (decreased defect coverage).

The task of generating functional test vectors and designing prototypes is too complex to meet time-to-market requirements.

Tossing a netlist over the wall to the test department to insert test structures is a thing of the past.

Traditional functional tests provide poor diagnostics and process feedback capability.

Design verification has become a serious issue with as much as 55% of the total design effort being focused on developing self-checking verification programs plus the test benches to execute them.

TABLE 15 Testability Issues of Analog and Digital ICs

Analog circuitry: Hard to test.

Use analog waveforms.

Apply a variety of test signals, wait for settling, and average several passes to reduce noise.

Affected by all types of manufacturing process defects.

Must check full functionality of the device within very precise limits.

Defect models not well defined.

Sensitive to external environment (60 Hz noise, etc.).

ATE load board design, layout, and verification; calibration; and high-frequency calibration are critical issues.

Synchronization issues between device and tester.

Initialization can leave the circuit in an unknown state, a difference between analog and digital functions.

Digital circuitry: More testable, less susceptible to manufacturing process defects, and easier to produce.

Allow testing at real system clock(s) using industry standard test methodologies.

Susceptible to spot defects but unaffected by global manufacturing defects.

Compatibility: Must consider coupling effects of digital versus analog signals.

Lack of well-defined interface between digital and analog circuitry and technology.

Normally digital and analog circuitry are segmented.


TABLE 16 Design for Test Issues

Benefits:

Improved product quality.

Faster and easier debug and diagnostics of new designs and when problems occur.

Faster time to market, time to volume, and time to profit.

Faster development cycle.

Smaller test patterns and lower test costs.

Lower test development costs.

Ability to trade off performance versus testability.

Improved field testability and maintenance.


Costs:

Initial impact on design cycle while DFT techniques are being learned.

Added circuit time and real estate area.

Initial high cost during learning period.


What's the solution? What is needed is a predictable and consistent design for test (DFT) methodology. Design for test is a structured design method that includes participation from circuit design (including modeling and simulation), test, manufacturing, and field service. Design for test provides greater testability; improved manufacturing yield; higher-quality product; decreased test generation complexity and test time; and reduced cost of test, diagnosis, troubleshooting, and failure analysis (due to easier debugging and thus faster debug time). Design for test helps to ensure small test pattern sets (important in reducing automated test equipment (ATE) test time and costs) by enabling single patterns to test for multiple faults (defects). The higher the test coverage for a given pattern set, the better the quality of the produced ICs. The fewer failing chips that get into products and into the field, the lower the replacement and warranty costs.

Today's ICs and PWAs implement testability methods (which include integration of test structures and test pins into the circuit design as well as robust test patterns with high test coverage) before and concurrent with system logic design, not as an afterthought when the IC design is complete. Designers are intimately involved with test at both the IC and PWA levels. Normally, a multidisciplinary design team approaches the technical, manufacturing, and logistical aspects of the PWA design simultaneously. Reliability, manufacturability, diagnosability, and testability are considered throughout the design effort.

The reasons for implementing a DFT strategy are listed in Table 16. Of these, three are preeminent:

Higher quality. This means better fault coverage in the design so that fewer defective parts make it out of manufacturing (escapes). However, a balance is required. Better fault coverage means longer test patterns.

From a manufacturing perspective, short test patterns and thus short test times are required since long test times cost money. Also, if it takes too long to generate the test program, then the product cycle is impacted initially and every time there is a design change new test patterns are required. Designs implemented with DFT result in tests that are both faster and of higher quality, reducing the time spent in manufacturing and improving shipped product quality level.
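The coverage-versus-escapes tradeoff can be quantified with the widely used Williams-Brown model, which relates the shipped defect level DL to process yield Y and fault coverage T as DL = 1 - Y^(1-T). The sketch below illustrates the relationship; the yield and coverage values are illustrative assumptions, not figures from the text:

```python
# Williams-Brown model: shipped defect level (escape rate) as a function
# of process yield Y and fault coverage T: DL = 1 - Y**(1 - T).

def defect_level(yield_frac: float, fault_coverage: float) -> float:
    """Fraction of shipped parts expected to be defective (escapes)."""
    return 1.0 - yield_frac ** (1.0 - fault_coverage)

Y = 0.90  # assumed process yield (illustrative)
for T in (0.90, 0.99, 0.999):
    print(f"fault coverage {T:.1%}: ~{defect_level(Y, T) * 1e6:.0f} DPM escapes")
```

Even with 90% yield, raising fault coverage from 90% to 99.9% cuts escapes by roughly two orders of magnitude, which is why high coverage with short patterns is worth the DFT investment.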

Easier and faster debug diagnostics when there are problems. As designs become larger and more complex, diagnostics become more of a challenge. In fact, design for diagnosis (with the addition of diagnostic test access points placed in the circuit during design) needs to be included in the design for test methodology. Just as automatic test pattern generation (ATPG) is used as a testability analysis tool (which is expensive this late in the design cycle), diagnostics now are often used the same way.

Diagnosis of functional failures or field returns can be very difficult. An initial zero yield condition can cause weeks of delay without an automated diagnostic approach. However, diagnosing ATPG patterns from a design with good DFT can be relatively quick and accurate.

Faster time to market.

Design for test (boundary scan and built-in self-test) is an integrated approach to testing that is being applied at all levels of product design and integration, shown in Figure 17: during IC design, PWA (board) design and layout, and system design. All are interconnected and DFT eases the testing of a complete product or system. The figure shows built-in self-test (BIST) being inserted into large complex ICs to facilitate test generation and improve test coverage, primarily at the IC level but also at subsequent levels of product integration. Let's look at DFT from all three perspectives.

FIGURE 17 Applying BIST and boundary scan at various levels of product integration.

FIGURE 18 Design for test guidelines for the IC designer.

16.1 Design for Test at the IC Level

Integrated circuit designers must be responsible for the testability of their designs.

At Xilinx Inc. (an FPGA and PLD supplier in San Jose, CA), for example, IC designers are responsible for the testability and test coverage of their designs, even for developing the characterization and production electrical test programs.

The different approaches used to achieve a high degree of testability at the IC level can be categorized as ad hoc, scan based, and built-in self-test methods.

In the ad hoc method, controllability and observability are maintained through a set of design-for-test disciplines or guidelines. These include

Partitioning the logic to reduce ATPG time

Breaking long counter chains into shorter sections

Never allowing the inputs to float

Electrically partitioning combinatorial and sequential circuits and testing them separately

Adding BIST circuitry

A more comprehensive list of proven design guidelines/techniques that IC designers use to make designs more testable is presented in Figure 18. From an equipment designer's or system designer's perspective there isn't a thing we can do about IC testing and making an IC more testable. However, what the IC designer does to facilitate testing, such as putting scan and BIST circuitry in the IC, significantly impacts PWA testing. Since implementing DFT for the PWA and system begins during IC design, I will spend some time discussing DFT during the IC design discussion in order to give PWA and system designers a feel for the issues.

Implementing DFT requires both strong tools and support. Test synthesis, test analysis, test generation, and diagnostic tools must handle a variety of structures within a single design, work with various fault/defect models, and quickly produce results on multimillion gate designs. Design for test helps to speed the ATPG process. By making the problem a combinatorial one, ensuring non-RAM memory elements are scannable, and sectioning off areas of the logic that may require special testing, the generation of test patterns for the chip logic can be rapid and of high quality.

FIGURE 19 IEEE 1149.1 boundary scan standard circuit implementation.

The use of scan techniques facilitates PWA testing. It starts with the IC itself. Scan insertion analyzes a design, locates on-chip flip flops and latches, and replaces some (partial scan) or all (full scan) of these flip flops and latches with scan-enabled versions. When a test system asserts those versions' scan-enable lines, scan chains carry test vectors into and out of the scan-compatible flip flops, which in turn apply signals to inputs and read outputs from the combinatorial logic connected to those flip flops. Thus, by adding structures to the IC itself, such as D flip flops and multiplexers, PWA testing is enhanced through better controllability and observability. The penalty for this circuitry is 5-15% increased silicon area and two external package pins.

Scan techniques include level sensitive scan design (LSSD), scan path, and boundary scan. In the scan path, or scan chain, technique, DQ flip flops are inserted internally to the IC to sensitize, stimulate, and observe the behavior of combinatorial logic in a design. Testing becomes a straightforward application of scanning the test vectors in and observing the test results because sequential logic is transformed to combinational logic for which ATPG programs are more effective. Automatic place and route software has been adapted to make all clock connections in the scan path, making optimal use of clock trees.
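The shift-in/capture/shift-out mechanics described above can be sketched in a few lines. The 4-bit chain and the inverting combinational block below are hypothetical, chosen only to make the sequence visible:

```python
# Minimal model of a full-scan chain (hypothetical 4-bit design): in scan
# mode the flip flops form a shift register; in functional mode one clock
# captures the outputs of the combinational logic they feed.

class ScanChain:
    def __init__(self, length: int):
        self.ff = [0] * length

    def shift_in(self, vector):
        """Scan mode: serially load a test vector while unloading the
        response captured by the previous test, one bit per clock."""
        scanned_out = []
        for bit in vector:
            scanned_out.append(self.ff[-1])   # bit leaving via scan-out
            self.ff = [bit] + self.ff[:-1]    # shift one position
        return scanned_out

    def capture(self, comb_logic):
        """Functional mode: one clock captures the combinational response."""
        self.ff = comb_logic(self.ff)

# Stand-in combinational block: invert each bit. A stuck-at fault inside
# it would show up as a wrong bit in the shifted-out response.
invert = lambda bits: [b ^ 1 for b in bits]

chain = ScanChain(4)
chain.shift_in([1, 0, 1, 1])              # load the test vector
chain.capture(invert)                     # apply one functional clock
response = chain.shift_in([0, 0, 0, 0])   # unload response / load next
print(response)                           # bitwise inverse of the vector
```

Because the sequential elements are all directly loadable and observable this way, the ATPG tool only has to solve the combinational problem between them.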

The boundary scan method increases the testability over that of the scan path method, with the price of more on-chip circuitry and thus greater complexity.

With the boundary scan technique, which has been standardized by IEEE 1149.1, a ring of boundary scan cells surrounds the periphery of the chip (IC). The boundary scan standard circuit is shown in Figures 19 and 20, and the specific characteristics and instructions applicable to IEEE 1149.1 are listed in Tables 17 and 18, respectively.

Each boundary scan IC has a test access port (TAP), which controls the shift-update-capture cycle, as shown in Figure 19. The TAP is connected to a test bus through two pins: a test data signal and a test clock. The boundary scan architecture also includes an instruction register, which provides opportunities for using the test bus for more than an interconnection test, e.g., component identification. The boundary scan cells are transparent in the IC's normal operating mode.

In the test mode they are capable of driving predefined values on the output pins and capturing response values on the input pins. The boundary scan cells are linked as a serial register and connected to one serial input pin and one serial output pin on the IC.

FIGURE 20 Boundary scan principles.


TABLE 17 IEEE 1149.1 Circuit Characteristics

Dedicated TAP pins (TDI, TDO, TMS, TCK, and TRST).

Dedicated boundary scan cells. Includes separate serial-shift and parallel-update stages.

Finite-state machine controller with extensible instructions. Serially scanned instruction register.

Main target is testing printed circuit board interconnect. Philosophy is to restrict boundary cell behavior as necessary to safeguard against side effects during testing.

Second target is sampling system state during operation. Dedicated boundary scan cells and test clock (TCK).

Difficulties applying in hierarchical implementations.


TABLE 18 IEEE 1149.1 Instructions for Testability


Bypass Inserts a 1-bit bypass register between TDI and TDO.

Extest Uses boundary register first to capture, then shift, and finally to update I/O pad values.


Sample/Preload Uses boundary register first to capture and then shift I/O pad values without affecting system operation.

Other optional and/or private instructions Defined by the standard or left up to the designer to specify behavior.


FIGURE 21 At-speed interconnection test. (From Ref. 4.)

FIGURE 22 Analog boundary scan. (From Ref. 4.)

FIGURE 23 Built-in self-test can be used with scan ATPG to enable effective system on-chip testing. (From Ref. 4.)

It is very easy to apply values at IC pins and observe results when this technique is used. The tests are executed in a shift-update-capture cycle. In the shift phase, drive values are serially loaded into the scan chain for one test while the values from the previous test are unloaded. In the update phase, chain values are applied in parallel on output pins. In the capture phase, response values are loaded in parallel into the chain.
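The shift, update, and capture phases can be modeled directly. The 3-pin device below is hypothetical, and the sketch deliberately ignores the TAP state machine that sequences these phases in real hardware:

```python
# Sketch of the boundary scan shift-update-capture cycle for a single
# hypothetical 3-pin IC. Each cell has a shift stage and a separate
# update stage, as IEEE 1149.1 requires.

class BoundaryRegister:
    def __init__(self, n_pins: int):
        self.shift_stage = [0] * n_pins
        self.update_stage = [0] * n_pins   # values driven on output pins

    def shift(self, serial_in):
        """Shift phase: load drive values while unloading the previous
        test's captured responses, one bit per test clock."""
        unloaded = []
        for bit in serial_in:
            unloaded.append(self.shift_stage[-1])
            self.shift_stage = [bit] + self.shift_stage[:-1]
        return unloaded

    def update(self):
        """Update phase: apply shifted values to the pins in parallel."""
        self.update_stage = list(self.shift_stage)

    def capture(self, pin_values):
        """Capture phase: sample input-pin values in parallel."""
        self.shift_stage = list(pin_values)

reg = BoundaryRegister(3)
reg.shift([1, 0, 1])
reg.update()
driven = reg.update_stage     # what the board nets should now carry
reg.capture(driven)           # fault-free case: nets carry driven values
response = reg.shift([0, 0, 0])
print(driven, response)
```

In an interconnect test, a short or open between two ICs would corrupt the captured values, so the shifted-out response would differ from the pattern driven.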

Boundary scan, implemented in accordance with IEEE 1149.1, which is mainly intended for static interconnection test, can be enhanced to support dynamic interconnection test (see Fig. 21). Minor additions to the boundary scan cells allow the update-capture sequence to be clocked from the system clock rather than from the test clock. Additional boundary scan instruction and some control logic must be added to the ICs involved in the dynamic test. The drive and response data are loaded and unloaded through the serial register in the same way as in static interconnection test. There are commercially available tools that support both static and dynamic test.

For analog circuits, boundary scan implemented via the IEEE 1149.4 test standard simplifies analog measurements at the board level (Fig. 22). Two (alternatively four) wires for measurements are added to the boundary scan bus. The original four wires are used as in boundary scan for test control and digital data.

Special analog boundary scan cells have been developed which can be linked to the analog board level test wires through fairly simple analog CMOS switches.

This allows easy setup of measurements of discrete components located between IC pins. Analog and digital boundary scan cells can be mixed within the same device (IC). Even though the main purpose of analog boundary scan is the test of interconnections and discrete components, it can be used to test more complex board level analog functions as well as on-chip analog functions.

After adding scan circuitry to an IC, its area and speed of operation change.

The design increases in size (5-15% larger area) because scan cells are larger than the nonscan cells they replace and some extra circuitry is required, and the nets used for the scan signals occupy additional area. The performance of the design will be reduced as well (5-10% speed degradation) due to changes in the electrical characteristics of the scan cells that replaced the nonscan cells and the delay caused by the extra circuitry.

Built-in self-test is a design technique in which test vectors are generated on-chip in response to an externally applied test command. The test responses are compacted into external pass/fail signals. Built-in self-test is usually implemented through ROM (embedded memory) code instructions or through built-in (on-chip) random word generators (linear feedback shift registers, or LFSRs). This allows the IC to test itself by controlling internal circuit nodes that are otherwise unreachable, reducing tester and ATPG time and data storage needs.

In a typical BIST implementation (Fig. 23) stimulus and response circuits are added to the device under test (DUT). The stimulus circuit generates test patterns on the fly, and the response of the DUT is analyzed by the response circuit. The final result of the BIST operation is compared with the expected result externally. Large test patterns need not be stored externally in a test system since they are generated internally by the BIST circuit. At-speed testing is possible since the BIST circuit uses the same technology as the DUT and can be run off the system clock.
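A minimal sketch of this stimulus/compaction arrangement follows, assuming an 8-bit Fibonacci LFSR for pattern generation and a serial signature register for response compaction. The polynomial, widths, and the XOR "DUT" are all illustrative assumptions:

```python
# BIST sketch: one LFSR generates pseudorandom stimulus on-chip; a serial
# signature register compacts the DUT responses into a short signature
# that is compared against a stored golden value.

def lfsr_stream(seed: int, taps: int, width: int, n: int):
    """Yield n pseudorandom words from a Fibonacci LFSR."""
    state = seed
    for _ in range(n):
        yield state
        feedback = bin(state & taps).count("1") & 1   # XOR of tapped bits
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def signature(responses, width=8, taps=0b10111000):
    """Compact a response stream into a signature. Aliasing (a faulty
    stream mapping to the good signature) is possible but unlikely,
    roughly 2**-width for random faults."""
    sig = 0
    for word in responses:
        feedback = bin(sig & taps).count("1") & 1
        sig = (((sig << 1) | feedback) ^ word) & ((1 << width) - 1)
    return sig

dut = lambda x: x ^ 0xA5       # stand-in for the combinational DUT
patterns = list(lfsr_stream(seed=0x1, taps=0b10111000, width=8, n=32))
golden = signature(dut(p) for p in patterns)

# At test time the BIST controller reruns the same sequence and compares:
print("pass" if signature(dut(p) for p in patterns) == golden else "fail")
```

Note that only the seed, the pattern count, and the golden signature need to be stored; the full stimulus and response sets never leave the chip.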

Built-in self-test has been primarily implemented for testing embedded memories since highly effective memory test algorithms can be implemented in a compact BIST circuit but at a cost of increased circuit delay. The tools for implementing digital embedded memory BIST are mature. Because of the unstructured nature of logic blocks, logic BIST is difficult to implement but is being developed. The implementation of analog BIST can have an impact on the noise performance and accuracy of the analog circuitry. The tools to implement analog BIST are being developed as well.

Both BIST and boundary scan have an impact on product and test cost during all phases of the product life cycle: development, manufacturing, and field deployment. For example, boundary scan is often used as a means to rapidly identify structural defects (e.g., solder bridges or opens) during early life debugging. Built-in self-test and boundary scan may be leveraged during manufacturing testing to improve test coverage, reduce test diagnosis time, reduce test capital, or all of the above. In the field, embedded boundary scan and BIST facilitate accurate system diagnostics to the field replacement unit (FRU, also called the customer replaceable unit, or CRU). The implementation of BIST tends to lengthen IC design time by increasing synthesis and simulation times (heavy computational requirements), but reduces test development times.

Design for test techniques have evolved to the point where critical tester (ATE) functions (such as pin electronics) are embedded on the chip being tested.

The basic idea is to create microtesters for every major functional or architectural block in a chip during design. A network of microtesters can be integrated at the chip level and accessed through the IEEE 1149.1 port to provide a complete test solution. Embedded test offers a divide and conquer approach to a very complex problem. By removing the need to generate, apply, and collect a large number of test vectors from outside the chip, embedded test promises to both facilitate test and reduce the cost of external testers (ATE). The embedded test total silicon penalty is on the order of 1-2% as demonstrated by several IC suppliers.

First silicon is where everything comes together (a complete IC) and where the fruits of DFT start to pay dividends. At this point DFT facilitates defect detection, diagnostics, and characterization. Diagnostics can resolve chip failures both quickly and accurately. Whether it is model errors, test pattern errors, process (wafer fab) problems, or any number of other explanations, diagnostics are aided by DFT. Patterns can be quickly applied, and additional patterns can be generated if needed. This is critical for timely yield improvement before product ramp-up.

During chip production, DFT helps ensure overall shipped IC quality. High test coverage and small pattern counts act as the filter to judge working and nonworking wafers. For the working (yielding) wafers, the diced (separated) and packaged chips are tested again to ensure working product.

From an IC function perspective, circuits such as FPGAs, due to their programmability and reprogrammability, can be configured to test themselves and thus ease the testing problem.


TABLE 19 Examples of Design Hints and Guidelines at the PWA Level to Facilitate Testing

Electrical design hints

Disable the clocks to ease testing.

Provide access to enables.

Separate the resets and enables.

Unused pins should have test point access.

Unused inputs may require pull-up or pull-down resistors.

Batteries must have enable jumpers or be installed after test.

Bed-of-nails test fixture requires a test point for every net, all on the bottom side of the board.


PWA test point placement rules

All test points should be located on a single side of the PWA.

Distribute test points evenly.

Minimum of one test point per net.

Multiple VCC and ground test pads distributed across PWA.

One test point on each unused IC pin.

No test points under components on probe side of PWA.


Typical PWA test points

Through leads.

Uncovered, soldered via pads (enlarged for probing).


Card-edge connectors.

Designated test points.


16.2 Design for Test at the PWA Level

The previous section shows that DFT requires the cooperation of all personnel involved in IC design. However, from a PWA or system perspective all we care about is that the IC designers have included the appropriate hooks (test structures) to facilitate boundary scan testing of the PWA.

At the PWA level, all boundary scan components (ICs) are linked to form a scan chain. This allows daisy chained data-in and data-out lines of the TAP to carry test signals to and from nodes that might be buried under surface mount devices or be otherwise inaccessible to tester probes. The boundary scan chain is then connected to two edge connectors.
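The daisy-chaining arithmetic is simple: a board-level scan vector is just the per-device vectors concatenated in chain order. A sketch with hypothetical device names and boundary register lengths:

```python
# Board-level scan chain: TDO of each IC feeds TDI of the next, so one
# serial vector covers the whole PWA. Device names and boundary register
# lengths below are hypothetical.

chain = {"U1": 16, "U2": 8, "U3": 24}   # cells per device, in chain order

def board_vector(per_device: dict) -> list:
    """Concatenate per-device drive vectors into one board-level vector
    (here the device nearest TDO is shifted in first)."""
    bits = []
    for name in reversed(list(per_device)):
        bits.extend(per_device[name])
    return bits

vectors = {name: [0] * length for name, length in chain.items()}
vectors["U2"][3] = 1                    # drive a single pin on U2 high
serial = board_vector(vectors)
print(len(serial), sum(serial))         # 48 bits total, one bit set
```

Test software therefore only needs each device's boundary register length and position in the chain (normally taken from BSDL files) to address any pin on the board through the two serial TAP signals.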

The best manner to present DFT at the PWA level is by means of design hints and guidelines. These are listed in Table 19.

Many board level DFT methods are already supported by commercially available components, building blocks, and test development tools, specifically boundary scan. Since many testers support boundary scan test, it is natural to use the boundary scan test bus (including the protocol) as a general purpose test bus at the PWA level. Several commercially supported DFT methods use the boundary scan bus for test access and control. Additionally, several new DFT methods are emerging that make use of the standardized boundary scan bus. This activity will only serve to facilitate the widespread adoption of DFT techniques.

The myriad topics involved with IC and board (PWA) tests have been discussed via tutorials and formal papers, debated via panel sessions at the annual International Test Conference, and published in its proceedings. It is suggested that the reader who is interested in detailed information on these test topics consult these proceedings.

16.3 Design for Test at the System Level

At the system level, DFT ensures that the replaceable units are working properly.

Often, using a BIST interface (frequently accessed via boundary scan), components can test themselves. If failures are discovered, then the failing components can be isolated and replaced, saving much system debug and diagnostics time.

This can also result in tremendous savings in system replacement costs and customer downtime.

In conclusion, DFT is a powerful means to simplify test development, to decrease manufacturing test costs, and to enhance diagnostics and process feedback. Its most significant impact is during the product development process, where designers and test engineers work interactively and concurrently to solve the testability issue. Design for test is also a value-added investment in improving testability in later product phases, i.e., manufacturing and field troubleshooting.


Sneak circuit analysis is used to identify and isolate potential incorrect operating characteristics of a circuit or system. A simplified example of a sneak circuit, which consists of two switches in parallel controlling a light, illustrates one type of unwanted operation. With both switches open, either switch will control the light. With one switch closed, the other switch will have no effect. Such problems occur quite often, usually with devastating results. Often the sneak circuit analysis is included in the various CAD libraries that are used for the design.
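The parallel-switch example reduces to an OR function, which makes the loss of control easy to see in a truth table:

```python
# Two switches in parallel OR together: the light is on when either
# switch is closed, so once one switch is closed the other loses
# control -- the unwanted "sneak" behavior described above.

def light_on(sw1_closed: bool, sw2_closed: bool) -> bool:
    return sw1_closed or sw2_closed

for sw1 in (False, True):
    for sw2 in (False, True):
        print(f"sw1={sw1!s:5} sw2={sw2!s:5} light={light_on(sw1, sw2)}")
```

With both switches open, toggling either one changes the output; with one closed, the output is pinned on regardless of the other, which is exactly the condition sneak circuit analysis is meant to flag.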


TABLE 20 BOM Review Process Flow

Component engineers work up front with the design team to understand their needs and agree on the recommended technology and part choices (see Table 4).

Determine who should be invited to participate in the review and develop an agenda stating purpose and responsibilities of the review team (see previously listed functional representatives).

Send out preliminary BOM, targeted suppliers, and technology choices.

Develop and use standard evaluation method.

Discuss issues that arise from the evaluation, and develop solutions and alternatives.

Develop an action plan based on the evaluation.

Meet periodically (monthly) to review actions and status as well as any BOM changes as the design progresses.



Many large equipment manufacturers conduct periodic bill of material reviews from conceptual design throughout the physical design process. The BOM review is similar to a design review, but here the focus is on the parts and the suppliers.

These reviews facilitate the communication and transfer of knowledge regarding part, function, supplier, and usage history between the component engineers and the design team. The purpose of periodic BOM reviews is to

Identify risks with the parts and suppliers selected

Communicate multifunctional issues and experiences regarding parts and suppliers (DFT, DFM, quality, reliability, and application sensitivities)

Identify risk elimination and containment action plans

Track the status of qualification progress

Typical BOM review participants include design engineering, component engineering, test engineering, manufacturing engineering, reliability engineering, and purchasing. The specific issues that are discussed and evaluated include

Component (part) life cycle risk, i.e., end of life and obsolescence

Criticality of component to product specification

Availability of SPICE, timing, schematic, simulation, fault simulation, and testability models

Test vector coverage and ease of test

Construction analysis of critical components (optional)

Part availability (sourcing) and production price projections

Failure history with part, supplier, and technology

Supplier reliability data

Known failure mechanisms/history of problems

Responsiveness, problem resolution, previous experience with proposed suppliers

Financial viability of supplier

Part already qualified versus new qualification and technology risks

Compatibility with manufacturing process

Supplier qualification status

Application suitability and electrical interfacing with other critical components

A typical BOM review process flow is presented in Table 20. A note of caution needs to be sounded: a potential problem with this process is that part and supplier needs do change as the design evolves and the timeline shortens, causing people to go back to their non-concurrent, over-the-wall habits (comfort zone).

An organization needs to have some group champion and drive this process.

At Tandem/Compaq Computer Corp., component engineering was the champion organization and owned the BOM review process.


Design reviews, like BOM reviews, are an integral part of the iterative design process and should be conducted at progressive stages throughout the design cycle and prior to the release of the design to manufacturing. Design reviews are important because design changes made after the release of a design to manufacturing are extremely expensive, particularly where retrofit of previously manufactured equipment is required. The purpose of the design review is to provide an independent assessment (a peer review) to make sure nothing has been overlooked and to inform all concerned parties of the status of the project and the risks involved.

A design review should be a formally scheduled event where the specific design or design methodology to be used is submitted to the designer's/design team's peers and supervisors. The members of the design review team should come from multiple disciplines: circuit design, mechanical design, thermal design, PWA design and layout, regulatory engineering (EMC and safety), test engineering, product enclosure/cabinet design, component engineering, reliability engineering, purchasing, and manufacturing. This ensures that all viewpoints receive adequate consideration. In small companies without this breadth of knowledge, outside consultants may be hired to provide the required expertise.

Each participant should receive, in advance, copies of the product specification, design drawings, schematic diagrams and data, the failure modes and effects analysis (FMEA) report, the component derating list and report, current reliability calculations and predictions, and the BOM review status report. The product manager reviews the product specification, the overall design approach being used, the project schedule, the design verification testing (DVT) plan, and the regulatory test plan, along with the schedules for implementing these plans.

Each peer designer (electrical, thermal, mechanical, EMC, and packaging) evaluates the design being reviewed, and the other team members (test engineering, EMC and safety engineering, manufacturing, service, materials, purchasing, etc.) summarize how their concerns have been factored into the design. The component engineer summarizes the BOM review status and open action items as well as the supplier and component qualification plan. The reliability engineer reviews the component risk report, the FMEA report, and the reliability prediction. Approval of the design by management is made with a complete understanding of the work still to be accomplished, the risks involved, and a commitment to providing the necessary resources and support for the required testing.

At each design review an honest, candid, and detailed appraisal of the design methodology, implementation, safety margins/tolerances, and effectiveness in meeting stated requirements is conducted. Each of the specified requirements is compared with the present design to identify potential problem areas for increased attention or for possible reevaluation of the need for that requirement. For example, one of the concerns identified at a design review may be the need to reapportion reliability to allow a more equitable distribution of the available failure rate among certain functional elements or components. It is important that the results of the design review are formally documented with appropriate action items assigned.

A final design review is conducted after all testing, analysis, and qualification tasks have been completed. The outcome of the final design review is concurrence that the design satisfies the requirements and can be released to manufacturing/production.

Small informal design reviews are also held periodically to assess specific aspects or elements of the design. These types of design reviews are much more prevalent in smaller-sized entrepreneurial companies.


Many of the techniques for optimizing designs that were useful in the past are becoming obsolete as a result of the impact of the Internet. Bringing a product to market has traditionally been thought of as a serial process consisting of three phases: design, new product introduction (NPI), and product manufacturing. But serial methodologies are giving way to concurrent processes as the number and complexity of interactions across distributed supply chains increase. Extended enterprises mean more companies are involved, and the resulting communications issues can be daunting to say the least. Original equipment manufacturers and their supply chain partners must look for more efficient ways to link their operations. Because 80% of a product's overall costs are determined in the first 20% of the product development process, the ability to address supply chain requirements up front can significantly improve overall product costs and schedules.

Today's OEMs are looking for the "full solution" to make the move to supply chain-aware concurrent design. The necessary ingredients required to make this move include:

1. Technology and expertise for integrating into multiple EDA environments.

2. Advanced Internet technologies to minimize supply chain latency.

3. Technologies that automate interactions in the design-to-manufacturing process.

4. Access to supply chain intelligence and other informational assets.

5. An intimate knowledge of customer processes.

The new services available in bringing a product to market collaboratively link the design and supply chain. OEMs and their supply chain partners will create new competitive advantages by integrating these technologies with their deep understanding of design-to-manufacturing processes.

Traditionally, interdependent constraints between design and supply chain processes have been addressed by CAD and material management functions with in-house solutions. As OEMs increasingly outsource portions of their supply chain functions, many in-house solutions that link design and supply chain functions need to be reintegrated. OEMs are working with supply chain partners to facilitate and streamline dialogue that revolves around product design, supply management, and manufacturing interdependencies. Questions such as the following need to be addressed: Which design decisions have the most impact on supply constraints? How will my design decisions affect NPI schedules? What design decisions will result in optimizing my production costs and schedules?

20.1 Design Optimization and Supply Chain Constraints

As mentioned previously, the three phases of bringing a product to market (also called the product realization process) are design, new product introduction, and production manufacturing. Let's focus on design. The design phase consists of a series of iterative refinements (as discussed earlier in this section). These refinements are a result of successive attempts to resolve conflicts, while meeting product requirements such as speed, power, performance, cost, and schedules.

Once these requirements are satisfied, the design is typically handed off to supply chain partners to address material management or production requirements.

Iterative refinements are an integral part of the design process. These iterations explore local requirements that are resolved within the design phase. Constraints that are explored late in the process contribute to a majority of product realization failures. Design iterations that occur when materials management or manufacturing constraints cannot be resolved downstream must be avoided as much as possible. These iterative feedback or learning loops are a primary cause of friction and delay in the design-to-manufacturing process.

A change in the product realization process introduces the notion of concurrent refinement of design, NPI, and production manufacturing requirements. This process shift recognizes the value in decisions made early in the design process that consider interdependent supply chain requirements. In the concurrent process, optimization of time to volume and time to profit occurs significantly sooner.

A big cause of friction in the product realization process is the sharing of incomplete or inconsistent design data. Seemingly simple tasks, such as part number cross-referencing, notification of part changes, and access to component life cycle information, become prohibitively expensive and time consuming to manage. This is especially true as the product realization process involves a greater number of supply chain partners.
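Part-number cross-referencing is mechanically simple but error-prone when done by hand across many partners. A minimal sketch of automating it, assuming a lookup table that maps internal part numbers to manufacturer part numbers (all part numbers below are invented):

```python
def cross_reference(bom_parts, xref):
    """Map each internal part number in a BOM to a manufacturer part
    number via a cross-reference table; report parts that cannot be
    resolved so they can be reconciled manually."""
    resolved, unresolved = {}, []
    for internal_pn in bom_parts:
        mfr_pn = xref.get(internal_pn)
        if mfr_pn is None:
            unresolved.append(internal_pn)
        else:
            resolved[internal_pn] = mfr_pn
    return resolved, unresolved

# Hypothetical cross-reference table and BOM for illustration.
xref = {"100-0001": "TI-SN74LVC00", "100-0002": "GRM155R71C104K"}
resolved, unresolved = cross_reference(["100-0001", "100-0002", "100-0003"], xref)
```

The unresolved list is exactly the gap the text describes: parts that arrive over an electronic connection but remain incomprehensible to the receiving partner until someone reconciles them.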

This new process requires new technologies to collaboratively link design and supply chain activities across the distributed supply chain. These new technologies fall into three main categories:

1. Supply chain integration technology that provides direct links to design tools. This allows preferred materials management and manufacturing information to be made available at the point of component selection: the designer's desktop.

2. Bill of materials collaboration and notification tools that support the iterative dialogue endemic to concurrent methodologies. These tools must provide a solution that supports exploratory design decisions and allows partners to deliver supply chain information services early in the design-to-manufacturing process.

3. Data integration and data integrity tools to allow for automated sharing and reconciliation of design and supply chain information. These tools ensure that component selections represented in the bill of materials can be shared with materials management and manufacturing suppliers in an efficient manner.
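The third category, data integration and data integrity, reduces to automated reconciliation of BOM data between partners. A minimal sketch, assuming each side's BOM is represented as a mapping of part number to quantity (the part numbers and quantities are invented for illustration):

```python
def reconcile(design_bom, supplier_bom):
    """Compare a design BOM against a supplier's materials list.
    Returns parts missing at the supplier and parts whose quantities
    disagree, so the two views can be reconciled automatically."""
    missing = sorted(set(design_bom) - set(supplier_bom))
    mismatched = sorted(pn for pn in design_bom
                        if pn in supplier_bom and design_bom[pn] != supplier_bom[pn])
    return missing, mismatched

# Hypothetical BOMs for illustration only.
design = {"CAP-0402-100N": 12, "RES-0603-10K": 8, "IC-FPGA-XL2": 1}
supplier = {"CAP-0402-100N": 12, "RES-0603-10K": 6}
missing, mismatched = reconcile(design, supplier)
```

Running such a check on every BOM exchange catches the incomplete or inconsistent data before it propagates into materials management and manufacturing.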

20.2 Supply Chain Partner Design Collaboration

Electronics distributors and EMS providers have spent years accumulating supply chain information, building business processes, and creating customer relationships. For many of these suppliers, the questions they now face include how to use this wealth of information to enhance the dynamics of the integrated design process and ensure that the content remains as up-to-date as possible.

With this in mind, distributors, suppliers, and EMS providers are combining existing core assets with new collaborative technologies to transform their businesses. The transformation from part suppliers and manufacturers to high-value product realization partners focuses on providing services which allow their customers to get products designed and built more efficiently. One such collaborative effort is that between Cadence Design Systems, Flextronics International, Hewlett-Packard, and Avnet, who have partnered with SpinCircuit to develop new technologies that focus on supply chain integration with the design desktop.

This collaboration was formed because of the need for a full solution that integrates new technology, supply chain services, information assets, and a deep understanding of design-to-manufacturing processes.

Electronic design automation (EDA) companies have provided concurrent design methodologies that link schematic capture, simulation, and PC board layout processes. In addition, some of these companies also provide component information systems (CIS) to help designers and CAD organizations manage their private component information.

However, what has been missing is concurrent access to supply chain information available in the public domain, as well as within the corporate walls of OEMs and their supply chain partners. Because the design process involves repeated refinements and redesigns, supply chain information must be embedded into design tools in an unencumbering manner or it will not be considered.

Flextronics International, a leading EMS provider, has installed SpinCircuit's desktop solution. This solution can be launched from within EDA design tools and allows design service groups to access supply chain information from within their existing design environment. SpinCircuit currently provides seamless integration with Cadence Design Systems and Mentor Graphics schematic capture environments and is developing interfaces to other leading EDA tools.

The desktop solution provides designers and component engineers with access to both private and public component information. The results are displayed side by side in a single component selection window. Designers and component engineers can access component information such as schematic symbols, footprints, product change notifications (PCNs), pricing, availability, and online support. Users can also "punch out" to access additional information such as data sheets and other component-specific information available on supplier sites.

SpinCircuit's desktop solution provides material management and manufacturing groups with the ability to present approved vendor lists (AVLs) and approved materials lists (AMLs). Component selection preference filters can be enabled to display preferred parts status. These preference filters provide optimization of NPI and manufacturing processes at the point of design and prevent downstream "loopbacks."

Another critical challenge faced by supply chain partners is access to BOM collaboration tools and automated notification technology. Early in the design phase, this solution links partners involved in new product introduction and allows OEMs to share the content of their BOMs with key suppliers. The transmission of a bill of materials from designers to EMS providers and their distribution partners is an issue that needs to be addressed. Typically, component engineers must manually cross-reference parts lists to make sense of a BOM. In other words, the supply chain may be connected electronically, but the information coming over the connection may be incomprehensible.
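An AVL/AML preference filter of the kind described here can be sketched as a simple classification of candidate parts at selection time. The part numbers, status names, and three-tier scheme below are assumptions for illustration, not SpinCircuit's actual implementation:

```python
def classify_candidates(candidates, preferred, approved):
    """Annotate each candidate part with its supply chain status so a
    designer sees preferences at the point of component selection.
    'preferred' and 'approved' are sets of part numbers from the
    AVL/AML; anything else is flagged as unapproved."""
    status = {}
    for pn in candidates:
        if pn in preferred:
            status[pn] = "preferred"
        elif pn in approved:
            status[pn] = "approved"
        else:
            status[pn] = "unapproved"
    return status

# Hypothetical candidate list and AVL/AML sets for illustration.
status = classify_candidates(["P1", "P2", "P3"],
                             preferred={"P1"},
                             approved={"P1", "P2"})
```

Surfacing the "unapproved" flag while the part is still on the designer's screen is what prevents the downstream loopbacks the text describes.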

These gaps break the flow of information between OEMs, distributors, parts manufacturers, and EMS providers. They are a significant source of friction in the design-to-manufacturing process. Seemingly simple but time-consuming tasks are cross-referencing part numbers, keeping track of PCN and end-of-life (EOL) notifications, and monitoring BOM changes. These tasks are especially time consuming when they are distributed throughout an extended enterprise.

Avnet, a leading electronics distributor, uses solutions from SpinCircuit to accelerate their new product introduction services and improve their customers' ability to build prototypes. SpinCircuit provides tools and technology to streamline bill-of-materials processing. Each time a BOM is processed, SpinCircuit's BOM analysis tools check for PCNs, EOL notifications, and design changes to identify and reconcile inconsistent or incomplete data that may impact NPI processes.
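The core of such a BOM analysis pass is checking each part against outstanding notification lists. A minimal sketch, assuming the PCN and EOL notifications are available as sets of affected part numbers (the part numbers below are invented):

```python
def flag_notifications(bom_parts, pcn_parts, eol_parts):
    """Scan a BOM and return, for each affected part, the list of
    outstanding notification tags (PCN and/or EOL) so they can be
    reconciled before they disrupt NPI."""
    flags = {}
    for pn in bom_parts:
        tags = []
        if pn in pcn_parts:
            tags.append("PCN")
        if pn in eol_parts:
            tags.append("EOL")
        if tags:
            flags[pn] = tags
    return flags

# Hypothetical BOM and notification sets for illustration.
flags = flag_notifications(["U1", "U2", "U3"],
                           pcn_parts={"U2"},
                           eol_parts={"U2", "U3"})
```

Running this on every BOM submission means a part that went end-of-life between design iterations is caught at processing time rather than on the production floor.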
