This post describes why many electronics products are retested over and over during a typical manufacturing flow, and the problems associated with this behaviour.
It also serves as an introduction to our Retest Cost Calculator, which provides a ballpark estimate of the costs that go into retesting, how retesting relates to unit cost, and how it affects test system utilization and throughput.
Is retesting a problem?
“Perhaps the operator suspects that there is poor connectivity from the test interface board, so he or she applies some force to the board. A trick they have picked up over the years, one that has worked many times before.”
It is a well-established fact that most electronics manufacturing companies retest their products more than is ideal. It is a very familiar scenario for companies with outsourced manufacturing, but also for OEM companies with in-house manufacturing. Unfortunately, it is also a problem that is almost invisible. One of the use cases of Test Data Management for Electronics Manufacturing is to make these occurrences and trends explicit, so that you can use this data to make informed decisions.
The consequences of retesting
The consequences of this practice can be both expensive and damaging.
On the less severe end of the spectrum, the only consequence of retesting a product is that it takes time and reduces test throughput. It could be something as simple as the test operators having the impression that the test system itself is the source of the problems. If a test fails, then it is only natural to test again and again, hoping that whatever problem they suspect will ease off.
Maybe the test limits are set incorrectly, and the average measurements sit very close to one of the limits.
Perhaps the operator suspects that there is poor connectivity from the test interface board, so he or she applies some force to the board. A trick they have picked up over the years, one that has worked many times before.
And behold, the product passes the test. Ready to move to the next stage. Or out to the customer.
When does retesting become concerning?
On the more severe end of the spectrum, you will find the exact same scenario. The product is failing. All of the accumulated experience of the test operator says that this is because of the test system.
Retesting is the obvious response. And again. And again. Apply some pressure, then test again.
The problem now though is that there is nothing wrong with the connectivity of the interface board. It is one of the capacitors that is not properly connected.
As the test operator applies pressure to the system, the capacitance reaches just the right value to allow the unit to pass.
Ready for the next stage…
Ready to be shipped to a happy customer…
You would think that these scenarios are rare.
You would perhaps think that the operators can distinguish between a faulty unit and a poor test system.
Our experience, working with several industry-leading companies in this sector, is that this is far from rare. It happens all over the place, from high-volume consumer electronics to FDA-regulated products. These companies lack the insight provided by the Retesting Chart found in WATS.
They simply lack the necessary data to bring the problem into the light.
You might think that implementing Forced Routing will fix this. But unless you have good control of your test systems’ accuracy, you are dead wrong. More about that in a later post.
Fix one and the other follows.
Fortunately, though, the two sides of the severity spectrum are closely entangled.
In simple terms, the common behaviour is a tendency to deploy manufacturing test systems and then never continuously optimize their accuracy, which in turn allows this undesired culture to manifest. New Product Introduction is a golden opportunity to make this improvement, to ensure that your limits are well suited to the actual unit specification.
Before the high volumes kick in. Before your focus inevitably shifts to other product releases.
In our top-down approach to quality assurance, the natural tool for this set of problems is Process Capability Analysis, applied at an early stage.
Making sure that you don’t leave much room for measurement outliers.
That your measurements are not clustered towards one of the limits.
And that when a product fails the test it is most likely a product issue.
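To make this concrete, here is a minimal sketch of how a process capability check could look on raw measurement data. This is an illustrative example, not WATS functionality: the function name, spec limits, and sample readings are all hypothetical, and the usual Cp/Cpk formulas assume approximately normally distributed measurements.

```python
import statistics

def process_capability(measurements, lsl, usl):
    """Estimate Cp and Cpk for measurements against lower/upper
    spec limits (LSL/USL), assuming a roughly normal distribution."""
    mean = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)
    cp = (usl - lsl) / (6 * sigma)                    # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # penalizes off-center process
    return cp, cpk

# Hypothetical readings clustered near the upper limit of 10.0:
# every unit passes today, but Cpk reveals how little headroom is left.
readings = [9.7, 9.8, 9.6, 9.9, 9.75, 9.85, 9.7, 9.8]
cp, cpk = process_capability(readings, lsl=8.0, usl=10.0)
```

A large gap between Cp and Cpk, or a Cpk below a common threshold such as 1.33, is exactly the "clustered towards one of the limits" situation described above: failures start to say more about measurement spread than about the product.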
Only then can you make sure that you have solid grounds to kill the retesting culture and stay ahead of the curve.
If you suspect that this is a problem in your organization, make sure to sign up for a free trial to see what your test data looks like.