
Is Statistical Process Control still Relevant for Electronics Manufacturing?

January 4th, 2023

Statistical Process Control (SPC) has long been an essential tactic for companies to ensure high product quality. In modern electronics manufacturing, however, the complexities involved violate SPC’s fundamental requirement of process stability. Combined with the ever-growing amount of data collected, this renders SPC worthless as a high-level approach to quality management. An approach following the Manufacturing Intelligence and Lean Six Sigma philosophies is superior at identifying and prioritizing the improvement initiatives that matter.

In this post, we take a closer look at why.

Historical Aspects

Statistical Process Control was introduced in the 1920s, designed to address the manufacturing of that era. The purpose was early detection of undesired behaviour, allowing for early intervention and improvement. The limitations of SPC were set by the Information Technology available at the time, a landscape completely different from today’s. Tracing Moore’s Law back, it is easy to accept that IT capabilities and product complexity were nothing like today’s. In fact, the measurement volumes from manufacturing operations back then bear no comparison with the current situation. Add to that complexity factors such as globalized markets driving up manufacturing volumes, and the result is an amount of output data that is incomprehensible by 1920s standards.

Fundamental Limitations of SPC

Statistical Process Control (SPC) appears to still hold an important position with Original Equipment Manufacturers (OEMs) in electronics. It is found in continuous manufacturing processes, calculating control limits and detecting out-of-control process parameters. In theory, such control limits help visualize when things turn from good to worse. A fundamental assumption of SPC is that you have removed the common cause variations from the process, meaning that all remaining variation is of the special cause kind: the parameters you need to worry about when they start to drift.
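
To make that assumption concrete, here is a minimal sketch of how such control limits are typically derived. The voltage readings are hypothetical, and the sample standard deviation is used for brevity; a textbook X-mR chart would estimate sigma from the average moving range instead.

```python
import numpy as np

def control_limits(samples):
    """Center line and +/-3-sigma limits for a simple control chart.

    A minimal sketch, assuming the textbook situation SPC depends on:
    a stable process where only common cause variation remains.
    """
    x = np.asarray(samples, dtype=float)
    center = x.mean()
    sigma = x.std(ddof=1)  # sample std for brevity; X-mR uses moving ranges
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical voltage readings from a stable process...
readings = [4.98, 5.01, 5.00, 4.99, 5.02, 5.00, 5.01]
lcl, cl, ucl = control_limits(readings)

# ...and a new measurement that drifts past the upper control limit.
new_reading = 5.17
print(f"UCL={ucl:.3f}, new={new_reading}, alarm={new_reading > ucl}")
```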

An electronics product today can contain hundreds of components. It will undergo many design modifications due to things such as component obsolescence. It will be tested at various stages during the assembly process, and will see multiple firmware revisions, test software versions, test operators, variations in environmental factors, and so forth.

Example of High Dynamics

An example of this is Aidon, manufacturer of Smart Metering products. According to their Head of Production, Petri Ounila, an average production batch

  • contains 10,000 units
  • has units containing over 350 electronic components each
  • experiences more than 35 component changes throughout the build process

This gives them a “new” product or process roughly every 280th unit (10,000 units spread across more than 35 component changes). On top of that come changes to the test process, fixtures, test programs, instrumentation and more. The result is an estimated average of a “new process” every 10th unit or less; put differently, 1,000 different processes over the course of a single batch.

How would you begin to eliminate common cause variations here?

And even if you managed, how would you go about implementing the alarming system? A tool in Statistical Process Control, developed by the Western Electric Company back in 1956, is known as the Western Electric Rules, or WECO. It specifies rules under which a violation justifies investigation, depending on how far observations stray, in standard deviations, from the center line. One problematic feature of WECO is that, on average, it triggers a false alarm every 91.75 measurements, even when the process is perfectly stable.
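
To illustrate, here is a minimal sketch of two of the four classic WECO rules, run against simulated data from a perfectly stable process; rules 2 and 3 follow the same windowing pattern. Even pure noise trips the rules now and then.

```python
import numpy as np

def weco_alarms(x, mu, sigma):
    """Flag indices violating two of the four classic WECO rules."""
    z = (np.asarray(x) - mu) / sigma
    alarms = set()

    # Rule 1: a single point beyond the 3-sigma limits.
    alarms.update(np.flatnonzero(np.abs(z) > 3).tolist())

    # Rule 4: eight consecutive points on the same side of the center line.
    side = np.sign(z)
    for i in range(len(side) - 7):
        window = side[i:i + 8]
        if window[0] != 0 and np.all(window == window[0]):
            alarms.add(i + 7)

    return sorted(alarms)

# A perfectly stable process: pure noise, no special causes at all.
rng = np.random.default_rng(0)
stable = rng.normal(loc=0.0, scale=1.0, size=1000)
print(weco_alarms(stable, mu=0.0, sigma=1.0))  # still prints false alarms
```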

False Alarms Everywhere with SPC!

Let’s say you have an annual production output of 10,000 units. Each gets tested through 5 different processes, and each process takes an average of 25 measurements. Assuming 220 working days per year, combining these numbers yields on average 62 false alarms per day.
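
The arithmetic behind that figure, spelled out:

```python
# Back-of-the-envelope calculation for the false-alarm figure above.
units_per_year = 10_000
processes_per_unit = 5
measurements_per_process = 25
working_days_per_year = 220
false_alarm_rate = 1 / 91.75  # WECO: one false alarm per 91.75 measurements

measurements_per_day = (units_per_year * processes_per_unit
                        * measurements_per_process) / working_days_per_year
print(round(measurements_per_day * false_alarm_rate))  # ~62 false alarms/day
```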

Let’s repeat that: assuming you, against all odds and reason, were able to remove common cause variations, you would still be receiving 62 alarms every day. People receiving 62 emails per day from a single source would likely mute them, leaving important announcements unacknowledged and without follow-up. SPC-savvy users will likely argue that there are ways to reduce this with newer and improved analytical methods. “There are the Nelson Rules, we have AIAG, you should definitely try the Juran rules! What about this ground-breaking state-of-the-art chart developed in the early 2000s, given it a go yet?”

So what? Even if we managed to reduce the number of false alarms to 5 per day, would that constitute a strategic alarming system? Adding actual process dynamics to the mix, can SPC provide a system manufacturing managers can rely on, one that keeps their concerns and ulcers at bay?

Enter KPIs

What most manufacturers do is make assumptions about a limited set of important parameters to monitor, then carefully track these by plotting them in Control Charts, X-mR Charts or whatever they use to try to separate the wheat from the chaff. These KPIs are often captured and analyzed well downstream in the manufacturing process, often after multiple units have been combined into a system.

Figure: Monitoring downstream KPIs in Statistical Process Control.

An obvious consequence of this is that problems are not detected where they happen, as they happen.

The origin could easily be one of the upstream components, manufactured a month ago in a batch that by now has reached 50,000 units. A cost-failure relationship known as the 10x rule says that for each step in the manufacturing process a failure is allowed to travel, the cost of fixing it increases by a factor of 10. A failure found at the system level can mean that technicians need to pick the product apart, allowing new problems to arise. Should the failure be allowed to reach the field, the cost implications can be catastrophic.
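
As a rough illustration of how quickly that compounds (the stage names and the $1 starting cost are assumptions for the example, not part of the rule itself):

```python
# The 10x rule as plain arithmetic: each escaped process step multiplies
# the cost of fixing the failure by ten.
base_cost = 1.0  # hypothetical cost of fixing the failure at its origin
for steps_escaped, stage in enumerate(["component", "board", "system", "field"]):
    print(f"{stage:>9}: ${base_cost * 10 ** steps_escaped:,.0f}")
```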

There are multiple examples from modern times of firms having to declare bankruptcy, or seek protection from it, due to the prospect of massive recalls. A recent example is Takata filing for bankruptcy after a massive recall of airbag components that may exceed 100 million units.

Figure: The 10x cost rule of failures in manufacturing.

One of the big inherent flaws of Statistical Process Control, by the standards of modern approaches such as Lean Six Sigma, is that it makes assumptions about where problems come from. This is an obvious consequence of assuming stability in what are, in reality, highly dynamic factors, as mentioned earlier. Trending and tracking a limited set of KPIs only amplifies this flaw, which in turn kicks off improvement initiatives likely to miss your most pressing or most cost-efficient issues.

A Modern Alternative to SPC

All of this is accounted for in modern methods for Quality Management and Test Data Management. In electronics manufacturing, it starts with an honest recognition and monitoring of your First Pass Yield (FPY); True FPY, to be more precise. “True” means that every kind of failure must be accounted for, even if it only came from a test operator forgetting to plug in a cable. Every test after the first represents waste: resources the company could have spent better elsewhere. FPY is your single most important KPI; still, most OEMs have no real idea what theirs is.
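
A minimal sketch of the distinction, over hypothetical (unit, step, passed) test records in process order: only the first attempt at each step counts, so a retest pass cannot repair the yield figure.

```python
from collections import defaultdict

def true_fpy(test_records):
    """Share of units that passed every test step on the first attempt."""
    first_attempt = {}  # (unit, step) -> result of the first attempt only
    for unit, step, passed in test_records:
        first_attempt.setdefault((unit, step), passed)

    all_passed = defaultdict(lambda: True)  # unit -> no first-attempt failure
    for (unit, _step), passed in first_attempt.items():
        all_passed[unit] &= passed

    return sum(all_passed.values()) / len(all_passed)

records = [
    ("U1", "ICT", True), ("U1", "FCT", True),
    ("U2", "ICT", False), ("U2", "ICT", True), ("U2", "FCT", True),  # retest
]
print(true_fpy(records))  # 0.5 -- U2's retest pass does not restore FPY
```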

Figure: Using First Pass Yield rather than SPC to prioritize.

Real-Time Dashboards and drill-down capabilities allow you to quickly identify the contributors to poor performance. In the dashboard shown here, it is apparent that Product B has a single failure contributing to around 50% of the waste. There is no guarantee that Step 4 would be included among the KPIs monitored by an SPC system, but it is critical that the trend is brought to your attention.

Live Dashboards

Knowing your FPY, you can break it down in parallel across different products, product families, factories, stations, fixtures, operators, and test operations. Having this data available in real time as Dashboards gives you a powerful overview. It lets you quickly drill down to the real origin of poor performance and make informed interventions. Sharing this insight as live dashboards with all involved stakeholders also enhances quality accountability.
A good rule of thumb for dashboards: people won’t act on information unless it is brought to them; nobody has time to go looking for trouble.

As a next step, you must be able to quickly drill down to a Pareto view of your most frequently occurring failures across any of these dimensions. At that point, SPC tools may well become relevant for digging into the details. But now you know you are applying them to something of high relevance, not to an educated guess. You suddenly find yourself in a position to prioritize initiatives based on a realistic cost-benefit ratio.
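
A minimal sketch of such a Pareto view, using pandas over hypothetical failure records (the column names are assumptions, not any particular tool’s schema):

```python
import pandas as pd

# One row per first-attempt failure, tagged by product and test step.
failures = pd.DataFrame({
    "product": ["B", "B", "B", "A", "B", "A", "B"],
    "step":    ["Step 4", "Step 4", "Step 4", "Step 1",
                "Step 4", "Step 2", "Step 9"],
})

# Failure counts per product/step, worst first, with cumulative share.
pareto = (failures.groupby(["product", "step"]).size()
          .sort_values(ascending=False).rename("count").to_frame())
pareto["cum_share"] = pareto["count"].cumsum() / pareto["count"].sum()
print(pareto)  # B / Step 4 dominates, mirroring the dashboard example above
```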

Repair Data

The presence of repair data in your system is also critical, and it cannot live exclusively in an MES system. Among other benefits, repair data supplies context that improves root-cause analysis. From a human-resource point of view, it can also tell you whether products are blindly retested until normal process variation lands the measurements within the pass-fail limits, or whether the product is taken out of the standard manufacturing line and fixed as intended. In short, quality-influencing actions come from informed decisions. Unless you have a data management approach that gives you the complete picture across multiple operational dimensions, you can never optimize your product and process quality, or your company’s profits.
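
A minimal sketch of one such check, over hypothetical test and repair logs: flag any unit that went from fail to pass at the same step with no repair recorded in between, a likely blind retest.

```python
import pandas as pd

tests = pd.DataFrame({
    "unit":   ["U7", "U7", "U9", "U9"],
    "step":   ["FCT", "FCT", "FCT", "FCT"],
    "time":   pd.to_datetime(["09:00", "09:05", "10:00", "10:40"]),
    "passed": [False, True, False, True],
})
repairs = pd.DataFrame({
    "unit": ["U9"],
    "time": pd.to_datetime(["10:20"]),
})

for (unit, step), g in tests.sort_values("time").groupby(["unit", "step"]):
    fails, passes = g[~g["passed"]], g[g["passed"]]
    if fails.empty or passes.empty:
        continue  # never failed, or never recovered -- not a retest case
    t_fail, t_pass = fails["time"].iloc[0], passes["time"].iloc[-1]
    repaired = ((repairs["unit"] == unit)
                & repairs["time"].between(t_fail, t_pass)).any()
    if not repaired:
        print(f"{unit}/{step}: passed on retest with no repair logged")
```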

You can’t fix what you don’t measure.
