Manufacturing Intelligence (MI), or Test Data Management (TDM), in the electronics manufacturing domain, is a discipline rapidly growing in popularity.
The popularity is largely due to an increasing competitive pressure to get better control and understanding of cost and product quality.
This in turn will allow companies to sell their product at lower prices, retain more earnings, or invest more money into research and development initiatives.
In this article, we will outline the key traits of Manufacturing Intelligence as used in the manufacturing of electronics: what fundamental framework components are required, how these components support and enable the various features such a solution requires, and how these features support the day-to-day use cases that allow the OEM to realize significant cost and quality benefits.
In this post we will take a closer look at the following topics:
The purpose of Test Data Management in Electronics Manufacturing
The main business reason companies install a Test Data Management solution is, as mentioned, the need to better control costs and profits. You can get there by using TDM in combination with many different improvement methods, such as Lean Six Sigma, or even ad-hoc data-informed problem resolution. And it has the potential to impact many parts of your company, not just manufacturing.
A common assumption is that test data primarily has relevance to quality assurance staff. But this is far from reality. A well-designed TDM solution serves a strategic purpose in companies looking to stay ahead of the curve.
- It helps R&D better understand the performance of their designs, helping them increase their available Innovation Budget.
- It informs manufacturing on how well different test processes are performing, and the reasons for this.
- Quality Assurance can better cut through the clutter to understand what the real quality concerns are.
- It guides aftermarket services towards the underlying reasons for warranty claims and field failures.
- And it helps everyone to collaborate better, by using the same data to make informed decisions.
“The reason why a global understanding of manufacturing is needed is not limited to the need of corporate management looking to stay on top of things.”
Test Data Management Benefits, Objectives, and the Occasional Problems
Using TDM for Global Overview of Electronics Manufacturing
The reason why a global understanding of manufacturing is needed is not limited to the need of corporate management looking to stay on top of things. Very often a distributed manufacturing setup has many common components. You might have a standard test platform used across all sites. One specific product might be manufactured at many different locations. Or the sub-components used in an assembled module come from different factories. The list goes on.
Unless staff members working on fixing problems can see the full picture, they will miss out on important information, restricting their ability to make effective decisions.
Tools such as Microsoft PowerBI or Tableau can also help visualize certain aggregated global data parameters together with other data sources. But they should only be considered a supplement to a Test Data Management solution.
Data-supported Decision Making
The purpose of having this global overview is to use the data to decide which problems or scenarios deserve your attention the most.
It is about taking the loudest voice out of the equation.
And giving all the stakeholders the opportunity to back their claims up with data that everyone can relate to. The loudest voice might work in getting something done within a group. But is that something the important thing? When there are several groups, such as R&D, Manufacturing, Services and third parties, there are going to be many loud voices, shouting in “different languages” at times.
The power of data-supported decision making includes a reduction in the time consumed by bureaucracy. If the data is good enough, arguments almost magically shift to different contexts and ways of interpreting the results, leading to processes that are far more constructive and results oriented. Using data to second-guess yourself is as important as using data to convince others. After all, who hasn’t ever jumped to a conclusion, and defended that stance like it was a truth carved in stone?
The Holy Grail in Electronics Manufacturing
Having an accurate True First Pass Yield is close to a holy grail in manufacturing, because it is a powerful indicator of how well you are doing. But this metric is highly aggregated, drawn from a matrix of different sources. Only when you are able to disaggregate these statistics can you use them to form your hypotheses. And only when you are able to compare the underlying data behind your hypothesis to the wider data set can you argue that correlation or causality actually exists. As a simple example: without data, how can you determine whether the latest problem to surface comes from the newest product revision, and not from the test interface board you introduced with it? Or from the new higher-resolution Digital Multi-Meter you needed to add to your ATE? Disaggregation and comparison are key here.
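As a rough sketch of what disaggregation means in practice, the snippet below computes first pass yield grouped by an arbitrary meta-parameter. The field names and records are invented for illustration, not any real report schema:

```python
from collections import defaultdict

# Hypothetical first-run test records with illustrative meta-data fields.
records = [
    {"serial": "A1", "revision": "R2", "fixture": "F1", "first_run_passed": True},
    {"serial": "A2", "revision": "R2", "fixture": "F2", "first_run_passed": False},
    {"serial": "A3", "revision": "R1", "fixture": "F1", "first_run_passed": True},
    {"serial": "A4", "revision": "R2", "fixture": "F2", "first_run_passed": False},
]

def yield_by(records, key):
    """First pass yield, disaggregated by a chosen meta-parameter."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r[key]] += 1
        passed[r[key]] += r["first_run_passed"]
    return {k: passed[k] / total[k] for k in total}

print(yield_by(records, "revision"))  # revision R2 fails more often than R1
print(yield_by(records, "fixture"))   # ...but every failure sits on fixture F2
```

In this toy data set, grouping by revision and by fixture tells two different stories, which is exactly why you compare the same data through several meta-parameters before forming a hypothesis.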
Fast Response Time from any Problem to Resolution
Global operations, outsourcing, and more complex supply chains in general often move problems physically further away from the people with the key competencies for fixing them. Very often it is no longer as simple as knocking on their door, or calling them up on Skype. They are on a different continent, or in a different organization. Oftentimes the person who first becomes aware of the problem doesn’t even know of their existence. So the threshold for initiating a collaboration is extremely high, mostly reserved for the problems most visible on the floor.
There are some key ingredients in effective supply-chain integration. One is moving away from this state to a place where all problems are instantly visible, and data is accessible to every relevant person. A state where everyone can identify and nominate a problem, and make sure it gets routed to the most appropriate person. The problems at a sub-contractor, for example, most definitely also belong to the OEM. Who do you think actually picks up the bill at the end of the day?
Full Traceability in Manufacturing
The importance of traceability will differ between industries. Regulated industries might need detailed traceability during the manufacturing process, to ensure that a product going out the door meets all requirements. Others need backwards traceability for all returned defective products, so that they can properly check whether the cause of failure was down to manageable factors in manufacturing or design, and assess the warranty costs linked to the problem. This traceability then feeds back into the ability to use data to make effective decisions.
The loudest voice might point to a high number of warranty claims for a given product revision, arguing that a component supplier needs to be taken off the list. The person with the data might disagree, pointing out, and visualizing, that the factory escapes were in fact due to inappropriate test limits, and that the RMA volume roughly equals the manufacturing scrap or repair volumes for the other revisions. The outcomes of these two arguments might be very different.
The Benefits of Collaboration
There is rarely a single root cause for a problem in manufacturing. Causes come from many different places. Those familiar with Fishbone, or Ishikawa, diagrams, used in Lean Six Sigma, know that the causal relationships can be many, complex, and inter-departmental. That means a critical Manufacturing Intelligence benefit has to be the ability to use the same data to investigate a problem from any angle in the Fishbone model.
If all the relevant data and meta-data live in the same portal, the foundation for agreeing and exchanging constructive input improves. Often this requires some enrichment of your data: you need to add meta-data addressing the different categories of causes, such as product revision, test fixture ID, and test operator, to name a few.
Standardized naming conventions for test steps make cross-product comparisons much more powerful. We have also seen many cases where the new-found value of the test data results in manufacturing engineers becoming more involved in the early stages of product design, contributing to the Design for Manufacturing aspect of product introduction.
Remember that Manufacturing Intelligence has a Cost?
As with anything in life, there is no such thing as a free lunch; adding value comes at some kind of cost. Companies that decide to buy an out-of-the-box Manufacturing Intelligence solution will have to invest in licenses. If they decide to build the solution themselves, they will need to carry the development costs. These are costs for activities that most often are non-core to the company, and that often involve addressing a set of rather generic problems and challenges.
For starters, they will have many different active file formats: different types of tests, different product groups, repair data, to name a few. Even for those that have standardized on a common database, some relevant data will still not exist there. For instance data from a sub-contractor, or repair records.
Data availability will often be manual or infrequent. Even when the data points are collected automatically, what matters is when statistics and insights become available. If raw data only becomes available once a problem is first seen, the delay before you can determine the root cause can be long and expensive.
Don’t Underestimate the Importance of Meta-data
Lack of meta-data also presents obstacles. In theory it should be as simple as adding that information, but very often the infrastructure in place doesn’t support what you are looking to add. That means you need to invest a lot of time in justifying why you need it. Most often it will instead get left out, and you collect only the minimum required meta-data.
When things do start to fail, or quality concerns come up, you must be able to see all the test data. But often that data is not in your possession. Most OEMs have some kind of outsourced manufacturing that also performs tests, perhaps an In-Circuit Test. And the data you get for those is all roses and sunshine. But you and I know that this is not the whole picture. Whatever goes on up until the reported 100% first pass yield, you will pay for in the long run.
The inability to agree on a cross-company standard for managing test data is a manifestation of these problems. Solving everything is in reality impossible, so whatever is “agreed” on will be undermined by some. That means the benefits you are aiming for can’t reach the global and corporate level we discussed earlier.
Basic Use Cases for WATS Test Data Management for Electronics Manufacturing
“Knowing your frequent failures is not the same as understanding them”
WATS is an off-the-shelf Test Data Management Solution for Electronics Manufacturing, that to a large degree mitigates the problems from above.
The different use-cases of WATS will most often tie tightly into standard Key Performance Indicators, or KPIs, in manufacturing, or various types of Quality Management and Continuous Improvement methods. Examples of such KPIs are reducing the percentage of warranty claims, reducing scrap during the manufacturing process, having better control over the variable unit costs in manufacturing, or controlling test operator cost per unit. Or making sure that your test asset utilization is not affected by unnecessary activities such as retesting, or penalized by tests that never ever fail.
In addition come KPIs that are more difficult to generalize, such as R&D, manufacturing and administrative overhead costs. The use-cases are the levers you pull on a daily basis to achieve the outlined Test Data Management benefits.
Understanding Your Most Frequent Test Failures
Let’s start by stating the basics: knowing your frequent failures is not the same as understanding them. This is still an aggregated number. When looking at a single product, the data might still span different revisions, test software, batches and so forth. When you start looking at what the fail data for a specific product looks like, you might want to go wide. Then you can go back and break these statistics down by product revision. Do all revisions fail equally? Once you have an idea of what is going on, you can drill down into the underlying measurement data for the full data set. Here you can group the data based on your hypothesis, to attempt to validate or discard it. Does anything stand out from the noise?
“I have stopped counting how many times we have seen massive outliers that are not detected by the limits in use. Where you then send the product downstream, or ship it to a customer”
Evaluating and Improving Your Test Coverage
It is important to point out a critical assumption in the “First Pass Yield” method of prioritizing: that your pass-fail data is representative of your test quality. And it rarely (we can probably say never) is. Some test limits are inappropriate but do not cause problems, for instance a limit set far too wide for a measurement that never deviates. You can assume that limits that are too restrictive, or have a center-point far from the average, will be caught in the example above, because they cause failures. But limits that are too wide need special attention. For this, you can use Process Capability Analysis to better understand the different ratios, and quickly drill down to visualize the data.
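To make the “too wide” case concrete, here is a minimal process capability sketch in plain Python. The measurements and limits are invented for illustration; a Cpk far above the usual rule-of-thumb targets (around 1.33) suggests the limit can never catch a deviation:

```python
from statistics import mean, stdev

def cpk(measurements, lsl, usl):
    """Process capability index: distance from the mean to the nearest
    limit, expressed in units of three standard deviations."""
    mu, sigma = mean(measurements), stdev(measurements)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# A stable 5 V rail measured against absurdly wide 0-10 V limits.
rail = [4.98, 5.01, 5.02, 4.99, 5.00, 5.03]
print(cpk(rail, lsl=0.0, usl=10.0))  # enormous Cpk: this limit adds no test value
print(cpk(rail, lsl=4.9, usl=5.1))   # tighter limits give a far more honest picture
```

A drill-down report that sorts test steps by Cpk will surface exactly these never-failing, over-wide limits that pass-fail statistics alone can never reveal.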
I have stopped counting how many times we have seen massive outliers that are not detected by the limits in use. Where you then send the product downstream, or ship it to a customer.
A great opportunity for this kind of improvement exercise is during New Product Introduction.
Tracing Individual Units
If you are in a regulated industry, you probably want to have full control of what happens to a product throughout the manufacturing phase. If a unit fails at a particular stage, you might want a notification. Before you ship the unit, you might want a validation check that everything has been performed as it should, and the same information for all sub-units. And if things are not ok, you want to be able to investigate why.
Tracing Groups of Units
For industries with higher volumes, traceability of groups of products is highly relevant. This could be cross-process comparisons, such as checking performance at system-level testing for units containing a specific PCB revision. Or it could be when analyzing RMA data. What did units with a warranty claim have in common during manufacturing? Were they repaired? What kind of repairs were they? Do they have measurement outliers in test steps where the limits are set too wide? How many times were they retested in the various stages?
The reason this is relevant is that you can most definitely assume that whatever problem you identify here is costing you both money and reputation. After all, you just had to replace a customer’s product.
Reducing Retests
Excessive retesting represents two problems. On the one hand, it is a time-intensive activity, often representing 10-30% of the total time spent testing. The impact is reduced test asset utilization: you might need to add new testers as you increase output volumes, and it takes a paid employee to operate the tester. If you are interested in estimating your cost of retesting, read this article on the different costs of retesting, and check out this calculator.
On the more complex and severe end of the stick, you find the quality implications. Operators often retest because they suspect the test system is not as it should be. Things such as a bad connection to an instrument. Perhaps they apply some force to the connection board and test again. But what if the connection problem is on the unit itself? A bad solder joint? Now, all of a sudden, applying pressure makes the electrical connectivity ok, at least for the time it takes to run the test. Who is to make sure that this product does not leave the factory?
It is not only the discretionary authority of the test operators that is a source of retesting. (For the record, we are not saying that this authority is a bad thing.) These kinds of retests act as a steady “baseline”, accumulating costs on a daily basis. Incorrect test software and unit firmware also play an important role. And when you need to retest, and potentially disassemble your products, due to incorrect firmware, the costs can spike massively.
Improving the Two Sources of Retesting
A specific Test Data Management benefit of WATS in particular is that you can address retesting in two ways. First, you have full visibility into how many retests your products have.
You can drill down and dissect this data as you need, to figure out what is the likely reason. If you collect repair data you can cross-correlate these.
You can even receive automated notifications if products are retested more than a threshold number of times. But at the end of the day, this is likely a culture issue. You can’t fix it unless you address the underlying assumptions of the operator who does the retesting, for example by improving your test limits.
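Conceptually, such a threshold check is simple. The sketch below assumes a flat list of test reports carrying a serial number per run; the field names and threshold are illustrative, not how the WATS notification engine is configured:

```python
from collections import Counter

def flag_excessive_retests(reports, threshold=3):
    """Serial numbers with more test runs than the allowed threshold."""
    runs = Counter(r["serial"] for r in reports)
    return sorted(serial for serial, n in runs.items() if n > threshold)

# Hypothetical run history: A1 was retested four times, A3 once.
reports = (
    [{"serial": "A1"}] * 5
    + [{"serial": "A2"}] * 1
    + [{"serial": "A3"}] * 2
)
print(flag_excessive_retests(reports))  # ['A1']
```

The flagged serial numbers are the trigger for a notification, and the starting point for drilling into why those particular units were retested.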
Second, WATS also features a Software Distribution Module. You can use this to centralize the distribution of software and firmware packages, so that when a test initiates, it checks the latest version on the server against what it is running. If it is outdated, the operator can choose to upgrade directly or postpone it.
Core Features Needed for Test Data Management in Electronics Manufacturing
“A Test Data Management solution for electronics manufacturers acts as input to these improvement methods. By offering certain lenses to view your problems through.”
It is now time to look at the core features that build upon this framework. These are the tools and features you apply on a day-to-day basis, the tools you use to get value from your data. It is important to point out that this is not a 1+1=2 case: the significant benefit comes from the combination of these features.
Methodology
There are different schools of thought on how quality management and continuous improvement in manufacturing are best fueled. These methodologies build on various frameworks for improvement, such as Lean, Six Sigma, and Total Quality Management, to name a few. A Manufacturing Intelligence solution for electronics manufacturing acts as input to these improvement methods, by offering certain lenses to view your problems through.
WATS offers a top-down approach, where we drill down from higher-level anomalies to identify the important underlying issues. This has the benefit that you prioritize based on occurrence and severity; your focus is steered by economic guidance.
A different lens is offered by traditional Statistical Process Control (SPC). Here you most often start at a lower level of analysis, looking to identify things such as instability in important measurements. For more details, read why SPC is not suitable for electronics manufacturing.
Analyzing Test Data from Electronics Manufacturing
The specific Test Data Management features for analytics available within the methodology are important. In a top-down approach, a critical metric is your True First Pass Yield. This differs from the more common First Time Yield, where a failure is only recorded when a product goes to repair, or out of the test process. The effect is that important fail-causes go unrecognized.
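The difference is easy to illustrate with a toy data set (the field names are invented for the example): a unit that fails its first run but passes on retest, without ever visiting repair, counts against True First Pass Yield but is invisible to First Time Yield.

```python
# Three hypothetical units: A2 failed its first run but passed on retest.
units = [
    {"serial": "A1", "first_run_passed": True,  "sent_to_repair": False},
    {"serial": "A2", "first_run_passed": False, "sent_to_repair": False},
    {"serial": "A3", "first_run_passed": False, "sent_to_repair": True},
]

# True First Pass Yield: fraction of units passing on their very first run.
true_fpy = sum(u["first_run_passed"] for u in units) / len(units)

# First Time Yield: only counts units that ended up in repair as failures.
fty = sum(not u["sent_to_repair"] for u in units) / len(units)

print(round(true_fpy, 2), round(fty, 2))  # 0.33 0.67 -- FTY hides A2's failure
```

The gap between the two numbers is exactly the retest activity, and the fail-causes behind it, that First Time Yield sweeps under the rug.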
From the First Pass Yield, you should be able to drill down via relevant statistics to your most frequent test step failures, and from there further down to the numerical values. Still with the ability to distinguish different meta-parameters, such as product revision, test fixture or test sequence version. The rationale for this is simple: you should be able to build and test different hypotheses at every level.
On the flip side, you need to be sure that your yield accurately represents the current state of manufacturing. Process Capability Analysis is a very useful tool during New Product Introduction, when used to optimize the test limits you deploy into volume production.
Another valuable piece of information all OEMs should have accessible is the number of retests for your products. In WATS this is available in the Periodic Yield Report.
Other relevant test analysis reports in WATS include Rolled Throughput Yield, Total Process Yield, Product and Test Yield, Station Yield, Gauge R&R, Overall Equipment Efficiency and Connection & Execution Time Analysis. All of them share common data filters, meaning that you can look at specific data sets through the lens of different reports, and filter data based on a virtual hierarchy of systems and a virtual grouping of your products.
And last, WATS lets you do all of this directly from your web browser. No need for tedious application installations.
Repair Analysis for In-line Manufacturing and RMA Repairs
The reason repair data is on this list is that it has a tight connection to test data. Many companies document repair data in a stand-alone system, an MES system, or even an ERP system. Doing so gives you access to a lot of useful statistics: metrics on the most frequent repair actions, the products you most often repair, problematic components, to name some. But unless you can link your test reports to your repair reports, you cannot see which test steps you most often repair against, or which test operators, test interfaces or other test-related parameters are involved. So when you have a spike in your No Failure Found repairs, you will be unable to find out what is going on in your product testing to cause it.
Another benefit of having it in a common database is a more complete unit tracking history.
You can find a list of features in WATS on this page.
Repair Interface
To document repair actions, you need a user interface. Unless you have already standardized on one, WATS comes with an HTML5-based Operator Interface that directly links the repair report to the corresponding test report. If you already have a tool of choice, you can design a one- or two-way file exchange that syncs the necessary information between the two tools.
Manual Inspection Interface
Some research says that 15% of all paper documents are misplaced, and 7.5% are lost altogether.
Add to this the fact that paper-documented manual inspection lists are not searchable, and it becomes obvious that the only reason companies document inspections this way is to approve them for delivery. That is a pity, because there can be significant value in statistics from these inspections as well. So when designing your Manufacturing Intelligence solution, you should account for how to digitize these inspections.
The same operator interface that you can use for repair documentation, combined with an in-product Manual Inspection Sequence Designer, lets you document these processes directly in the web browser. Even on a tablet device.
Distribution of Software and Firmware
It is important to ensure that your instrumentation is performing at its best at any time.
Making sure your test software and unit firmware are up to date is key to maintaining a high level of integrity for your measurement and evaluation data. Finding out that you have used the wrong firmware after sending 1000 units through the build process is at best unfortunate, and likely very expensive.
Automated distribution of test software and unit firmware should be thoroughly considered when evaluating a Manufacturing Intelligence solution for electronics manufacturing. Some use source code control software, although this might not be sufficient here, as it can make it difficult to achieve the necessary granularity. You might for instance require that the test limits for certain product revisions differ slightly from the others. These package distribution mechanisms are natively maintained in WATS through the Software Distribution Module.
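A common pitfall in home-grown distribution schemes is comparing version strings lexically. The sketch below shows a numeric comparison instead; this is generic logic for illustration, not how the WATS Software Distribution Module is implemented:

```python
def needs_upgrade(local, server):
    """True if the server holds a newer dotted version than the station runs."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(server) > as_tuple(local)

print(needs_upgrade("1.9.0", "1.10.0"))  # True -- a plain string compare would say False
print(needs_upgrade("2.0.0", "2.0.0"))   # False -- the station is already current
```

Run at test start, a check like this is what lets the operator be prompted to upgrade, or postpone, before the sequence executes with stale limits or firmware.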
Manufacturing Asset Management
It is also unfortunate to experience a sudden drop in manufacturing throughput because tests are suddenly failing: an instrument has drifted outside of calibration, or a fixture has not been cleaned according to schedule. The Asset Manager Module in WATS allows you to specify all of your different assets, along with the associated maintenance schedules. That way you have full control of your current status and upcoming maintenance tasks.
Automated Feedback from Triggers and Notifications
You will never come to a state of complete control over manufacturing.
The level of control will to a large degree depend on the monitoring systems in place. Some choose to implement a workflow system with forced routing. This, however, requires a lot from the organisation in terms of planning and training.
A soft control system is an approach that, for most organisations, will be more suitable. One that does not remove the discretionary control of test operators, or prevent them from making mistakes, but that lets you know once a pre-defined scenario takes place, for instance a product that is tested 7 times before passing, so that you can investigate and evaluate the severity.
Connecting your Test Data Management system to third-party solutions
Finally, it is worth pointing out that there will never be one magic solution that does everything you need; Test Data Management features will rarely address all your needs for using the data. Specialized tools with optional integration and connectivity to third-party tools are becoming increasingly popular, as they give the user the possibility to configure a much more complete and comprehensive solution than a single supplier would be able to deliver. WATS facilitates this integration either by officially supported connectivity to standard test systems, or by integration with other enterprise systems, such as Enterprise Resource Planning (ERP), Manufacturing Execution Systems (MES), Tableau and Microsoft PowerBI to name some, through the WATS RestAPI.
Framework Components of Test Data Management in Electronics Manufacturing
The framework elements are the critical features that you are not exposed to on a daily basis. They are the spark plugs, transmission and oil: the components that, if excluded, would cause your profit and quality improvement engine to grind itself to pieces.
“Technologies such as data lakes and BI Dashboards can add significant value, but they serve a fundamentally different purpose than Test Data Management in Electronics Manufacturing”
Making Sure Your Data is Uniform
Technologies for storing data have not been idle in recent years. Some companies have started to investigate tech such as data lakes, or simpler attempts at uniform databases to build business dashboards on top of. At least in the enterprise and corporate sector.
Technologies such as data lakes and BI Dashboards can add significant value. But they serve a fundamentally different purpose than MI in Electronics Manufacturing.
The reality is that most companies have numerous different formats for their test and repair data, and that they are unable to effectively compare this data across test processes and across products. Sometimes even across product revisions. A Manufacturing Intelligence solution must provide two critical components here. First, it must provide technology to convert legacy data, without intruding on the source code of those legacy systems. This conversion must produce the same format as the second component: native connectivity to new test data sources.
Or said in simpler terms: the MI solution must be agnostic of the source of the data, and be able to show uniform data from any kind of test process you have. Even third-party turn-key solutions such as ICT.
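As a sketch of such a converter, assume a hypothetical semicolon-separated export from a legacy ICT station, mapped into a simplified uniform report structure (the CSV columns and the report shape are both invented for illustration, not the actual WATS format):

```python
import csv
import io

# Hypothetical legacy export: one measurement row per test step.
LEGACY_CSV = """serial;step;value;low;high
A1;V_5V_RAIL;5.02;4.75;5.25
A1;R_SENSE;0.120;0.09;0.11
"""

def convert_legacy(text):
    """Map a legacy semicolon-separated export to a uniform report dict."""
    rows = csv.DictReader(io.StringIO(text), delimiter=";")
    report = {"serial": None, "steps": []}
    for row in rows:
        report["serial"] = row["serial"]
        value, low, high = (float(row[k]) for k in ("value", "low", "high"))
        report["steps"].append({
            "name": row["step"],
            "value": value,
            "status": "Passed" if low <= value <= high else "Failed",
        })
    return report

report = convert_legacy(LEGACY_CSV)
print(report["serial"], [s["status"] for s in report["steps"]])
```

The point is that the converter runs alongside the legacy system, reading its output files, so the legacy source code is never touched, while the resulting reports share one schema with every natively connected test source.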
Another important factor to take into consideration is the use of standardized naming conventions for test steps during development. This makes it easier to do test failure analysis across different products.
Trending technologies such as Data Lakes would then add on top of this. Giving you the option of investigating test data in the context of other, non-test related data sources.
Timely Accessible Data
Some have claimed that data is “the new oil”.
Maybe… But there is one major difference in the value characteristics between oil and test data that is relevant here.
The deterioration of value.
While the value of the energy from oil can be collected at a future time, a lot of the insight gained from test data only has value right now. If there is a problem at a sub-contractor, or at a factory on the other side of the globe, an effective response dictates a Manufacturing Intelligence solution that collects and processes data in real-time, and that can collect data from any relevant location regardless of IT architecture.
“While the value of the energy from oil can be collected at a future time, a lot of the insight gained from test data only has value right now.”
Does Your Test Data Contain Enough Meta-Data?
For your test data to have the wide functional applicability we discussed earlier, it is critical that it contains the details and quality needed for it to make sense to these groups.
The meta-parameters the R&D group needs to filter their data on likely differ from the angle the manufacturing team would like to investigate a problem from. And both of these are likely different again from the needs of the service group doing RMA analysis.
You are directly limiting your ability to collaborate effectively across organizational boundaries if you are not serving all these different needs for data quality.
Data Links and References
The test or repair reports you generate will often be directly linked to things such as a customer order, an RMA reference, a ticket ID in an internal system, or similar. To make the most sense of the data, you need to be able to see them as one set of data, whatever the common denominator is.
A good example of linked data in WATS is the automatic link between a test report, and a repair report.
It allows you to see the consequences of failing tests, or the primary causes of product repairs. But even linking to external elements must be accounted for, for instance by easily customizing your RMA test report, or repair report, to include the Service Ticket ID from your ERP system as a user input. That way you can look up and analyze these specific records with a few keystrokes when needed.
MI Access Restrictions and User Management
A bureaucratic process that can kill any such initiative often starts as soon as you intend to share data with external companies.
But this external company might be someone you absolutely need to collaborate with on things such as root-cause analysis, be that a contract manufacturer or an R&D consultancy firm. Naturally, it is then imperative that you are able to specify user-access restrictions based on where the data was generated and what products the data relates to, so that you can ensure relevant people get access only to the data they should have.
As collaboration increases, and more users are added to your solution, it gets increasingly difficult to ensure that people leaving the organisation no longer have user access. Technologies such as Single Sign-On (SSO) can help streamline this.
Data Security in Test Data Management
Although your test reports most often make very little sense to outsiders, the aggregated data can tell a great deal about the state of your manufacturing. If IT doesn’t consider your solution to comply with their policies, it is unlikely that you will end up with something that solves your problems. If you choose a commercial solution such as WATS, chances are it natively addresses these policies by using transfer protocols such as HTTPS, and security technology available on Microsoft Azure.
Core Connectivity to Data Sources
There are several types of connectivity your TDM solution should aim to include. The first, and most obvious, is connectivity to test systems. This can be direct connectivity to solutions such as NI TestStand or ATEasy, or the more flexible support for different APIs, such as NI LabVIEW or .NET. When you have good control of this core connectivity, you will require very little overhead development to include future test systems in your overall MI solution. Native connectivity to off-the-shelf sequence software also ensures that future compatibility is maintained.
The second type of connectivity serves the need to supplement the MI solution with important contextual information, such as Bill-of-Materials and component-vendor information to enrich your repair data with.
The third type of connectivity is information exchange with other business systems, for example business dashboard software such as Microsoft PowerBI, or resource planning software such as SAP. WATS manages this enterprise connectivity through a RestAPI.
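To illustrate the pattern, a typical integration step is flattening a nested JSON response into rows a dashboard tool can ingest. The payload shape below is invented for the example; the real routes, authentication and response formats are described in the WATS RestAPI documentation:

```python
import json

# Hypothetical yield payload, as a REST API might return it.
payload = json.loads("""
{
  "product": "PCBA-100",
  "yield": [
    {"period": "2023-W01", "fpy": 0.94},
    {"period": "2023-W02", "fpy": 0.91}
  ]
}
""")

def to_rows(payload):
    """Flatten a nested API response into flat rows for a BI tool."""
    return [
        {"product": payload["product"], **point}
        for point in payload["yield"]
    ]

rows = to_rows(payload)
print(rows)  # one flat row per period, ready for PowerBI or Tableau
```

Dashboard tools generally want exactly this kind of flat, tabular feed, which is why a thin translation layer usually sits between the TDM API and the business dashboard.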
Hosting your Test Data Management Solution
All of this data must be stored on servers somewhere. Either on your own premises or on cloud platforms such as Microsoft Azure.
And since you are going to actively use the data, it requires a certain level of system performance. As the size of your database grows, you need to decide how much data to keep in the quick-access database, and what can be considered legacy data. This informs how you may need to scale server performance. A Test Data Management solution processing data from electronics manufacturing should aim to make performance enhancements, such as CPU upgrades, additional memory, and SSD storage, as easy as possible.
It should also allow you to deploy sub-servers to facilitate faster response times for distant users. An example of this could be the deployment of a sub-server or data hub to Microsoft Azure Hong Kong, to speed up transfer speeds or to avoid challenges caused by governmental firewalls.
Making Sure Your TDM Solution has Scalability for Future Needs
Scalability is a summary category of several of the paragraphs above. A Test Data Management solution must provide flexibility as you scale up your volumes and the number of test processes you collect data from. It must have the flexibility to adapt to the changes you make in your supply chain, and facilitate the collaboration required to keep you at the top of your game. And it must be scalable as technology changes, for example as IoT devices increasingly report telemetry or self-test data back to your organisation. Data that must be possible to see in the context of manufacturing test data if it is to maximize the added value.