Specifications are the standards, or minimally accepted requirements, for important features (or characteristics) of a product. Many manufacturers also set their own specifications. Confusing specification compliance with quality can lead to financial loss and wasted time. For example, a product can fall within specifications but still prove unsatisfactory for clients. Additionally, manufacturers that rely solely on meeting specifications can miss opportunities to create more cost-effective processes. Applying a few statistical principles can immensely help a company identify ways to improve a process and product. By moving beyond bare specification compliance, manufacturers can boost quality with an efficient, optimized, and cost-effective process that performs better and satisfies the customer base.
Employing Essential Statistical Methods for Optimization
First, let’s define what we mean by optimization and consider common mistakes to avoid. Let’s also reflect on opportunities for process and product optimization.
Optimization is as simple as making something as perfect, functional, or effective as possible. Goals of optimization include:
- Developing reliable measurement systems
- Identifying appropriate ranges for key input factors
- Ensuring stability: consistent and predictable processes (by quickly identifying changes)
- Assuring capability: a high level of conformance to specifications
- Minimizing product development time while controlling risks
- Designing for proper performance for the life of the product (reliability)
We can define quality as closeness to a target. Many companies do not optimize and default to merely meeting product specifications. They believe that if the product is compliant, it is fine, and if it is not, it needs correction. However, minimizing variation to an acceptable level is more productive. A product can be within specifications and still underperform for the client. Variation is unavoidable, but the goal should be to minimize variation in the factors that most strongly affect product performance.
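The idea that quality is closeness to a target, not mere compliance, can be illustrated with a simple quadratic loss sketch in the spirit of Taguchi's loss function (the function name, constant `k`, and measurements below are illustrative, not from any particular process):

```python
def quadratic_loss(value, target, k=1.0):
    """Illustrative Taguchi-style loss: cost grows with the squared
    distance from target, even for units that are inside the specs."""
    return k * (value - target) ** 2

# Two units, both inside hypothetical specs of 10.0 +/- 0.5:
on_target = quadratic_loss(10.02, target=10.0)   # close to target
near_limit = quadratic_loss(10.45, target=10.0)  # compliant, but far off
```

Both units pass inspection, yet the near-limit unit carries a far larger expected cost, which is why minimizing variation around the target matters more than bare compliance.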
Common Mistakes Manufacturers Make When Implementing Optimization
In the quest for optimization, companies can easily make mistakes. Here are a few mistakes you should avoid:
- Utilizing insufficient data or incorrect measurement tools, skewing results
- Remaining too reliant on trial and error investigation
- Misusing or misunderstanding common statistical tools
- Relying on irrelevant or outdated models and charts
- Forming inaccurate assumptions about their data
- Wrongly assuming variation does not matter because a product is compliant
- Ignoring unusual variation that could create a noticeable change in the product, even though it meets specifications
Seeking optimization lets companies design and develop products cost-effectively while also reducing waste and customer dissatisfaction. Setting optimization as a goal prompts companies to collect data more effectively and use it more productively, creating an environment for more intelligent decision-making. Companies can also detect potential manufacturing discrepancies before they escalate. The great news is that applying these methods is relatively simple and can deliver net positive results quickly.
The process begins with collecting reliable data for a strong foundation. Several statistical methods can be used to minimize product failures and maximize performance in a cost-effective manner. These tools include:
- Measurement Systems Assessment
- Statistical Process Control
- Process Capability Assessment
- Design of Experiments
- Reliability Analysis
- Hypothesis Testing
- Predictive Modeling
Reliability Analysis uses statistical methods to predict how units will perform in the field and when and how those products will fail. For brevity, this blog focuses on the first four methods listed above.
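As a brief taste of reliability analysis, a common life model is the Weibull distribution. A minimal sketch of its survival (reliability) function, with purely illustrative shape and scale parameters:

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability that a unit survives past time t under a Weibull
    life model with shape `beta` and characteristic life `eta`."""
    return math.exp(-((t / eta) ** beta))

# With illustrative parameters: at the characteristic life eta,
# about 36.8% of units survive, regardless of the shape beta.
surviving_at_eta = weibull_reliability(1000, beta=1.5, eta=1000)
```

Fitting `beta` and `eta` to real failure data is the substantive work; the formula itself simply turns those estimates into field-survival predictions.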
Using a Measurement Systems Assessment (MSA)
Measurement Systems Assessment (MSA) includes methods to assess discrimination, accuracy, precision, linearity, and measurement stability. Most data-driven decisions depend on it, including Design of Experiments (DOE), statistical process control, inspection activities, and process capability assessment. Companies can use MSA as a prerequisite for trusting their data, assessing repeatability, and ensuring adequate gage discrimination.
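One small piece of an MSA, repeatability, can be sketched as follows. The function name, readings, and rule-of-thumb thresholds are illustrative; a full gage R&R study also separates operator and part-to-part effects:

```python
from statistics import stdev

def gage_repeatability_pct(repeat_readings, tolerance_width):
    """Rough %repeatability: six sample standard deviations of repeated
    measurements of one part, as a share of the tolerance width.
    Common rule of thumb: under ~10% is good; over ~30% is unacceptable."""
    return 100.0 * (6 * stdev(repeat_readings)) / tolerance_width

# Five repeated readings of the same part, tolerance width of 3.0 units:
pct = gage_repeatability_pct([10.0, 10.1, 9.9, 10.0, 10.0], tolerance_width=3.0)
```

Here the gage consumes roughly 14% of the tolerance on repeatability alone, a borderline result worth investigating before trusting downstream charts and capability numbers.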
Statistical Process Control (SPC)
Since the 1920s, companies have used statistical process control (SPC) to track production and observe statistical changes before the product moves out of specifications. SPC provides proactive monitoring of systems and objective criteria for reacting and intervening. Products can meet all standards and still fail because companies often negotiate and broaden the specification limits or the specification range. Sometimes, the specification limits are not correct to begin with. Other factors that can affect results are a change in environment or a new batch. It is essential to spot which variations represent normal noise and which are signals of unexpected behavior.
Key SPC principles focus on prevention. Only stable processes can produce predictable outputs. Monitoring and ensuring system stability reduces reliance on inspection activities. Inspection is costly, reactive, and does not generate quality; it is just a hunt for garbage. Monitoring and controlling processes by minimizing variation requires statistical methods.
SPC indicates when things have changed statistically to help quickly identify causes, prevent further problems, or make improvements. Changes can be within specifications and still be statistically different. Process Capability Analysis (PCA) is the proper method for assessing whether products will meet specifications consistently. Points that violate a control chart's limits or run rules act as red flags, indicating that something has changed for better or worse.
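A minimal sketch of the SPC idea: control limits for an individuals (I) chart computed from the average moving range. The data are made up, and real implementations add run rules and rational subgrouping:

```python
def individuals_limits(data):
    """Control limits for an individuals (I) chart using the
    average moving range between consecutive observations."""
    xbar = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is approximately 3 / d2 for subgroups of size 2 (d2 = 1.128),
    # a standard SPC constant
    return xbar - 2.66 * mr_bar, xbar + 2.66 * mr_bar

lcl, ucl = individuals_limits([10, 12, 11, 13, 12])
```

Any point outside these limits signals unexpected behavior even when it is still comfortably inside the specification limits, which is exactly the distinction between noise and signal discussed above.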
Critical Aspects of SPC Implementation
Companies must focus on critical characteristics and choose the correct chart to implement SPC properly. It is also important to employ an effective sampling strategy and choose the right sample size, which depends on the size of change you want to detect and the amount of variation. Automating where appropriate and empowering operators to drive improvement is also a critical piece of this puzzle: your team should feel empowered by understanding how the process is behaving. To remove the stress of trying to measure everything, focus on measuring what matters and use relevant, appropriate charts.
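How sample size depends on the shift you want to detect and on process variation can be sketched with the standard two-sided z-test approximation (the default z values correspond to roughly 5% false alarms and 80% power; the function name is illustrative):

```python
from math import ceil

def sample_size_for_shift(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size to detect a mean shift of `delta`, given
    process standard deviation `sigma` (two-sided test, ~5% alpha, ~80% power)."""
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Halving the shift you need to detect roughly quadruples the sample size:
n_large_shift = sample_size_for_shift(sigma=1.0, delta=1.0)
n_small_shift = sample_size_for_shift(sigma=1.0, delta=0.5)
```

This is why "measure what matters" is practical advice: detecting small changes on every characteristic gets expensive fast.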
Process Capability Assessment
Process capability estimates how well a characteristic meets specifications and what percentage of products will not meet them. Stability does not imply capability; the goal is a process that is both stable and capable. If you are interested in the difference between process capability and process stability, check out this blog on the topic. Remember, capability indices have limitations. Furthermore, a company must establish stability before assessing capability: process capability assessments are only informative and predictive for stable processes, and it is essential to identify one or more distributions that best describe the data.
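For a stable, roughly normal process, the common capability indices Cp and Cpk can be computed as follows (the numbers are illustrative):

```python
def cp_cpk(mean, sigma, lsl, usl):
    """Process capability indices for a stable, roughly normal process.
    Cp compares spec width to process spread; Cpk also penalizes off-center means."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Centered process vs. the same spread drifted toward the upper limit:
centered = cp_cpk(mean=10.0, sigma=0.1, lsl=9.4, usl=10.6)
shifted = cp_cpk(mean=10.3, sigma=0.1, lsl=9.4, usl=10.6)
```

Cp stays the same when the mean drifts, but Cpk drops, which is one concrete way the indices capture closeness to target rather than mere spec width.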
Design of Experiments
Design of Experiments (DOE) is an efficient approach to producing predictive models that describe cause-and-effect relationships. A DOE evaluates and models the effects of variables, and their interactions, on responses. Implementing a DOE is one of the fastest ways to learn, move through research and development, and develop new products while minimizing risk. With only a limited number of structured trials, the method yields a model that predicts the product's performance across countless scenarios, helping developers understand complex interactions that trial and error cannot reveal and identifying which factors to control.
More Ways DOE Is Useful
Design of Experiments is great for determining key characteristics in design and manufacturing operations and for reducing variation in performance requirements. Companies can use a DOE to set specifications for design characteristics and process settings and to determine how several process variables interact. Factor dependencies are almost impossible to identify in naturally occurring data, but they are easily identified with certain designed experiments. Identifying interdependencies is critical for understanding and controlling product performance, and DOEs are a relatively inexpensive way to optimize multiple performance features simultaneously.
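A tiny sketch of the idea: a two-level full factorial for two hypothetical factors, with a simulated response that contains a genuine interaction. The contrast-based effect estimates recover that interaction directly, something naturally occurring (unbalanced) data would tend to hide:

```python
from itertools import product

# Two-level full factorial for two hypothetical factors, coded -1/+1.
runs = list(product([-1, 1], repeat=2))

# Simulated responses with a real A*B interaction baked in.
def response(a, b):
    return 50 + 3 * a + 1 * b + 2 * a * b

ys = [response(a, b) for a, b in runs]

def effect(contrast):
    """Average response at the high setting minus the low setting."""
    return sum(c * y for c, y in zip(contrast, ys)) / (len(ys) / 2)

effect_a = effect([a for a, b in runs])       # main effect of A
effect_b = effect([b for a, b in runs])       # main effect of B
effect_ab = effect([a * b for a, b in runs])  # interaction effect
```

Four runs are enough here to separate both main effects and the interaction, which is the efficiency argument for designed experiments over trial and error.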
It is crucial to ensure products work in all target environments. That can be achieved by finding the process and product settings that make performance insensitive to variation across those environments. The bottom line is that using statistical methods can improve measurement systems, processes, product performance, and reliability. Products can meet all the specifications and still fail, but by understanding variation and dependencies, we can improve a product's chances of success, pushing it closer to the quality target in the most cost-effective way.