
1 Introduction

The factories established in Mexico since the 1960s to assemble final products and supply them to the USA market are characterized by their diversity of origin: USA, Japanese, and European ownership, among others. These facilities produce electronic equipment, clothing, furniture, appliances, auto parts, and other products. During the last six decades, this manufacturing business in Mexico coined and developed the term “maquiladora” production system (derived from the Spanish word maquilar, “to process”). A maquiladora is an industrial plant, owned by foreign or domestic entities, that assembles imported components into products for export (Wilson 2010). Most of these businesses use well-qualified and inexpensive labor as their main competitive strategy, generating large profits and measuring management efficiency mainly by cost reduction. The competitiveness and advantages of Mexico as a destination for establishing manufacturing businesses reached their maximum in 2001, when 3700 factories employed more than 1,300,000 workers. According to Contreras et al. (2006), after this peak the pressure to find more efficient solutions and stronger strategies increased, because global competition was growing, mainly from China and Central American countries. These conditions created, in the first place, the need for new strategies, including the use of and experimentation with different methodologies, and later, for combinations of tools and best practices adapted to the reality of the new scenario. Managers in this environment now have to compete not only on the classical production triad of time, quality, and cost, but also on giving better service and reacting faster than their competitors to the needs of the customers.

In this environment, managerial efficiency is highlighted and measured by the capacity to respond to the challenges that the system presents. Making the “right” decisions is the manager’s main responsibility; to do this, the manager has to quickly assemble a set of tools and techniques that have helped in the past, or take the risk of trying a new set of tools. Some tools are simple and straightforward, others are complex and sophisticated, but well-proven ones are selected as first options. For instance, in a critical meeting one may have to decide on the spot whether to use a simple control chart or a value stream map (VSM). Preparation and practice are the key factors for gaining experience in this decision-making process, especially in a restrictive environment like the one found in Latin America (LA). It is necessary to gain such abilities and respond to the daily challenges by improving the use of advanced tools, increasing experience, and (with time) becoming expert at obtaining the maximum benefit from the most useful and fastest solution. If we consider that in the LA environment “one solution is valid only for one set of conditions and, every day, a manager faces dynamic conditions where it is not possible to use plain recipes” and the manager “must make the most of the set of tools available and have the habit of including the more efficient ones,” then it is possible to be a more efficient manager by using more data and more statistical analysis, and less speculation, in budgets, projects, and programs.

Historically, it was Henry Ford who promoted mass production through assembly lines to drive competitiveness. However, his contributions go further: Ford promoted the principles of “keep everything moving” and “keep the flow of production” (Hopp and Spearman 2011). Everything has to keep moving at a steady pace and continue without disruption: raw materials, work in process, and finished products must keep the same pace. To achieve such goals during the expansion and bonanza of the twentieth century, large corporations (mostly in the United States) focused their efforts on technological innovation, high productivity rates, and cost reduction, until the large economic crisis of the 1970s, when it became necessary to find alternatives for achieving the same economic goals and stabilizing them. One alternative was to change the production processes and to use strategies for adapting existing technologies. These conditions pushed large corporations and strategists to look into continuous improvement paths. Schonberger (2008) points out that, since then, improvement programs are not on the daily agenda; rather, when there are problems and the financial health of a company is at risk, the company looks to continuous improvement initiatives as a potential way out of the risk.

Currently, continuous improvement strategies can be identified with two ways of thinking and structuring solutions: the path focused on quality and the path focused on on-time delivery. The quality path proposes to eliminate the root cause through statistical analysis that measures, analyzes, and monitors the root cause of a problem; by solving the root cause, a chain reaction is generated that eliminates the problem (Deming 1986). This path has its best representative in the Six Sigma methodology (Pande et al. 2000), which includes a varied and extensive set of tools and supporting manuals (Rath 2000). Six Sigma is based on a logical sequence known as DMAIC: Define, Measure, Analyze, Improve, and Control.

On the other hand, the path of on-time delivery is better represented by the Lean Manufacturing methodology. Initially designed by Taiichi Ohno in 1978 (Ohno 1988), it is based on the Toyota Production System (TPS), but it took its current structure in the 1990s after the publications of Womack et al. (1990). It rests on the principles of “just do it” and “keep it simple,” which make the Lean concept a reactive methodology and an effective solution under critical conditions. Lean Manufacturing is based on five principles: (1) specify the added value of a product, (2) identify the value stream, (3) keep the flow, (4) implement a “pull” system, and (5) seek perfection (Womack et al. 1990; Womack and Jones 2010). Based on these principles, Lean Manufacturing uses tools to identify and eliminate waste, keep the production flow, reduce working times, and reduce costs to the extent possible.

Recently, there have been proposals for integrating these strategic paths. George (2002) (George and George 2003; George et al. 2004) first proposed a robust model known as the Lean Six Sigma approach. This path uses the methodological DMAIC structure of Six Sigma with a Lean approach when it is necessary to propose solutions. To combine them, a broad and robust set of tools is available in each phase of the methodology. Lean Six Sigma emphasizes the completion of each phase before starting the next one. It also uses a validation review to ensure that all components of Lean and Six Sigma have been implemented. It is oriented to medium- and long-term projects, focusing mainly on improving processes rather than on solving short-term problems.

A different approach has recently been reported by Alba-Baena and Estrada-Orantes (Alba-Baena et al. 2016; Estrada-Orantes and Alba-Baena 2014), where the Lean-Sigma methodology plays a crucial role. Lean-Sigma is based on the strategy of “do it at the speed of Lean with the depth of Sigma”; it includes and uses the same methodologies, but it is focused on solving situations in the short term. Lean-Sigma, as used by these authors, is based on the rapid identification of and fast solution to a problem that is affecting the process, in order to keep the process flowing. It is worth mentioning that, in general, these solutions may not achieve the statistical level of Six Sigma (3.4 ppm); rather, they bring the process back to a stable state, give a quick response to changes in the process, or achieve the main goal as projected.

The Latin American restrictive environment has been characterized by constraints such as time, economic, and technological limitations. These conditions limit the use of solutions such as reengineering and other major technological investments. These restrictive characteristics set the perfect scenario for the use of fast and efficient solutions to customer-focused challenges. Managers in LA have proved that, in a restrictive environment, it is possible to successfully implement state-of-the-art methodologies for problem solving and continuous improvement. Examples of continuous improvement implementations have been published by several authors, such as Coy et al. (2016) and Camacho et al. (2016), as well as in product design (López et al. 2016; Romo et al. 2016) and optimization methodologies (Pérez et al. 2016). Implementations in this restrictive environment have also been reported for solving problems using Lean-Sigma as a fast-response and effective methodology; applications in the automotive industry (de la Cruz Rodríguez et al. 2016), goods assembly (Garcia et al. 2016), and air-conditioning filter assembly (Sifuentes et al. 2016) have successfully proven its efficiency.

The Lean-Sigma methodology in the automotive industry has also been previously reported by the authors as an effective problem-solving methodology in highly restrictive environments. Estrada-Orantes and Alba-Baena (2014) reported that Lean-Sigma can be used not only to propose effective solutions, but also to improve the solving process itself. The Lean-Sigma approach, as used by the authors in their research, is a problem-solving-oriented methodology more than an improvement-project-oriented methodology. The efforts, using this approach, are focused on eliminating a waste or an obstacle to the continuous manufacturing flow, rather than on achieving annualized savings. It is then possible to extend the previous definition: the objective of the Lean-Sigma methodology is to solve a problem in the shortest time possible, based on five rapid-improvement steps:

  1. Identify and measure the problem. What is the problem and how big is it?

  2. Root Cause Analysis. What is the root cause of the problem?

  3. Develop Solution Alternatives. Identify the alternative that best solves the problem.

  4. Verify the Solution. Make sure that the proposed solution eliminates the problem.

  5. Control Plan. Make a quick and effective plan so that the problematic condition does not come back.

As an example, in Estrada-Orantes and Alba-Baena (2014), the authors describe a situation where a condition identified as “flash” occurs in plastic components coming from an injection molding process. Data from a specific machine showed that approximately 99% of its production exhibited the “flash.” After applying Lean-Sigma, the results demonstrate that the process moved from an unstable and non-predictable performance, with 99.62% defective, to a state of statistical control, stable and predictable, with an overall performance of 218 ppm, achieving this state in seven days. The same authors reported the use of Lean-Sigma during the ramp-up period of an ink cartridge production process (Alba-Baena et al. 2016). In this case, two families of products were produced on separate production lines. The high runner used an automated process and was forecasted to reduce its production volume gradually (−12.52%/yr). The other family was assembled using a manual process and had a promising sales forecast of +12.53%/yr. The project integrated both product families into the existing automated process, making the necessary adjustments while keeping productivity and quality at the same levels. A total of 24 adjustment activities were used for the assembly, and 18 activities were added for the ink filling processes. The functionality of the implementation was measured by combining the average productivity rate (103,490/day), which was slightly higher than the initial one, and the quality levels, which were below 6,600 ppm for both families, an improvement from the initial values of 7,957 ppm and 37,305 ppm for Families A and B, respectively. The results also show that changeover times were reduced from 57 min/setup to 30 min/setup. Based on these results, the authors claim that the Lean-Sigma methodology and its tools helped to achieve the quality and productivity goals in a short period and with a strong statistical foundation.

Moreover, under restrictive conditions, the use of resources and time is critical and limited. Therefore, the initial stages of the solution process gain even more importance. The initial diagnosis helps to clarify the alarm values and conditions (customer complaints, requirement changes, or goals) and is followed by the alarm evaluation. A comparison between the alarm values and the product specifications helps in defining the actions to take. Also, a comparison of the data from the process at the time of the occurrence with data from the current process reduces the possibility of overworking a solution. Such an evaluation also helps in determining the degree of importance and the steps to follow toward the solution; it gives feedback to the customer or determines the feasibility of a proposal. In the initial analysis of a problem, it is necessary to determine the response variable that will be used for measuring progress, and the target values for a given problem. The input variables and data collection also have to be characterized in order to prepare the set of statistical procedures to use. Statistical tools such as control charts and descriptive statistics help to understand the process behavior and historical tendencies (a minimal sketch of this kind of diagnosis follows the list below). Data analysis includes measuring and evaluating the input variables and their effect on the response variable, as well as the relationship between them. These data have to be accompanied by a set of tools, so that the manager has the ability to modify the response variable and the validation process.

The initial diagnosis steps can be summarized as follows:

  1. Clarify and understand the alarm.

  2. Measure and compare the alarm data to the specification and process data.

  3. Identify the response variable(s) to measure (productivity, quality, costs, etc.).

  4. Identify the data type(s) and measure them.

  5. Plot the collected data, comparing them to the specifications of the variable.

  6. Analyze and understand the measured variations and tendencies.

  7. Measure and track the effect of changes in the response variables.

  8. Identify the relationships between the response variable(s), the input variables, and those from the process.

  9. Find the set of tools for modifying the process and the response variables’ levels.

  10. Determine the target values and validation methods.
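
As a hedged illustration of steps 5–7, the following Python sketch computes descriptive statistics and individuals control chart limits for a response variable; the data values are hypothetical and only show the kind of quick look at process behavior and tendencies that the initial diagnosis calls for.

    # Minimal diagnosis sketch: descriptive statistics and individuals (I-chart)
    # control limits for a response variable. Data values are hypothetical.
    import numpy as np

    # Daily defect rates (ppm) collected during the alarm period (made-up numbers)
    response = np.array([7200, 7950, 8100, 6900, 7600, 9100, 8800, 7400, 7700, 8300], float)

    mean = response.mean()
    std = response.std(ddof=1)                 # sample standard deviation
    moving_ranges = np.abs(np.diff(response))  # |x_i - x_(i-1)|
    mr_bar = moving_ranges.mean()

    # Individuals chart limits (standard constant 2.66 for moving ranges of size 2)
    ucl = mean + 2.66 * mr_bar
    lcl = mean - 2.66 * mr_bar

    target = 6600  # target/specification value to compare against (hypothetical)

    print(f"mean={mean:.0f} ppm, std={std:.0f}, I-chart limits=({lcl:.0f}, {ucl:.0f})")
    print("points outside control limits:", np.sum((response > ucl) | (response < lcl)))
    print("gap to target:", mean - target, "ppm")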

As mentioned before and summarizing, the Lean-Sigma methodology embraces the rapid-improvement approach of Lean Manufacturing and the deep statistical analysis of Six Sigma. Additionally, Lean-Sigma may incorporate any needed tool from other methodologies, such as Design for Six Sigma (DFSS), while still maintaining the fast solving pace and high statistical accuracy. This chapter presents a case study where the Lean-Sigma methodology is used to achieve a fast solution to a cosmetic quality problem occurring in the manufacturing process of exterior home products in Juarez, Mexico. The solution also includes the design of a quantitative measuring device, which was used for determining the acceptance parameter levels at the incoming inspection station, and later as a quality control device.

2 Case Study

2.1 Context of the Case

The case describes a productivity problem in the final assembly process of outdoor light fixtures at the Juarez (Mexico) facility. For strategic reasons, the company decided to keep the final assembly process in the Juarez facility, while the fabrication of the components was assigned to several Asian facilities and different vendors overseas. The final assembly process started to experience color variations in the incoming components, and some customers filed several quality complaints.

One year after the startup, the customer service area recorded several customer complaints, mainly about variation in color tone and quality issues such as stains, discoloration, and broken or malfunctioning products. Most of the complaints involved a part known as the chassis. This component is fabricated from a polymer, copper, or brass material; 90% of the production uses the brass chassis. Brass components are colored using a chemical process called Patina. Coloring with the Patina process is sensitive to the chemical composition of the reactants, their concentrations, the exposure time, and the neutralization process. The variability of the Patina process and the mentioned factors, added to the customer complaints, make the brass products the target of this project.

The inspection of an incoming container shows that approximately 50% of the received chassis lots are acceptable according to the quality standards used for this purpose. This data also shows that the three main reasons for rejecting the chassis components are pieces with different tones, discolored pieces, and stained pieces. Figure 1b shows an example of a nominally colored chassis, Fig. 1a shows a representative sample of a light piece, and Fig. 1c a dark piece.

Fig. 1 Brass chassis tones: a light, b nominal, and c dark pieces

2.2 Initial Statement

This case presents a common scenario faced by producers when combining different vendors located around the world. At arrival, the components are inspected for quality using an AQL process at the incoming inspection area. Among these components, the patinated chassis fixture is inspected dimensionally and visually for color. The scrap level observed for this component is around 30 to 50% of the received fixture lots. Figure 2 shows details of the common defects found in the initial inspection, which include cosmetic defects such as staining spots (Fig. 2a, b), non-patinated (discolored) areas (Fig. 2c, d), and components with different coloring tones (Fig. 2e, f).

Fig. 2 Rejected samples at the incoming inspection station: a, b stained samples; c, d discolored areas; and e, f different colorations

Figure 3a shows the results for a selected container at the incoming inspection area. The acceptable components represent around 47% of the container, the components showing Patina process defects range around 38% (see the different-tone and discolored bars in Fig. 3a), stained components are about 12%, and incomplete fixtures or other quality-related issues account for the rest of the components. On the other hand, data recorded during a full year at the quality control station of the assembly process show that approximately 20% of the product has some defect: 77% of the defective products are classified as stained (see Fig. 3b), 10% are incomplete products, 7% are functionally defective (electrical, mechanical, etc.), and the rest are scrap parts related to non-repairable products (physical damage, broken lenses, etc.). Regarding the defective components found at the incoming inspection area, the vendor (located overseas) responded with a temporary containment action plan that includes increasing the lot sizes and the number of shipments; additionally, the vendor is absorbing the inspection, replacement, and management costs.

Fig. 3 Defective-fixture Pareto charts from a a specific container at the incoming inspection station, and b the data collected at the inspection station of the assembly line
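
As a small illustration of how a Pareto breakdown like the one in Fig. 3b can be produced, the sketch below sorts defect counts and computes cumulative percentages; the counts are invented and only mimic the proportions quoted above.

    # Pareto-chart data sketch for the assembly-line defect categories.
    # Counts are invented to roughly mimic the proportions quoted in the text.
    defects = {"stained": 770, "incomplete": 100, "functional": 70, "non-repairable": 60}

    total = sum(defects.values())
    cumulative = 0
    for category, count in sorted(defects.items(), key=lambda kv: -kv[1]):
        cumulative += count
        print(f"{category:15s} {count:5d}  {100 * count / total:5.1f}%  cum {100 * cumulative / total:5.1f}%")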

2.3 Problem Description

Figure 4 shows a Suppliers, Inputs, Process, Outputs, Customers (SIPOC) diagram representing the assembly process at the Juarez facility. For confidentiality, the supplier and customer names from the original diagram were changed to generic ones; however, their roles are identified. In the “Inputs” column, the diagram shows only the six components needed for the assembly of the described product. The “Customers” are classified by size (retail, wholesale, and special orders) and type (internal or external).

Fig. 4 Graphical representation of the SIPOC and the outdoor lighting assembly process steps

For simplification, in this document the assembly process is represented by five general stages (see Fig. 4): chassis assembly, wiring, optical assembly, inspection, and packaging. In the chassis assembly step, the chassis is prepared, the mechanical functionality of the components is inspected, and the components are assembled. The assembled components are then placed in a holding fixture that is transported to the wiring station, where the wiring and soldering are performed. The lens, O-rings, and light bulb (or LEDs, depending on the model) are assembled onto the chassis, and then the product is tested for electrical safety. A final visual inspection and functionality test (electrical and mechanical) are performed at the inspection station. The product is finally moved to the packaging area, where the operator performs a final visual inspection of the product while packing it according to the customer specifications.

2.4 Methodology

The Lean-Sigma approach used by the authors in this research, as explained earlier, is based on five rapid-improvement steps (depicted in Fig. 5): identify and measure the problem, perform the root cause analysis, develop a solution, verify the solution, and make a control plan. The DFSS steps used in the solution stage of this case are also shown. For the design of the solution, the key considerations leading to the design specifications are based on the voice of the customer (VoC), which in this case combines the final customer requirements and the quality control user’s needs. The specifications for designing the color testing device are based on the Quality Function Deployment (QFD) technique. The QFD is used for defining the conceptual and functional designs and for refining and defining the product specifications; in this case, the QFD data was used for listing the specifications of the color testing device. A series of simulations and some partial prototypes were developed to validate the final design in the next step of the Lean-Sigma sequence.

Fig. 5 The methodology used for solving the problem presented in this case study

2.5 Results

The methodology presented in Fig. 5 is used to address the problem requirements and find a solution, as shown in the following points, which describe the most relevant results.

2.5.1 Step One. Identify and Measure the Problem

Step one is a careful, but quickly conducted, inspection of the incoming chassis components. The collected data identify the problem as a color variation of the brass chassis component reaching the customer site. Using the initial measuring methodology, it is determined that 38% of the parts have color variation issues.

2.5.2 Step 2. Root Cause Analysis

Step two is conducted using a brainstorming session. After considering a large list of ideas, the causes were reduced to two:

1. Color variation at the customer site caused by the inability to effectively separate the parts into ranges of color intensity. This cause is directly related to the visual measuring system currently used.

2. Color variation caused by the patina process at the supplier facility.

2.5.3 Step 3. Develop a Solution

The first solution was directed toward the measuring system inside the manufacturing facility. The proposed solution was to design a measuring device with the ability to quantify any color deviation from the customer-specified color. Six DFSS tools are used to design the device, as depicted in Fig. 5 and described as follows:

2.5.3.1 Step 3.1 Voice of the Customer (VoC)

The customer uses the outdoor-light product in batches of six or more parts. The customer expects a standardized color pattern for all of the parts in a batch, and expects it to stay that way for the warranted life of the product. Additionally, if a component needs to be replaced at a later date, the customer expects a chassis with the same color as the installed ones.

On the other hand, the user of the measuring system identifies the following characteristics as requirements for the device: the device must show the data to the user (inspection); it must be able to connect and transfer data to a computer and data center; it has to control the tested component and the surrounding illumination; it must be heavy duty and reliable for long-term use; it must give quick responses and readings; it must use a quick testing procedure; and it must be easy to operate.

2.5.3.2 Step 3.2 Conceptual and Functional Design

The purpose of these two steps is to obtain the design specifications that will be used as guidelines for the product requirements, in this case the characteristics of the testing device. The goal of the conceptual design is to determine the features and characteristics as the customer envisions them. First, the characteristics are listed and ranked according to the customer’s appreciation and perceived importance, and a weight is given to each of the listed characteristics. Then, a conceptual drawing and integration are presented to the customers for review. After the conceptual design step is concluded, the functional design requires the team to define the engineering parameters and the tolerances of each measured parameter. The main requirement is that the device has to measure the color of the piece. To accomplish this, three components of the color have to be sensed or measured: hue, saturation, and brightness. These data are used for defining the color identification (based on a predefined code), and are also useful for discriminating acceptable pieces from light and dark pieces. Also, in the functional design step, it is necessary to define the working conditions under which the device must perform robustly. Here, for an accurate measurement of the color of the piece, the signal reaching the measuring device has to be as clean as possible; other sources of color must therefore be reduced or eliminated. The device must also have components robust enough for the working load at the factory and be flexible enough to be integrated into the production process in its current state or with minor modifications. Last, in this step it is important to analyze the relationships among the different functions in order to establish restrictions and functional requirements.

Finally, a Quality Function Deployment (QFD) diagram is used to merge the gathered information and to rank the main product characteristics as given by the customers. The information is also used to list and rank the engineering functions of the color measurement device. Once the ranges, units, and values have been determined, the requirements and relationships are used for defining the product design specifications or engineering requirements. The resultant values and considerations are seen at the bottom of Fig. 6, where the product design specifications and requirements for designing the Color Measuring Equipment (CME) are shown. Along with this exercise, the team benchmarked the market for available devices (not shown in Fig. 6) to make a feasibility study and determine the most efficient components to use.

Fig. 6 QFD for the design of the color measuring equipment to determine chassis coloration
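
As an illustration of how a QFD relationship matrix can be reduced to a ranking of engineering characteristics, the sketch below computes weighted technical importance scores; the requirement names, weights, and relationship strengths are invented for the example and do not reproduce the actual matrix in Fig. 6.

    # QFD-style ranking sketch: customer importance weights x relationship strengths
    # (1-3-9 scale) give a technical importance score per engineering characteristic.
    # All names and numbers here are illustrative, not the values of Fig. 6.
    import numpy as np

    customer_needs = ["uniform color", "fast test", "easy to operate", "reliable"]
    weights = np.array([5, 4, 3, 4])  # customer-assigned importance (1-5)

    characteristics = ["hue accuracy", "cycle time", "screen interface", "enclosure robustness"]
    # Rows: customer needs; columns: engineering characteristics (9=strong, 3=medium, 1=weak, 0=none)
    relationship = np.array([
        [9, 1, 0, 3],
        [1, 9, 3, 0],
        [0, 3, 9, 1],
        [3, 0, 1, 9],
    ])

    scores = weights @ relationship  # technical importance per characteristic
    for name, score in sorted(zip(characteristics, scores), key=lambda p: -p[1]):
        print(f"{name:22s} {score}")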

2.5.3.3 Step 3.3 Evaluation of Alternatives

The goal of this evaluation is to determine the most suitable components and materials for the defined specifications. For this task there are several decision tools, such as CES EduPack® (see Ashby 2008), which helps in choosing materials by considering mechanical and physical characteristics, CO2 profiles, and costs, among other parameters. For this step it is also possible to use data from the company’s purchasing system, or to rely on the advice of suppliers and consultants for making the selection. From this evaluation, a list of materials and components is generated, including the programming software and information technologies to be used.

2.5.3.4 Step 3.4 Simulations

The goal of the simulation step is to design and integrate the components of the device. CAD software is used for making real-size drawings of the components and pieces, as designed and shown in Fig. 7b. Some features of the software helped in simulating the assembly and functionality of the device. Mechanical characteristics and other properties can be simulated using several available software options, which allow adjustments to be made and the drawings to be prepared for manufacturing the final pieces. In this case, SolidWorks® was successfully used for these tasks. For the electronics and programming there are also several options, but the idea is similar: to integrate the electronic components and to program the features and the interfaces between the components and with the final user. In this case, the programming was based on an Arduino® platform (see Fig. 7a).

Fig. 7 a Scheme of the device’s system and b prototype for detecting the color components of the received product at the incoming inspection station

2.5.3.5 Step 3.5 Materials and Programming

Once the components, pieces, electronics, and programs have been simulated, the next step is to acquire and integrate partial prototypes for physical evaluation. The advantage of having partial prototypes is that it is possible to make corrections and adjustments to a specific function or program without compromising the rest of the functions of the final prototype. After testing the different functions and prototypes, it is possible to move to the next step.

2.5.3.6 Step 3.6a Prototype Description

The final prototype integrates all the functions, and the challenge in this step is to make them work together. Figure 7 shows the integration of the color measuring device; Fig. 7a shows the basic scheme of the integration. The functionality of the prototype is as follows: the prototype is based on a series of sensors that measure frequency for the hue component of the color readings. A lux meter is used for measuring the brightness and saturation components and, finally, a microcontroller board (Arduino UNO) is used to calculate and record the data. The recorded data are then displayed on a portable touch screen, through which the user can interact with the system. In the prototype, the sensors are located in an enclosed chamber that insulates the system from external light (see Fig. 7b). The prototype includes a fixture for holding and moving the component; the device is also connected to an external monitor (not included in the figure) as a visual interface for the user.

The operation and use of the device shown in Fig. 7 is as follows: after a chassis is placed in the dark chamber for evaluation, the reflected white light is received by the sensors. The detected readings and data variations of the RGB-W colors (red, green, blue, and white) are then converted to colorimetric values: hue (in degrees), saturation (in percent), and brightness (in percent). The color is identified, and the data are stored, used for statistics, and displayed to the user. The station then opens the chamber so the chassis can be exchanged for the next inspection. The next step in the solution process is to work with the vendors and customers to determine the characteristics of the chassis color in the final product. The required color was agreed with the customers and vendors to be an antique brass color, defined by a selection of masterpieces that serve as references for a qualitative classification. Thus, along with the nominal masterpieces, the team agreed to classify the “clear” pieces as lighter than the nominal antique brass and the “dark” pieces as darker than the nominal ones.
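
The conversion from RGB readings to hue/saturation/brightness can be sketched with Python’s standard colorsys module, as shown below; the raw sensor counts and their normalization are hypothetical, and the actual device firmware on the Arduino would implement an equivalent calculation in its own language.

    # Sketch of the RGB -> hue/saturation/brightness conversion described above.
    # The raw sensor counts and the normalization by the white (clear) channel
    # are hypothetical; the actual device reports hue as a frequency value.
    import colorsys

    def rgbw_to_hsb(red, green, blue, white):
        """Normalize RGB counts by the white channel and return HSB values."""
        r, g, b = (min(c / white, 1.0) for c in (red, green, blue))
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        return h * 360.0, s * 100.0, v * 100.0  # hue in degrees, saturation/brightness in %

    # Example reading from a brass chassis (made-up counts)
    hue, sat, bright = rgbw_to_hsb(red=412, green=305, blue=120, white=520)
    print(f"hue={hue:.1f} deg, saturation={sat:.1f}%, brightness={bright:.1f}%")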

2.5.3.7 Step 3.6b Prototype Validation

Gage Repeatability and Reproducibility (Gage R&R) studies were used during the calibration and validation process. Three factors were identified as the main sources of variation: the geometry of the piece to be measured, the use of a trigger button attached to the prototype, and the combination of coding pins used for transmitting the collected data to the Arduino. For the calibration, the team used 100 selected masterpieces matching the antique brass color as defined and described previously. The measuring process required at least six recordings for each piece and position. After several iterations of changes in the fixtures and adjustments in the sensors’ positions, the variation was measured at less than 10% for each of the color variables (8.67% in the example for the hue in Fig. 8), which was considered acceptable for these prototypes. After the calibration process, the team approved the prototype for integration and replication at the final inspection stations, where the units were adapted to the inspection operation in the production process.

Fig. 8 Gage R&R study for the hue during the calibration process
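
A full crossed Gage R&R study is normally run in a statistics package, but the basic repeatability-versus-total-variation idea behind the percentage reported in Fig. 8 can be sketched as below. This simplified version only pools the within-piece variation of repeated hue readings against part-to-part variation; the readings are invented, and this is not the exact study the team ran.

    # Simplified repeatability sketch: measurement (within-piece) variation as a
    # percentage of total study variation, from repeated hue readings of masterpieces.
    # The readings below are invented; the real study used 100 pieces and >= 6 readings each.
    import numpy as np

    # rows = masterpieces, columns = repeated readings of the hue (Hz)
    readings = np.array([
        [274.8, 275.1, 274.6, 275.0, 274.9, 275.2],
        [272.1, 272.4, 271.9, 272.3, 272.0, 272.2],
        [276.0, 275.7, 276.2, 275.9, 276.1, 275.8],
        [270.5, 270.8, 270.4, 270.6, 270.7, 270.5],
    ])

    within_var = readings.var(axis=1, ddof=1).mean()   # repeatability (equipment) variance
    part_var = readings.mean(axis=1).var(ddof=1)       # part-to-part variance of piece means
    total_var = within_var + part_var

    pct_grr = 100.0 * np.sqrt(within_var / total_var)  # % of total study variation
    print(f"repeatability = {pct_grr:.2f}% of total variation")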

2.6 Step 4. Verify Solution

After the calibration of the prototypes, and by measuring the nominal masterpieces, a nominal value of 272.5 Hz was set as the target for the hue component, with 8.35% and 32% for the saturation and brightness, respectively. The data from the accepted (nominal) pieces were also used for establishing the range and limit values for saturation, brightness, and hue, and for analyzing the capability of the process. The data collected in Fig. 9 show that it is possible to use the given groups and limits for differentiating the colored pieces and their groups. The next step is then to measure the possibility of having a piece in an in-between zone.

Fig. 9 Masterpiece data used for determining the coloring components and ranges of variation: a saturation, b brightness, and c hue data

The central group has a mean of 275 Hz for the hue values, which is close to the target (272.5 Hz), but it is also close to the darker group, whose mean is only about 7 Hz away; it appears sufficiently different from the light-colored pieces, whose mean differs by about 15 Hz. Figure 9b, c shows the values and basic statistics for the saturation (Fig. 9b) and brightness (Fig. 9c). The nominal pieces show saturation values with a mean of about 8%, with a difference close to 1% between group means. Here, the dark-piece group seems to be well separated from the other two groups, while the light and nominal groups may overlap at their extreme points. The nominal (or normal) brightness values are around 31.7% (see Fig. 9c). Here the data overlap: the clear and acceptable groups appear to be part of the same population, overlapping and interacting with the dark-piece data. To determine the degree of overlap and the variation of the color components in each of the groups, the team used ANOVA. Using these data and the observed behavior of the color-component variation, the team decided to establish the nominal limits and present the initial standardized values. For the hue, the nominal frequency was set at 272.5 Hz with a tolerance of ±10 Hz; the saturation limits were set between 6.97 and 9.71%, and the brightness between 31.74 and 32.26%. The limits and range values for the color components were then added to the CME controller program for labeling and identifying the color of the chassis under test (a classification sketch is shown below). The screen image in Fig. 10a exemplifies the user interface and the readings, where the values for each color component are displayed to the user. The same Fig. 10a shows the suggested identification for one test piece, displaying the assignment to one of the three color groups. The screen also shows options for saving the data to the database or deleting the displayed results.
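
A minimal version of the labeling logic programmed into the CME controller might look like the sketch below. The limit values come from the paragraph above, while the function name, the direction assumed for light versus dark hue readings, and the decision to treat brightness as a reference only are illustrative assumptions.

    # Sketch of the chassis-labeling logic using the limits stated above:
    # hue 272.5 +/- 10 Hz, saturation 6.97-9.71 %, brightness 31.74-32.26 % (reference only).
    HUE_NOMINAL, HUE_TOL = 272.5, 10.0
    SAT_LIMITS = (6.97, 9.71)
    BRIGHT_LIMITS = (31.74, 32.26)

    def classify_chassis(hue_hz, saturation_pct, brightness_pct):
        """Return 'light', 'nominal', or 'dark' plus a brightness flag (illustrative logic)."""
        if hue_hz > HUE_NOMINAL + HUE_TOL:
            tone = "light"   # assumption: higher frequency read as a lighter piece
        elif hue_hz < HUE_NOMINAL - HUE_TOL:
            tone = "dark"
        elif not (SAT_LIMITS[0] <= saturation_pct <= SAT_LIMITS[1]):
            tone = "light" if saturation_pct < SAT_LIMITS[0] else "dark"
        else:
            tone = "nominal"
        brightness_ok = BRIGHT_LIMITS[0] <= brightness_pct <= BRIGHT_LIMITS[1]
        return tone, brightness_ok

    print(classify_chassis(275.0, 8.3, 31.9))   # -> ('nominal', True)
    print(classify_chassis(259.0, 8.3, 31.9))   # -> ('dark', True)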

Fig. 10 a Screen that serves as the interface with the user, and boxplots after the variance testing for the selected masterpieces: b hue, c saturation, and d brightness

After the calibration and the establishment of the limits, the team ran ANOVA on the three color components’ data (hue, saturation, and brightness). These data are used to calculate the confidence level for differentiating the nominal groups and pieces. Figure 10b–d shows the boxplots accompanying the ANOVA results. Figure 10b shows the boxplots for the hue data, where the light and dark masterpiece data are compared; from the ANOVA results it is possible to establish a statistical difference from the nominal color data with a confidence level of 98.95%. Comparable results are shown for the saturation factor in Fig. 10c: it is possible to differentiate the saturation values of the masterpiece groups, and this could be used for identifying the group to which a chassis belongs. For these data, the confidence level for detecting statistical differences among the masterpiece groups is calculated at 96.72%. However, in the case of the brightness data (Fig. 10d), the clear and nominal groups show similar distributions, separated from the dark pieces’ box. The confidence level for using brightness as a color-identifier factor is calculated at 56.05%, which is expected and acceptable because the polishing process is the same for all the pieces. The team therefore determined that, when classifying a piece as nominal or not, brightness is less critical and will be considered as a reference value: a parameter to consider for identifying a difference in the color and quality of an incoming chassis.
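
The confidence levels quoted above can be reproduced in spirit with a one-way ANOVA per color component. The sketch below uses scipy.stats.f_oneway on invented group data and reports (1 − p) × 100 as the confidence of a difference among groups, which is how the team’s percentages appear to be expressed.

    # One-way ANOVA sketch per color component, comparing light / nominal / dark groups.
    # The group data are invented; confidence is reported as (1 - p-value) * 100.
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(1)
    light   = rng.normal(290.0, 4.0, 30)   # hypothetical hue readings (Hz)
    nominal = rng.normal(275.0, 4.0, 30)
    dark    = rng.normal(268.0, 4.0, 30)

    f_stat, p_value = f_oneway(light, nominal, dark)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}, confidence = {100 * (1 - p_value):.2f}%")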

Figure 11 shows the initial capability analysis at the incoming inspection station for each of the measured variables. The hue data (Fig. 11a) show that the received lots average about 50% conforming material, 47.5% dark-colored material, and 7.5% light pieces. Consequently, the process capability index (Cpk) calculated for this study is 0.04, with an expected out-of-specification proportion of 11.3% below the lower limit; dark pieces are expected to be 46.9%, and pieces in the nominal range 58.2% of the received materials. For the saturation data shown in Fig. 11b, the calculated Cpk is 0.27, with 28% of the sample materials out of the nominal values, mainly on the lower side, and a total of 32.5% expected for the population: 27.8% of the received chassis are expected to have lower saturation percentages and 0.2% to be over-saturated. Similarly, the brightness Cpk is calculated at 0.25, and nominal brightness accounts for 56.2% of the pieces. The data appear centered, with a slight tendency (24.4%) to be brighter than nominal (see Fig. 11c); the expected out-of-limit proportions are 23.1% and 25% on the opaque and over-bright sides, respectively.

Fig. 11 Initial process capability analysis (Cpk) using the calibrated equipment, for each of the color components of a selected batch: a hue, b saturation, and c brightness
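
The capability indices in Fig. 11 follow the usual short-term Cpk formula. A minimal sketch of the calculation is shown below, with hypothetical hue data and specification limits taken from the established nominal range (262.5 and 282.5 Hz).

    # Process capability sketch: Cpk and expected out-of-specification fractions
    # for the hue, using the nominal range 272.5 +/- 10 Hz. Data are hypothetical.
    import numpy as np
    from scipy.stats import norm

    lsl, usl = 262.5, 282.5                       # specification limits (Hz)
    rng = np.random.default_rng(7)
    hue = rng.normal(281.0, 6.0, 200)             # incoming-inspection readings (made up)

    mu, sigma = hue.mean(), hue.std(ddof=1)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)

    p_low = norm.cdf(lsl, mu, sigma)              # expected fraction below LSL
    p_high = 1 - norm.cdf(usl, mu, sigma)         # expected fraction above USL
    print(f"Cpk = {cpk:.2f}, expected below LSL = {p_low:.1%}, above USL = {p_high:.1%}")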

2.7 Step 5. Control Plan

Changes in the process have been proposed to incorporate the findings of this research. The received chassis lots are now inspected and separated into three groups: light, nominal, and dark colored. Then, for the non-conforming components, a reworking process starts with a sandblasting step, followed by a second Patina chemical treatment, described as follows: the prepared mixture (described above) is heated to 95 °C, the cleaned pieces are hung in it for 1 h, and the mixture is stirred every 5 min. After a visual inspection and a wax polishing finish, the reworking step is complete. Without considering the data of the reworked pieces, the efficiency in the use of the color testing device is monitored and a learning curve is plotted. Once the users of the CME in the production area reach 85% efficiency on their learning curve for the testing device, the team collects data samples from the received lots. Then, to consider the Lean-Sigma process complete, the team compares these data with the initial receiving data (at the time the project started). The comparison of the conditions of the chassis batches and their color distribution, the behavior of the mean, and the variance values for the hue are shown in Fig. 12. For the comparison with the current process, Fig. 12 shows the two-sample test results (Fig. 12a) and a representative boxplot with the established USL and LSL limits marked for reference (Fig. 12b). The test (p-value = 0.00) shows that there is a significant difference between the initial mean and the current (final) data. The boxplot shows that the initial data has a median and mean closer to the USL, while the same reference points for the after-filtering data are close to each other, with a tendency toward the LSL.

Fig. 12 a Two-sample t test results for the initial recorded hue values vs. data after using the color testing device, and b the corresponding boxplot comparison
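
The before/after comparison in Fig. 12a is a standard two-sample t test; a sketch of the comparison with scipy.stats.ttest_ind on invented hue samples is shown below.

    # Two-sample t test sketch comparing initial hue data vs. data collected after
    # the color testing device was put in use. Samples are invented for illustration.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(3)
    hue_initial = rng.normal(280.0, 7.0, 80)   # before: shifted toward the USL, more spread
    hue_after   = rng.normal(271.0, 3.0, 80)   # after: tighter, tending toward the LSL

    t_stat, p_value = ttest_ind(hue_initial, hue_after, equal_var=False)  # Welch's t test
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    print("difference in means:", hue_initial.mean() - hue_after.mean(), "Hz")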

With the limits established and the process under control, the use of an I-MR chart is proposed for monitoring the process. The I-MR chart in Fig. 13 plots the individual color observations and moving ranges for the initial data and for the data after implementation of the solution. As seen in the figure, the initial data show a clear dispersion of the chassis color data, with most observations outside the specified limits. The data after implementation of the solution, however, show controlled color means and moving ranges, as seen on the right side of the chart in Fig. 13. The control achieved by using the color testing device is also observed in the boxplot comparison in Fig. 14a: the quartiles are more balanced than in the initial data plot, and the median of the hue values is closer to the target value in the after-implementation data. Also, to determine and compare the capability of the current process against the initial capability, Fig. 14b shows the capability analysis and comparison. From Fig. 14b it can be noticed that the Cpk moved from 0.036 to 0.65; this value is low compared to the conventionally acceptable Cpk (1.33). However, the use of the color testing device helped move the yield of nominal-colored pieces from 50 to 97.24% and provides the starting point for the next steps in the continuous improvement process.
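
The I-MR monitoring proposed here uses the standard individuals and moving-range control limits. The sketch below computes those limits from after-implementation hue data (invented values) so out-of-control points can be flagged; plotting is left to any charting library.

    # I-MR chart sketch: individuals and moving-range control limits for hue data.
    # Constants 2.66 and 3.267 are the standard I-MR factors for moving ranges of size 2.
    import numpy as np

    hue = np.array([271.8, 272.4, 271.2, 273.0, 272.1, 270.9, 272.6, 271.5,
                    272.9, 271.1, 272.3, 271.7])          # after-implementation readings (made up)

    mr = np.abs(np.diff(hue))
    x_bar, mr_bar = hue.mean(), mr.mean()

    i_ucl, i_lcl = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar   # individuals chart limits
    mr_ucl = 3.267 * mr_bar                                        # moving-range chart upper limit

    print(f"I chart: center={x_bar:.2f}, limits=({i_lcl:.2f}, {i_ucl:.2f})")
    print(f"MR chart: center={mr_bar:.2f}, UCL={mr_ucl:.2f}")
    print("out-of-control individuals:", np.where((hue > i_ucl) | (hue < i_lcl))[0])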

Fig. 13 I-MR chart comparing the initial data and the data after implementation of the inspection, for the hue color component

Fig. 14 a Boxplot and b process capability graphs comparing the initial and after-implementation data

Finally, the next step in this project is to analyze, adjust, and optimize the Patina formula and its process, to achieve reduced variation and more controlled (target) color values as established by the design specifications. This is to be followed by an analysis of the cleaning process after the Patina treatment, to reduce the stains and cosmetic marks on the final product.

3 Conclusions

The case presents the solution procedure in a final assembly facility located in Juarez, Mexico, with the described characteristics of LA restrictive conditions. The solution efforts are focused on the chassis, which is the main component of an outdoor lighting fixture. The problem was detected from a customer’s complaint and is described as a difference in color between the received products and the later replacement fixtures. As a containment plan, the components received from an overseas supplier were subjected to 100% incoming inspection. To find a solution, the team decided to combine the Lean-Sigma methodology and DFSS tools for designing and implementing a color testing device at the incoming and final inspection stations. The inspection apparatus was designed and validated by the team, using the qualitative inspection data to determine standardized target and limit values for the color components. As a result, the qualitative color-tone inspection, based on a visual separation of the chassis, was substituted by a quantitative inspection process using the designed measuring device. The measurement is based on the three color components (hue, saturation, and brightness), which are used for determining the color values. The obtained readings are then compared with the ranges calculated and established for accepting the incoming chassis. After implementation, the process capability moves the outcome from a 50% probability of sending mismatched colored products to the customer to a 97.2% probability of sending chassis and products within the acceptable color range.

These results are now part of the continuous improvement process at the facility; at the time of this report, a mixture design had been proposed for correcting and adjusting the Patina process. The results of the mixture design will be useful for controlling the chemical coloring during the reworking procedure. It is also proposed to make changes in the cleaning and neutralizing process and thereby reduce the second most important problem cause (stained chassis). Finally, as a secondary result of this project, the management of this company is venturing into the use of other coloring tones for the same product, expecting to increase the business outcome by offering options during bidding for similar products and by diversifying its offers to the market.

4 Final Remarks

Managers in a restrictive environment face the challenge of solving problems at high speed but with limited resources. For such conditions, experts have developed tools and methods that combine elements of different efficient methodologies. Such is the case of the Lean-Sigma approach, which has proven to be an efficient way to solve problems under these conditions. The case study shows that it is possible to use Lean-Sigma in combination with other methodologies, such as DFSS, for problem solving under the mentioned conditions, taking advantage of the fast response of Lean while keeping the deep statistical analysis of Six Sigma. In this case, the use of Lean-Sigma with Design for Six Sigma boosted the solution process and complemented the improvement and corrective actions, as the process moved from 50% defective to about 2% at the end of this report.