Hypothesis Testing

Hypothesis testing helps an organization determine whether making a change to a process input (x) significantly changes the output (y) of the process. It statistically determines whether there are differences between two or more process outputs. Hypothesis testing helps determine whether the variation between groups of data is due to true differences between the groups or is the result of common cause variation, which is the natural variation in a process.

This tool is most commonly used in the Analyze step of the DMAIC method to determine if different levels of a discrete process setting (x) result in significant differences in the output (y). An example would be “Do different regions of the country have different defect levels?” This tool is also used in the Improve step of the DMAIC method to prove a statistically significant difference between “before” and “after” data. It identifies whether a particular discrete x has an effect on the y and checks for the statistical significance of differences. In other words, it helps determine if the difference observed between groups is bigger than what you would expect from common-cause variation alone. The test gives a p-value, which is the probability of observing a difference as big as the one observed if common-cause variation were the only source of variation. It can be used to compare two or more groups of data, such as “before” and “after” data.

Hypothesis testing assists in using sample data to make decisions about population parameters such as averages, standard deviations, and proportions. Testing a hypothesis using statistical methods is equivalent to making an educated guess based on the probabilities associated with being correct. When an organization makes a decision based on a statistical test of a hypothesis, it can never know for sure whether the decision is right or wrong, because of sampling variation. Regardless of how many times the same population is sampled, it will never result in the same sample mean, sample standard deviation, or sample proportion. The real question is whether the differences observed are the result of changes in the population, or the result of sampling variation. Statistical tests are used because they have been designed to minimize the number of times an organization can make the wrong decision. There are two basic types of errors that can be made in a statistical test of a hypothesis:

  1. A conclusion that the population has changed when in fact it has not.
  2. A conclusion that the population has not changed when in fact it has.

The first error is referred to as a type I error. The second error is referred to as a type II error. The probability associated with making a type I error is called alpha (α) or the α risk. The probability of making a type II error is called beta (β) or the β risk. If the α risk is 0.05, any determination from a statistical test that the population has changed runs a 5% risk that it really has not changed. There is a 1 – α, or 0.95, confidence that the right decision was made in stating that the population has changed. If the β risk is 0.10, any determination from a statistical test that there is no change in the population runs a 10% risk that there really may have been a change. There would be a 1 – β, or 0.90, “power of the test,” which is the ability of the test to detect a change in the population. A 5% α risk and a 10% β risk are typical thresholds for the risk one should be willing to take when making decisions utilizing statistical tests. Based upon the consequence of making a wrong decision, it is up to the Black Belt to determine the risk he or she wants to establish for any given test, in particular the α risk. β risk, on the other hand, is usually determined by the following:

  • δ: The difference the organization wants to detect between the two population parameters. Holding all other factors constant, as the δ increases, the β decreases.
  • σ: The average (pooled) standard deviation of the two populations. Holding all other factors constant, as the σ decreases, the β decreases.
  • n: The number of samples in each data set. Holding all other factors constant, as the n increases, the β decreases.
  • α: The alpha risk or decision criteria. Holding all other factors constant, as the α decreases, the β increases.

Most statistical software packages will have programs that help determine the proper sample size, n, to detect a specific δ, given a certain σ and defined α and β risks.
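As an illustration of that calculation, the common normal-approximation formula n = ((Zα/2 + Zβ)·σ/δ)² can be sketched in a few lines of Python. The function name and its defaults below are illustrative, not from any particular package:

```python
from math import ceil
from scipy.stats import norm

def sample_size(delta, sigma, alpha=0.05, beta=0.10, two_sided=True):
    """Approximate n per group needed to detect a shift of `delta`
    with the stated alpha and beta risks (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(1 - beta)
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Detect a 4-ton shift when sigma = 20, with 5% alpha and 10% beta risk
n = sample_size(delta=4, sigma=20, alpha=0.05, beta=0.10)
```

Note how the result grows as δ shrinks or σ grows, matching the relationships listed above.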


How does an organization know if a new population parameter is different from an old population parameter? Conceptually, all hypothesis tests are the same in that a signal (δ)-to-noise (σ) ratio is calculated (δ/σ) based on the before and after data. This ratio is converted into a probability, called the p-value, which is compared to the decision criteria, the α risk. Comparing the p-value (which is the actual α of the test) to the decision criteria (the stated α risk) will help determine whether to state the system has or has not changed.
Unfortunately, a decision in a hypothesis test can never conclusively be defined as a correct decision. All the hypothesis test can do is minimize the risk of making a wrong decision. Conducting a hypothesis test is analogous to a prosecuting attorney trying a case in a court of law. The objective of the prosecuting attorney is to collect and present enough evidence to prove beyond a reasonable doubt that a defendant is guilty. If the attorney has not done so, then the jury will assume that not enough evidence has been presented to prove guilt; therefore, they will conclude the defendant is not guilty. Similarly, if one wants to make a change to an input (x) in an existing process to achieve a specified improvement in the output (y), he or she will need to collect data after the change in x to demonstrate beyond some criteria (the α risk) that the specified improvement in y was achieved.

The following steps describe how to conduct a hypothesis test:

  1.  Define the problem or issue to be studied.
  2.  Define the objective.
  3. State the null hypothesis, identified as H0.
    The null hypothesis is a statement of no difference between the before and after states (similar to a defendant being not guilty in court).
    H0: μbefore = μafter
    The goal of the test is to either reject or not reject H0.
  4. State the alternative hypothesis, identified as Ha.
    • The alternative hypothesis is what one is trying to prove and can be one of the following:
    • Ha: μbefore ≠ μafter (a two-sided test)
    • Ha: μbefore < μafter (a one-sided test)
    • Ha: μbefore > μafter (a one-sided test)
    • The alternative chosen depends on what one is trying to prove. In a two-sided test, it is important to detect differences from the hypothesized mean, μbefore, that lie on either side of μbefore. The α risk in a two-sided test is split on both sides of the histogram. In a one-sided test, it is only important to detect a difference on one side or the other.
  5. Determine the practical difference (δ).
    The practical difference is the meaningful difference the hypothesis test should detect.
  6. Establish the α and β risks for the test.
  7. Determine the number of samples needed to obtain the desired β risk. Remember that the power of the test is (1-β).
  8. Collect the samples and conduct the test to determine a p-value.
    Use a software package to analyze the data and determine a p-value.
  9. Compare the p-value to the decision criteria (α risk) and determine whether to reject H0 in favor of Ha, or not to reject H0.
    • If the p-value is less than the α risk, then reject H0 in favor of Ha.
    • If the p-value is greater than the α risk, there is not enough evidence to reject H0.
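Steps 8 and 9 can be sketched in Python with a two-sample comparison of “before” and “after” data. The data values below are invented for illustration:

```python
from scipy import stats

alpha = 0.05  # step 6: the alpha risk (decision criterion)

# Step 8: hypothetical "before" and "after" samples
before = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]
after = [4.6, 4.7, 4.5, 4.8, 4.6, 4.7, 4.5, 4.6]

# Two-sided test of H0: mu_before = mu_after
t_stat, p_value = stats.ttest_ind(before, after)

# Step 9: compare the p-value to the alpha risk
reject_h0 = p_value < alpha
```

With these made-up samples the p-value falls well below 0.05, so H0 would be rejected in favor of Ha.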

Depending on the population parameter of interest, there are different types of hypothesis tests; these types are described in the following table. The table is divided into two sections: parametric and non-parametric. Parametric tests are used when the underlying distribution of the data is known or can be assumed (e.g., the data used for t-testing should follow the normal distribution). Non-parametric tests are used when there is no assumption of a specific underlying distribution of the data.

Terminology used in Hypothesis Testing

A number of commonly used hypothesis test terms are presented below.

  1. Null Hypothesis

    This is the hypothesis to be tested. The null hypothesis directly stems from the problem statement and is denoted as H0. Examples:

    • If one is investigating whether a modified seed will result in a different yield/acre, the null hypothesis (two-tail) would assume the yields to be the same, H0: Ya = Yb.
    • If a strong claim is made that the average of process A is greater than the average of process B, the null hypothesis (one-tail) would state that process A ≤ process B. This is written as H0: A ≤ B.

    The procedure employed in testing a hypothesis is strikingly similar to a court trial. The hypothesis is that the defendant is presumed not guilty until proven guilty. However, the term innocent does not apply to a null hypothesis. A null hypothesis can only be rejected or fail to be rejected; it cannot be accepted, because a lack of evidence to reject it does not prove it true. If the means of two populations are different, the null hypothesis of equality can be rejected if enough data is collected. When rejecting the null hypothesis, the alternate hypothesis must be accepted.

  2. Test Statistic

    In order to test a null hypothesis, a test calculation must be made from sample information. This calculated value is called a test statistic and is compared to an appropriate critical value. A decision can then be made to reject or not reject the null hypothesis.

  3. Types of Errors

    When formulating a conclusion regarding a population based on observations from a small sample, two types of errors are possible:

    • Type I error: This error occurs when the null hypothesis is rejected when it is, in fact, true. The probability of making a type I error is called α (alpha) and is commonly referred to as the producer’s risk (in sampling). Examples are: incoming products are good but called bad; a process change is thought to have made a difference when, in fact, there is no difference.
    • Type II error: This error occurs when the null hypothesis is not rejected when it should be rejected. This error is called the consumer’s risk (in sampling) and is denoted by the symbol β (beta). Examples are: incoming products are bad but called good; an adverse process change has occurred but is thought to be no different.

    The degree of risk (α) is normally chosen by the concerned parties (α is normally taken as 5%) in arriving at the critical value of the test statistic. The assumption is that a small value for α is desirable. Unfortunately, a small α risk increases the β risk. For a fixed sample size, α and β are inversely related. Increasing the sample size can reduce both the α and β risks.

    Any test of hypothesis has a risk associated with it, and one is generally concerned with the α risk (a type I error, which rejects the null hypothesis when it is true). The level of this α risk determines the level of confidence (1 – α) that one has in the conclusion. This risk factor is used to determine the critical value of the test statistic, which is compared to a calculated value.

  4. One-Tail Test

    If a null hypothesis is established to test whether a sample value is smaller or larger than a population value, then the entire α risk is placed on one end of a distribution curve. This constitutes a one-tail test.

    • A study was conducted to determine if the mean battery life produced by a new method is greater than the present battery life of 35 hours. In this case, the entire α risk will be placed on the right tail of the existing life distribution curve.
      H0: μnew ≤ μpresent                   H1: μnew > μpresent
      Determine if the true mean is within the α critical region.
    • A chemist is studying the vitamin levels in a brand of cereal to determine if the process level has fallen below 20% of the minimum daily requirement. It is the manufacturer’s intent to never average below the 20% level. A one-tail test would be applied in this case, with the entire α risk on the left tail.
      H0: level ≥ 20%                   H1: level < 20%
      Determine if the true mean is within the α critical region.
  5. Two-Tail Test

    If a null hypothesis is established to test whether a population shift has occurred, in either direction, then a two-tail test is required. The allowable α error is generally divided into two equal parts. Examples:

    • An economist must determine if unemployment levels have changed significantly over the past year.
    • A study is made to determine if the salary levels of company A differ significantly from those of company B.

    H0: levels are =                                                     H1: levels are ≠
    Determine if the true mean is within either the upper or lower α critical regions.

  6. Practical Significance vs. Statistical Significance

    The hypothesis is tested to determine if a claim has significant statistical merit. Traditionally, levels of 5% or 1% are used for the critical significance values. If the calculated test statistic has a p-value below the critical level, then it is deemed to be statistically significant. More stringent critical values may be required when human injury or catastrophic loss is involved. Less stringent critical values may be advantageous when there are no such risks and the potential economic gain is high. On occasion, an issue of practical versus statistical significance may arise; that is, some hypothesis or claim is found to be statistically significant but may not be worth the effort or expense to implement. This could occur if a very large sample is tested, such as a diet that results in a net loss of 0.5 pounds across 10,000 people. The result is statistically significant, but a diet losing 0.5 pounds per person would not have any practical significance. Issues of practical significance also occur if the sample size is not adequate. A power analysis may be needed to aid in the decision-making process.

  7. Power of the Test, H0: μ = μ0

    Consider a null hypothesis that a population is believed to have mean μ0 = 70.0 and σx = 0.80. The 95% confidence limits are 70 ± (1.96)(0.8) = 71.57 and 68.43. One accepts the hypothesis μ = 70 if sample means (X-bars) fall between these limits. The alpha risk is that sample means will exceed those limits. One can ask “what if” questions such as, “If μ shifts to 71, would it be detected?” There is a risk that the null hypothesis would be accepted even if the shift occurred. This risk is termed β. The value of β is large if μ is close to μ0 and small if μ is very different from μ0. This indicates that slight differences from the hypothesis will be difficult to detect and large differences will be easier to detect. The normal distribution curves below show the null and alternative hypotheses. If the process shifts from 70 to 71, there is a 76% probability that it would not be detected.

    To construct a power curve, 1 – β is plotted against alternative values of μ. The power curve for the process under discussion is shown below. A shift in a mean away from the null increases the probability of detection. In general, as alpha increases, beta decreases and the power of 1 – β increases. One can say that a gain in power can be obtained by accepting a lower level of protection from the alpha error. Increasing the sample size makes it possible to decrease both alpha and beta and increase power.


    The concept of power also relates to experimental design and analysis of variance.
    The following equation briefly states the relationship for ANOVA.
    1 – β = P(Reject H0 | H0 is false)
    1 – β = Probability of rejecting the null hypothesis given that the null hypothesis is false.
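The 76% figure from the numerical example above can be checked directly: β is the probability that a sample mean drawn from the shifted population (μ = 71) still falls inside the original 95% acceptance limits. A short Python sketch:

```python
from scipy.stats import norm

mu0, sigma_xbar = 70.0, 0.80

# 95% acceptance limits for the sample mean under H0
lower = mu0 - 1.96 * sigma_xbar   # ~68.43
upper = mu0 + 1.96 * sigma_xbar   # ~71.57

# If the true mean shifts to 71, beta is the probability that
# X-bar still lands inside the acceptance region
mu1 = 71.0
beta = norm.cdf(upper, mu1, sigma_xbar) - norm.cdf(lower, mu1, sigma_xbar)
power = 1 - beta                  # probability the shift IS detected
```

Evaluating β at a range of alternative values of μ and plotting 1 – β against μ produces the power curve described above.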

  8. Sample Size

    In the statistical inference discussion thus far, it has been assumed that the sample size (n) for hypothesis testing has been given and that the critical value of the test statistic will be determined based on the α error that can be tolerated. The ideal procedure, however, is to determine the α and β error desired and then to calculate the sample size necessary to obtain the desired decision confidence.

    The sample size (n) needed for hypothesis testing depends on:

    • The desired type I (α) and type II (β) risk
    • The minimum value to be detected between the population means (μ – μ0)
    • The variation in the characteristic being measured (S or σ)

    Variable data sample size, using only α, is illustrated by the following example. Assume that in a pilot process one wishes to determine whether an operational adjustment will alter the process hourly mean yield by as much as 4 tons per hour. What is the minimum sample size which, at the 95% confidence level (Z = 1.96), would confirm the significance of a mean shift greater than 4 tons per hour? Historic information suggests that the standard deviation of the hourly output is 20 tons. The general sample size equation for variable data (normal distribution) is:

    n = (Zα/2 σ/δ)2 = ((1.96)(20)/4)2 = 96.04 ≈ 96

    Obtain 96 pilot hourly yield values and determine the hourly average. If this mean deviates by more than 4 tons from the previous hourly average, a significant change at the 95% confidence level has occurred. If the sample mean deviates by less than 4 tons/hr, the observable mean shift can be explained by chance cause.
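The arithmetic above can be verified in a couple of lines (a sketch, using the same Z, σ, and δ values):

```python
z, sigma, delta = 1.96, 20.0, 4.0   # 95% two-sided Z, historic sigma, shift to detect

n = (z * sigma / delta) ** 2        # (9.8)^2 = 96.04
n_required = round(n)               # 96 hourly yield values
```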

    For binomial data, use the following formula:

    n = (Zα/2/δ)2 p(1 – p)

    where p is the estimated proportion and δ is the difference in proportion to be detected.

  9. Estimators

    In analyzing sample values to arrive at population probabilities, two major estimators are used: point estimation and interval estimation. For example, consider the following tensile strength readings from 4 piano wire segments: 28.7, 27.9, 29.2 and 26.5 psi. Based on this data, the following expressions are true:

    1. Point estimation: If a single estimate value is desired (i.e., the sample average), then a point estimate can be obtained. The sample average, 28.08 psi, is the point estimate for the population mean.
    2. Interval Estimate or CI (Confidence Interval): From sample data one can calculate the interval within which the population mean is predicted to fall. Confidence intervals are always estimated for population parameters and, in general, are derived from the mean and standard deviation of sample data. For small samples, a critical value from the t distribution is required; for 95% confidence, t = 3.182 with n – 1 = 3 degrees of freedom. The CI equation and interval would be:
      X-bar ± t(s/√n) = 28.08 ± 3.182(1.18/√4), or 26.20 ≤ μ ≤ 29.95
      If the population sigma is known (say σ = 2 psi), the Z distribution is used. The critical Z value for 95% confidence is 1.96. The CI equation and interval would be:
      X-bar ± Z(σ/√n) = 28.08 ± 1.96(2/√4), or 26.12 ≤ μ ≤ 30.04
      A confidence interval is a two-tail event and requires critical values based on an α/2 risk in each tail. The central limit theorem term, σ/√n, is necessary because the confidence interval is for a population mean and not individual values. Other confidence interval formulas exist, including those for percent nonconforming, Poisson distribution data, and very small sample sizes.
  1. Confidence Intervals for the Mean

    1. Continuous Data – Large Samples 

      Use the normal distribution to calculate the confidence interval for the mean:
      X-bar ± Zα/2(σ/√n)
      Example: The average of 100 samples is 18 with a population standard deviation of 6. Calculate the 95% confidence interval for the population mean.
      18 ± 1.96(6/√100) = 18 ± 1.18, or 16.82 ≤ μ ≤ 19.18

    2. Continuous Data – Small Samples

      If a relatively small sample is used (n < 30), then the t distribution must be used:
      X-bar ± tα/2(s/√n)
      Example: Use the same values as in the prior example, except that the sample size is 25. For 24 degrees of freedom, t0.025 = 2.064.
      18 ± 2.064(6/√25) = 18 ± 2.48, or 15.52 ≤ μ ≤ 20.48
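Both the large-sample (Z) and small-sample (t) intervals can be reproduced with scipy. A sketch using the example's numbers (X-bar = 18, σ = s = 6):

```python
from math import sqrt
from scipy.stats import norm, t

xbar, sigma = 18.0, 6.0

# Large sample: normal-based 95% CI with n = 100
n = 100
z = norm.ppf(0.975)                    # ~1.96
half_z = z * sigma / sqrt(n)
ci_z = (xbar - half_z, xbar + half_z)  # ~ (16.82, 19.18)

# Small sample: t-based 95% CI with n = 25, s = 6
n_small = 25
t_crit = t.ppf(0.975, df=n_small - 1)  # ~2.064
half_t = t_crit * sigma / sqrt(n_small)
ci_t = (xbar - half_t, xbar + half_t)  # ~ (15.52, 20.48)
```

Note how the smaller sample widens the interval, both through the larger critical value and the larger standard error.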

  2. Confidence Intervals for Variation

    The confidence intervals for the mean were symmetrical about the average. This is not true for the variance, since it is based on the chi square distribution. The formula is:
    (n – 1)s2/χ2α/2 ≤ σ2 ≤ (n – 1)s2/χ21–α/2
    Example: The sample variance for a set of 25 samples was found to be 36. Calculate the 90% confidence interval for the population variance. For 24 degrees of freedom, χ20.05 = 36.42 and χ20.95 = 13.85:
    (24)(36)/36.42 ≤ σ2 ≤ (24)(36)/13.85, or 23.7 ≤ σ2 ≤ 62.4

  3. Confidence Intervals for Proportion

    For large sample sizes, with np and n(1 – p) greater than or equal to 4 or 5, the normal distribution can be used to calculate a confidence interval for a proportion. The following formula is used:
    p ± Zα/2√(p(1 – p)/n)
    Example: If 16 defectives were found in a sample size of 200 units, calculate the 90% confidence interval for the proportion. With p = 16/200 = 0.08 and Z0.05 = 1.645:
    0.08 ± 1.645√((0.08)(0.92)/200) = 0.08 ± 0.032, or 0.048 ≤ p ≤ 0.112
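This proportion interval can be sketched in a few lines of Python, using the example's numbers:

```python
from math import sqrt
from scipy.stats import norm

x, n = 16, 200                     # defectives found, sample size
p_hat = x / n                      # 0.08

z = norm.ppf(0.95)                 # ~1.645 for a 90% two-sided CI
half = z * sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - half, p_hat + half)  # ~ (0.048, 0.112)
```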

Hypothesis Tests for Comparing a Single Population

We begin by considering hypothesis tests that compare parameters of a single population, such as the mean μ, standard deviation σ, and fraction defective p, to specified values. For example, viscosity may be an important characteristic in a process validation experiment, and we may want to determine whether the population standard deviation of viscosity is less than a certain value. Additional examples of such comparisons are suggested by the following questions.

  1. Is the process centered on target? Is the measurement bias acceptable?
  2. Is the measurement standard deviation less than 5% of the specification width? Is the process standard deviation less than 10% of the specification width?
  3. Let p denote the proportion of objects in a population that possess a certain property such as products that exceed a certain hardness, or cars that are domestically manufactured. Is this proportion p greater than a certain specified value?

Comparing Mean (Variance Known)

  1. Z Test

    When the population follows a normal distribution and the population standard deviation, σx, is known, then the hypothesis tests for comparing a population mean, μ, with a fixed value, μ0, are given by the following:

    • H0: μ = μ0                  H1: μ ≠ μ0
    • H0: μ ≤ μ0                  H1: μ> μ0
    • H0: μ ≥ μ0                  H1: μ < μ0

    The null hypothesis is denoted by H0 and the alternative hypothesis is denoted by H1. The test statistic is given by:

    Z = (X-bar – μ0)/(σx/√n)

     where X-bar is the sample average, n is the number of samples, and σx is the population standard deviation. Note that if n > 30, the sample standard deviation, s, is often used as an estimate of the population standard deviation, σx. The test statistic, Z, is compared with a critical value, Zα or Zα/2, which is based on a significance level, α, for a one-tailed test or α/2 for a two-tailed test. If the H1 sign is ≠, it is a two-tailed test. If the H1 sign is >, it is a right, one-tailed test, and if the H1 sign is <, it is a left, one-tailed test.
    Example: The average vial height from an injection molding process has been 5.00″ with a standard deviation of 0.12″. An experiment is conducted using new material which yielded the following vial heights: 5.10″, 4.90″, 4.92″, 4.87″, 5.09″, 4.89″, 4.95″, and 4.88″. Can one state with 95% confidence that the new material is producing shorter vials with the existing molding machine setup? This question involves an inference about a population mean with a known sigma. The Z test applies. The null and alternative hypotheses are:

    H0: μ ≥ μ0                  H1: μ < μ0

    H0: μ ≥ 5.00″                  H1: μ <5.00″

    The sample average is X-bar = 4.95″ with n = 8, and the population standard deviation is σx = 0.12″. The test statistic is:

    Z = (4.95 – 5.00)/(0.12/√8) = –1.18

    Since the H1 sign is <, it is a left, one-tailed test, and with 95% confidence, the level of significance is α = 1 – 0.95 = 0.05. Looking up the critical value in a normal distribution or Z table, one finds Z0.05 = –1.645. Since the test statistic, –1.18, does not fall in the reject (or critical) region, the null hypothesis cannot be rejected. There is insufficient evidence to conclude that the vials made with the new material are shorter.
    If the test statistic had been, for example, –1.85, we would have rejected the null hypothesis and concluded that the vials made with the new material are shorter.
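The vial-height example can be reproduced in Python; scipy supplies the left-tail critical value:

```python
from math import sqrt
from scipy.stats import norm

heights = [5.10, 4.90, 4.92, 4.87, 5.09, 4.89, 4.95, 4.88]
mu0, sigma = 5.00, 0.12            # historic mean and known population sigma

n = len(heights)
xbar = sum(heights) / n            # 4.95

z = (xbar - mu0) / (sigma / sqrt(n))   # ~ -1.18
z_crit = norm.ppf(0.05)                # ~ -1.645 for a left, one-tailed test

reject_h0 = z < z_crit             # False: cannot reject H0
```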

  2. Student’s t Test

    This technique was developed by W. S. Gosset and published in 1908 under the pen name “Student.” Gosset referred to the quantity under study as t, and the test has since been known as the Student’s t test. The Student’s t distribution applies to samples drawn from a normally distributed population. It is used for making inferences about a population mean when the population variance, σ2, is unknown and the sample size, n, is small. The use of the t distribution is never wrong for any sample size. However, a sample size of 30 is normally the crossover point between the t and Z tests. The test statistic formula is:

    t = (X-bar – μ0)/(s/√n)
    The null and alternative hypotheses are the same as were given for the Z test. The test statistic, t, is compared with a critical value, tα or tα/2, which is based on a significance level, α, for a one-tailed test or α/2 for a two-tailed test, and the number of degrees of freedom, d.f. The degrees of freedom is determined by the number of samples, n, and is simply: d.f. = n – 1

    Example: The average daily yield of a chemical process has been 880 tons (μ = 880 tons). A new process has been evaluated for 25 days (n = 25) with a yield of 900 tons (X-bar) and sample standard deviation, s = 20 tons. Can one say with 95% confidence that the process has changed?

    The null and alternative hypotheses are:

    H0: μ = μ0                  H1: μ ≠ μ0

    H0: μ = 880 tons                  H1: μ ≠ 880 tons

    The test statistic calculation is:

    t = (900 – 880)/(20/√25) = 20/4 = 5

    Since the H1 sign is ≠, it is a two-tailed test, and with 95% confidence, the level of significance is α = 1 – 0.95 = 0.05. Since it is a two-tail test, α/2 is used to determine the critical values. The degrees of freedom d.f. = n – 1 = 24. Looking up the critical values in a t distribution table, one finds t0.025 = –2.064 and t0.975 = +2.064. Since the test statistic, 5, falls in the right-hand reject (or critical) region, the null hypothesis is rejected. We conclude with 95% confidence that the process has changed.



One underlying assumption is that the sampled population has a normal probability distribution. This is a restrictive assumption since the distribution of the sample is unknown. The t distribution works well for distributions that are bell-shaped.
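The chemical-yield example above can be reproduced from its summary statistics (a sketch in Python; scipy supplies the critical value):

```python
from scipy.stats import t

mu0 = 880.0            # historic mean daily yield, tons
xbar, s, n = 900.0, 20.0, 25

t_stat = (xbar - mu0) / (s / n ** 0.5)   # 20 / 4 = 5.0
t_crit = t.ppf(0.975, df=n - 1)          # ~2.064 for a two-tailed test

reject_h0 = abs(t_stat) > t_crit         # True: the process has changed
```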

Comparing Standard Deviations/ Variance

Chi Square (χ2) Test

Standard deviation (or variance) is fundamental in making inferences regarding the population mean. In many practical situations, variance (σ2) assumes a position of greater importance than the population mean. Consider the following examples:

  1. A shoe manufacturer wishes to develop a new sole material with a more stable wear pattern. The wear variation in the new material must be smaller than the variation in the existing material.
  2. An aircraft altimeter manufacturer wishes to compare the measurement precision among several instruments.
  3. Several inspectors examine finished parts at the end of a manufacturing process. Even when the same lots are examined by different inspectors, the number of defectives varies. Their supervisor wants to know if there is a significant difference in the knowledge or abilities of the inspectors.

The above problems represent a comparison of a target or population variance with an observed sample variance, a comparison between several sample variances, or a comparison between frequency proportions. The standardized test statistic is called the Chi Square (χ2)test. Population variances are distributed according to the chi square distribution. Therefore, inferences about a single population variance will be based on chi square. The chi square test is widely used in two applications.
Case I. Comparing variances when the variance of the population is known.
Case ll. Comparing observed and expected frequencies of test outcomes when there is no defined population variance (attribute data).
When the population follows a normal distribution, the hypothesis tests for comparing a population variance, σx2, with a fixed value, σ02, are given by the following:

  • H0: σx2 = σ02                  H1: σx2 ≠ σ02
  • H0: σx2 ≤ σ02                  H1: σx2> σ02
  • H0: σx2 ≥ σ02                  H1: σx2 < σ02

The null hypothesis is denoted by H0 and the alternative hypothesis is denoted by H1. The test statistic is given by:

χ2 = (n – 1)s2/σ02

where n is the number of samples and s2 is the sample variance. The test statistic, χ2, is compared with a critical value, χ2α or χ2α/2, which is based on a significance level, α, for a one-tailed test or α/2 for a two-tailed test, and the number of degrees of freedom, d.f. The degrees of freedom is determined by the number of samples, n, and is simply: d.f. = n – 1

If the H1 sign is≠, it is a two-tailed test. If the H1 sign is >, it is a right, one-tailed test, and if the H1 sign is <, it is a left, one-tailed test.

The χ2 distribution is shown below:

Please note, unlike the Z and t distributions, the tails of the chi square distribution are non-symmetrical.

  • Chi square Case I. Comparing Variances When the Variance of the Population Is Known.

    Example: The R & D department of a steel plant has tried to develop a new steel alloy with less tensile variability. The R & D department claims that the new material will show a four sigma tensile variation less than or equal to 60 psi 95% of the time. An eight-sample test yielded a standard deviation of 8 psi. Can a reduction in tensile strength variation be validated with 95% confidence?

    Solution: The best range of variation expected is 60 psi. This translates to a sigma of 15 psi (an approximate 4 sigma spread covering 95.44% of occurrences).

    H0: σx2 ≥ σ02                  H1: σx2 < σ02

    H0: σx2 ≥ 152                  H1: σx2 < 152

    From the chi square table: because s is less than σ0, this is a left-tail test with n – 1 = 7 degrees of freedom. The critical value for 95% confidence is 2.17. That is, the calculated value will be less than 2.17 only 5% of the time. Please note that if one were looking for more variability in the process, a right-tail rejection region would have been selected and the critical value would be 14.07.
    The calculated statistic is:

    χ2 = (n – 1)s2/σ02 = (7)(8)2/(15)2 = 1.99
    Since 1.99 is less than 2.17, the null hypothesis must be rejected. The decreased variation in the new steel alloy tensile strength supports the R & D claim.
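A sketch of this Case I test in Python, using the example's numbers:

```python
from scipy.stats import chi2

sigma0, s, n = 15.0, 8.0, 8       # claimed sigma, sample sigma, sample size

chi2_stat = (n - 1) * s**2 / sigma0**2   # (7)(64)/225 = 1.99
chi2_crit = chi2.ppf(0.05, df=n - 1)     # ~2.17 (left-tail, 95% confidence)

reject_h0 = chi2_stat < chi2_crit        # True: variation has decreased
```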

  • Chi square Case ll. Comparing Observed and Expected Frequencies of Test Outcomes. (Attribute Data)

    It is often necessary to compare proportions representing various process conditions. Machines may be compared as to their ability to produce precise parts. The ability of inspectors to identify defective products can be evaluated. This application of chi square is called the contingency table or row and column analysis.
    The procedure is as follows:

    1. Take one subgroup from each of the various processes and determine the observed frequencies (O) for the various conditions being compared.
    2. Calculate for each condition the expected frequencies (E) under the assumption that no differences exist among the processes.
    3. Compare the observed and expected frequencies to obtain “reality.” The following calculation is made for each condition:

    (O – E)2/E

    4. Total all the process conditions:

    χ2 = Σ (O – E)2/E
    5. A critical value is determined using the chi square table with the entire level of significance, α, in the one-tail, right side, of the distribution. The degrees of freedom is determined from the calculation (R – 1)(C – 1) [the number of rows minus 1 times the number of columns minus 1].
    6. A comparison between the test statistic and the critical value confirms whether a significant difference exists (at a selected confidence level).

    Example: An airport authority wanted to evaluate the ability of three X-ray inspectors to detect key items. A test was devised whereby transistor radios were placed in ninety pieces of luggage. Each inspector was exposed to exactly thirty of the preselected and “bugged” items in a random fashion. The observed results are summarized below. Is there any significant difference in the abilities of the inspectors? (95% confidence)
    Null hypothesis:
    There is no difference among  three inspectors, H0: p1 = p2 = p3
    Alternative hypothesis:
    At least one of the proportions is different, H1: p1 ≠ p2 ≠ p3
    The degrees of freedom = (rows – 1)(columns – 1) = (2-1)(3-1) = 2
    The critical value of χ2 for DF = 2 and d = 0.05 in the one-tail, right side of the distribution, is 5.99 . There is only a 5% chance that the calculated value of χ2 will exceed 5.99.1

    χ² = Σ (O – E)²/E = 0.220 + 0.004 + 0.289 + 1.019 + 0.020 + 1.333
    χ² = 2.89

Since the calculated value of χ2 is less than the previously calculated critical value of 5.99 and this is a right tail test, the null hypothesis cannot be rejected. There is insufficient evidence to say with 95% confidence that the abilities of the inspectors differ.
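The contingency-table calculation above can be reproduced with scipy. The observed counts below (27, 25, and 22 detections out of 30, with 3, 5, and 8 misses) are inferred from the worked (O – E)²/E terms, so treat them as a reconstruction rather than the original table:

```python
from scipy.stats import chi2_contingency

# Rows: detected / missed; columns: inspectors 1-3.
# Counts reconstructed from the (O - E)^2 / E terms worked above.
observed = [[27, 25, 22],
            [3,  5,  8]]

chi2, p, dof, expected = chi2_contingency(observed)
print(round(chi2, 2), dof, round(p, 3))
```

For a 2 × 3 table no continuity correction is applied, so the statistic matches the hand calculation of 2.89 with 2 degrees of freedom, and the p-value is well above 0.05.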

Comparing Proportions

p Test

When testing a claim about a population proportion, with a fixed number of independent trials having constant probabilities, and each trial has two outcome possibilities (a binomial experiment), a p test can be used. When np < 5 or n(1 – p) < 5, the binomial distribution itself is used to test hypotheses relating to proportion.
If the conditions np ≥ 5 and n(1 – p) ≥ 5 are met, then the binomial distribution of sample proportions can be approximated by a normal distribution. The hypothesis tests for comparing a sample proportion, p, with a fixed value, p0, are given by the following:

  • H0: p = p0                  H1: p ≠ p0
  • H0: p ≤ p0                  H1: p> p0
  • H0: p ≥ p0                  H1: p < p0

The null hypothesis is denoted by H0 and the alternative hypothesis is denoted by H1. The test statistic is given by:

Z = (p̂ – p0) / √(p0(1 – p0)/n),  where p̂ = x/n

where the number of successes is x and the number of samples is n. The test statistic, Z, is compared with a critical value Zα or Zα/2, which is based on a significance level, α, for a one-tailed test or α/2 for a two-tailed test. If the H1 sign is >, it is a right, one-tailed test, and if the H1 sign is <, it is a left, one-tailed test.

Example. A local newspaper stated that less than 10% of the rental properties did not allow renters with children. The city council conducted a random sample of 100 units and found 13 units that excluded children. Is the newspaper statement wrong based upon this data? In this case H0: p ≤ 0.1 and H1: p > 0.1. With p0 = 0.1 and p̂ = 13/100 = 0.13, the computed Z value is

Z = (0.13 – 0.10) / √((0.10)(0.90)/100) = 0.03/0.03 = 1.0

For α = 0.05, the critical value is Zα = 1.64. Since 1.0 < 1.64, the newspaper statement cannot be rejected based upon this data at the 95% level of confidence.
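The arithmetic of this one-proportion z test takes only a few lines; this sketch simply re-checks the rental-property example:

```python
from math import sqrt

# One-proportion z test: H0: p <= 0.10 vs H1: p > 0.10
n, x, p0 = 100, 13, 0.10
p_hat = x / n
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
print(round(z, 2))  # 1.0, below the critical 1.64, so H0 stands
```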

Hypothesis Tests for Comparing Two Populations

Here we consider hypothesis tests to compare parameters of two populations with each other. For example, we may want to know whether, after a process change, the process is different from the way it was before the change. The data after the change constitute one population to be compared with the data prior to the change, which constitute the other population. Some specific comparative questions are: Has the process mean changed? Has the process variability been reduced? If the collected data are discrete, such as defectives and non-defectives, has the percent defective changed?

Comparing Two Means (Variance Known)

Z Test.

The following test applies when we want to compare two population means and the variance of each population is either known or the sample size is large (n > 30). Let μ1, n1, x̄1, and σ1 denote the population mean, sample size, sample average, and population standard deviation for the first population, and let μ2, n2, x̄2, and σ2 represent the same quantities for the second population. The hypotheses being compared are H0: μ1 = μ2 and H1: μ1 ≠ μ2. Under the null hypothesis, x̄1 – x̄2 has mean zero and standard deviation √(σ1²/n1 + σ2²/n2).

Therefore, the test statistic

Z = (x̄1 – x̄2) / √(σ1²/n1 + σ2²/n2)

has a standard normal distribution. If the computed value of Z exceeds the critical value, the null hypothesis is rejected.
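A minimal sketch of this statistic, using made-up summary values (the original supplier data are not recoverable from this copy):

```python
from math import sqrt

def two_sample_z(xbar1, xbar2, sigma1, sigma2, n1, n2):
    """Z statistic for H0: mu1 = mu2 when both sigmas are known."""
    return (xbar1 - xbar2) / sqrt(sigma1**2 / n1 + sigma2**2 / n2)

# Hypothetical summary statistics for two samples of 30
z = two_sample_z(102.0, 100.0, 3.0, 4.0, 30, 30)
print(round(z, 2))
```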

Example. We want to determine whether the tensile strengths of products from two suppliers are the same. Thirty samples were tested from each supplier:

[supplier summary statistics and computed Z value]

The Z value for α = 0.001 is 3.27; hence, the two means are different with 99.9% confidence.

Comparing Two Means (Variance Unknown but Equal)

Independent t-Test.

This test is used to compare two population means when the sample sizes are small and the population variances are unknown but may be assumed to be equal. In this situation, a pooled estimate of the standard deviation is used to conduct the t-test. Prior to using this test, it is necessary to demonstrate that the two variances are not different, which can be done by using the F-test. The hypotheses being compared are H0: μ1 = μ2 and H1: μ1 ≠ μ2. A pooled estimate of variance is obtained by weighting the two variances in proportion to their degrees of freedom as follows:

S²pooled = [(n1 – 1)S1² + (n2 – 1)S2²] / (n1 + n2 – 2)

The test statistic

t = (x̄1 – x̄2) / [Spooled √(1/n1 + 1/n2)]

has a t distribution with n1 + n2 – 2 degrees of freedom. If the computed value of t exceeds the critical value, H0 is rejected and the difference is said to be statistically significant.
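The pooled t-test can be sketched as follows; the two small samples are invented for illustration, and the hand-computed statistic is cross-checked against scipy's `ttest_ind` with `equal_var=True`:

```python
from math import sqrt
from scipy import stats

# Hypothetical small samples (e.g., a response measured at two settings)
a = [5.2, 5.5, 5.1, 5.4, 5.3]
b = [5.0, 5.1, 4.9, 5.2, 5.0]

n1, n2 = len(a), len(b)
s1, s2 = stats.tstd(a), stats.tstd(b)          # sample std devs (ddof=1)
xbar1, xbar2 = sum(a) / n1, sum(b) / n2

# Pooled variance weights each variance by its degrees of freedom
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t = (xbar1 - xbar2) / (sqrt(sp2) * sqrt(1 / n1 + 1 / n2))

t_scipy, p = stats.ttest_ind(a, b, equal_var=True)
print(round(t, 2), round(p, 3))
```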

Example. The following results were obtained in comparing surface soil pH at two different locations:

[soil pH sample data]

Do the two locations have the same pH?
Assuming that the two variances are equal, we first obtain a pooled estimate of variance: Spooled = 0.24. Then the t statistic is computed.
For a two-sided test with α = 0.05 and (n1 + n2 – 2) = 18 degrees of freedom, the critical value of t is t0.025,18 = 2.1. Since the computed value of t exceeds the critical value 2.1, the hypothesis that the two locations have the same pH is rejected.

Comparing Two Means (Variance Unknown and Unequal)

Independent t-Test.

This test is used to compare two population means when the sample sizes are small (n < 30), the variances are unknown, and the two population variances are not equal, which should first be demonstrated by conducting the F-test to compare the two variances. The hypotheses being compared are H0: μ1 = μ2 and H1: μ1 ≠ μ2. The test statistic t and the degrees of freedom ν are

t = (x̄1 – x̄2) / √(S1²/n1 + S2²/n2)

ν = (S1²/n1 + S2²/n2)² / [ (S1²/n1)²/(n1 – 1) + (S2²/n2)²/(n2 – 1) ]

If the computed t exceeds the critical t, the null hypothesis is rejected.
Example. The following data were obtained on the life of light bulbs made by two manufacturers:

[light bulb life sample data]

Is there a difference in the mean life of light bulbs made by the two manufacturers? Assume that the F-test shows that the two standard deviations are not equal. The computed t and ν are compared with the critical value. For α = 0.05, tα/2,ν = 2.16. Since the computed value exceeds the critical t value, we are 95% sure that the mean life of the light bulbs from the two manufacturers is different.
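When the F-test says the variances differ, scipy's `equal_var=False` applies exactly the Welch statistic and Satterthwaite degrees of freedom shown above. The bulb-life numbers here are hypothetical stand-ins for the lost table:

```python
from scipy import stats

# Hypothetical bulb lifetimes (hours) for two manufacturers
m1 = [1200, 1180, 1250, 1230, 1150, 1260, 1210]
m2 = [1100, 1020, 1160, 980, 1060, 1140, 1000]

# equal_var=False -> Welch t with Satterthwaite degrees of freedom
t, p = stats.ttest_ind(m1, m2, equal_var=False)
print(round(t, 2), p < 0.05)
```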

Comparing Two Means (Paired t-test)

This test is used to compare two population means when there is a physical reason to pair the data and the two sample sizes are equal. A paired test is more sensitive in detecting differences when the population standard deviation is large. The hypotheses being compared are H0: μ1 = μ2 and H1: μ1 ≠ μ2. The test statistic, with n – 1 degrees of freedom, is

t = d̄ / (sd/√n)

where:
d = difference between each pair of values
d̄ = observed mean difference
sd = standard deviation of d

Example. Two operators conducted simultaneous measurements on the percentage of ammonia in a plant gas on nine successive days to find the extent of bias in their measurements. Since the day-to-day differences in gas composition were larger than the expected bias, the tests were designed to permit paired comparison.


For α= 0.05, t0.025,8 = 2.31. Since the computed t value is less than the critical t value, the results do not conclusively indicate that a bias exists.
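A paired comparison like the ammonia example can be run with scipy's `ttest_rel`, which tests the day-by-day differences directly. The readings below are hypothetical:

```python
from scipy import stats

# Hypothetical paired readings (% ammonia) by two operators, nine days
op1 = [12.1, 11.8, 13.0, 12.5, 11.9, 12.7, 13.1, 12.0, 12.4]
op2 = [12.0, 11.9, 12.8, 12.6, 11.8, 12.6, 13.2, 11.9, 12.3]

# Pairing removes the large day-to-day variation in gas composition
t, p = stats.ttest_rel(op1, op2)
print(round(t, 2), round(p, 3))
```

As in the worked example, a computed t below the critical value means no conclusive evidence of bias.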

Comparing Two Standard Deviations


F-Test.

This test is used to compare two standard deviations and applies for all sample sizes. The hypotheses being compared are H0: σ1 = σ2 and H1: σ1 ≠ σ2. The ratio S1²/S2² follows an F distribution, which is a skewed distribution and is characterized by the degrees of freedom used to estimate S1 and S2, called the numerator degrees of freedom (n1 – 1) and denominator degrees of freedom (n2 – 1), respectively. Under the null hypothesis, the F statistic becomes

F = S1²/S2²

In calculating the F ratio, the larger variance is placed in the numerator, so that the calculated value of F is greater than one. If the computed value of F exceeds the critical value Fα/2,n1–1,n2–1, the null hypothesis is rejected.
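The F comparison reduces to one division plus a table lookup, which scipy's `f` distribution can replace. The two sample variances here are hypothetical:

```python
from scipy import stats

# Hypothetical sample variances; larger variance in the numerator
s1_sq, n1 = 18.5, 10
s2_sq, n2 = 4.1, 10

F = s1_sq / s2_sq
# Two-tailed test at alpha = 0.05: critical value F(0.025; 9, 9)
f_crit = stats.f.ppf(1 - 0.05 / 2, n1 - 1, n2 - 1)
print(round(F, 2), round(f_crit, 2), F > f_crit)
```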


Since the calculated F value is in the critical region, the null hypothesis is rejected. There is sufficient evidence to indicate reduced variation and more consistent strength after aging for one year.







Regression Analysis

y = f (x) Formula

The y = f(x) formula is used to determine which factors in your process (as indicated by a measure) you can change to improve the CTQs (Critical to Quality characteristics) and, ultimately, the key business measures. It illustrates the causal relationship among the key business measures (designated as Y), the process outputs directly affecting the Y’s (designated as CTQ or y), and the factors directly affecting the process outputs (designated as x). It enables members of your improvement team to communicate the team’s findings to others in a simple format, and it highlights the factors the team wants to change and what impact the change will have. It also provides a matrix that can be used in the Control step of the DMAIC method for ongoing monitoring of the process after the team’s improvement work is complete. Many people understand the concept of y = f(x) from mathematical education. The x, y, Y matrix is based on this concept. If it confuses team members to use these letters, simply use the terms key business measures, CTQs or process outputs, and causal factors instead; they represent the same concepts.

Gather the key business measures for your project (either from the team charter or by checking with your sponsor). Gather the CTQs that the improvement team selects as the most important for your project. List the key business measure and the CTQ operational definition in a matrix. As your team progresses through the Measure and Analyze steps of the DMAIC method, add the causal-factor definitions (x’s) you discover.

Guidelines for Filling Out an x, y, Y Matrix


A sample x,y, Y Matrix



Correlation

Correlation is used to determine the strength of the linear relationship between two process variables. It allows the comparison of an input to an output, two inputs against each other, or two outputs against each other. Correlation measures the degree of association between two continuous variables. However, even if there is a high degree of correlation, this tool does not establish causation. For example, the number of skiing accidents in Colorado is highly correlated with sales of warm clothing, but buying warm clothes did not cause the accidents. Correlation can be analyzed by calculating the Pearson product moment correlation coefficient (r). This coefficient is calculated as follows:

r = Σ(xi – x̄)(yi – ȳ) / [(n – 1)SxSy]

Where Sx and Sy are the sample standard deviations. The resulting value will be a number between –1 and +1. The higher the absolute value of r, the stronger the correlation. A value of zero means there is no linear correlation. A strong correlation is characterized by a tight distribution of plotted pairs about a best-fit line. It should be noted that correlation does not measure the slope of the best-fit line; it measures how close the data are to the best-fit line. A negative r implies that as one variable (x2) increases, the other variable (x1) decreases.

A positive r implies that as one variable (x3) increases, the other variable (x1) also increases.

A strong relationship other than linear can exist, yet r can be close to zero.


Regression

Regression measures the strength of association between independent factor(s) (also called predictor variable(s) or regressors) and a dependent variable (also called a response variable). For simple or multiple linear regression, the dependent variable must be a continuous variable. Predictor variables can be continuous or discrete, but must be independent of one another. Discrete variables may be coded into discrete levels (dummy variables (0, 1) or effects coding (–1, +1)). Regression is used to investigate suspected correlations by generating an equation that quantifies the relationship. It explains the relationship through an equation for a line, curve, or surface. It explains the variation in y values, helps to predict the impact of controlling a process variable (x), and helps to predict future process performance for certain values of x. It also helps to identify the vital few x’s that drive y, and it helps you manipulate process conditions to generate desirable results (if x is controllable) and/or avoid undesirable results.
For linear regressions (i.e., when the relationship is defined by a line), the regression equation is represented as y = ao + a1x, where ao = intercept (i.e., the point where the line crosses x = 0) and a1 = slope (i.e., rise over run, or change in y per unit increase in x).

  • Simple linear regression relates a single x to a y. It has a single regressor (x) variable and its model is linear with respect to coefficients (a).
    y = a0 + a1x + error
    y = a0 + a1x + a2 x2 + a3 x3 + error.

    “Linear” refers to the coefficients a0, a1, a2, etc. In the second example, the relationship between x and y is a cubic polynomial in nature, but the model is still linear with respect to the coefficients.

  • Multiple linear regression relates multiple x’s to a y. It has multiple regressor (x) variables such as x1, x2, and x3. Its model is linear with respect to coefficients (b).
    y = b0 + b1x1 + b2x2 + b3x3 + error
  • Binary logistic regression relates x’s to a y that can only have a dichotomous value (one of two mutually exclusive outcomes such as pass/fail, on/off, etc.)
  • Least squares method: Use the least squares method, where you determine the regression equation by using a procedure that minimizes the total squared distance from all points to the line. This method finds the line where the squared vertical distance from each data point to the line is as small as possible (or the “least”). This means that the method minimizes the “square” of all the residuals.
Steps in Regression Analysis
  1. Plot the data on a scatter diagram: Be sure to plot your data before doing regression. The charts below show four sets of data that have the same regression equation: y = 3 + 0.5x.
    Obviously, there are four completely different relationships.
  2. Measure the vertical distance from the points to the line
  3.  Square the figures
  4. Sum the total squared distance
  5.  Find the line that minimizes the sum
    Generally a computer program is used to generate the “best fit” line that represents the relationship between x and y.  The following sets of terms are often used interchangeably:

    • Regression equation and regression line.
    • Prediction equation and prediction line.
    • Fitted line, or fits, and model.

    When two variables show a relationship on a scatter plot, they are said to be correlated, but this does not necessarily mean they have a cause/ effect relationship. Correlation means two things vary together. Causation means changes in one variable cause changes in the other.
    The residual is the leftover variation in y after you use x to predict y. The residual represents common-cause (i.e., random and unexplained) variation. You determine a residual by subtracting the predicted y from the observed y
    Residuals are assumed to have the following properties:

    • Not related to the x’s.
    • Stable, independent, and not changing over time.
    • Constant and not increasing as the predicted y’s increase.
    • Normal (i.e., bell-shaped) with a mean of zero.

    Check each of these assumptions. If the assumptions do not hold, the regression equation might be incorrect or misleading.
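The fitting and residual steps above can be sketched with numpy; the six (x, y) points are invented for illustration:

```python
import numpy as np

# Made-up data for a quick least squares fit
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

a1, a0 = np.polyfit(x, y, 1)      # slope and intercept of y = a0 + a1*x
residuals = y - (a0 + a1 * x)     # leftover variation after the fit

# With an intercept in the model, least squares forces the residuals
# to average zero; plot residuals vs x to check the other assumptions.
print(round(a1, 3), round(a0, 3))
```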

Simple Linear Regression Model

Consider the problem of predicting the test results (y) for students based upon an input variable (x), the amount of preparation time in hours using the data presented in Table below.



[Table: Study times (hours) versus Test Results (%) for ten students; recoverable entries include 60 h → 67% and 10 h → 55%.]


Study Time Versus Test Results

An initial approach to the analysis of the data is to plot the points on a graph known as a scatter diagram. Observe that y appears to increase as x increases. One method of obtaining a prediction equation relating y to x is to place a ruler on the graph and move it about until it seems to pass through the majority of the points, thus providing what is regarded as the “best fit” line.

The mathematical equation of a straight line is:

Y = β0 + β1x

Where β0 is the y intercept (the value of y when x = 0) and β1 is the slope of the line. Here the x axis does not go to zero, so the y intercept appears too high. The equation for a straight line in this example is too simplistic. There will actually be a random error, which is the difference between an observed value of y and the mean value of y for a given value of x. One assumes that for any given value of x, the observed value of y varies in a random manner and possesses a normal probability distribution.

The probabilistic model for any particular observed value of y is:

y = (mean value of y for a given value of x) + (random error)

Y = β0 + β1x + ε

The Method of Least Squares

The statistical procedure of finding the “best-fitting” straight line is, in many respects, a formalization of the procedure used when one fits a line by eye. The objective is to minimize the deviations of the points from the prospective line. If one denotes the predicted value of y obtained from the fitted line as ŷ, the prediction equation is:

ŷ = β̂0 + β̂1x


Having decided to minimize the deviation of the points in choosing the best fitting line, one must now define what is meant by “best.”

The best fit criterion of goodness known as the principle of least squares is employed:
Choose, as the best fitting line, the line that minimizes the sum of squares of the deviations of the observed values of y from those predicted. Expressed mathematically, minimize the sum of squared errors given by:

SSE = Σ(yi – ŷi)²

The least squares estimators of β0 and β1 are calculated as follows:

β̂1 = sxy/sx,  where sxy = Σ(xi – x̄)(yi – ȳ) and sx = Σ(xi – x̄)²

β̂0 = ȳ – β̂1x̄
One may predict y for a given value of x by substitution into the prediction equation. For example, if 60 hours of study time is allocated, the predicted test score would be approximately ŷ = 30.7 + 0.6955(60) ≈ 72.4%.
While doing regression analysis, be careful of rounding errors. Normally, the calculations should carry a minimum of six significant figures in computing sums of squares of deviations. Note that the prior example consisted of convenient whole numbers, which does not occur often. Always plot the data points and graph the least squares line. If the line does not provide a reasonable fit to the data points, there may be a calculation error. Projecting a regression line outside of the test area can be risky. The above equation suggests that, without study, a student would make 31% on the test. The odds favor 25% if answer “a” is selected for all questions. The equation also suggests that with 100 hours of study the student should attain 100% on the examination – which is highly unlikely.

Calculating Sε2 , an Estimator of σε2

Recall, the model for y assumes that y is related to x by the equation:

Y = β0 + β1x+ε

If the least squares line is used:

ŷ = β̂0 + β̂1x

A random error, ε, enters into the calculations of β0 and β1. The random errors affect the error of prediction. Consequently, the variability of the random errors (measured by σε²) plays an important role when predicting by the least squares line.

The first step toward acquiring a boundary on a prediction error requires that one estimates σε2. It is reasonable to use SSE (sum of squares for error) based on (n – 2) degrees of freedom, one for each variable (x and y).

An Estimator for σε2

s² = SSE/(n – 2), where SSE = Σ(yi – ŷi)² is the sum of squared errors.
SSE may also be written:

SSE = sy – β̂1sxy,  where sy = Σ(yi – ȳ)²

Example: Calculate an estimated σε² for the data in the table given above: s² = SSE/(n – 2) = 160.0/8 = 20.0.

The existence of a significant relationship between y and x can be tested by whether β1 is equal to 0. If β1 ≠ 0, there is a linear relationship. The null hypothesis and alternative hypothesis are:

H0: β1 = 0        H1: β1 ≠ 0

The test statistic is a t distribution with n – 2 degrees of freedom:

t = β̂1 / √(s²/sx)

Example: From the data in the table above, determine if the slope results are significant at a 95% confidence level. The computed test statistic is t = 0.6955/√(20.0/1110) = 5.18.
For a 95% confidence level, determine the critical values of t with α/2 = 0.025 in each tail, using n – 2 = 8 degrees of freedom: –t0.025,8 = –2.306 and t0.025,8 = 2.306. Reject the null hypothesis if t > 2.306 or t < –2.306, depending on whether the slope is positive or negative. In this case, the computed t exceeds 2.306, so the null hypothesis is rejected; we conclude that β1 ≠ 0 and there is a linear relationship between y and x.

Confidence Interval Estimate for the Slope β1

The confidence interval estimate for the slope β1 is given by:

β̂1 ± tα/2,n–2 √(s²/sx)

Substituting the previous data into the above formula gives the confidence interval around the slope of the line: 0.6955 ± 2.306(0.1342), or about 0.386 to 1.005 per hour of study.
Intervals constructed by this procedure will enclose the true value of β1 95% of the time. Hence, for every 10 hours of increased study, the expected increase in test scores would fall in the interval of 3.86 to 10.05 percentage points.
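All of the slope inferences in this example follow from the three sums the text reports for the study-time data (sxy = 772, sx = 1,110, sy = 696.9, with n = 10), as this sketch verifies:

```python
from math import sqrt

# Sums of squares reported in the text for the study-time data
sxy, sx, sy, n = 772.0, 1110.0, 696.9, 10

b1 = sxy / sx                   # least squares slope
SSE = sy - b1 * sxy             # error sum of squares
s2 = SSE / (n - 2)              # estimate of sigma_eps^2
t = b1 / sqrt(s2 / sx)          # test statistic for H0: beta1 = 0

half = 2.306 * sqrt(s2 / sx)    # t(0.025, 8) margin for the slope
print(round(b1, 4), round(SSE, 1), round(t, 2),
      round(10 * (b1 - half), 2), round(10 * (b1 + half), 2))
```

The last two numbers reproduce the 3.86-to-10.05 interval per 10 hours of study quoted above.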

Correlation Coefficient

The population linear correlation coefficient, ρ, measures the strength of the linear relationship between the paired x and y values in a population; ρ is a population parameter. For the population, the Pearson product moment coefficient of correlation, ρx,y, is given by:

ρx,y = cov(x, y) / (σxσy)

where cov means covariance. Note that –1 ≤ ρ ≤ +1.

The sample linear correlation coefficient, r, measures the strength of the linear relationship between the paired x and y values in a sample; r is a sample statistic. For a sample, the Pearson product moment coefficient of correlation, rx,y, is given by:

r = sxy / √(sx sy)

For example, using the study time and test score data reviewed earlier, determine the correlation coefficient. With sxy = 772, sx = 1,110, and sy = 696.9:

r = 772 / √((1110)(696.9)) = 772/879.5 = 0.878

The numerator used in calculating r is identical to the numerator of the formula for the slope β1. Thus, the coefficient of correlation r will assume exactly the same sign as β1 and will equal zero when β1 = 0.

  • A positive value for r implies that the line slopes upward to the right.
  • A negative value for r implies that the line slopes downward to the right.
  • Note that r = 0 implies no linear correlation, not simply “no correlation.” A pronounced curvilinear pattern may exist.

When r = 1 or r = –1, all points fall on a straight line; when r = 0, they are scattered and give no evidence of a linear relationship. Any other value of r suggests the degree to which the points tend to be linearly related. If x is of any value in predicting y, then SSE can never be larger than the total sum of squares:

SST = sy = Σ(yi – ȳ)²

Coefficient of Determination (R2)

The coefficient of determination is R2. The square of the linear correlation coefficient is r2. It can be shown that: R2 = r2 .

The coefficient of determination is the proportion of the explained variation divided by the total variation, when a linear regression is performed. r² lies in the interval 0 ≤ r² ≤ 1. r² will equal +1 only when all the points fall exactly on the fitted line, that is, when SSE equals zero.

For example, using the data from the example above, determine the coefficient of determination:

r² = (0.878)² = 0.77

One can say that 77% of the variation in test scores can be explained by variation in study hours.

r² = 1 – SSE/SST

Where SST = total sum of squares (from the experimental average) and SSE = total sum of squared errors (from the best fit). Note that when SSE is zero, r² equals one, and when SSE equals SST, then r² equals zero.
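The correlation and determination coefficients quoted for the study-time data follow directly from the same reported sums:

```python
from math import sqrt

# Sums of squares reported in the text for the study-time data
sxy, sx, sy = 772.0, 1110.0, 696.9

r = sxy / sqrt(sx * sy)         # sample correlation coefficient
SSE = sy - (sxy**2) / sx        # error sum of squares
r2 = 1 - SSE / sy               # coefficient of determination
print(round(r, 3), round(r2, 2))
```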

Correlation Versus Causation

Consider an example relating car weight and gas mileage, where there is strong evidence of a correlation between the two. The student should be aware that a number of other factors (carburetor type, car design, air conditioning, passenger weights, speed, etc.) could also be important. The most important cause may be a different or a collinear variable. For example, car and passenger weight may be collinear. There can also be such a thing as a nonsensical correlation, e.g., it rains after my car is washed.

Simple Linear Regression in a Nutshell

  1. Determine which relationship will be studied.
  2. Collect data on the x and y variables.
  3. Set up a fitted line plot by charting the independent variable on the x axis and the dependent variable on the y axis.
  4. Create the fitted line. If creating the fitted line plot by hand, draw a straight line through the values that keeps the least amount of total space between the line and the individual plotted points (a “best fit”). If using a computer program, compute and plot this line via the “least squares method.”
  5. Compute the correlation coefficient r.
  6. Determine the slope or y intercept of the line by using the equation y = mx + b. The y intercept (b) is the point on the y axis through which the “best fitted line” passes (at this point, x = 0). The slope of the line (m) is computed as the change in y divided by the change in x (m = Δy/ Δx). The slope, m, is also known as the coefficient of the predictor variable, x.
  7. Calculate the residuals. The difference between the predicted response variable for any given x and the experimental value or actual response (y) is called the residual. The residual is used to determine if the model is a good one to use. The estimated standard deviation of the residuals is a measure of the error term about the regression line.
  8. To determine significance, perform a t-test (with the help of a computer) and calculate a p-value for each factor. A p-value less than α (usually 0.05) will indicate a statistically significant relationship.
  9. Analyze the entire model for significance using ANOVA, which displays the results of an F-test with an associated p-value.
  10.  Calculate R2 and R2 adj. R2, the coefficient of determination, is the square of the correlation coefficient and measures the proportion of variation that is explained by the model. Ideally, R2 should be equal to one, which would indicate zero error.

    R2 = SSregression / SStotal
    = (SStotal – SSerror ) / SStotal
    = 1-[SSerror / SStotal ]
    Where SS = the sum of the squares.
    R2 adj is a modified measure of R2 that takes into account the number of terms in the model and the number of data points.
    R2 adj = 1- [SSerror / (n-p)] / [SStotal / (n-1)]
    Where n = number of data points and p = number of terms in the model. The number of terms in the model also includes the constant.
    Note: Unlike R2, R2 adj can become smaller when added terms provide little new information and as the number of model terms gets closer to the total sample size. Ideally, R2 adj should be maximized and as close to R2 as possible. Conclusions should be validated, especially when historical data has been used.
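The R² and adjusted R² formulas above can be illustrated numerically; the sums of squares here are hypothetical:

```python
# Hypothetical ANOVA sums of squares: n = 20 points, p = 3 model
# terms (constant included)
SS_total, SS_error, n, p = 500.0, 120.0, 20, 3

R2 = 1 - SS_error / SS_total
R2_adj = 1 - (SS_error / (n - p)) / (SS_total / (n - 1))
print(round(R2, 2), round(R2_adj, 3))
```

As the note says, the adjusted value is the smaller of the two because it charges the model for each extra term.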

Multiple Linear Regression

Multiple linear regression is an extension of the methodology for linear regression to more than one independent variable. By including more than one independent variable, a higher proportion of the variation in y may be explained.

First-Order Linear Model

Y = β0 + β1x1 + β2x2 + … + βkxk + ε

A Second-Order Linear Model (Two Predictor Variables)

Y = β0 + β1x1 + β2x2 + β3x1x2 + β4x1² + β5x2² + ε

Just like r² (the linear coefficient of determination), R² (the multiple coefficient of determination) takes values in the interval 0 ≤ R² ≤ 1.

Attributes Data Analysis

The analysis of attribute data is organized into dichotomous values, categories, or groups. Applications involve decisions such as yes/no, pass/fail, good/bad, poor/fair/good/super/excellent, etc. Some of the techniques used in nonlinear regression models include: logistic regression analysis, logit regression analysis, and probit regression analysis. A description of the three models follows:

  •  Logistic regression relates categorical, independent variables to a single dependent variable. The three models described within Minitab are binary, ordinal, and nominal.
  • Logit analysis is a subset of the log-linear model. It deals with only one dependent variable, using odds and odds ratio determinations.
  • Probit analysis is similar to accelerated life testing. A unit has a stress imposed on it with the response being pass/fail, good/bad, etc. The response is binary (good/bad) versus an actual failure time.

Log-linear models are nonlinear regression models similar to linear regression equations. Since they are nonlinear, it is necessary to take the logs of both sides of the equation in order to produce a linear equation. This produces a log-linear model. Logit models are subsets of this model.

Logistic Regression

Logistic regression  is used to establish a y = f (x) relationship when the dependent variable (y) is binomial or dichotomous. Similar to regression, it explores the relationships between one or more predictor variables and a binary response. Logistic Regression helps us to predict the probability of future events belonging to one group or another (i.e., pass/fail, profitable/nonprofitable, or purchase/not purchase).  Logistic regression relates one or more independent variables to a single dependent variable. The independent variables are described as predictor variables and the response is a dependent variable. Logistic regression is similar to regular linear regression, since both have regression coefficients, predicted values, and residuals. Linear regression assumes that the response variable is continuous, but for logistic regression, the response variable is binary. The regression coefficients for linear regression are determined by the ordinary least squares approach, while logistic regression coefficients are based on a maximum likelihood estimation.
Logistic regression can provide analysis of the two values of interest: yes/no, pass/fail, good/bad, enlist/not enlist, vote/no vote, etc. A logistic regression can also be described as a binary regression model. It is nonlinear and has an S-shaped form. The values are never below 0 and never above 1. The general logistic regression equation can be shown as:

y = b0 + b1x1 + e,  where y = 0, 1

The probability of results being in a certain category is given by:

p = e^(b0 + b1x) / (1 + e^(b0 + b1x)) = 1 / (1 + e^–(b0 + b1x))

The predictor variables (x’s) can be either continuous or discrete, just as for any problem using regression. However, the response variable has only two possible values (e.g., pass/fail, etc.). Because regression analysis requires a continuous response variable that is not bounded, this must be corrected. This is accomplished by first converting the response from events (e.g., pass/fail) to the probability of one of the events, or p. Thus if p = Probability (pass), then p can take on any value from 0 to 1. This conversion results in a continuous response, but one that is still bounded. An additional transformation is required to make the response both continuous and unbounded. This is called the link function. The most common link function is the “logit,” which is explained below.

Y = β0 + β1x

We need a continuous, unbounded Y, so the logit transformation is applied:

logit(p) = ln[p/(1 – p)] = β0 + β1x


Logistic regression, also known as binary logistic regression (BLR), fits sample data to an S-shaped logistic curve. The curve represents the probability of the event. At low levels of the independent variable (x), the probability approaches zero. As the predictor variable increases, the probability increases to a point where the slope decreases. At high levels of the independent variable, the probability approaches 1. The following two examples fit probability curves to actual data. The curve on the top represents the “best fit.” The curve through the data on the bottom contains a zone of uncertainty where events and non-events (1’s and 0’s) overlap.

If the probability of an event, p, is greater than 0.5, binary logistic regression would predict a “yes” for the event to occur. The probability of an event not occurring is described as (1 – p). The odds, or p/(1 – p), compare the probability of an event occurring to the probability of it not occurring. The logit, or “link” function, represents the relationship between x and y.
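The logistic (inverse logit) and logit functions are one-liners; the coefficients below are hypothetical, chosen so the linear predictor b0 + b1x equals 1:

```python
from math import exp, log

def logistic(b0, b1, x):
    """Probability of the event under the fitted logistic model."""
    return 1.0 / (1.0 + exp(-(b0 + b1 * x)))

def logit(p):
    """Link function: the log-odds ln(p / (1 - p))."""
    return log(p / (1 - p))

# Hypothetical coefficients; logit() recovers the linear predictor
b0, b1, x = -4.0, 0.05, 100.0
p = logistic(b0, b1, x)
print(round(p, 3))  # the logit of this probability is b0 + b1*x = 1.0
```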

Steps for Logistic Regression

  1. Define the problem and the question(s) to be answered.
  2. Collect the appropriate data in the right quantity.
  3. Hypothesize a model.
  4. Analyze the data. Many statistical software packages are available to help analyze data.
  5. Check the model for goodness of fit.
  6. Check the residuals for violations of assumptions.
  7. Modify the model, if required, and repeat.
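Steps 3 through 5 can be sketched in Python: hypothesize the logistic model, fit b0 and b1 by maximum likelihood, and inspect the fit. The 50 (hours studied, pass/fail) points below are simulated stand-ins, since the chapter's data table is not reproduced here, and the "true" coefficients used to simulate them are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated stand-in for the 50-student (hours studied, pass/fail) data set.
rng = np.random.default_rng(7)
hours = rng.uniform(20, 140, 50)
true_p = 1 / (1 + np.exp(-(-8.5 + 0.108 * hours)))   # assumed "true" curve
passed = rng.binomial(1, true_p)

X = np.column_stack([np.ones_like(hours), hours])    # [1, x] design matrix

def neg_log_likelihood(b):
    z = X @ b
    # sum of log(1 + e^z) - y*z, computed stably with logaddexp
    return np.sum(np.logaddexp(0.0, z)) - passed @ z

res = minimize(neg_log_likelihood, np.zeros(2), method="BFGS")
b0, b1 = res.x
print(b0, b1)   # fitted intercept and slope (b1 should come out positive)
```

With the fitted coefficients in hand, step 5's goodness of fit could then be checked (for example with a Hosmer-Lemeshow grouping), which the text notes statistical packages perform automatically.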

An example will be used to compare the number of hours studied for an exam versus pass/fail responses. Data is provided for 50 students: the number of hours each student spends studying is recorded, along with the end result, the dependent (pass/fail) variable. In logistic regression, because of the use of attribute data, there should be about 50 data points per variable. An analysis will first be made with the regular linear regression model. This result will then be compared to the logistic regression model. Logistic regression can be used to predict the probability that an observation belongs to one of two groups.

For the logistic regression example, Minitab is used to determine the regression coefficients. Using Excel to calculate the probabilities, the logistic regression curve is displayed in the figure.

An S-shaped curve can be used to smooth out the data points. The curve moves from the zero probability point up to the 1.0 line. The probabilities in the logistic curve were calculated from the logistic equation p = e^(b0 + b1x)/(1 + e^(b0 + b1x)). Using Minitab, the regression coefficients and the equation can be determined. After determining the regression coefficients, the probability of a student passing the exam after studying 80 hours can be calculated.
It appears that there is a 54.5% probability of passing after 80 hours of study. At 100 hours or more, the probability of passing increases to more than 90%.
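The 54.5%-at-80-hours figure can be reproduced with a short calculation. The text does not print the fitted coefficients, so the values below are back-solved from its quoted results (p = 0.545 at 80 hours, and the slope b1 = 0.10821 quoted later in the odds-ratio discussion); treat them as reconstructions, not the actual Minitab output.

```python
import math

# Reconstructed coefficients (assumptions back-solved from the text's
# quoted results), not the book's printed Minitab output.
b0, b1 = -8.476, 0.10821

def p_pass(hours):
    """Logistic probability p = e^(b0+b1*x) / (1 + e^(b0+b1*x))."""
    L = b0 + b1 * hours
    return math.exp(L) / (1 + math.exp(L))

print(round(p_pass(80), 3))    # ≈ 0.545
print(round(p_pass(100), 3))   # above 0.90, matching the text
```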
Minitab provides three logistic regression procedures: binary, ordinal, and nominal logistic regression.
Grimm provides the following logistic regression assumptions:

  • There are only two values (pass/fail) with only one outcome per event
  • The outcomes are statistically independent
  • All relevant predictors are in the model
  • The categories are mutually exclusive and collectively exhaustive (one category at a time)
  • Sample sizes are larger than for linear regression

The individual regression coefficients can be tested for significance through comparison of the coefficient to its standard error. The resulting z value is compared to the value obtained from the normal distribution:

z = b1/SE(b1)

The logistic regression model can be tested via several goodness-of-fit tests. Minitab will automatically test three different methods: Pearson, Deviance, and Hosmer-Lemeshow. The simple logistic regression model can be extended to include several other predictors (called multiple logistic regression). If the model contains only categorical variables, it can be classified as a log-linear model.

Logit Analysis

Logit analysis uses odds to determine how much more likely an observation will be a member of one group versus another group (pass/fail, etc.). A probability of p = 0.80 of being in group A (passing) can be expressed in odds terms as 4:1. There are 4 chances to pass versus 1 chance to fail, or odds of 4:1. The ratio 4/1 or p/(1-p) is called the odds, and the log of the odds, L=ln(p/(1-p)) is called the Logit.
The probability p is bounded between 0 and 1 (0 < p < 1), while the logit L is unbounded. The probability for a given L value is provided by the equation:

p = e^L/(1 + e^L)

Example: From the previous data, there were 50 students who took the exam, but only 27 passed. What are the odds of passing?
Odds = p/(1-p) = 0.54/0.46 = 1.17, or 1.17:1

Example: From the previous data, a student studying 80 hours has a 54.5% chance of passing. What are the odds and the accompanying logit probability?
Odds = p/(1-p) = 0.545/(1-0.545) = 0.545/0.455 = 1.198, or 1.198:1
Logit = ln(p/(1-p)) = ln(1.198) = 0.1807
To find the probability, use the logit equation:
p = e^L/(1 + e^L) = e^0.1807/(1 + e^0.1807) = 1.198/2.198 = 0.545
If the student studies 80 hours, the probability of passing is 54.5%. This is the same result as before, but represents another way to calculate it.
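The probability → odds → logit → probability round trip in this example can be checked in a few lines:

```python
import math

# Round trip from the worked example: probability -> odds -> logit -> probability.
p = 0.545
odds = p / (1 - p)                       # ≈ 1.198, or 1.198:1
L = math.log(odds)                       # logit = ln(odds) ≈ 0.18
p_back = math.exp(L) / (1 + math.exp(L)) # recovers the original probability

print(round(odds, 3), round(L, 4), round(p_back, 3))
```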

The odds ratio is the change in the odds of moving up or down a level in a group for a one-unit increase or decrease in the predictor. The exponent, e, and the slope coefficient, b1, are used to determine the odds. If b1 = 0.10821, then the odds ratio for moving to another level is: e^b1 = e^0.10821 ≈ 1.11
Positive effects give odds ratios greater than 1, while negative effects give odds ratios between 0 and 1.

Logit Regression Model

In cases where the values for each category are continually increasing or continually decreasing, a log transform should be performed to obtain a near straight line. Expanding the logit formula to obtain a straight linear model results in the following formula:
L = logit = ln(p/(1-p)) = ln(e^(b0 + b1x1)) = ln(e^b0 · e^(b1x1)) = b0 + b1x1
The equation expanded for multiple predictor variables x1, x2, …, xn is:     L = b0 + b1x1 + b2x2 + … + bnxn

Logit Regression Example

A medical researcher, with an interest in physical fitness, conducted a long-term walking plan for weight loss. She was able to enroll 1,120 patients in a 2-year walking program. The results were positive, and weight loss appeared to accelerate as the number of steps walked increased. The data is presented in the table below. A linear regression of the data produced a good R² value of 93.8%. However, the graph indicated nonlinear results.
A logit transformation of the data values was performed. The logit value was obtained by dividing the “number lost > 30 lb” by “number lost < 30 lb”, and then taking the natural log. A regression was performed on the steps walked and logit to obtain a new equation.

The resulting R2 was 98.7%. The equation is: L = -5.307 + 0.00067053 x1
Probit Analysis

Probit analysis is similar to accelerated life testing and survivability analysis: an item has a stress imposed upon it to see if it fails or survives. The probit model has an expected mean of zero and an expected variance of 1, while the logit model has an expected mean of zero and an expected variance of π²/3 = 3.29. The probit model is close to the logit model; extremely large sample sizes are required to realize a difference between them.

The probit model is: Φ⁻¹(p) = α + βx = b0 + b1x

Where b0 = -μ/σ and b1 = -1/σ, or σ = 1/|b1|

In comparing the logit to the probit model, the coefficients differ by a factor of about 1.814 (π/√3). That is: bL ≈ 1.814 bP

For example: A circular plate is welded to a larger plate to form a supporting structure. There is a need to validate the structure’s capability to resist a torque force. This is a destructive test of the weldment, with a binary response consisting of success or failure. A torque wrench will be used to twist the structure. The levels of applied force are 50, 100, 150, and 200 lbf-in. A total of 100 samples will be tested at each level of force.
The probit analysis, using Minitab, indicates the normal model was nonlinear and the logistic model would be a better choice. The coefficients are: b0 = 4.0058, b1 = -0.031368
The probit model has 7 life distributions to choose from: normal, lognormal (base e), lognormal (base 10), logistic, loglogistic, Weibull, and extreme value. Using Minitab, the best fit for the above data was the logistic distribution. The percentiles, survival probabilities, and plots can also be obtained using Minitab.
For the weldment example, a table of percentiles (Minitab) provides the percentage of surviving parts at various levels of torque. The plot in the figure shows the 50% survival level to be about 130 lbf-in and the 5% survival level to be about 220 lbf-in.
Refer to the table below for a listing of low survival percentages.

The 5% success level (or 95% failure level) indicates the torque force to be 221.5752 lbf-in. The 95% confidence interval is also included.
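The quoted percentiles follow directly from the logistic coefficients b0 = 4.0058 and b1 = -0.031368 given above, taking p = e^L/(1 + e^L) as the survival probability:

```python
import math

# Check of the weldment percentiles using the quoted logistic coefficients.
b0, b1 = 4.0058, -0.031368

def survival(torque):
    """Survival probability p = e^L / (1 + e^L) with L = b0 + b1*torque."""
    L = b0 + b1 * torque
    return math.exp(L) / (1 + math.exp(L))

def torque_at(p):
    """Invert the model: the torque giving survival probability p."""
    return (math.log(p / (1 - p)) - b0) / b1

print(round(torque_at(0.50), 1))   # 127.7 lbf-in (the "about 130" level)
print(round(torque_at(0.05), 1))   # 221.6 lbf-in (the 5% survival level)
```

The second value matches the 221.5752 lbf-in figure quoted in the text for the 5% success (95% failure) level.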


Process Capability

Process capability refers to the ability of a process to consistently make a product that meets a customer-specified tolerance. Capability indices are used to predict the performance of a process by comparing the width of process variation to the width of the specified tolerance. Process capability is used extensively in many industries and only has meaning if the process being studied is stable (in statistical control). Capability indices allow calculations for both short-term (Cp and Cpk) and long-term (Pp and Ppk) performance for a process whose output is measured using variable data at a specific opportunity for a defect.

The determination of process capability requires a predictable pattern of statistically stable behavior (most frequently a bell-shaped curve) where the chance causes of variation are compared to the engineering specifications. A capable process is a process whose spread on the bell-shaped curve is narrower than the tolerance range or specification limits. USL is the upper specification limit and LSL is the lower specification limit.
It is often necessary to compare the process variation with the engineering or specification tolerances to judge the suitability of the process. Process capability analysis addresses this issue. A process capability study includes three steps:

  • Planning for data collection
  • Collecting data
  • Plotting and analyzing the results

The objective of process quality control is to establish a state of control over the manufacturing process and then maintain that state of control through time. Actions that change or adjust the process are frequently the result of some form of capability study. When the natural process limits are compared with the specification range, any of the following possible courses of action may result:

  • Do nothing. If the process limits fall well within the specification limits, no action may be required.
  • Change the specifications. The specification limits may be unrealistic. In some cases, specifications may be set tighter than necessary. Discuss the situation with the final customer to see if the specifications may be relaxed or modified.
  • Center the process. When the process spread is approximately the same as the specification spread, an adjustment to the centering of the process may bring the bulk of the product within specifications.
  • Reduce variability. This is often the most difficult option to achieve. It may be possible to partition the variation (stream-to-stream, within piece, batch-to-batch, etc.) and work on the largest offender first. For a complicated process, an experimental design may be used to identify the leading source of variation.
  • Accept the losses. In some cases, management must be content with a high loss rate (at least temporarily). Some centering and reduction in variation may be possible, but the principal emphasis is on handling the scrap and rework efficiently.

Other capability applications:

  •  Providing a basis for setting up a variables control chart
  •  Evaluating new equipment
  • Reviewing tolerances based on the inherent variability of a process
  • Assigning more capable equipment to tougher jobs
  • Performing routine process performance audits
  • Determining the effects of adjustments during processing

Identifying Characteristics

The identification of characteristics to be measured in a process capability study should meet the following requirements:

  • The characteristic should be indicative of a key factor in the quality of the product or process.
  • It should be possible to adjust the value of the characteristic.
  • The operating conditions that affect the measured characteristic should be defined and controlled

If a part has ten different dimensions, process capability would not normally be performed for all of these dimensions. Selecting one, or possibly two, key dimensions provides a more manageable method of evaluating the process capability. For example, in the case of a machined part, the overall length or the diameter of a hole might be the critical dimension. The characteristic selected may also be determined by the history of the part and the parameter that has been the most difficult to control or has created problems in the next higher level of assembly. Customer purchase order requirements or industry standards may also determine the characteristics that are required to be measured. In the automotive industry, the Production Part Approval Process (PPAP) states “An acceptable level of preliminary process capability must be determined prior to submission for all characteristics designated by the customer or supplier as safety, key, critical, or significant, that can be evaluated using variables (measured) data.” Chrysler, Ford and General Motors use symbols to designate safety and/or government regulated characteristics and important performance, fit, or appearance characteristics.

Identifying Specifications/Tolerances

The process specifications or tolerances, are determined either by customer requirements, industry standards, or the organization’s engineering department. The process capability study is used to demonstrate that the process is centered within the specification limits and that the process variation predicts the process is capable of producing parts within the tolerance requirements. When the process capability study indicates the process is not capable, the information is used to evaluate and improve the process in order to meet the tolerance requirements. There may be situations where the specifications or tolerances are set too tight in relation to the achievable process capability. In these circumstances, the specification must be reevaluated. If the specification cannot be opened, then the action plan is to perform 100% inspection of the process, unless inspection testing is destructive.

Developing Sampling Plans

The appropriate sampling plan for conducting process capability studies depends upon the purpose and whether there are customer or standards requirements for the study. Ford and General Motors specify that process capability studies for PPAP submissions be based on data taken from a significant production run of a minimum of 300 consecutive pieces.
If the process is currently running and is in control, control chart data may be used to calculate the process capability indices. If the process fits a normal distribution and is in statistical control, then the standard deviation can be estimated from:

σ̂ = R̄/d2

For new processes, for example for a project proposal, a pilot run may be used to estimate the process capability. The disadvantage of using a pilot run is that the estimated process variability is most likely less than the process variability expected from an ongoing process. Process capability studies conducted for the purpose of improving the process may be performed using a design of experiments (DOE) approach, in which the objective is to find the values of the process variables that yield the lowest process variation.
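As a minimal sketch, the control-chart estimate of sigma (R̄/d2, with the standard constant d2 = 2.326 for subgroups of five) might look like this; the subgroup ranges are hypothetical:

```python
# Sketch: estimating short-term sigma from control-chart subgroup ranges.
# d2 = 2.326 is the standard control-chart constant for subgroups of n = 5.
subgroup_ranges = [0.004, 0.006, 0.005, 0.007, 0.005, 0.006]  # hypothetical data

r_bar = sum(subgroup_ranges) / len(subgroup_ranges)  # average range R-bar
d2 = 2.326
sigma_hat = r_bar / d2                               # sigma-hat = R-bar / d2
print(round(sigma_hat, 5))
```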

Verifying Stability and Normality

If only common causes of variation are present in a process, then the output of the process forms a distribution that is stable over time and is predictable. If special causes of variation are present, the process output is not stable over time.

The figure depicts an unstable process with both the process average and variation out-of-control. Note that the process may also be unstable if either the process average or the variation alone is out-of-control. Common causes of variation refer to the many sources of variation within a process that have a stable and repeatable distribution over time. This is called a state of statistical control, and the output of the process is predictable. Special causes refer to any factors causing variation that are not always acting on the process. If special causes of variation are present, the process distribution changes and the process output is not stable over time. When plotting a process on a control chart, lack of process stability can be shown by several types of patterns including: points outside the control limits, trends, points on one side of the center line, cycles, etc.

The validity of the normality assumption may be tested using the chi square hypothesis test. To perform this test, the data is partitioned into data ranges. The number of data points in each range is then compared with the number predicted from a normal distribution. Using the hypothesis test with a selected confidence level, a conclusion can be made as to whether the data follows a normal distribution.
The chi square hypothesis test is:
Ho: The data follows a specified distribution
H1: The data does not follow a specified distribution
and is tested using the following test statistic:

χ² = Σ (Oi − Ei)²/Ei

where Oi is the observed count and Ei is the expected count in each data range. Continuous data may be tested using the Kolmogorov-Smirnov goodness-of-fit test. It has the same hypothesis test as the chi square test, and the test statistic is given by:

D = max |F(x) − Sn(x)|

Where D is the test statistic, F is the theoretical cumulative distribution of the continuous distribution being tested, and Sn is the empirical cumulative distribution of the sample. An attractive feature of this test is that the distribution of the test statistic does not depend on the underlying cumulative distribution function being tested. Limitations of this test are that it only applies to continuous distributions and that the distribution must be fully specified: the location, scale, and shape parameters must be specified and not estimated from the data. The Anderson-Darling test is a modification of the Kolmogorov-Smirnov test and gives more weight to the tails of the distribution. If the data does not fit a normal distribution, the chi square hypothesis test may also be used to test the fit to other distributions, such as the exponential or binomial distributions.
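A hedged sketch of these normality checks using SciPy's built-in tests; the "data" here are exact normal quantiles standing in for real measurements, so both tests should pass:

```python
import numpy as np
from scipy import stats

# Stand-in data: exact quantiles of N(12, 2), i.e. a perfectly "normal" sample.
probs = np.linspace(0.005, 0.995, 100)
data = stats.norm.ppf(probs, loc=12.0, scale=2.0)

# K-S strictly requires a fully specified distribution; standardizing with the
# sample mean/std (as here) makes the test approximate (the Lilliefors caveat).
z = (data - data.mean()) / data.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")

# Anderson-Darling: a K-S modification with more weight on the tails.
ad = stats.anderson(data, dist="norm")

print(ks_p > 0.05)                           # True -> normality not rejected
print(ad.statistic < ad.critical_values[2])  # True at the 5% level
```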

Capability Index Failure Rates

There is a direct link between calculated Cp (and Pp) values and the standard normal (Z value) table. A Cp of 1.0 corresponds to a Z value of 3.0; ppm equals parts per million of nonconformance (or failure) when the process:

  • Is centered between the specification limits
  • Has a two-tailed specification
  • Is normally distributed
  • Has no significant shifts in average or dispersion

When the Cp, Cpk, Pp, and Ppk values are 1.0 or less, Z values and the standard normal table can be used to determine failure rates. With the drive for increasingly dependable products, there is a need for failure rates in the Cp range of 1.5 to 2.0.

Process Capability Indices

To determine process capability, an estimation of sigma is necessary:

σ̂ = R̄/d2

σ̂ is an estimate of process capability sigma and comes from a control chart.
The capability index is defined as:

Cp = (USL − LSL)/(6σ̂)

As a rule of thumb:

  • Cp > 1.33: capable
  • Cp = 1.00 to 1.33: capable with tight control
  • Cp < 1.00: incapable

The capability ratio is defined as:

CR = 6σ̂/(USL − LSL)

As a rule of thumb:

  • CR < 0.75: capable
  • CR = 0.75 to 1.00: capable with tight control
  • CR > 1.00: incapable

Note, this rule of thumb logic is somewhat out of step with the six sigma assumption of a ±1.5 sigma shift. The above formulas only apply if the process is centered and stays centered within the specifications, so that Cp = Cpk.

Cpk is the ratio giving the smallest answer between:

Cpk = min[ (USL − X̄)/(3σ̂), (X̄ − LSL)/(3σ̂) ]

For example, for a process with X̄ = 12, σ̂ = 2, a USL = 16 and LSL = 4, determine Cp and Cpk:

Cp = (16 − 4)/(6 × 2) = 1.0
Cpk = min[ (16 − 12)/(3 × 2), (12 − 4)/(3 × 2) ] = min[0.67, 1.33] = 0.67
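The Cp and Cpk definitions can be wrapped in a small helper and checked against this example's values:

```python
def cp_cpk(mean, sigma, usl, lsl):
    """Short-term capability: Cp ignores centering, Cpk penalizes it."""
    cp = (usl - lsl) / (6 * sigma)
    cpu = (usl - mean) / (3 * sigma)   # distance to upper spec, in 3-sigma units
    cpl = (mean - lsl) / (3 * sigma)   # distance to lower spec, in 3-sigma units
    return cp, min(cpu, cpl)

# The worked example: X-bar = 12, sigma = 2, USL = 16, LSL = 4
cp, cpk = cp_cpk(12, 2, 16, 4)
print(cp, round(cpk, 2))   # 1.0 0.67
```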

Cpm index

The Cpm index is defined as:

Cpm = (USL − LSL) / (6√(σ² + (μ − T)²))

Where: USL = upper specification limit
LSL = lower specification limit
μ = process mean
T = target value
σ = process standard deviation
Cpm is based on the Taguchi index, which places more emphasis on process centering on the target.

For example, for a process with μ = 12, σ = 2, T = 10, USL = 16 and LSL = 4:

Cpm = (16 − 4) / (6√(2² + (12 − 10)²)) = 12/(6√8) = 12/16.97 = 0.71
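A quick check of the Cpm calculation for these values:

```python
import math

def cpm(mean, sigma, target, usl, lsl):
    """Taguchi-style capability index: penalizes deviation of the mean from target."""
    tau = math.sqrt(sigma**2 + (mean - target)**2)
    return (usl - lsl) / (6 * tau)

# The worked example: mu = 12, sigma = 2, T = 10, USL = 16, LSL = 4
print(round(cpm(12, 2, 10, 16, 4), 3))   # 0.707
```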

Process Performance indices

To determine process performance, an estimation of sigma is necessary:

σi = √( Σ(xi − X̄)²/(n − 1) )

σi is a measure of total data sigma and generally comes from a calculator or computer.
The performance index is defined as:

Pp = (USL − LSL)/(6σi)

The performance ratio is defined as:

PR = 6σi/(USL − LSL)

Ppk is the ratio giving the smallest answer between:

Ppk = min[ (USL − X̄)/(3σi), (X̄ − LSL)/(3σi) ]

Short-Term and Long-Term Capability

Up to this point, process capability has been discussed in terms of stable processes, with assignable causes removed. In fact, the process average and spread are dependent upon the number of units measured or the duration over which the process is measured.
When a process capability is determined using one operator on one shift, with one piece of equipment, and a homogeneous supply of materials, the process variation is relatively small. As factors for time, multiple operators, various lots of material, environmental changes, etc. are added, each contributes to increasing the process variation. Control limits based on a short-term process evaluation are closer together than control limits based on the long-term process.

A short run can be described with respect to time, and a small run is one where a small number of pieces is produced. When a small amount of data is available, there is generally less variation than is found with a larger amount of data. Control limits based on the smaller number of samples will be narrower than they should be, and control charts will produce false out-of-control patterns. Smith suggests a modified X̄ and R chart for short runs: run an initial 3 to 10 pieces without adjustment, compare a calculated value with a critical value, and either adjust the process or run an initial number of subgroups. Inflated D4 and A2 values are used to establish control limits, which are recalculated after additional subgroups are run. For small runs with a limited amount of data, an X and MR chart can be used: X represents individual data values, not an average, and MR is the moving range, a measure of piece-to-piece variability.

Process capability or Cpk values determined from either of these methods must be considered preliminary information. As the number of data points increases, the calculated process capability will approach the true capability. When comparing attribute with variable data, variable data generally provides more information about the process for a given number of data points. Using variables data, a reasonable estimate of the process mean and variation can be made with 25 to 30 groups of five samples each, whereas a comparable estimate using attribute data may require 25 groups of 50 samples each. Using variables data is preferable to using attribute data for estimating process capability.

Short-Term Capability Indices

The short-term capability indices Cp and Cpk are measures calculated using the short-term process standard deviation. Because the short-term process variation is used, these measures are free of subgroup drift in the data and take into account only the within subgroup variation. Cp is a ratio of the customer-specified tolerance to six standard deviations of the short-term process variation. Cp is calculated without regard to location of the data mean within the tolerance, so it gives an indication of what the process could perform to if the mean of the data was centered between the specification limits. Because of this assumption, Cp is sometimes referred to as the process potential. Cpk is a ratio of the distance between the process average and the closest specification limit, to three standard deviations of the short-term process variation. Because Cpk takes into account location of the data mean within the tolerance, it is a more realistic measure of the process capability. Cpk is sometimes referred to as the process performance.

Long-Term Capability Indices

The long-term capability indices Pp and Ppk are measures calculated using the long-term process standard deviation. Because the long-term process variation is used, these measures take into account subgroup drift in the data as well as the within subgroup variation. Pp is a ratio of the customer-specified tolerance to six standard deviations of the long-term process variation. Like Cp, Pp is calculated without regard to location of the data mean within the tolerance. Ppk is a ratio of the distance between the process average and the closest specification limit, to three standard deviations of the long-term process variation. Like Cpk, Ppk takes into account the location of the data mean within the tolerance. Because Ppk uses the long-term variation in the process and takes into account the process centering within the specified tolerance, it is a good indicator of the process performance the customer is seeing.

Because both Cp and Cpk are ratios of the tolerance width to the process variation, larger values of Cp and Cpk are better. The larger the Cp and Cpk, the wider the tolerance width relative to the process variation. The same is also true for Pp and Ppk. What determines a “good” value depends on the definition of “good.” A Cp of 1.33 is approximately equivalent to a short-term Z of 4. A Ppk of 1.33 is approximately equivalent to a long-term Z of 4. However, a Six Sigma process typically has a short-term Z of 6 or a long-term Z of 4.5. The Z values are calculated from the distance to the nearest specification limit:

Zst = (SL − X̄)/σst        Zlt = (SL − X̄)/σlt

Where σst = short-term pooled standard deviation,
and σlt = long-term standard deviation.

Manufacturing Example:

Suppose the diameter of a spark plug is a critical dimension that needs to conform to lower and upper customer specification limits of 0.480″ and 0.490″, respectively. Five randomly selected spark plugs are measured in every work shift. Each of the five samples on each work shift is called a subgroup. Subgroups have been collected for three months on a stable process. The average of all the data was 0.487″. The short-term standard deviation has been calculated and was determined to be 0.0013″. The long-term standard deviation was determined to be 0.019″.

To Calculate Cp and Cpk:
Cp = (0.490 – 0.480)/(6 x 0.0013) = 0.010/0.0078 = 1.28
Cpl = (0.487 – 0.480)/(3 x 0.0013) = 0.007/0.0039 = 1.79
Cpu = (0.490 – 0.487)/(3 x 0.0013) = 0.003/0.0039 = 0.77
Cpk = min (Cpl, Cpu)
Cpk = min (1.79, 0.77) = 0.77

To Calculate Pp and Ppk:
Pp = (0.490″ – 0.480″)/(6 x 0.019) = 0.0100/0.114 = 0.09
Ppl = (0.487 – 0.480)/(3 x 0.019) = 0.007/0.057 = 0.12
Ppu = (0.490 – 0.487)/(3 x 0.019) = 0.003/0.057 = 0.05
Ppk = min (Ppl, Ppu)
Ppk = min (0.12, 0.05) = 0.05
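Both calculation blocks above can be reproduced with one helper, swapping in the short-term or long-term sigma:

```python
def capability(mean, sigma, usl, lsl):
    """Return (two-sided index, min one-sided index) for the given sigma.
    With short-term sigma these are (Cp, Cpk); with long-term, (Pp, Ppk)."""
    two_sided = (usl - lsl) / (6 * sigma)
    lower = (mean - lsl) / (3 * sigma)
    upper = (usl - mean) / (3 * sigma)
    return two_sided, min(lower, upper)

# Spark plug example: X-bar = 0.487", specs 0.480"-0.490",
# short-term sigma 0.0013", long-term sigma 0.019"
cp, cpk = capability(0.487, 0.0013, 0.490, 0.480)
pp, ppk = capability(0.487, 0.019, 0.490, 0.480)
print(round(cp, 2), round(cpk, 2))   # 1.28 0.77
print(round(pp, 2), round(ppk, 2))   # 0.09 0.05
```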

In this example, Cp is 1.28. Because Cp is the ratio of the specified tolerance to the process variation, a Cp value of 1.28 indicates that the process is capable of delivering product that meets the specified tolerance (if the process is centered). (A Cp greater than 1 indicates the process can deliver a product that meets the specifications at least 99.73% of the time.) Any improvements to the process to increase our value of 1.28 would require a reduction in the variability within our subgroups. Cp, however, is calculated without regard to the process centering within the specified tolerance. A centered process is rarely the case so a Cpk value must be calculated.
Cpk considers the location of the process data average. In this calculation, we are comparing the average of our process to the closest specification limit and dividing by three short-term standard deviations. In our example, Cpk is 0.77. In contrast to the Cp measurement, the Cpk measurement clearly shows that the process is incapable of producing product that meets the specified tolerance.
Any improvements to our process to increase our value of 0.77 would require a mean shift in the data towards the center of the tolerance and/or a reduction in the within subgroup variation. (Note: for centered processes, Cp and Cpk will be the same.)

Our Pp is 0.09. Because Pp is the ratio of the specified tolerance to the process variation, a Pp value of 0.09 indicates that the process is incapable of delivering product that meets the specified tolerance. Any improvements to the process to increase our value of 0.09 would require a reduction in the variability within and/or between subgroups. Pp, however, is calculated without regard to the process centering within the specified tolerance. A centered process is rarely the case, so a Ppk value, which accounts for lack of process centering, will surely indicate poor capability for our process as well. (Note: for both Pp and Cp, we assume no drifting of the subgroup averages.)

Ppk represents the actual long-term performance of the process and is the index that most likely represents what customers receive. In the example, Ppk is 0.05, confirming our Pp result of poor process performance. Any improvements to the process to increase our value of 0.05 would require a mean shift in the data towards the center of the tolerance and/or a reduction in the within subgroup and between subgroup variations.

Business Process Example:

Suppose a call center reports to its customers that it will resolve their issue within fifteen minutes. This fifteen minute time limit is the upper specification limit. It is desirable to resolve the issue as soon as possible; therefore, there is no lower specification limit. The call center operates twenty-four hours a day in eight-hour shifts. Six calls are randomly measured every shift and recorded for two months. An SPC chart shows the process is stable. The average of the data is 11.7 minutes, the short-term pooled standard deviation is 1.2 minutes, and the long-term standard deviation is 2.8 minutes.

To Calculate Cp and Cpk:

Cp = cannot be calculated as there is no LSL
Cpl = undefined
Cpu = (15 – 11.7)/(3 x 1.2) = 3.3/3.6 = 0.92
Cpk = min (Cpl, Cpu) = 0.92

To Calculate Pp and Ppk:
Pp = cannot be calculated as there is no LSL
Ppl = undefined
Ppu = (15 – 11.7)/(3 x 2.8) = 3.3/8.4 = 0.39
Ppk = min (Ppl, Ppu) = 0.39

In this example, we can only evaluate Cpk and Ppk as there is no lower limit. These numbers indicate that if we can eliminate between subgroup variation, we could achieve a process capability (Ppk) of 0.92, which is our current Cpk.
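The one-sided case reduces to the upper indices only; a minimal check of the call-center numbers:

```python
# One-sided capability for the call-center example: with no LSL, only the
# upper indices Cpu/Ppu exist, and Cpk/Ppk collapse to them.
mean, usl = 11.7, 15.0          # minutes
sigma_st, sigma_lt = 1.2, 2.8   # short- and long-term standard deviations

cpk = (usl - mean) / (3 * sigma_st)   # = Cpu
ppk = (usl - mean) / (3 * sigma_lt)   # = Ppu
print(round(cpk, 2), round(ppk, 2))   # 0.92 0.39
```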

Process Capability for Non-Normal Data

In the real world, data does not always fit a normal distribution, and when it does not, the standard capability indices do not give valid information because they are based on the normal distribution. The first step is a visual inspection of a histogram of the data. If all data values are well within the specification limits, the process would appear to be capable. One additional strategy is to make non-normal data resemble normal data by using a transformation. The question is which transformation to select for the specific situation; unfortunately, the choice of the “best” transformation is generally not obvious.
The Box-Cox power transformations are given by:

x(λ) = (x^λ − 1)/λ   for λ ≠ 0
x(λ) = ln x          for λ = 0

Given data observations x1, x2, …, xn, select the power λ that maximizes the logarithm of the likelihood function:

L(λ) = −(n/2) ln[ (1/n) Σ (xi(λ) − x̄(λ))² ] + (λ − 1) Σ ln xi

Where the arithmetic mean of the transformed data is:

x̄(λ) = (1/n) Σ xi(λ)

Process capability indices and formulas described elsewhere in this Post are based on the assumption that the data are normally distributed. The validity of the normality assumption may be tested using the chi square hypothesis test. One approach to address the non-normal distribution is to make transformations to “normalize” the data. This may be done with statistical software that performs the Box-Cox transformation. As an alternative approach, when the data can be represented by a probability plot (i.e. a Weibull distribution) one should use the 0.135 and 99.865 percentiles to describe the spread of the data.

It is often necessary to identify non-normal data distributions and to transform them into near normal distributions to determine process capabilities or failure rates. Assume that a process capability study has been conducted. Some 30 data points from a non-normal distribution are shown in the table below. An investigator can check the data for normality using techniques such as the dot plot, histogram, and normal probability plot.

A histogram displaying the above non-normal data indicates a distribution that is skewed to the right.


A probability plot can also be used to display the non-normal data. The data points are clustered to the left with some extreme points to the right. Since this is a non-normal distribution, a traditional process capability index is meaningless.

If the investigator has some awareness of the history of the data, and knows it to follow a Poisson distribution, then a square root transformation is a possibility. The standard deviation is the square root of the mean. Some typical data transformations include:

  • Log transformation (log x)
  • Square root (√x) or power transformation (x^λ)
  • Exponential (e^x)
  • Reciprocal (1/x)

In order to find the right transformation, some exploratory data analysis may be required. Among the useful power transformation techniques is the Box-Cox procedure. The applicable formula is:
y’ = y^λ
Where lambda, λ, is the power or parameter that must be determined to transform the data. For λ = 2, the data is squared. For λ = 0.5, a square root is needed.

One can also use Excel or Minitab to handle the data calculations and to draw the normal probability plot. With Minitab, an investigator can let the Box-Cox tool automatically find a suitable power transform. In this example, a power transform of 0.337 is indicated. All 30 transformed data points from the Table above, using y’ = y^0.337, are shown in the Table below.

A probability plot of the newly transformed data will show a near-normal distribution.
Now, a process capability index can be determined for the data. However, the investigator must remember to also transform the specifications. If the original specifications were 1 and 10,000, the new limits would be 1 and 22.28.
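As a rough illustration, the power transform and the transformed specification limit from this example can be checked with a few lines of Python (the `box_cox` helper is a sketch written for this post, not a call from any particular statistics package):

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform: (y**lam - 1)/lam, or ln(y) when lam == 0."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1) / lam

# The example in the text uses the simpler power form y' = y**lambda,
# so the transformed upper specification 10000**0.337 is about 22.28:
lam = 0.337
lsl, usl = 1, 10000
print(round(usl ** lam, 2))   # transformed upper spec limit, ~22.28
print(lsl ** lam)             # transformed lower spec limit stays 1
```

In practice, statistical software such as Minitab searches over λ for the value that best normalizes the data, as described above.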

Process Capability for Attribute Data

The control chart represents the process capability, once special causes have been identified and removed from the process. For attribute charts, capability is defined as the average proportion or rate of nonconforming product.

  • For p charts, the process capability is the process average nonconforming, p̄, and is preferably based on 25 or more in-control periods. If desired, the proportion conforming to specification, 1 − p̄, may be used.
  • For np charts, the process capability is the process average nonconforming, p̄, and is preferably based on 25 or more in-control periods.
  • For c charts, the process capability is the average number of nonconformities, c̄, in a sample of fixed size n.
  • For u charts, the process capability is the average number of nonconformities per reporting unit, ū.

The average proportion of nonconformities may be reported on a defects per million opportunities scale by multiplying p̄ by 1,000,000.

Process Performance Metrics

  • A defect is defined as something that does not conform to a known and accepted customer standard.
  • A unit is the product, information, or service used or purchased by a customer.
  • An opportunity for a defect is a measured characteristic on a unit that needs to conform to a customer standard (e.g., the ohms of an electrical resistor, the diameter of a pen, the time it takes to deliver a package, or the address field on a form).
  • Defective is when the entire unit is deemed unacceptable because of the nonconformance of any one of the opportunities for a defect.
  • Defects = D
  • Opportunities (for a defect) = O
  • Units = U
  • Yield = Y

Defect Relationships

Defects per million opportunities (DPMO) helps to determine the capability of a process. DPMO allows for the calculation of capability at one or more opportunities and ultimately, if desired, for the entire organization.

Calculating DPMO depends on whether the data is variable or attribute, and if there is one or more than one opportunity for a defect. If there is:

  • One opportunity with variable data: use the Z transform to determine the probability of observing a defect, then multiply by 1 million.
  • One opportunity with attribute data: calculate the percent defects, then multiply by 1 million.
  • More than one opportunity with variable and/or attribute data: use one of two methods to determine DPMO.
  • To calculate DPO, sum the defects and sum the total opportunities for a defect, then divide the defects by the total opportunities and multiply by 1 million. For example, if there are eight defects and thirty total opportunities for a defect, then
    DPMO = (8/30) x 1,000,000 = 266,667
  • When using this method to evaluate multiple-opportunity variable data, convert the calculated DPMO into defects and opportunities for each variable, then sum them to get total defects and opportunities. For example, if one step in a process has a DPMO of 50,000 and another step has a DPMO of 100,000, there are 150,000 total defects for 2 million opportunities, or 75,000 DPMO overall.
  1. Total opportunities: TO = U x O
  2. Defects per unit: DPU = D/U = −ln(Y)
  3. Defects per normalized unit: TDPU = −ln(Ynorm)
  4. Defects per opportunity: DPO = DPU/O = D/(U x O)
  5. Defects per million opportunities: DPMO = DPO x 10^6

For example, a matrix chart indicates the following information for 100 production units. Determine DPU. Assume that each unit had 6 opportunities for a defect (i.e., characteristics A, B, C, D, E, and F). Determine DPO and DPMO.
One would expect to find an average of 0.47 defects per unit.
DPO = DPU/O = 0.47/6 = 0.078333
DPMO = DPO x 10^6 = 78,333
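The DPU, DPO, and DPMO arithmetic from this example can be verified with a short Python sketch (47 defects across 100 units is an assumption chosen to match the 0.47 DPU figure):

```python
# Defect-relationship formulas from the text, applied to the example above.
defects = 47        # total defects observed (assumed, to give 0.47 DPU)
units = 100
opportunities = 6   # opportunities for a defect per unit (A through F)

dpu = defects / units        # defects per unit: 0.47
dpo = dpu / opportunities    # defects per opportunity: ~0.078333
dpmo = dpo * 1_000_000       # defects per million opportunities: ~78,333

print(dpu, round(dpo, 6), round(dpmo))
```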

Rolled Throughput Yield

Rolled Throughput Yield (RTY) is used to assess the true yield of a process that includes a hidden factory. A hidden factory adds no value to the customer and involves fixing things that weren’t done right the first time. RTY determines the probability of a product or service making it through a multistep process without being scrapped or ever reworked.

There are two methods to measure RTY:
Method 1 assesses defects per unit (dpu), when all that is known is the final number of units produced and the number of defects. Shown in the following diagram are six units, each containing five opportunities for a defect.

Given that any one defect can cause a unit to be defective, it appears the yield of this process is 50%. This, however, is not the whole story. Assuming that defects are randomly distributed, the special form of the Poisson distribution formula
RTY = e^(−dpu)
can be used to estimate the number of units with zero defects (i.e., the RTY). The previous figure showed eight defects over six units, resulting in 1.33 dpu. Entering this into the formula:
RTY = e^(−1.33)
RTY = 0.264

According to this calculation, this process can expect an average of 26.4% defect-free units that have not been reworked (which is much different from the assumed 50%).

Method 2 determines throughput yield (Ytp), when the specific yields at each opportunity for a defect are known. If, on a unit, the yield at each opportunity for a defect is known (i.e., the five yields at each opportunity in the previous figure), then these yields can be multiplied together to determine the RTY. The yields at each opportunity for a defect are known as the throughput yields, which can be calculated as
Ytp = e^(−dpu)
for that specific opportunity for a defect for attribute data, and
Ytp = 1 − P(defect)
for variable data, where P(defect) is the probability of a defect based on the normal distribution. Shown in the following figure is one unit from the previous figure in which the associated Ytp values at each opportunity were measured for many units.

Multiplying these yields together results in the RTY:
RTY = Ytp1 x Ytp2 x Ytp3 x Ytp4 x Ytp5
RTY = 0.536 x 0.976 x 0.875 x 0.981 x 0.699
RTY = 0.314
According to this calculation, an average of 31.4% defect-free units that have not been reworked can be expected.
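Both RTY methods can be sketched in a few lines of Python using the figures above:

```python
import math

# Method 1: RTY from defects per unit via the Poisson zero-defect formula.
dpu = 8 / 6                      # eight defects over six units, ~1.33 dpu
rty_method1 = math.exp(-dpu)
print(round(rty_method1, 3))     # ~0.264

# Method 2: multiply the throughput yields at each opportunity for a defect.
throughput_yields = [0.536, 0.976, 0.875, 0.981, 0.699]
rty_method2 = 1.0
for ytp in throughput_yields:
    rty_method2 *= ytp
print(round(rty_method2, 3))     # ~0.314
```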

Yield Relationships

Note: the Poisson equation is normally used to model defect occurrences. If there is a historic defects per unit (DPU) level for a process, the probability that an item contains X flaws, P(X), is described mathematically by the equation:
P(X) = (DPU^X × e^(−DPU)) / X!
Where: X is an integer greater than or equal to 0, and DPU is greater than 0.
Note that 0! (zero factorial) = 1 by definition.

If one is interested in the probability of having a defect-free unit (as most of us are), then X = 0 in the Poisson formula and the math is simplified:
P(0) = e^(−DPU)
Therefore, the following common yield formulas follow:
Yield or first pass yield: Y = FPY = e^(−DPU)
Defects per unit: DPU = −ln(Y) (ln means natural logarithm)

Total defects per unit: TDPU = −ln(Ynorm)

For example, the yield for a process with a DPU of 0.47 is
Y = e^(−DPU) = e^(−0.47) = 0.625 = 62.5%
For example, the DPU for a process with a first pass yield of 0.625 is
DPU = −ln(Y) = −ln(0.625) = 0.47
Example: A process consists of 4 sequential steps: 1, 2, 3, and 4. The yield of each step is as follows: Y1 = 99%, Y2 = 98%, Y3 = 97%, Y4 = 96%. Determine the rolled throughput yield and the total defects per unit.
Yrt = (0.99)(0.98)(0.97)(0.96) = 0.90345 = 90.345%
TDPU = −ln(RTY) = −ln(0.90345) = 0.1015
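These yield relationships chain together, as this quick Python check of the examples above shows:

```python
import math

# First pass yield from DPU, and DPU recovered from the yield.
dpu = 0.47
y = math.exp(-dpu)               # first pass yield, ~0.625
print(round(y, 3))               # 0.625
print(round(-math.log(y), 2))    # recovers DPU = 0.47

# Rolled throughput yield and total defects per unit for the 4-step process.
step_yields = [0.99, 0.98, 0.97, 0.96]
rty = 1.0
for s in step_yields:
    rty *= s
print(round(rty, 5))             # 0.90345
print(round(-math.log(rty), 4))  # TDPU ~ 0.1015
```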

Rolled throughput yield is defined as the cumulative calculation of yield or defects through multiple process steps. The determination of the rolled throughput yield (RTY) can help a team focus on serious improvements.

  • Calculate the yield for each step and the resulting RTY
  • The RTY for a process will be the baseline metric
  • Revisit the project scope
  • Significant yield differences can suggest improvement opportunities

Sigma Relationships

Probability of a defect = P(d)
P(d)=1-Y or 1 – FPY
also P(d) = 1 – Yrt (for a series of operations)

P(d) can be looked up in a Z table (using the table in reverse to determine Z).

The Z value determined is called Z long-term or Z equivalent.
Z short-term is defined as: Zst = Zlt + 1.5, where 1.5 is the assumed shift.

For example, the Z short-term for Z long-term = 1.645 is
Zst = Zlt + 1.5 = 1.645 + 1.5 = 3.145

Schmidt and Launsby report that the 6 sigma quality level (with the 1.5 sigma shift) can be approximated by:
6 Sigma Quality Level = 0.8406 + SQRT(29.37 − 2.221 x ln(ppm))

Example: If a process were producing 80 defectives/million, what would be the 6 sigma quality level?
6σ = 0.8406 + SQRT(29.37 − 2.221 x ln(80))
6σ = 0.8406 + SQRT(29.37 − 2.221 x (4.3820))
6σ = 0.8406 + 4.4314 = 5.272 (about 5.3)
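The Z shift and the Schmidt and Launsby approximation can be computed directly; this sketch reproduces the examples above:

```python
import math

# Z short-term from Z long-term with the 1.5 sigma shift.
z_lt = 1.645
z_st = z_lt + 1.5
print(z_st)                       # 3.145

# Schmidt and Launsby's approximation of the sigma quality level from ppm.
def sigma_level(ppm):
    return 0.8406 + math.sqrt(29.37 - 2.221 * math.log(ppm))

print(round(sigma_level(80), 3))  # ~5.272
```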


One-piece flow

One-Piece Flow is a fundamental element of becoming lean. The thought of processing one unit at a time usually sends a shudder through an organization that has batch manufacturing as its lifeblood. The word “one” does not necessarily have a literal meaning. It should be related to the customers’ requirements and could be one unit of order. However, what it does mean is that the organization should only process what the customer wants, in the quantity the customer wants, and when the customer wants it.

One-piece flow (also commonly referred to as continuous flow manufacturing) is a technique used to manufacture components in a cellular environment. The cell is an area where everything that is needed to process the part is within easy reach, and no part is allowed to go to the next operation until the previous operation has been completed. The goals of one-piece flow are: to make one part at a time correctly all the time; to achieve this without unplanned interruptions; and to achieve this without lengthy queue times. One-piece flow describes the sequence of product or transactional activities through a process one unit at a time. In contrast, batch processing creates a large number of products or works on a large number of transactions at one time, sending them together as a group through each operational step. One-piece flow focuses employees’ efforts on the manufacturing process itself rather than on waiting, transporting products, and storing inventory. It also makes the production process flow smoothly, one piece at a time, creating a steady workload for all employees involved. One-piece flow methods need short changeover times and are conducive to a pull system.

There are many advantages to incorporating the one-piece flow method into your work processes. These include the following:

  • It reduces the time that elapses between a customer order and shipment of the finished product.
  • It prevents the wait times and production delays that can occur during batch processing.
  • By reducing excess inventory, one-piece flow reduces the labour, energy, and space that employees must devote to storing and transporting large lots or batches.
  • It reduces the damage that can occur to product units during batch processing.
  • It reveals any defects or problems in product units early in the production process.
  • It gives your organization the flexibility to meet customer demands for a specific product at a specific time.
  • It reduces your operating costs by making non-value-added work more evident. This enables you to eliminate waste.

Difference between a push system and a pull system

“Fat” organizations use a push system. In such a system, goods are produced and handed off to a downstream process, where they are stored until needed. This type of system creates excess inventory. Lean organizations, on the other hand, use a pull system, in which goods are built only when a downstream process requests them. The customer then “pulls” the product from the organization. The final operation in a production process drives a pull system. Customer-order information goes only to the product’s final assembly area. As a result, nothing is produced until it is needed or wanted downstream, so the organization produces only what is needed. A pull system streamlines the flow of materials through your production process. This greatly improves your organization’s productivity by doing the following:

  • It reduces the time that employees spend in nonvalue-added steps, such as waiting and transporting product units.
  • It reduces downtime caused by product changeovers and equipment adjustments.
  • It reduces the distances that materials or works in progress must travel between assembly steps.
  • It eliminates the need for inspection or reworking of materials.
  • It bases your equipment usage on your cycle time.

Achieving one-piece flow

While many are familiar with the terminology, there is still a significant amount of confusion regarding what one-piece flow means and, more importantly, how to achieve it. Let us begin by stepping back and attempting to understand the concept of “connected flow.” Achieving connected flow means implementing a means of connecting each process step within a value stream. In a typical MRP batch-and-queue manufacturing environment as illustrated below, parts move from functional area to functional area in batches, and each processing step or set of processing steps is controlled independently by a schedule.

There is little relationship between each manufacturing step and the steps immediately upstream or downstream. This results in:

  • Large amounts of scrap when a defect is found because of large batches of WIP,
  • Long manufacturing lead time,
  • Poor on-time delivery and/or lots of finished goods inventory to compensate,
  • Large amounts of WIP.

When we achieve connected flow, there is a relationship between processing steps: That relationship is either a pull system such as a supermarket or FIFO lane or a direct link (one-piece flow). As illustrated below, one-piece flow is the ideal method for creating connected flow because the product is moved from step to step with essentially no waiting (zero WIP).

One-piece flow works best when the production process and products meet certain requirements. To be good candidates for one-piece flow, the following conditions must be in place:

  • Processes must be able to consistently produce a good product. If there are many quality issues, one-piece flow is impossible.
  • Your product changeover times must be very short; almost instantaneous is best. One-piece flow is impractical when many time-consuming changeover operations are needed during the production process.
  • Another requirement is that the products you make must be suitable for one-piece flow. Very small product units are usually not suitable because too much time is required for their setup, positioning, and removal from production equipment. The one-piece flow might be possible for the production of very small product units if you can completely automate their movement through your production process and if your cycle time is short.
  • Process times must be repeatable as well. If there is much variation, one-piece flow is impossible.
  • Equipment must have very high (near 100 percent) uptime. Equipment must always be available to run. If equipment within a manufacturing cell is plagued with downtime, one-piece flow will be impossible.
  • Processes must be able to be scaled to takt time, the rate of customer demand. For example, if takt time is 10 minutes, processes should be able to run at one unit every 10 minutes.
    Without the above conditions in place, some other form of connecting flow must be used. This means that there will be a buffer of inventory, typically in the form of a supermarket or FIFO lane, between processes; the goal would be to eventually achieve one-piece flow (no buffer) by improving the processes. If a set of processes is determined to be a candidate for one-piece flow, then the next step is to begin implementation of a one-piece flow cell.

Implementing one-piece flow

The number of units you produce should equal the number of items your customers order. In other words, your selling cycle time should equal your manufacturing cycle time. The first step in implementing a one-piece flow cell is to decide which products or product families will go into the cells, and to determine the type of cell: product-focused or mixed model. For product-focused cells to work correctly, demand needs to be high enough for an individual product. For mixed-model cells to work, changeover times must be kept short; a general rule of thumb is that changeover time must be less than one takt time. The next step is to calculate takt time for the set of products that will go into the cell. Takt time is a measure of customer demand expressed in units of time and is calculated as follows:
Takt time = Available work time per shift / Customer demand per shift
Next, determine the work elements and time required for making one piece. List each step and its associated time in detail. Time each step separately several times and use the lowest repeatable time. Then, determine whether the equipment to be used within the cell can meet takt time. Considerations here include changeover times, load and unload times, and downtime. The next step is to create a lean layout. Using the principles of 5S (eliminating those items that are not needed and locating all items, equipment, and materials that are needed at their points of use in the proper sequence), design a layout. Space between processes within a one-piece flow cell must be limited to eliminate motion waste and to prevent unwanted WIP accumulation. U-shaped cells are generally best; however, if this is impossible due to factory floor limitations, other shapes will do. For example, I have implemented S-shaped cells in areas where a large U shape is physically impossible. Finally, balance the cell and create standardized work for each operator within the cell. Determine how many operators are needed to meet takt time and then split the work between operators. Use the following equation:
Number of operators = Total work content / Takt time
In most cases, an “inconvenient” remainder term will result (e.g., you will end up with Number of Operators = 4.4 or 2.3 or 3.6 instead of 2.0, 3.0, or 4.0). If there is a remainder term, it may be necessary to kaizen the process and reduce the work content. Other possibilities include moving operations to the supplying process to balance the line.
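As a sketch of the arithmetic above, the takt-time and staffing calculation might look like this in Python (the shift length, demand, and work-content numbers are illustrative assumptions, not figures from the text):

```python
import math

# Takt time: available work time per shift / customer demand per shift.
available_minutes = 450      # work time per shift in minutes (assumed)
demand_per_shift = 45        # units the customer requires per shift (assumed)
takt = available_minutes / demand_per_shift
print(takt)                  # 10.0 minutes per unit

# Number of operators: total work content / takt time.
total_work_content = 34      # minutes of manual work per unit (assumed)
operators = total_work_content / takt
print(operators)             # 3.4, an "inconvenient" remainder
print(math.ceil(operators))  # 4 operators unless kaizen reduces work content
```

When the result is not a whole number, as here, the text's advice applies: kaizen the process to reduce work content, or rebalance work with the supplying process.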

One-Piece Flow in production

The following illustration shows the impact of batch size reduction when comparing batch-and-queue and one-piece flow.

The difference between the two flow systems is enormous. The one-piece flow system saved 18 minutes on the same batch of 10 pieces, meaning roughly 3 times more can be produced than with a batch-and-queue system. Moreover, the first piece passed through all processes in only 3 minutes, so the operator can check the part immediately at every process (A, B, and C). A batch-and-queue system produces many parts at each process before they move on; if a failure occurs, it is detected too late and many parts are damaged.

Equipment for one-piece flow

To accommodate one-piece flow, equipment should be correctly sized to meet customer demand. Machines designed for batch production might not be easy to adapt to one-piece-flow cycle times. One-piece flow works best with machines that are smaller and somewhat slower than equipment that is suited for batch processing. Equipment used for one-piece flow also needs to be easy to set up quickly so that you can use it to produce a wide mix of products. Because the volume, capacity, and force requirements are often lower for one-piece-flow production, machines that are suited for it can be smaller. Smaller machines save space and leave little opportunity for waste, such as inventory and defective parts, to accumulate. They are also less expensive to purchase. Slower machines are often sufficient for one-piece flow because the aim is to produce goods according to the manufacturing cycle time. Automated and semi-automated machines work well in one-piece-flow production. They stop and give the operator a signal when a cycle is complete or if any problems occur. They are sometimes also capable of notifying the next operation when to begin processing. And they often unload automatically after processing is done. Synchronize your equipment’s production operations by delaying the start of faster operations rather than speeding up or slowing down the machines. Running production equipment outside of its specified range can reduce product quality or tool life.

To achieve a one-piece-flow method’s full potential, it is important to follow five points with regard to your work-cell layout and employee training. These points are outlined below.

  1. Simplify the flow of your materials and parts. Below are several guidelines to follow:
    • Keep all goods flowing in the same direction.
    • Make sure all parts flow from storage through the factory according to the processing sequence.
    • Use first-in, first-out, or FIFO stocking.
    • Arrange parts for easy feeding into the production line.
    • Eliminate any non-value-added space in your work cells.
    • Keep all pathways in work areas clear; leave aisles open along walls and windows.
    • Make sure that material input and production output are separate operations.
    • Position your equipment to allow easy maintenance access.
    • Make sure separate work processes are located as close together as possible.
  2. Set up your production lines to maximize the equipment operators’ productivity. Review the feasibility of both straight-line and U-shaped work cells and their impact on both operator movement and productivity and the flow of work materials. Remember that a U-shaped work cell brings the ending point of a work process close to the beginning point, which minimizes the distance an operator has to move before beginning a new production cycle. This setup is better for some work processes than a straight-line work cell.
  3. Allot space in the layout of your work cells for regular equipment and product inspection. Remember that the employees working in each cell must be able to easily conduct a full-lot inspection. Such inspections prevent defects by catching any errors and non-standard conditions. This ensures that only defect-free parts are fed to the next step in your production process.
  4. Minimize your in-process inventory. Predetermine the stock that employees will have on hand for the entire production line. Arrange your work cells to enable an easy flow of materials into and out of all work areas.
  5. When your equipment is arranged to enable a smooth process flow, equipment operators might need to learn how to run different types of equipment. Such operators usually need to work standing up, instead of sitting down, so they can easily run a number of machines in sequence. Keep this in mind when designing your work cells. Cross-train your employees so that they know how to perform different work functions. Equipment operators are then able to go to other work cells if production is not required at their normal work areas. This also enables an entire work team to take full responsibility for the production process.

Tools to implement a one-piece-flow process

Four tools are useful for assessing and planning a one-piece-flow process:

  1. PQ analysis table
  2. Process route table
  3. Standard Operation
  4. Quick Changeover
  1. PQ analysis table

    A PQ analysis table is a tool that helps employees understand the types of products your organization produces and the volume that your customers demand. It also shows whether the majority of your production volume is made up of a small or wide variety of parts. The PQ analysis table enables employees to identify what products are suitable for one-piece-flow production. The P in PQ stands for products; the Q stands for the quantity of production output.
    Case example: Quick-Lite’s PQ analysis. Quick-Lite conducts a PQ analysis of its spark-plug final-assembly part numbers to see if a wide or limited variety of spark plugs makes up most of the volume. It finds that six spark plugs make up 53.3% of the total volume. The manufacturing processes for these six spark plugs are likely candidates for one-piece-flow operations.

    Once the Quick-Lite team identifies these products in a PQ analysis table, they create a process route table to determine whether a similar technology is used to manufacture all six types of spark plugs.

  2. A process route table

    A process route table shows the machines and equipment required for processing a component or completing an assembly process. Such a table helps you to arrange your equipment in production lines according to product type and to group related manufacturing tasks into work cells. You can also use a process route table to analyze process, function, or task-level activities. The steps for creating a process route table are as follows:
    1. Somewhere above the top of the table, write the following:
    a. The name or number of the department whose activity is being analyzed.
    b. The operation or product that is being analyzed.
    c. The name of the person completing the form.
    d. The date on which the form is completed.
    2. Use the “No.” column on the left for the sequential numbering of the products or operations being analyzed.
    3. For each product or operation you are analyzing, enter the item name, machine number, or function.
    4. For each product or operation, enter circled numbers in the various resource columns that correspond to the sequence in which the resources are used for that product or operation.
    5. Connect the circled numbers with lines or arrows to indicate the sequence of operations. Once you have completed the table, look for items or products that follow the same, or nearly the same, sequence of machine and/or resource usage. You might be able to group these machines and/or resources together in the same work cells to improve the efficiency of your operations.
    Once your work team a) collects all the data necessary for selecting the products that are suitable for one-piece flow, b) verifies the operations needed and the available capacity, and c) understands the specific task in detail, you can implement the layout of your improved work cells and make one-piece flow a reality in your organization.

  3. Standard Operations

    A work combination is a mixture of people, processes, materials, and technology that comes together to enable the completion of a work process. The term standard operations refers to the most efficient work combination that a company can put together. When you apply all your knowledge of lean principles to a particular work process to make it as efficient as possible, the result is a standard operation. Employees then use this documented process as a guide to consistently perform the tasks required in that work process. In addition, once you prepare standard operations for your work processes, they serve as the basis for all your organization’s training, performance monitoring, and continuous improvement activities. A big part of making your organization a lean enterprise is identifying different types of waste and finding ways to eliminate them. Ultimately, however, it is the correct combination of people, processes, materials, and technology that enables your organization to create quality products and services at the lowest possible operational cost. Putting together standard operations forces you to break down each of your work processes into definable elements. This enables you to readily identify waste, develop solutions to problems, and provide all employees with guidance about the best way to get things done. Many organizations that have used standard operations report that this lean initiative is the one that has had the biggest impact on their ability to produce better-quality products and services, make their workflow smoother, and make their training process more productive. In addition, standard operations enable employees to actually see waste that they previously didn’t see. The process for developing standard operations involves eight steps.

    1. Establish improvement teams.
    2. Determine your takt time.
    3. Determine your cycle time.
    4. Determine your work sequence.
    5. Determine the standard quantity of your work in progress.
    6. Prepare a standard workflow diagram.
    7. Prepare a standard operations sheet.
    8. Continuously improve your standard operations.

    Step 1: Establish improvement teams

    Some organizations take a top-down approach to the development of standard operations: supervisors alone determine what work tasks are to be performed, by whom, and when. Other organizations believe that only front-line workers should develop standard operations because these employees have keen insight into how things are done. But due to the nature of the steps required to establish standard operations, a team-based approach is best. It is best to have all employees who are impacted by a work process involved in the development of standard operations for that process. Lean organizations understand the need for complete buy-in and support of all work tasks by all the employees involved. It’s also important to coordinate this team effort with your organization’s other lean initiatives.

    Step 2: Determine your takt time

    Takt time is the total available work time per day (or shift), divided by customer-demand requirements per day (or shift). Takt time enables your organization to balance the pace of its production outputs to match the rate of customer demand. The mathematical formula for determining your takt time is as follows:
    takt time = available daily production time / required daily quantity of output

    Step 3: Determine your cycle time

    Cycle time is the time it takes to successfully complete the tasks required for a work process. It is important to note that a work process’s cycle time may or may not equal its takt time. A process capacity table is a helpful tool for gathering information about the sequence of operations that make up a work process and the time required to complete each operation. Ultimately, the process capacity table can help you determine machine and operator capacity. Complete a process capacity table before you begin making changes such as moving equipment, changing the sequence of your operations, or moving employees’ positions and/or changing their job responsibilities. It is important to first know what your current capacity is and what it will be in the new process configuration that you plan.

    Steps for Creating a Process Capacity Table

    1. Enter the line/cell name.
    2. Record the total work time per shift.
    3. Enter the number of shifts.
    4. Record the maximum output per shift.
    5. Enter the sequence number of each processing step being performed on the part or product.
    6. Record the operation description, which is the process being performed on the part or product.
    7. Enter the number (if applicable) of the machine performing the process.
    8. Record the walk time, the approximate time required between the end of one process and the beginning of the next process.
    9. Enter the manual time, the time an operator must take to manually operate a machine when an automatic cycle is not activated. The manual time includes the time required to unload a finished part from the machine; load a new, unfinished part; and restart the machine.
    10. Record the automated time, the time required for a machine’s automatic cycle to perform an operation, from the point when the start button is activated to the point when the finished part is ready to be unloaded.
    11. Calculate the total cycle time by adding the manual time and the automated time.
    12. Enter the pieces per change, the total number of parts or products that a machine typically produces before its tool bits must be changed due to wear.
    13. Record the change time, the amount of time required to physically change a machine’s tool bits or perform a sample inspection. This is the time required to change tooling due to normal wear during a production run— not the changeover time required to go from making one part or product to making another.
    14. Calculate the time per piece, the change time divided by the pieces per change.
    15. Enter the production capacity per shift (also known as the total capacity). This is the total number of units that can be produced during the available hours per shift or per day.
    16. Record the takt time for the work process in the Takt Time box, using the mathematical formula shown earlier in this chapter.
    17. Calculate the total time to finish one piece by adding the total cycle time and the time per piece. The production capacity in step 15 is the available work time per shift divided by this total.
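The arithmetic in steps 11, 14, and 15 reduces to a few operations. The sketch below uses hypothetical figures for one operation; the function name and the example values are my own:

```python
def capacity_row(work_time_per_shift_sec, manual_sec, automated_sec,
                 change_time_sec, pieces_per_change):
    """Compute the calculated columns of one process capacity table row."""
    total_cycle = manual_sec + automated_sec              # step 11
    time_per_piece = change_time_sec / pieces_per_change  # step 14
    # Step 15: whole units producible in the available shift time,
    # allowing for the prorated tool-change time on every piece.
    capacity = int(work_time_per_shift_sec // (total_cycle + time_per_piece))
    return total_cycle, time_per_piece, capacity

# Hypothetical operation: 27,600 s shift, 25 s manual + 35 s automated time,
# and a 300 s tool change every 150 pieces.
cycle, per_piece, units = capacity_row(27_600, 25, 35, 300, 150)
print(cycle, per_piece, units)  # 60 2.0 445
```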

    Step 4: Determine your work sequence

    A work sequence is a sequential order in which the tasks that make up a work process are performed. A work sequence provides employees with the correct order in which to perform their duties. This is especially important for multifunction operators who must perform tasks at various workstations within the takt time. A standard operations combination chart enables your improvement team to study the work sequence for all your organization’s work processes. In such a chart, each task is listed sequentially and broken down into manual, automated, wait, and walk times. Wait time is not included in a process capacity table because worker idle time has no impact on automated activities or the capacity of a process. However, wait time is included in a standard operations combination chart to identify idle time during which a worker could instead be performing other tasks, such as external setup, materials handling, or inspection. The goal is to eliminate all worker idle time.

    The steps for completing a standard operations combination chart are described below.

    1.  At the top of a form indicate the following:
      1. The date that the work process is being mapped.
      2. The number of pages (if the chart is more than one page long).
      3. The name of the equipment operator.
      4. The name of the person entering data on the form (if different from the operator).
      5. The number and/or name of the part or product being produced.
      6. The name of the process or activity being mapped.
      7. The machine number and/or name.
      8. The work cell number and/or name.
      9. The required output per designated period (e.g., parts per shift or pounds per day).
      10. The takt time for the process.
      11. The total capacity for the process. Ideally, this should equal the takt time that you calculated in step 2.
    2. Record the difference between the takt time and the cycle time for the work process.
    3. It is often helpful to indicate the type of units in which the work activity is usually measured. Activities are normally measured in seconds, but some are measured in minutes or even longer intervals.
    4. Number every fifth or tenth line on the graph area to facilitate your recording of activity times. Choose convenient time intervals so that either the takt time or the actual cycle time—whichever is greater—is located near the right side of the graph area.
    5. Draw a line that represents the activity’s takt time. Trace the line with red so it stands out.
    6. Sequentially number each operational step in the appropriate column. Steps can include any or all of the following:
      • Manual operations.
      • Automated operations.
      • Time spent walking from one location to another.
      • Time spent waiting.
    7. Provide a brief name and description for each step.
    8. Note the time required for the completion of each step in the appropriate column.
    9.  Draw a horizontal line on the graph representing each step, using the following guidelines:
      • The length of the line should equal the duration of the step.
      • The line type should match the action type (see the line key at the top of the sample chart).
      • Each line type should be in a different colour, which will make your chart much easier to read.
      • Each line you draw should begin at the point on the vertical timeline that corresponds to the actual time the activity begins. It should end at the actual time the activity ends.

    For example, if the first step of a work activity is an automatic hopper fill that takes fifteen seconds to complete, and the operator assembles a carton for ten seconds during that fifteen seconds, both steps would start at time zero, with the carton assembly ending at time ten and the automatic fill ending at time fifteen. However, if the operator waits until the automatic hopper fill is completed before assembling the carton, the fill would start at time zero and end at time fifteen, and the carton assembly would start at time fifteen and end at time twenty-five. Your completed standard operations combination chart should provide you with some useful insights, including the following:

    • If the total time to complete the process or activity equals the red takt-time line, you already have an efficient work combination in place.
    • If the total time required to complete the process or activity falls short of the red takt-time line, you might be able to add other operations to the activity to use your resources more effectively.
    • If the total time required to complete the process or activity is longer than the red takt-time line, there is waste in your process.

    Use the following steps to identify where this waste occurs:
    1. Look over the steps in your process to see if any of them can be compressed or eliminated. Perhaps one or more steps can be completed during periods when the equipment operator is waiting for automated operations to be completed.
    2. Look at the movement of employees and materials. Can you reduce or eliminate any of it by relocating supplies or equipment?
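The timing logic of the hopper-fill example can be sketched as a small schedule calculation. The step names and durations come from the example; the function itself is my own illustration of how overlapping manual work with automated time shortens the total:

```python
# Each step: (name, duration in seconds, can overlap the running automated step)
steps = [("automatic hopper fill", 15, False),
         ("assemble carton", 10, True)]

def schedule(steps, overlap=True):
    """Return (name, start, end) tuples, overlapping manual work with
    automated time when overlap is True, as on a combination chart."""
    result, clock = [], 0
    for name, duration, can_overlap in steps:
        start = 0 if (overlap and can_overlap) else clock
        end = start + duration
        result.append((name, start, end))
        clock = max(clock, end)
    return result

print(schedule(steps, overlap=True))   # carton runs 0-10; total time 15 s
print(schedule(steps, overlap=False))  # carton runs 15-25; total time 25 s
```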

    Step 5: Determine the standard quantity of your work in progress

    The standard quantity of your work in progress (WIP) is the minimum amount of WIP inventory that must be held at or between your work processes. Without having this quantity of completed work on hand, it is impossible to synchronize your work operations.
    When determining the best standard quantity of WIP you should have, consider the following points:

    • Try to keep the quantity as small as possible.
    • Ensure that the quantity you choose is suitable to cover the time required for your error-proofing and quality-assurance activities.
    • Make sure that the quantity enables all employees to easily and safely handle parts and materials between work operations.

    Step 6: Prepare a standard workflow diagram

    A workflow diagram shows your organization’s current equipment layout and the movement of materials and workers during work processes. Such a diagram helps your improvement team plan future improvements to your organization, such as one-piece flow. The information in your workflow diagram supplements the information in your process capacity table and standard operations combination chart. When combined, the data in these three charts serve as a good basis for developing your standard operations sheet. The steps for completing a workflow diagram are described below.

    1. At the top of the diagram, indicate the following:
      a. The beginning and endpoints of the activity you are mapping.
      b. The date the activity is being mapped. The name of the person completing the diagram should also be included.
      c. The name and/or number of the part or product being produced.
    2. Sketch the work location for the work process you are mapping, showing all of the facilities directly involved with the process.
    3. Indicate the work sequence by numbering the facilities in the order in which they are used during the activity.
    4. Connect the facility numbers with solid arrows, numbered starting with 1 and continuing to the highest number needed; the arrows indicate the direction of the workflow.
    5. Using a dashed arrow, connect the highest-numbered facility to facility number 1. This arrow indicates a return to the beginning of the production cycle.
    6. Place a diamond symbol (✧) at each facility that requires a quality check.
    7. Place a cross symbol (✝) at each facility where safety precautions or checks are required. Pay particular attention to facilities that include rotating parts, blades, or pinch points.
    8. Place an asterisk (*) at each location where it is normal to accumulate standard WIP inventory. Adjacent to the asterisk, indicate the magnitude of the inventory—measured in number, weight, volume, and so on.
    9. Also, enter the total magnitude of the inventory in the “Number of WIP Pieces” box.
    10. Calculate the takt time for the operation and enter it in the “Takt Time” box.
    11. Enter the time required to complete a single cycle of the activity in the “Cycle Time” box. Ideally, this time should equal the takt time.

    The workflow diagram provides a visual map of workspace organization, movement of materials and workers, and distances travelled—information not included in either the process capacity table or the standard operations combination chart. You can use this information to improve your workspace organization, re-sequence your work steps, and reposition your equipment, materials, and workers to shorten your cycle time and the overall travel distance. This will help you to achieve your takt time.

    Step 7: Prepare a standard operations sheet

    Numerous formats exist for standard operations sheets. In general, the layout for your sheet should include the components listed below:

    1. The header section should contain the following:
      • Process name
      • Part or product name
      • Takt time
      • Cycle time
      • Sign-offs
      • Approval date
      • Revision level
    2. The work sequence section should contain the following:
      • Sequence number
      • Description of task
      • Manual time
      • Automated time
      • Walk time
      • Inventory requirements
      • Key points
      • Safety precautions
      • Related job procedures
    3. The workflow diagram section should contain a pictorial representation of the work area.
    4. The footer section should contain the following:
      • Lean enterprise tools applied to the work process
      • Safety equipment required
      • Page indicator (for multiple-page standard operations sheets)

    Step 8: Continuously improve your standard operations

    After you complete your standard operations sheet, you should train all employees who are affected by your changes to the work process in question. Don’t be surprised if, during this training, employees discover potential opportunities for even greater improvement. It is through the continuous improvement of your standard operations that your organization can systematically drive out waste and reduce costs. You should review your organization’s standard operations sheet(s) on a periodic basis to ensure all employees are accurately complying with them.

  4. Quick Changeover

    Quick changeover is a method of analyzing your organization’s manufacturing processes and then reducing the materials, skilled resources, and time required for equipment setup, including the exchange of tools and dies. Using the quick-changeover method helps your production teams reduce downtime by improving the setup process for new product launches and product changeovers, as well as improving associated maintenance activities. There are many advantages to using the quick-changeover method. These include the following:

    • Members of your team can respond to changes in product demand more quickly.
    • Machine capacity is increased, which allows for greater production capacity.
    • Manufacturing errors are reduced.
    • Changeovers are made more safely.
    • You can reduce your inventory (and its associated costs) because it is no longer needed for extended downtimes.
    • Once you can make changeovers according to an established procedure, you can train additional operators to perform these tasks, which increases the flexibility of your organization.
    • Lead times are shortened, improving your organization’s competitive position in the marketplace.

    You use the PDCA cycle to make improvements to your setup and changeover processes. The procedure to implement quick changeover involves the following steps:

    1. Evaluate your current processes. (Plan)

      a. Conduct an overview of your current production process to identify all equipment and processes that require downtime for changeover. Include all processes that require tooling replacement or new dies, patterns, moulds, paints, test equipment, filtration media, and so on.
      b. Collect data using a check sheet for each process. Make sure the check sheet includes information about the following:

      • Duration of the changeover. This is the time it takes from the start of the changeover process to its completion, including preparation and cleanup.
      • The amount of production typically lost during the changeover, including the number of units not produced, the number of hours that operators are not engaged in productive activities, lost production time, and rework (measured in hours and units).
      • Process events that are constraint operations: these are operations that are long in duration or are critical to completing the manufacturing process.

      c. Create a matrix diagram to display this data for each production process (categories might include setup time, resources and materials required, and changeover time).

      d. Select a process as your target for improvement. A good process to choose is one that has a long downtime, setup time, and/or changeover time; is a frequent source of errors or safety concerns; or is critical to process output. A constraint operation that requires a changeover during your production operations is often a good first target to select. Choose no more than three targets to work on at one time.

    2. Document all the current changeover activities for the process you have selected. (Plan)

      a. Make a checklist of all the parts and steps required in the current changeover, including the following:

      • Names
      • Specifications
      • Numeric values for all measurements and dimensions
      • Part numbers
      • Special settings

      b. Identify any waste or problems associated with your current changeover activities.

      c. Record the duration of each activity. See the sample data sheet below.

      d. Create a graph of your current changeover time (in seconds) to establish a baseline for improvement.
      e. Set your improvement target. A target of a 50% reduction is recommended.

    3. Identify internal and external process activities. (Plan)

      a. Create two categories on your checklist: one for internal processes, and one for external processes.
      b. List each task under the appropriate category, making sure to keep the tasks in the correct sequence.

    4. Turn as many internal processes as possible into external processes. (Plan)

      Using your checklist, complete the following steps:
      a. Identify the activities that employees currently perform while the line or process is idle that can be performed while it is still running.
      b. Identify ways to prepare in advance any operating conditions that must be in place while the line is running (e.g., preheating equipment).
      c. Standardize parts and tools that are required for the changeover process, including the following:

      • Dimensions.
      • Securing devices used.
      • Methods of locating and centring objects.
      • Methods of expelling and clamping objects.
    5. Streamline the process. (Plan)

      a. Use visual management techniques to organize your workplace.
      b. Consider ways to error-proof the process.
      c. Consider ways to eliminate unnecessary delays in your internal processes by doing the following:

      • Identifying the activities that can be done concurrently by multiple employees.
      • Using signals, such as buzzers or whistles, to cue operators.
      • Using one-turn, one-motion, or interlocking methods.

      d. Consider ways to eliminate unnecessary delays in your external processes by making improvements in the following:

      • Storage and transportation of parts and tools.
      • Automation methods.
      • Accessibility of resources.

      e. Create a new process map showing your proposed changes to the setup process.

    6. Test your proposed changes to the process. (Do)

      a. Consider the feasibility of each proposed change.
      b. Prepare and check all materials and tools required for the changeover. Make sure they are where they should be and that they are in good working order.
      c. Perform your revised setup activities for the parts and tools. Adjust settings, calibrate equipment, set checkpoints, and so on, as required.
      d. Perform a trial run of your proposed changes.
      e. Collect data on the duration of the setup time, and update your changeover improvement chart.

    7. Evaluate the results of your changes. (Check)

      Take a look at the results of the changes you have made. Did the results meet your target goal? If so, go on to step 8. If not, make adjustments or consider other ways in which you can streamline your changeover activities and make the process external.

    8. Implement your new quick-changeover process and continue to work to improve it. (Act)

      • Document the new procedures and train all involved employees on the new procedures.
      • Continue to collect data for continuous improvement of the changeover process.
      • Create a revised matrix diagram of the change processes and begin the quick changeover process again.

Cellular Manufacturing

Cellular manufacturing is a method of producing similar products using cells (groups of team members, workstations, or equipment) that facilitate operations by eliminating setup and unneeded costs between operations. Cells might be designed for a specific process, part, or complete product. They are well suited to single-piece and one-touch production methods, in the office or the factory alike. Because of increased speed and the minimal handling of materials, cells can result in great cost and time savings and reduced inventory. Cellular design often uses group technology, which studies a large number of components and separates them into groups with like characteristics, sometimes with a computer’s help, and which requires the coding and classification of parts and operations. Cellular design also uses families-of-parts processing, which groups components by shape and size to be manufactured by the same people, tools, and machines with little change to process or setup. Regardless of the cell design (straight line, U-shape, or other), the equipment in the cell is placed very close together to save space and time. Materials can be handled by hand, conveyor, or robot. When robots or conveyors are used, a cell supervisory computer controls movement between the pieces of equipment and the conveyor.

The Definition of a Cell

A cell is a combination of people, equipment, and workstations organized in the order of process flow, to manufacture all or part of a production unit. I make little distinction between a cell and what is sometimes called a flow line. However, the implication of a cell is that it:

  • Has one-piece, or a very small lot, flow
  • Is often used for a family of products
  • Has equipment that is right-sized and very specific for this cell
  • Is usually arranged in a C or U shape so the incoming raw materials and outgoing finished goods are easily monitored
  • Has cross-trained people for flexibility

Objectives of cellular manufacturing:

  • To shorten manufacturing lead times by reducing setup, work part handling, waiting times, and batch sizes.
  • To reduce Work in Process (WIP) inventory. Smaller batch sizes and shorter lead times reduce work-in-process.
  • To improve quality. Accomplished by allowing each cell to specialize in producing a smaller number of different parts. This reduces process variability.
  • To simplify production scheduling. Instead of scheduling parts through a sequence of machines in a process-type shop layout, the system simply schedules the parts through the cell.
  • To reduce setup times. Accomplished by using group tooling (cutting tools, jigs, and fixtures) that has been designed to process the part family, rather than part tooling, which is designed for an individual part. This reduces the number of individual tools required as well as the time to change tooling between parts.

Steps to Implement Cell Manufacturing

After you’ve mapped your value streams, you are ready to set up continuous-flow manufacturing cells. Most cells that have been set up in the past ten years do not have continuous flow; most changes to cells have been a layout change only. That is, machines were moved into a cellular arrangement and nothing more was changed. A change in layout alone does not create continuous flow. This article will discuss seven steps to creating continuous-flow manufacturing cells.

  1. Decide which products or product families will go into your cells, and determine the type of cell: product-focused or group technology (mixed model). For product-focused cells to work correctly, demand needs to be high enough for an individual product. For mixed-model or group technology cells to work, changeover times must be kept short.
  2. Calculate takt time. Takt time, often mistaken for cycle time, is not dependent on your productivity; it is a measure of customer demand expressed in units of time:

Takt Time = Available work-time per shift / Customer demand per shift

Ex: Work time/Shift = 27,600 seconds

Demand/Shift = 690 units

Takt Time = 27,600/690 = 40 sec.

The customer demands one unit every 40 seconds. What if your demand is unpredictable and relatively low volume? Typically, demand is unpredictable; however, aggregate demand (that is, the demand for a group of products that would run through a cell) is much more predictable. Takt time should generally not be adjusted more than monthly. Furthermore, holding finished goods inventory will help in handling fluctuating demand.
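The worked example, and the point about aggregate demand, can be checked in a few lines. The per-product figures below are hypothetical but sum to the 690 units above:

```python
def takt_time_sec(work_time_sec, demand_units):
    """Takt time = available work time per shift / customer demand per shift."""
    return work_time_sec / demand_units

print(takt_time_sec(27_600, 690))  # -> 40.0, as in the example

# Per-product demand may be unpredictable, but the aggregate demand for the
# family running through the cell is steadier (hypothetical monthly split):
family_demand = {"model A": 210, "model B": 320, "model C": 160}
print(takt_time_sec(27_600, sum(family_demand.values())))  # -> 40.0
```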

  3. Determine the work elements and time required for making one piece. Document in detail all of the actual work that goes into making one unit. Time each element separately several times and use the lowest repeatable time. Do not include wasteful elements such as walking and waiting time.
  4. Determine if your equipment can meet takt time. Using a spreadsheet, determine if each piece of equipment required for the cell you are setting up is capable of meeting takt time.
  5. Create a lean layout. More than likely, you will have more than one person working in your cell (this depends on takt time); however, you should arrange the cell so that one person could run it. This ensures that the least possible space is consumed. Less space translates to less walking, movement of parts, and waste. U-shaped cells are generally best; however, if this is impossible due to factory floor limitations, other shapes will do. For example, I have implemented S-shaped cells in areas where a large U-shape is physically impossible.
  6. Balance the cell. This involves determining how many operators are needed to meet takt time.

Number of Operators = Total Work content / Takt time

Ex.: Total work content: 49 minutes

Takt time: 12 minutes

Number of operators: 49/12 = 4.08 (4 operators)

If there is a remainder term, it may be necessary to kaizen the process and reduce the work content. Other possibilities include moving operations to the supplying process to balance the line. For example, one of my clients moved simple assembly operations from their assembly line to their injection moulding operation to reduce work content and balance the line.
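The line-balancing arithmetic above can be sketched as follows; the remainder term flags the excess work content to kaizen away or move to a supplying process:

```python
import math

def operators_needed(total_work_content_min, takt_time_min):
    """Number of operators = total work content / takt time."""
    return total_work_content_min / takt_time_min

raw = operators_needed(49, 12)  # example from the text
print(round(raw, 2))            # 4.08
print(math.floor(raw))          # 4 operators staffed; the 0.08 remainder
                                # is the work content to reduce or rebalance
```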

  7. Determine how the work will be divided among the operators. There are several approaches. Some include:
  • Splitting the work evenly between operators
  • Having one operator perform all the elements to make a complete circuit of the cell in the direction of material flow
  • Reversing the above
  • Combinations of the above

After you’ve worked through the seven steps above, you will have gathered much of the data required to begin drawing and laying out your continuous-flow manufacturing cell.



Introduction to Kaizen

Kaizen is a Japanese management strategy that means “change for the better” or “continuous slow improvement,” a belief that all aspects of life should be constantly improved. It comes from the Japanese words “kai,” meaning change, and “zen,” meaning better. The Japanese way encourages small improvements day after day, continuously. The key aspect of kaizen is that it is an ongoing, never-ending improvement process. It is a soft and gradual method, as opposed to the more usual Western habit of scrapping everything and starting anew. In Japan, where the concept originated, kaizen applies to all aspects of life, not just to the workplace. Kaizen was originally used to describe a key element of the Toyota Production System, meaning “making things the way they should be” according to the basic, sensible principles of profitable industrial engineering. It means creating an atmosphere of continuous improvement by changing your view, your methods, and your way of thinking to make something better. In use, kaizen describes an environment where companies and individuals proactively work to improve the manufacturing process. The kaizen system is based on incremental innovation, in which employees are encouraged to make small changes in their work area on an ongoing basis. The cumulative effect of all these little changes over time can be quite significant, especially if all of the employees within a company and its leaders are committed to this philosophy. Improvements are usually accomplished at little or no expense, without sophisticated techniques or expensive equipment. Instead of sinking more money into buying machinery, kaizen steers an organization towards paying attention to small but significant details. Managers are encouraged to improve the efficiency of existing infrastructure instead of investing in more of the same. Kaizen focuses on simplification by breaking down complex processes into their subprocesses and then improving them.
The driving force behind kaizen is dissatisfaction with the status quo, no matter how good the firm is perceived to be. Standing still will allow the competition to overtake and pass any complacent firm. The act of being creative to solve a problem or make an improvement not only educates people but also inspires them to go further. The fundamental idea behind kaizen comes straight from Deming’s PDCA cycle:

  • someone has an idea for doing the job better (Plan)
  • experiments are conducted to investigate the idea (Do)
  • the results are evaluated to determine if the idea produced the desired result (Check)
  • if so, the standard operating procedures are changed (Act)

Kaizen is a system that involves every employee, from upper management to the cleaning crew. Everyone is encouraged to come up with small improvement suggestions on a regular basis. In the first stage, management should make every effort to help the workers provide suggestions, no matter how primitive, for the improvement of the worker’s job and the workshop. This will help the workers look at the way they are doing their jobs. In the second stage, management should stress employee education so that employees can provide better suggestions. To enable workers to provide better suggestions, they should be equipped to analyze problems and the environment. This requires education. Main subjects for suggestions are, in order of importance:

  • Improvement in one’s own work
  • Savings in energy, material, and other resources
  • Improvement in the working environment
  • Improvements in machines and processes
  • Improvements in tools
  • Improvements in office work
  • Improvements in product quality
  • Ideas for new products
  • Customer services and customers relations
  • Others

Kaizen is based on making changes anywhere improvements can be made. Western philosophy may be summarized as, “if it ain’t broke, don’t fix it.” The Kaizen philosophy is to “do it better, make it better, improve it even if it isn’t broken, because if we don’t, we can’t compete with those who do.” For example, Toyota is well-known as one of the leaders in using Kaizen. In 1999 at one U.S. plant, 7,000 Toyota employees submitted over 75,000 suggestions; out of them, 99% were implemented.

Philosophy of kaizen:

Kaizen is one of the most commonly used words in Japan. It is used not only in the workplace but in popular culture as well. Kaizen is a foundation on which companies are built. Kaizen is such a natural way for people in Japan to think that managers and workers often do not make a conscious effort to think “Kaizen.” They just think the way they think – and that way happens to be Kaizen! If you are aware of the Kaizen philosophy and strive to implement it, not a day should go by without some kind of improvement being made somewhere in the company. After WWII, most Japanese companies had to start over. Every day brought new challenges, and rising to those challenges resulted in progress. Simply staying in business required a step forward every day, and this made Kaizen a way of life.

  1. Constant Improvement

    In any business, management creates standards that employees must follow to perform the job. In Japan, maintaining and improving standards is the main goal of management. If you improve standards, you then establish higher standards which you observe, maintain, and later try to improve upon. This is an unending process. If you do not maintain the standard, it is bound to slip back, giving it the “two steps forward, one step back” effect. Lasting improvement is achieved only when people work to higher standards. For this reason, maintenance and improvement go hand in hand for Japanese managers. Generally speaking, the higher up the manager is, the more he should be concerned with improvement. At the bottom level, an unskilled laborer may spend the day simply following instructions. However, as he becomes better at his job, he begins to think about ways to improve, or make his job easier. In doing this, he finds ways to make his work more efficient, thus adding to overall improvement within the company. The value of improvement is obvious. In business, whenever improvements are made, they are eventually going to lead to better quality and productivity. Improvement is a process. The process starts with recognizing a need, and the need becomes apparent when you recognize a problem. Kaizen puts an emphasis on problem-awareness and will lead you to the identification of problems.
    According to Bicheno, kaizen or CI can be classified into five improvement types: passive incremental, passive breakthrough, enforced incremental, enforced breakthrough, and blitz.

    1. Passive Incremental
      Passive incremental improvements include suggestion schemes, with or without rewards and with or without a team emphasis. A team-based example of passive incremental improvement is the quality circle. According to Bicheno, non-acknowledgement and non-recognition have probably been the major reasons for suggestion schemes producing poor results and being abandoned.

    2. Passive Breakthrough
      Passive breakthroughs normally spring from traditional industrial engineering projects and work study projects, particularly when the initiative is left to the industrial engineering or work study department (Bicheno). According to Bicheno, passive breakthroughs have probably been the greatest source of productivity improvement over the past 100 years. He describes this as the classic improvement method of industrial engineering, one that has been around for many years.
    3. Enforced Incremental
      Enforced incremental improvement is driven waste elimination, and is thereby not left to the chance of operator initiative. Examples of drivers include response analysis, line stop, inventory withdrawal, waste checklists, and the stage 1/stage 2 cycle. It is about setting up a culture that drives improvement, which constantly opens up new opportunities for further improvement activity (Bicheno).
    4. Enforced Breakthrough
      Enforced breakthroughs can be industrial engineering activities, for example initiated by management or by crisis. They are driven by active value stream current- and future-state mapping, which generally targets the complete value stream, followed up by action review cycles and an action plan or master schedule (Bicheno).
    5. Blitz
      Blitz or kaizen events are a combination of enforced incremental and enforced breakthrough. They are breakthrough because typical blitz events achieve between 25% and 70% improvement within a week, or within a month at most. On the other hand, they are incremental because blitz events typically relate to small areas, so they are more point kaizen (local area) than flow kaizen (full value stream). They are enforced because the expectations and opportunities are in place (Bicheno). According to Bicheno, a blitz event seen in isolation is not necessarily continuous improvement, so blitz events should be repeated in the same area at regular intervals, as product changes, priority changes, people changes, and technology improvements open new opportunities.
  2. Problem Solving

    Where there are no problems, there is no potential for improvement. When you recognize that a problem exists, Kaizen is already working. The real issue is that the people who create a problem are often not directly inconvenienced by it, and thus tend not to be sensitive to it. In day-to-day management situations, the first instinct is to hide or ignore the problem rather than to correct it. This happens because a problem is … well, a problem! By nature, nobody wants to be accused of having created one. However, if you think positively, you can turn each problem into a valuable opportunity for improvement. So, according to the Kaizen philosophy, when you identify a problem, you must solve it. Once you solve a problem, you, in essence, surpass a previously set standard. This results in the need to set a new, higher standard, and is the basis of the Kaizen concept.

  3. Standardization

    If you don’t first set a standard, you can never improve upon that standard. There must be a precise standard of measurement for every worker, every machine, every process and even every manager. To follow the Kaizen strategy means to make constant efforts to improve upon a standard. For Kaizen, standards exist only to be surpassed by better standards. Kaizen is really based on constant upgrading and revision. Not everything in a process or work environment needs to be measurable and standardized. Sometimes, Japanese factories use a one-point standardization. Each worker performs many tasks, but only one of those tasks needs to be standardized. This one-point standard is often displayed in the workplace so that the worker is always mindful of it. After the standard is followed for a while, it becomes second nature to perform the task to meet the standard. At that point, another standard can be added. Standardization is a way of spreading the benefits of improvement throughout the organization. In a disciplined environment, everyone, including management, is mindful of those standards.

  4. The Suggestion System

    Kaizen covers every part of a business. From the tasks of laborers to the maintenance of machinery and facilities, Kaizen has a role to play. All improvements will eventually have a positive effect on systems and procedures. Many top Japanese executives believe that Kaizen is 50 percent of management’s job, and really, Kaizen is everybody’s job! It is important for management to understand the workers’ role in Kaizen, and to support it completely. One of the main vehicles for involving all employees in Kaizen is the suggestion system. The suggestion system does not always provide immediate economic payback, but is looked at as more of a morale booster. Morale can be improved through Kaizen activities because they get everyone involved in solving problems. In many Japanese companies, the number of suggestions made by each worker is looked at as a reflection of the supervisor’s Kaizen efforts. It is a goal of managers and supervisors to come up with ways to help generate more suggestions by the workers. Management is willing to give recognition to employees for making efforts to improve, and they try to make this recognition visible. Often, the number of suggestions is posted individually on the wall of the workplace in order to encourage competition among workers and among groups. A typical Japanese plant has a space reserved in the corner of each workshop for publicizing activities going on in the workplace. Some of the space might be reserved for signs indicating the number of suggestions made by workers or groups, or even for posting the actual suggestions. Another example would be to display a tool that has been improved as a result of a worker’s suggestion. By displaying these sorts of improvements, workers in other work areas can adopt the same improvement ideas. Displaying goals, recognition, and suggestions helps to improve communication and boost morale.
Kaizen begins when the worker adopts a positive attitude toward changing and improving the way he works. Each suggestion leads to a revised standard, and since the new standard has been set by the worker’s own volition, he takes pride in it and is willing to follow it. If, on the contrary, he is told to follow a standard imposed by management, he may not be as willing. Thus, through suggestions, employees can participate in Kaizen in the workplace and play an important role in upgrading standards. Japanese managers are more willing to go along with a change if it contributes to any of the following goals:

    • Making the job easier
    • Making the job more productive
    • Removing drudgery from the job
    • Improving product quality
    • Removing nuisance from the job
    • Saving time and cost
    • Making the job safer
  5. Process-Oriented Thinking

    Another change you will notice with Kaizen is that it generates a process-oriented way of thinking. This happens because processes must be improved before you get improved results. In addition to being process-oriented, Kaizen is also people-oriented, since it is directed at people’s efforts. In Japan, the process is considered to be just as important as the intended result. A process-oriented manager should be people-oriented and have a reward system based on the following factors:

    • Discipline
    • Participation and involvement
    • Time management
    • Morale
    • Skill development
    • Communication
  6. Kaizen vs. Innovation

    Kaizen vs. innovation could be described as the gradualist approach vs. the great-leap-forward approach. Japanese companies generally favor the gradualist approach and Western companies the great-leap approach, an approach epitomized by the term innovation. Innovation is characterized by major changes in the wake of technological breakthroughs, or the introduction of the latest management concepts or production techniques. Kaizen, on the other hand, is undramatic and subtle, and its results are seldom immediately visible. Kaizen is continuous, while innovation is a one-shot phenomenon. Further, innovation is technology- and money-oriented, whereas Kaizen is people-oriented. Kaizen does not call for a large investment to implement it, but it does call for a great deal of continuous effort and commitment. To implement Kaizen, you need only simple, conventional techniques; often, common sense is all that is needed. On the other hand, innovation usually requires highly sophisticated technology, as well as a huge investment. Innovation alone often does not bring the staircase effect, however, because it lacks the Kaizen strategy to go along with it. Once a new system has been installed as a result of innovation, it is subject to steady deterioration unless continuing efforts are made first to maintain it and then to improve on it. There is no such thing as static or constant. The worst companies are those that do nothing but maintenance (no internal drive for Kaizen or innovation). Improvement by definition is slow, gradual, and often invisible, with effects that are felt over the long run. In a slow-growth economy, Kaizen often has a better payoff than innovation: for example, it is difficult to increase sales by 10%, but it is not so difficult to cut manufacturing costs by 10%. Kaizen requires virtually everyone’s personal efforts and the knowledge that, with that effort and time, improvements will be made.
Management must make a conscious and continuous effort to support it. It requires a substantial management commitment of time and effort. Investing in Kaizen means investing in people, not capital.

  7. Management Support of Kaizen

    If the benefits of Kaizen come gradually, and its effects are felt only on a long-term basis, it is obvious that Kaizen can thrive only under top management that has a genuine concern for the long-term health of the company. One of the major differences between Japanese and Western management styles is their time frames: Japanese management has a long-term perspective, while Western managers tend to look for shorter-term results. Unless top management is determined to introduce Kaizen as a top priority, any effort to introduce Kaizen to the company will be short-lived. Kaizen starts with the identification of problems. In the Western hire-and-fire environment, identification of a problem often means a negative performance review and may even carry the risk of dismissal. Superiors are busy finding fault with subordinates, and subordinates are busy covering up problems. Changing the corporate culture to accommodate and foster Kaizen – to encourage everybody to admit problems and to work out plans for their solution – will require sweeping changes in personnel practices and the way people work with each other. Kaizen’s introduction and direction must be top-down, but the suggestions for Kaizen should be bottom-up, since the best suggestions for improvement usually come from those closest to the problem. Western management will be required to introduce process-oriented criteria at every level, which will necessitate company-wide retraining programs as well as restructuring of the planning and control systems. The benefits of Kaizen are obvious to those who have introduced it. Kaizen leads to improved quality and greater productivity. Where Kaizen is introduced for the first time, management may easily see productivity increase by 30 percent, 50 percent, even 100 percent and more, all without any major capital investment. Kaizen helps lower the breakeven point.
It helps management to become more attentive to customer needs and build a system that takes customer requirements into account. The Kaizen strategy strives to give undivided attention to both process and result. It is the effort that counts when we are talking about process improvement, and management should develop a system that rewards the efforts of both workers and managers, not just the recognition of results. Kaizen does not replace or preclude innovation. Rather, the two are complementary. Ideally, innovation should take off after Kaizen has been exhausted, and Kaizen should follow as soon as innovation is initiated. Kaizen and innovation are inseparable ingredients in progress. The Kaizen concept is valid not only in Japan, but in other countries as well. All people have an instinctive desire to improve themselves. Although it is true that cultural factors affect an individual’s behavior, it is also true that the individual’s behavior can be measured and affected through a series of factors or processes. Thus, it is always possible, regardless of the culture, to break behavior down into processes and to establish control points and check points. This is why such management tools as decision-making and problem-solving have universal validity.

Kaizen -The three pillars

According to M. Imai, a guru in these management philosophies and practices, the three pillars of kaizen are summarized as follows:

  1. Housekeeping
  2. Waste elimination
  3. Standardization

As he states, management and employees must work together to fulfill the requirements of each category. To ensure the success of activities under these three pillars, three further factors must also be taken into account:

  1. Visual management,
  2. The role of the supervisor,
  3. The importance of training and creating a learning organization.

Each pillar of Kaizen in more detail:

  1. Housekeeping

    This is a process of managing the workplace, known as “gemba” in Japanese, for improvement purposes. Imai introduced the word “gemba”, which means “real place”: the place where value is added to the products or services before they are passed on to the next process.
    For proper housekeeping, a valuable tool or methodology is used: the 5S methodology. The term “Five S” is derived from the first letters of five Japanese words referring to five practices leading to a clean and manageable work area: seiri (organization), seiton (tidiness), seiso (purity), seiketsu (cleanliness), and shitsuke (discipline). The English equivalents of the 5S’s are sort, straighten, sweep, sanitize, and sustain. 5S evaluations provide measurable insight into the orderliness of a work area, and there are checklists for manufacturing and nonmanufacturing areas covering an array of criteria such as cleanliness, safety, and ergonomics. A Five S evaluation contributes to how employees feel about the product, the company, and themselves, and today it has become essential for any company engaged in manufacturing to practice the 5S’s in order to be recognized as a manufacturer of world-class status.

    1. Seiri: SORT what is not needed. Use the red tag system of tagging items considered not needed, then give everyone a chance to indicate if the items really are needed. Any red-tagged item for which no one identifies a need is eliminated (sell to an employee, sell to a scrap dealer, give away, or put into the trash).
    2. Seiton: STRAIGHTEN what must be kept. Make things visible. Put tools on a peg board and outline each tool so its location can be readily identified. Apply the saying “a place for everything, and everything in its place.”
    3. Seiso: SCRUB everything that remains. Clean and paint to provide a pleasing appearance.
    4. Seiketsu: SPREAD the clean/check routine. When others see the improvements in the Kaizen area, give them the training and the time to improve their work area.
    5. Shitsuke: SUSTAIN through self-discipline. Establish a cleaning schedule. Use downtime to clean and straighten the area.
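    The red-tag step of seiri can be sketched as a tiny workflow: tag questionable items, let everyone claim what they truly need, and eliminate the rest. A minimal illustration in Python (the `RedTagItem` structure and all item names are hypothetical, not part of any standard 5S toolkit):

```python
from dataclasses import dataclass, field

@dataclass
class RedTagItem:
    """A red-tagged item awaiting a keep/eliminate decision (hypothetical model)."""
    name: str
    claimed_by: set = field(default_factory=set)  # employees who indicated a need

def disposition(items):
    """Split red-tagged items into 'keep' (someone needs them) and 'eliminate'."""
    keep = [i.name for i in items if i.claimed_by]
    eliminate = [i.name for i in items if not i.claimed_by]
    return keep, eliminate

items = [
    RedTagItem("spare fixture", {"operator A"}),
    RedTagItem("broken gauge", set()),       # nobody claims it, so it goes
    RedTagItem("obsolete manual", set()),
]
keep, eliminate = disposition(items)
print(keep)       # items returned to the workplace
print(eliminate)  # items to sell, give away, or scrap
```

    In practice the red-tag area is physical, not digital; the sketch only makes the keep/eliminate decision rule explicit.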

    Some of the benefits to employees of practicing the five S’s are as follows:

    • Creates clean, sanitary, pleasant, and safe working environments
    • Revitalizes the gemba and greatly improves employee morale and motivation
    • Eliminates various kinds of waste by minimizing the need to search for tools, making the operators’ jobs easier, reducing physically strenuous work, and freeing up space
    • Creates a sense of belonging and love for the place of work among employees

  2. Waste (Muda ) elimination.

    Muda in Japanese means waste. The resources at each process (people and machines) either add value or do not, and any non-value-adding activity is classified as muda in Japan. Work is a series of value-adding activities, from raw materials to the final product. Muda is any non-value-added task. As examples, muda in both manufacturing and office settings is presented below:

    Muda in Manufacturing

    • Shipping defective parts
    • Waiting for inspection
    • Walking and transporting parts
    • Overproduction
    • Excess inventory, which hides problems

    Muda in Office

    • Passing on work that contains errors
    • Signature approvals, bureaucracy
    • Walking or routing documents
    • Copies, files, a lot of papers
    • Excess documentation

    The aim is to eliminate the seven types of waste, caused by overproduction, waiting, transportation, unnecessary stock, overprocessing, motion, and defects, presented as follows:

    1. Overproduction – Producing more than the production schedule requires
    2. Inventory – Too much material ahead of the process hides problems
    3. Defects – Material and labor are wasted; capacity is lost at the bottleneck
    4. Motion – Walking to get parts because of space taken by high WIP
    5. Processing – Protecting parts for transport to another process
    6. Waiting – Poor balance of work; operator attention time
    7. Transportation – Long moves; re-stacking; pick up/put down

    Muda (waste) elimination thus covers the categories described as follows:

    1. Muda of overproduction. Overproduction may arise from fear of a machine’s failure, rejects, or employee absenteeism. Unfortunately, trying to get ahead of production can result in tremendous waste: consumption of raw materials before they are needed, wasteful input of manpower and utilities, additions of machinery, increased interest burdens, additional space to store excess inventory, and added transportation and administrative costs.
    2. Muda of inventory. Final products, semi-finished products, or part supplies kept in inventory do not add any value. Rather, they add cost of operations by occupying space and requiring additional equipment and facilities such as warehouses, forklifts, and computerized conveyor systems. Moreover, the products deteriorate in quality and may even become obsolete overnight when the market changes, competitors introduce a new product, or customers change their tastes and needs. Warehouses further require additional manpower for operation and administration. Excess items stay in inventory and gather dust (no value added), and their quality deteriorates over time. They are even at risk of damage through fire or disaster. A just-in-time (JIT) production system helps to solve this problem.
    3. Muda of defects (repairs or rejects). Rejects interrupt production and require rework, wasting resources and effort. Rejects increase inspection work, require additional time to repair, require workers to always stand by to stop the machines, and of course increase paperwork.
    4. Muda of motion. Any motion of a person not directly related to adding value is unproductive. Workers should avoid walking, lifting, or carrying heavy objects that require great physical exertion, because it is difficult, risky, and represents non-value-added activity. Rearranging the workplace would eliminate unnecessary human movement and eliminate the need for another operator to lift the heavy objects. Analyzing operators’ leg and hand motions in performing their work will help companies understand what needs to be done.
    5. Muda of processing. There are many ways muda can occur in processing. For example, failure to synchronize processes and bottlenecks create muda; they can be eliminated by redesigning the assembly lines, utilizing less input to produce the same output. Input here refers to resources, utilities, and materials; output means items such as products, services, yield, and added value. Reduce the number of people on the line: the fewer line employees, the better. Fewer employees will make fewer mistakes, and thus create fewer quality problems. This does not mean that we need to dismiss our employees; there are many ways to use former line employees on Kaizen activities, i.e., on value-adding activities. When productivity goes up, costs go down. In manufacturing, a longer production line requires more workers, more work-in-process, and a longer lead time. More workers also means a higher possibility of mistakes, which leads to quality problems. More workers and a longer lead time also increase the cost of operations. Machines that go down interrupt production. Unreliable machinery necessitates batch production, extra work-in-process, extra inventory, and extra repair efforts. A newly hired employee without proper training to handle the equipment can likewise delay operations, which may be just as costly as if the equipment were down. Eventually, quality suffers, and all these factors increase operating costs.
    6. Muda of waiting. Muda of waiting occurs when the hands of the operator are idle; when an operator’s work is put on hold because of line imbalances, a lack of parts, or machine downtime; or when the operator is simply monitoring a machine as the machine performs a value-adding job. Watching the machine and waiting for parts to arrive are both muda and waste seconds and minutes. Lead time begins when the company pays for its raw materials and supplies, and ends when the company receives payment from customers for products sold. Thus, lead time represents the turnover of money. A shorter lead time means better use of resources, more flexibility in meeting customer needs, and a lower cost of operations. Muda elimination in this area presents a golden opportunity for Kaizen. There are many ways to cut lead time: improving and speeding up feedback on customer orders, having closer communications with suppliers, and streamlining and increasing the flexibility of gemba operations. Another common type of muda in this category is time itself: materials, products, information, and documentation sit in one place without adding value. On the production floor, temporary muda takes the form of inventory. In office work, it happens when documents or pieces of information sit on a desk, in trays, or on computer disks waiting to be analysed, or for a decision or a signature.
    7. Muda of transportation. In the workplace (gemba), one notices all sorts of transport by such means as trucks, forklifts, and conveyors. Transportation is an essential part of operations, but moving materials or products adds no value. Even worse, damage often occurs during transport. To avoid this muda, any process that is physically distant from the main line should be incorporated into the line as much as possible. Because eliminating muda costs nothing, muda elimination is one of the easiest ways for a company to improve its gemba operations.
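    The definition of lead time given above, from paying for raw materials until the customer's payment is received (the turnover of money), can be expressed directly as a calculation. A minimal sketch; the dates are hypothetical:

```python
from datetime import date

def lead_time_days(paid_for_materials: date, paid_by_customer: date) -> int:
    """Lead time as defined above: days from paying for raw materials
    until the customer's payment is received."""
    return (paid_by_customer - paid_for_materials).days

# Hypothetical dates for one order, before and after a lead-time kaizen
before = lead_time_days(date(2024, 1, 5), date(2024, 3, 15))        # 70 days
after_kaizen = lead_time_days(date(2024, 1, 5), date(2024, 2, 14))  # 40 days
print(before, after_kaizen)
```

    Tracking this single number per order makes the effect of shorter feedback loops and smoother gemba operations directly measurable.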
  3. Standardization

    Standards are set by management, but they must be able to change when the environment changes. Companies can achieve dramatic improvement by reviewing the standards periodically, collecting and analysing data on defects, and encouraging teams to conduct problem-solving activities. Once the standards are in place and are being followed, any deviation tells the workers that there is a problem. The employees then review the standards and either correct the deviation or advise management on changing and improving the standard. It is a never-ending process, best explained by the PDCA cycle (plan-do-check-act), also known as the Deming cycle:
    Pick a project (Pareto Principle)
    Gather data (Histogram and Control Charts)
    Find cause (Process Flow Diagram and Cause/Effect Diagram)
    Pick likely causes (Pareto Principle and Scatter Diagrams)
    Try solution (Cause/Effect, “5W and 1H” methodology: who, what, why, when, where, how)
    Implement solution
    Monitor results (Pareto, Histograms, and Control Charts)
    Standardize on new process (Write standards, Train, Foolproof, Quality-At-The-Source[QUATS])
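    The first steps of the cycle, picking a project by the Pareto principle from gathered defect data, can be sketched as a simple frequency ranking. A minimal illustration; the defect log and category names are hypothetical:

```python
from collections import Counter

def pareto_rank(defects):
    """Rank defect categories by frequency with each one's cumulative share.

    Returns (category, count, cumulative_fraction) tuples, most frequent
    first, so the 'vital few' at the top are natural project candidates.
    """
    counts = Counter(defects)
    total = sum(counts.values())
    ranked, cumulative = [], 0
    for category, count in counts.most_common():
        cumulative += count
        ranked.append((category, count, round(cumulative / total, 2)))
    return ranked

# Hypothetical defect log from the "gather data" step (100 observations)
log = ["scratch"] * 60 + ["dent"] * 25 + ["misprint"] * 10 + ["other"] * 5
for row in pareto_rank(log):
    print(row)
```

    Here "scratch" alone accounts for 60% of defects, so by the Pareto principle it would be the first project to pick.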

    A successful PDCA cycle is then followed by the SDCA cycle, where “S” stands for standardization and maintenance of the new situation. So PDCA stands for improvement and SDCA stands for maintenance. The two cycles are used in combination: SDCA stabilizes each new standard before the next PDCA improvement begins.
    The standardization process is a very important one with a few key features, presented below:

    • Represent the best, easiest, and safest way to do the job
    • Offer the best way to preserve know-how and expertise
    • Provide a way to measure performance
    • Show the relationship between cause and effect
    • Provide a basis for both maintenance and improvement
    • Provide objectives and indicate training goals
    • Provide a basis for training
    • Create a basis for auditing or diagnosis
    • Provide a means for preventing recurrence of errors and minimizing variability

Types of Kaizen:

Types of Kaizen are based on the degree of the problem or issue. If you do not know the degree of the problem or issue, you may take the wrong approach to implementing Kaizen, take unnecessary action, and waste time. Let’s look at the different types of Kaizen and how they are implemented.

  1. Small Kaizen

    Small Kaizen, or simple, quick Kaizen, is useful for solving small issues that exist in the workplace. Small Kaizen does not need many resources or much time to improve the situation. Many small issues that exist in the workplace are often ignored, as staff are used to working in such an environment and forget to recognize small problems or issues as “problems”. Note that hospitals that practice 5S very well and sustain their 5S activities are often unknowingly practicing small Kaizen. One of the effective ways of practicing small Kaizen is using a “Kaizen suggestion board.” Kaizen topics are usually discussed among Work Improvement Team (WIT) members.


    Kaizen activity starts with sensing and recognizing small issues or problems in your workplace. It is recommended to keep a “Kaizen memo” as a record of small Kaizen activities: record the problems, the countermeasures taken, and the improvement achieved, together with pictures.

  2. Large Kaizen

    The large Kaizen approach is applied to solve complicated problems that need inputs and other resources. Large Kaizen requires adequate time to analyze the problem carefully, solve it, and prevent recurrences. One cycle of large Kaizen is usually six months.

1 Time spent for each step is dependent on data collection methods, number of countermeasures to implement, and monitoring of progress.

Kaizen Events

Montabon’s definition of a kaizen event: “Kaizen events are essentially well structured, multi-day problem solving sessions involving a cross-functional team, who is empowered to use experimentation as they see fit to derive a solution”.
Van et al.’s definition of a kaizen event: “A kaizen event is a focused and structured improvement project, using a dedicated cross-functional team to improve a targeted work area, with specific goals, in an accelerated timeframe”.
First and foremost, CI, lean, and kaizen events are performed in organizations by groups and individuals, so it is important to categorize the different ways of working in order to find the approach that best improves synergy levels. The framework of kaizen is based on four areas: plan, implement, sustain, and support. Furthermore, it is constructed so that it can be self-assessed, both to improve specific topics and to improve the framework itself. The article by Van et al. concludes that “Use of the framework as a design and assessment tool appeared to make the kaizen events program more effective in the case study organization”.


Levels of Kaizen:

The hierarchy of kaizen or lean improvements can be organized into five levels. An organization needs to use most, if not all, of these levels in order to move toward lean.


Level 1: The Individual

Level one is the individual, where individual employees need to be recognized as experts in their own processes. They need the knowledge to understand their own processes within the big picture of organizational processes (the wider value stream) and why their own process is important and necessary. The know-why, or underlying philosophy, is the most important stage of learning and understanding. Hence improvement and sustainability start with the individual at the workplace. Team leaders are important, as they can encourage, facilitate, and recognize individual achievements. Furthermore, they can bring individual improvements to the attention of others; individual “thank you” notes are one example and carry much weight. Examples of work: waste reduction, work piece orientation, inventory and tool location, work sequence, ergonomics, and/or poka-yoke.

Level 2: The Work Team or Mini Point Kaizen

Level two, the work team, consists of groups or teams that work in a cell or on a line segment. If they undertake an improvement workshop, it will affect their collective work area. The initiatives may be carried out regularly as part of team meetings, but can also be conducted as a 1-2 day workshop. Recognition is crucial, so the team needs to present its results to a wider audience. Examples of work: work flows, cell layout, line rebalance, 5S, footprinting, and/or cell-level quality.

Level 3: Kaizen Blitz Group or Point Kaizen

Level three, the kaizen blitz group, is work carried out in a local area. The event often lasts 3-5 days and involves people from outside the local area. These events usually address more complex issues. Unlike level 2 improvement teams, this group forms for a specific purpose or problem to solve during an event; after the event the group disbands. Examples of work: substantial layout change, or the implementation of a single pacemaker-based scheduling system together with a runner route, integrating manufacturing and information flows.

Level 4: Value Stream Improvements: Flow Kaizen Groups

Level four, the flow kaizen group, is work carried out across a full internal value stream. The duration is between a few weeks and 3 months, with the purpose of creating future-state maps and an action plan. The group usually does not work full time, but on and off the project, so project managers are assigned, sometimes with assistance from consultants. The group is multi-disciplinary, working with a complete process or value stream across several areas and functions. Examples of work: process issues, system issues, and organizational issues.

Level 5: Supply Chain Kaizen Groups

Level five, the supply chain kaizen groups, are similar to flow kaizen groups but are focused on the supply chain. They involve part-time representatives from each participating organization. A project manager from the initiating organization is appointed, and consultants are usually involved. Examples of work: a full supply chain value stream map covering all the involved organizations would typically be the centerpiece, in order to get the whole picture. Note the distinction between teams and teamwork: teams refer to small groups of people working together toward some common purpose, whereas teamwork refers to an environment in the larger organization that creates and sustains relationships of trust, support, respect, interdependence, and collaboration. It is relatively easy to establish a team, but establishing an environment for teamwork is a lot more difficult.

Steps to implement Kaizen:

There are seven steps, as follows:

  1. Selection of Kaizen theme
  2. Situation analysis
  3. Root cause analysis
  4. Identification of countermeasures
  5. Implementation of identified countermeasures
  6. Check effectiveness of the countermeasures
  7. Standardization of effective countermeasures

Step 1: Kaizen theme selection

The first step of Kaizen is to select a Kaizen theme. A Kaizen theme is a problem or issue that your section or department is facing, and one that the staff would like to reduce for their workplace and its clients. The theme should be implementable with existing resources and by the section staff themselves.

A Kaizen theme is:
  • A problem your workplace is facing
  • Something your section wants to improve
  • An unsatisfying issue raised or claimed by clients
The process of selecting a Kaizen theme should be:

  • Led by Work Improvement Team
  • Done by using brainstorming technique / method in a meeting involving all staff in a particular workplace
  • Use a matrix to evaluate feasibility (ask ourselves, “Can we do it?”)

A Kaizen theme is described with:

  • A simple sentence containing the basic information of “what” and “where” it is to be done
  • A clarification of the reason for selecting the theme

Examples of Kaizen themes:

  • Time for searching items in the department is reduced
  • Mistake on packaging
  • Overstock of raw material

Note that an action verb must be used. The word “improve” may seem adequate, but it does not say how much you want to improve; it is therefore better to state exactly what you want to do.

Tips for selecting a Kaizen theme:

  • Possible to carry out within own department
  • Issue related with everyone in the department
  • Possible to solve within 3 to 6 months
  • Benefit to own section/department and its clients

Step 2: Situation analysis

A Kaizen theme was selected in Step 1, and this is the “problem”. Different “contributing factors” make up the problem, so the first part of the situation analysis is to brainstorm within the Kaizen team on the factors that contribute to the problem. After identifying the contributing factors, it is necessary to measure how frequently each one occurs. It is important to record the current process step by step as it is actually done, not as it should be done; this record will make it easier to identify the type of data to be collected.
The following areas need to be carefully checked:

  • Team members’ knowledge of Kaizen in relation to the Kaizen theme and its contributing/component factors
  • Whether quantitative data are collected appropriately and relate to the Kaizen theme
  • Whether the data collected come from a reliable source
  • Whether a proper methodology is used for data collection
  • Whether the data collection methodology is clearly recorded
  • Whether the period of data collection is clearly recorded

Target setting for Kaizen

A target for the Kaizen activity needs to be set, based on the result of the situation analysis and the performance level of the section: “what to improve”, “by when it needs to be achieved”, “by how much it should be improved or reduced”, and so on. It is better not to be too ambitious when setting the target.

The following points need to be checked carefully:

  • Calculation of cumulative frequency and ratio
  • Pareto chart scale for frequency (defects)
  • Cut-off point at the 80% line
  • Plotting of the cumulative ratio and matching it with the scale
  • Target setting
  • Prioritization of component factors for the next step
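The cumulative frequency and cut-off calculations above can be sketched in a short script. This is a hypothetical illustration: the factor names and counts are invented, not taken from a real data set.

```python
# Sketch of the Pareto calculations used in target setting.
# Contributing factors are sorted by frequency, cumulative counts and
# ratios are computed, and factors below the 80% line are prioritized.

factors = {  # contributing factor -> observed frequency (invented data)
    "Registration delay": 45,
    "Staff late": 30,
    "Missing records": 15,
    "Equipment fault": 7,
    "Other": 3,
}

total = sum(factors.values())  # 100 in this invented data set
ranked = sorted(factors.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
vital_few = []  # factors that start below the 80% cut-off line
for name, freq in ranked:
    ratio_before = 100 * cumulative / total
    cumulative += freq
    ratio = 100 * cumulative / total
    if ratio_before < 80:  # this factor lies below the 80% line
        vital_few.append(name)
```

With these numbers the first three factors account for 90% of occurrences, so they become the prioritized component factors for the next step.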

Step 3: Root cause analysis

Root cause analysis is a process to identify and understand the contributing factors or causes of a system failure. A “fishbone (cause–effect) diagram” can help the team brainstorm possible causes of a problem (the effect). When drawing a fishbone diagram, remember that the “head of the fish” is not the Kaizen theme; a common beginners’ mistake is to put the Kaizen theme there. The head of the fish is the contributing factor of the problem to be resolved (the effect). For example, suppose “Reduce long waiting time at OPD” is chosen as the Kaizen theme. The long waiting time may be caused or influenced by different contributing factors, such as “staff not coming on time, so the clinic cannot start earlier” or “registration taking a lot of time”.

Step 4: Identification of countermeasures

In this step, countermeasures are identified using a tree diagram and their feasibility is evaluated using a matrix diagram. It is also important that second-line countermeasures are well identified and that the connections among countermeasures are well defined; these points need to be carefully observed, with technical input provided for proper identification of countermeasures. After countermeasures are identified, their feasibility is checked with the matrix diagram. For example, if “conduct training” is identified as a first-line countermeasure, then 1) develop training materials, 2) conduct a training session, and 3) monitor and mentor trained staff can be identified as second-line countermeasures. The feasibility of those three activities can then be checked using the matrix diagram. Suppose “conduct a training session” receives a high mark and is judged feasible. A question then arises: is it possible to conduct a training session without teaching materials? The answer is no; handouts for the training are needed as well.



The following points need to be carefully checked in this Step:

  • Whether all root causes identified in Step 3 are reflected in the matrix diagram
  • Whether detailed countermeasures are identified, broken down by level of countermeasure
  • Whether the feasibility check is done appropriately; check the relations among the countermeasures identified against a root cause
  • Whether the scale and cut-off point of the feasibility check are clarified

Step 5: Implementation of countermeasures

All countermeasures identified in Step 4 are brought into an action plan for implementation. The action plan is developed using the 5W (When, Where, Who, What, Why) and 1H (How) method to clarify the key issues. A checklist must be developed to monitor the progress of countermeasure implementation against the timeframe. Both the action plan and the checklist need to be displayed where all staff can see and access them; this is very important for reminding staff to implement the identified countermeasures within the given timeframe.
The following points need to be carefully checked:

  • All countermeasures identified should be carried out within the section/unit
  • Action plan is developed based on “5W1H” concept
  • Checklist for monitoring of progress is developed
  • Appropriate time for implementation of countermeasures is indicated
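As a sketch, a 5W1H action-plan entry and its progress checklist can be represented as a simple record. All field names and values here are hypothetical, invented only to illustrate the structure.

```python
# Minimal sketch of a 5W1H action-plan entry and a progress checklist.
# Field names and values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ActionItem:
    what: str    # countermeasure to implement
    why: str     # root cause it addresses
    who: str     # responsible staff
    when: str    # target timeframe
    where: str   # section / unit
    how: str     # method of implementation
    done: bool = False  # checklist status for progress monitoring

plan = [
    ActionItem(
        what="Conduct training session",
        why="Staff unfamiliar with new registration procedure",
        who="Section supervisor",
        when="Week 2",
        where="OPD registration desk",
        how="Two 1-hour sessions using prepared handouts",
    ),
]

# Checklist view: which countermeasures are still open?
open_items = [item.what for item in plan if not item.done]
```

Displaying such a plan and checklist where everyone can see them is what keeps the implementation on schedule.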

Step 6: Checking effectiveness

Data collection

In this step, the same data collected in Step 2 need to be collected again and compared, to see the effectiveness of the Kaizen activities implemented in Step 5. Facilitators therefore need to ensure the following points in Step 6:

  • The data necessary for the effectiveness check are collected, using the same methodology and period applied in Step 2
  • A comparison table for effectiveness is developed
  • Pareto charts for before and after Kaizen are developed based on the comparison table

The same frequency scale needs to be applied to the Pareto charts for before and after Kaizen, and the plotting points of the cumulative ratio also need to be checked. Another important check is the identification of effective countermeasures and other effects.


Whatever the results are, it is necessary to clarify the relationship between countermeasures and effectiveness.

  • Effectiveness should be measured for each countermeasure.
  • A countermeasure that was not implemented but shows some good effects needs to be investigated to identify the reason.
  • A countermeasure that was not implemented, and whose effectiveness therefore cannot be measured, needs to be implemented.
  • Countermeasures that are not effective need to be reviewed.
  • Countermeasures that were implemented and judged “effective” will be standardized in Step 7.
  • Countermeasures may also cause bad effects; if the bad effects outweigh the effectiveness, the countermeasures need to be reviewed.

Note that effectiveness can be categorized into:

  • Tangible effects – expected outcomes
  • Ripple effects – predicted outcomes
  • Intangible effects – unexpected outcomes

Step 7: Standardization of effective countermeasures

The main purpose of this step is to maintain the good results of Kaizen and to prevent recurrence of the tackled problems.


Step 7 adds another cycle, the Standardize-Do-Check-Act (SDCA) cycle discussed above, to ensure that effective measures continue and to prevent fallback. The following points need to be carefully checked in this step:

  • Whether all effective countermeasures are reflected in the standardization plan
  • Whether standardization is developed based on “5W1H”
  • Whether a monitoring checklist for the standardized activities is developed and used
  • Whether the standardization plan is shared with all staff working in the section/unit

Use 5W1H to clarify the activities in a sustainable manner. After the standardization plan is developed, there must be a mechanism to keep practicing the effective measures and prevent fallback. We often see that the majority of Work Improvement Teams start relaxing once they complete Step 6 and forget to take Step 7. As facilitators, members of the Management Team or the QIT, we need to remind them to implement Step 7.

Support to Kaizen events

It is highly unlikely that an organization can sustain kaizen events, including the support for them, if there is no overall support within the organization for CI. So for all organizations, whatever their level of CI experience, the support should be sufficient to achieve and sustain CI. As Kaye explains, “Even where organizations are using self-assessment techniques and employing other positive approaches to quality management, they are failing to sustain continuous improvement in the longer term”. He regards self-assessment models such as the European Business Excellence Model and the Malcolm Baldrige National Quality Award as holistic models, but states that they do not sufficiently emphasize the factors which generate and maintain improvement momentum. According to Kaye, the business excellence model has been found lacking in respect of drivers. Kaye built a model based on ten essential key criteria and supporting elements of best practice, as a planned and integrated approach for achieving continuous improvement in an organization. The ten key criteria are illustrated in the figure.


Kaizen Team

The Facilitator

The facilitator is responsible for making sure that the Kaizen Event flows smoothly from start to finish. The facilitator organizes prep meetings, collects data in advance, and reports results. This should be someone who will not be directly impacted by decisions made during the Event and who is unlikely to have a preconceived opinion about which changes should be made. Some organizations use a professional outside facilitator, while others select a capable staff member from an internal team.

The Process Owner

The process owner is responsible for the process to be addressed during the Event and is likely a director, manager, or supervisor. The process owner supports the facilitator in coordinating logistics, obtaining supplies and equipment, arranging facility and team member access, and so forth. They also help the facilitator select the other Event participants and rearrange resources so that all required team members can be available to participate. The process owner is essential to scoping the project and providing background information.

Subject Matter Expert(s)

Depending on the technical complexity of the process being addressed, one or more subject matter experts may be required. They may not need to participate in every aspect of the Event, but should be “on call” to address any specific issues or questions that arise. The same sometimes applies for specialists from areas like IT/IS or facilities.

Team Members

It is critical to involve some of the people who actually do the work on a day-to-day basis in your Kaizen Event. They are closest to the front line and uniquely understand and feel the roadblocks to a painless, efficient flow of work. They likely develop ideas for improvement based on what they learn during the early stages of the Event.

Other Resources

While the four roles above are almost always necessary, there are some others that may or may not be helpful, depending on the nature of the event and the type of problem to be solved. For example, it may be useful to include internal or external customers as part of your event, especially if the object is to improve customer satisfaction or to address a problem that affects them. In the case of a major or cross-functional change, an executive sponsor might be necessary to provide resources or to simply signal support for the team’s work.  Some events also benefit from the inclusion of people who know little or nothing about the process, often referred to as “fresh eyes.” They often ask unexpected questions that lead the team down a path that might otherwise have gone unexplored. Choosing the right players from the beginning will set your Kaizen Event up for success. You don’t need a cast of thousands, but you should have a mix of insights, experience and points of view. It makes sense to think as much about the who as the how.

Kaizen Event Synergy Framework

The framework is a four-step model with an overall support area. Before starting the first step, a pre-synergy assessment is recommended in order to set the focus area for the first kaizen event. The first step is planning kaizen event 1, the second is implementing or conducting kaizen event 1, and the third is sustaining the results from kaizen event 1. The fourth step is making synergy assessment 1 after kaizen event 1. After synergy assessment 1 is finished, the four steps repeat. The improvements become visible through the changes relative to synergy assessment 1, and new focus areas can be set as targets for kaizen event 2. The four steps can then be run over and over, which should ultimately result in more efficient kaizen events, optimized processes and higher levels of synergy for the company. The support area provides organization-level support for continuous improvement and supports all four steps along the way. The CI support area is needed in order to run the four steps; it furthermore affects how well the framework process is sustained, how well the newly implemented changes are sustained, and how efficiently the four steps are run.


  1. Kaizen Event Plan

    The first step of the framework is to plan the kaizen event. The planning phase consists of three areas ahead of the kaizen event: 1. Identify candidates, 2. Select candidates and 3. Define selected candidates. These three areas in turn consist of subareas that are important for increasing kaizen event performance and thereby for conducting efficient kaizen events. Identifying the candidates for the event includes important subareas such as deriving from a strategic direction, performing an analysis to define the candidates, and making sure that the selection responds to emerging problems. Selecting the candidates includes the important subareas of defining an improvement strategy, defining a portfolio of events, and scheduling those events. Defining the selected candidates includes the subarea of defining an initial project charter. Overall, the planning phase makes sure that the long-term direction is set, both strategically and in terms of project scheduling, and that there is a portfolio of projects with the right candidates and future direction.


  2. Kaizen Event Implement

    The second step of the framework is to implement the kaizen event. The implementation phase, according to the framework, consists of four areas covering the time before, during and after the kaizen event: 1. Prepare for event, 2. Execute event, 3. Follow-up after event and 4. Deploy full-scale change. These four areas in turn consist of subareas that are important for increasing kaizen event performance and thereby for conducting efficient kaizen events.
    The preparations for the event include the important subareas of exploring, refining the charter, announcing the event, selecting the team roles and preparing for the event.
    The execution of the event includes the important subareas of kicking off the event, building the team and training the team. Furthermore, you need to follow a structured approach, report out to relevant parties and evaluate the kaizen event.
    After the event, a follow-up is needed; it includes important subareas such as completing the action items and documenting the changes. Management processes also have to be defined in connection with the changes.
    Lastly, a full-scale deployment is needed; it includes completing the full-scale implementation and deployment.
    The implementation phase makes sure the long-term planning and scheduling are adjusted during the exploration before the actual event. It also makes sure that the event is properly conducted with the right, trained team in place. The phase includes a structured approach, along with an evaluation and reporting out to interested parties, to ensure the efficiency of the event. After the event, it follows up with documentation and action items, and makes sure management processes fit before completing the full-scale changes.1

  3. Kaizen Event Sustain

    The third step of the framework is to sustain the changes from the kaizen event. The sustain phase consists of two areas after the kaizen event: 1. Review results and 2. Share results. These two areas in turn consist of subareas that are important for increasing kaizen event performance and thereby for conducting efficient kaizen events. Reviewing the results after the event includes the subareas of measuring, evaluating and adjusting the results.
    After the review, the results should be shared in order to cover the subareas of standardizing the best practices and sharing the lessons learned.
    The sustain phase handles the results after the kaizen event. To sustain the results properly, they have to be measured, evaluated and adjusted. When sharing the results with other parties, it is important to standardize the best practices and share the lessons learned within the organization.1

  4. Synergy Level Assessment

    The fourth step of the framework is to conduct a synergy assessment in order to determine the synergy levels. The synergy assessment phase consists of four assessment areas, which have to be assessed after the changes from the kaizen event have been sustained: 1. Assessing strategic synergy, 2. Assessing operational synergy, 3. Assessing cultural synergy and 4. Assessing commercial synergy. All four areas consist of area-specific criteria, which are assessed by employees; the scores are then evaluated in order to find areas for improvement.
    The strategic synergy assessment consists of two sections. The first is self-awareness, which means understanding one’s own strategic and operational environment. The second is collective awareness, which means understanding one’s collaborative partners’ objectives and expectations, becoming aware of what each party will contribute to the collaboration, and understanding the new value proposition created by the collaboration.
    The operational synergy assessment likewise consists of two sections. The first is self-awareness of internal operational processes; the second is the level of cross-party processes for coordinating business processes beyond individual boundaries.
    The cultural synergy assessment focuses on the organizational and people-related compatibility of each party. The commercial synergy assessment focuses on the clarity and robustness of the commercial arrangements for all parties involved in the collaboration; it makes sure that each party is aware of the others and that agreements concerning risks, intellectual property rights and gain sharing have been made.
    The synergy level assessment phase is about getting the most accurate synergy levels from employees, so that improvements can be made in specific low-scoring areas, which become target areas. The assessment focuses on areas and criteria that can affect the overall synergy level of the company, but it does not tell you how to improve those areas.
    The strategic synergy ensures that participating parties have a common ground and that individual objectives and expectations are understood and are consistent with competencies and contribution of each party, as well as the additional value and competitive advantage to be delivered through the collaboration.
    The operational synergy ensures that each party’s internal management processes and difficulties are understood and resolved, and that customer focused operational systems extend across organizational boundaries.
    The cultural synergy ensures that the mindset, organizational culture and management styles are compatible between partners and there is a sufficient level of trust and commitment in place. The commercial synergy ensures that the short and long term expectations, benefits and risks are understood and appropriate agreements have been put in place with regards to distribution of risks, as well as benefits arising from collaboration.1


The Kanban System

The Kanban system is a method of using cards as visual signals for triggering or controlling the flow of materials or parts during the production process. It synchronizes the work processes within your own organization as well as those that involve your outside suppliers. The Japanese word Kanban, translated as “signboard”, has become a byword for demand scheduling. In the late 1940s and early 1950s, Taiichi Ohno developed Kanban to control production between processes and to implement Just in Time manufacturing at Toyota manufacturing plants, and the Kanban strategy became one of the pillars of Toyota’s successful implementation of JIT manufacturing. These Kanban ideas were not widely adopted, however, until the global economic recession, when people saw that Kanban could minimize work in process (WIP) between processes and reduce the cost of holding inventory. Modern Kanban has developed considerably compared with the original Japanese Kanban and is also used as a software development tool within Lean.

 Definition of Kanban

Kanban has different definitions, reflecting the different stages of its development and its functions. Some are closer to what is used at Toyota; others are more developed as a component of Lean. Two definitions are shown below to form an overall understanding of Kanban.

Definition one

Kanban is defined as demand scheduling. In processes controlled by Kanbans, the operators produce products based on actual usage rather than forecasted usage. The Kanban schedule replaces the traditional weekly or daily production schedule with visual signals and predetermined decision rules that allow the production operators to schedule the line. What Kanban replaces is:

  • The daily scheduling activities necessary to operate the production process;
  • The need for production planners and supervisors to continuously monitor schedule status to determine the next item to run and when to change over. (John M. Gross, 2003)

In this way, it frees the materials planners, schedulers and supervisors to manage exceptions and improve the process. It also places control at the point of value creation and empowers the operators to control the line.

Definition two

Kanban is a lean agile system that can be used to enhance any software development life-cycle including Scrum, XP, Waterfall, PSP/TSP and other methods. Its goal is the efficient delivery of value. (Linden-Reed, 2010)

  • Kanban promotes the lean concept of flow to continuously and predictably deliver value;
  • The work and the workflow are made visible, making activities and issues obvious;
  • Kanban limits WIP to promote quality, focus and finishing.

Compared with the first definition, the second is more abstract: it is defined by the functions Kanban can deliver.

Need for Kanban system

In the Kanban system, a card (called a Kanban) controls the movement of materials and parts between production processes. A kanban moves with the same materials all the way down the production line. When a process needs more parts or materials, it sends the corresponding Kanban to the supplier; the card acts as the work order. A kanban card contains the following data:

  • What to produce
  • How to produce it
  • When to produce it
  • How much to produce
  • How to transport it
  • Where to store it
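The data listed above map naturally onto a simple record. As a sketch, a kanban card can be modeled like this; the field names and example values are my own invention, not a standard layout.

```python
# Sketch of a kanban card as an immutable record.
# Field names and example values are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class KanbanCard:
    part: str            # what to produce
    process: str         # how to produce it
    trigger: str         # when to produce it
    quantity: int        # how much to produce
    transport: str       # how to transport it
    store_location: str  # where to store it

card = KanbanCard(
    part="Bracket A-102",
    process="Stamping, cell 3",
    trigger="When the bin at assembly is empty",
    quantity=50,
    transport="Tugger route B",
    store_location="Rack 12, lane 4",
)
```

The card is frozen (immutable) because in a physical kanban loop the card’s content does not change; only its location in the loop does.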

In an ideal world, demand for products would be constant. Organizations could always operate at maximum efficiency, producing exactly what was needed—no more, no less. But for most companies, the amount of work that must be done varies by the day, week, or month. An organization must have enough capacity so that there are enough people, machines, and materials available to produce what is needed at times of peak demand. But when there is a smaller amount of work to be done, one of two things can happen: 1) Underutilization of people, machines, or materials or 2) overproduction.
With the kanban system, workers are cross-trained to be knowledgeable about various machines and work processes so that they can work on different manufacturing tasks as needed. This prevents underutilization. Kanban systems also prevent overproduction, which is the single largest source of waste in most manufacturing organizations. When you use the kanban system correctly, no overproduction will occur. The kanban system also gives your organization the following positive results:

  • All employees always know their production priorities.
  • Employees’ production directions are based on the current conditions in your workplace. Employees are empowered to perform work when and where it is needed. They do not need to wait to be assigned a work task.
  • Unnecessary paperwork is eliminated.
  • Skill levels among your employees are increased.
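The overproduction claim can be illustrated with a toy pull loop: the upstream process may produce only when a free kanban card authorizes it, so work in process can never exceed the number of cards in circulation. This is a deliberately simplified sketch with invented numbers, not a model of a real line.

```python
# Toy simulation of kanban pull: production happens only against a
# returned card, so WIP is capped at the number of cards (hypothetical).
from collections import deque

NUM_CARDS = 3                       # cards in circulation = WIP limit
free_cards = deque(range(NUM_CARDS))
wip = []                            # containers in process or in the buffer
produced = 0

def consume():
    """Downstream uses a container, freeing its card."""
    if wip:
        free_cards.append(wip.pop(0))

def produce():
    """Upstream may only produce against a free card."""
    global produced
    if free_cards:
        wip.append(free_cards.popleft())
        produced += 1
        return True
    return False                    # no card -> no production, no overproduction

# Upstream tries to produce 10 containers while downstream consumes 5.
for i in range(10):
    produce()
    if i % 2 == 0:
        consume()
```

Even though upstream attempts 10 runs, production is throttled to what the cards authorize, and WIP never exceeds the three cards in circulation.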


  1. Mass customization

    Mass customization was initiated in 1987 by Stan Davis. Davis defines the concept as: “Mass customization is the ability to quickly and efficiently build-to-order customized products.” Practitioners of mass customization share the goal of developing, producing, marketing, and delivering affordable goods and services with enough variety and customization that nearly everyone finds exactly what they want. Mass customization is a new form of competition that has arisen in the global marketplace. As Pine summarizes, there are three main forms for facing this challenge: Japan Inc., flexible specialization communities, and dynamic extended enterprises. Each form was simply trying to find its way to competitive advantage in a world increasingly characterized by a high degree of market turbulence.
    To summarize the new competition: mass customization has positive effects on different functions in a firm, e.g. the production, R&D, marketing, and financial functions. Listed here are the positive effects on the production function.

    Positive Effects

    • Low overhead and bureaucracy
    • Optimum quality
    • Elimination of waste
    • Continual process improvement
    • Low inventory carrying costs
    • High labor productivity
    • Integration of thinking and doing
    • High utilization of and investment in worker skills
    • Sense of community
    • Low total costs
    • High production flexibility
    • Greater variety at lower costs

    The focus of the manufacturing production function of the new competition is on total process efficiency. Process efficiency includes both productive and unproductive time. Unproductive time is the time materials spend in inventory or in other non-operational activities such as handling, moving, inspecting, reworking, recoding, batching, chasing, counting, and repacking. (Michael) According to the list above, mass customization has the potential to change these aspects of production. When facing the new frontier of mass customization, there are appropriate strategies of response. There are three basic ways to do this: incrementally over time, more quickly via business transformation, or by creating a new business firmly planted in the new territory.

    Strategic responses and when each is appropriate:

    Move incrementally
    • Market turbulence is low and not increasing dramatically
    • Competitors are not transforming for mass customization
    • Middle- and lower-level managers and employees want to change but cannot affect the business as a whole

    Transform the business
    • Dramatic increase in market turbulence
    • Competitors are already shifting to mass customization
    • Only if instigated or fully supported by top management

    Create a new business
    • Businesses based on new, flexible technologies
    • New ventures in large corporations
    • Almost any new business
  2. Lean Manufacturing

    Lean manufacturing is a generic process management philosophy derived mostly from the Toyota Production System (TPS). Lean manufacturing or lean production is often simply known as “Lean”. The core idea of lean is to maximize customer value while minimizing waste: creating more value for customers with fewer resources. A lean organization understands customer value and focuses its key processes on continuously increasing it. The ultimate goal is to provide perfect value to the customer through a perfect value creation process that has zero waste.
    To achieve this, lean thinking changes the focus of management from optimizing separate technologies, assets, and vertical departments to optimizing the flow of products through entire value streams that flow horizontally across technologies, assets, and departments to customers.

     Lean principles

    There are five main principles of lean manufacturing: value specification, value stream, flow, pull and perfection.

    1. Specify value

      Value can only be defined by the ultimate customer, in terms of a specific product or service that meets the customer’s needs at a specific price and time; anything that consumes resources without creating such value is waste. The classic categories of waste are:

      • Overproduction
      • Waiting
      • Unnecessary transport
      • Over processing
      • Excess inventory
      • Unnecessary movement
      • Defects
      • Unused employee creativity

      The main non-value-adding waste is overproduction, because it generates other wastes. However, making value flow does not simply mean eliminating wastes one by one; it needs much preparatory work and a holistic vision that guides a strategy towards flow.

    2. Identify the value stream

      The value stream is the set of all the specific actions required to bring a specific product through the three critical management tasks of any business (Womack and Jones, 1996). Identifying the entire value stream for each product is the next step in lean thinking, and it must go beyond the firm to look at the whole. Creating lean enterprises requires a new way of thinking about firm-to-firm relations, some simple principles for regulating behavior between firms, and transparency regarding all the steps taken along the value stream, so that each participant can verify that the other firms are behaving in accord with the agreed principles.

    3. Flow

      The third principle is flow. Once value has been specified, the value stream for a specific product fully mapped, and obvious wastes eliminated, it is time to make the remaining, value-creating steps flow. However, the traditional concepts of “functions” and “departments” often prevent producers from achieving real flow: performing tasks in batches is assumed to be best, and the batch-and-queue mentality common to most manufacturers blinds them to alternatives. Lean thinking redefines the work of functions, departments, and firms so they can make a positive contribution to value creation, and speaks to the real needs of employees at every point along the stream so it is actually in their interest to make value flow. There are three steps to making value flow. The first, once value is defined and the entire value stream is identified, is to focus on the actual object and never let it out of sight from start to finish. The second is to ignore the traditional boundaries of jobs, careers, functions, and firms, forming a lean enterprise that removes all impediments to the continuous flow of the specific product or product family. The third is to rethink specific work practices and tools to eliminate backflows, scrap, and stoppages of all sorts so that the design, ordering, and production of the specific product can proceed continuously.
      There are also some practical techniques to prepare for flow:

      • Level out workloads and pace production by takt time/pitch time;
      • Standardize work and operating procedures;
      • Apply total productive maintenance;
      • Use visual management;
      • Reduce changeover times;
      • Avoid monuments and think small.

    4. Pull

      The subject of pull is the customer order: let the customer pull the product rather than pushing products onto the customer. It means a short-term response to the rate of customer demand without overproduction. There are two levels at which pull operates. On the macro level, the production process should be triggered by customer demand signals, and the trigger point is expected to be pushed further and further upstream. On the micro level, a process responds to pull signals from an internal customer, which may be the next process step in the case of Kanban, or an important stage in the case of Drum/Buffer.

    5. Perfection

      The final principle is perfection. Perfection means producing exactly what customers want, exactly when they want it, without delay, at a competitive price, and with minimum waste. The real benchmark is zero waste, not what competitors do.

     These five powerful ideas in the lean tool kit are what is needed to convert firms and value streams from a meandering morass of muda to fast-flowing value, defined and then pulled by the customer, and they reveal the underlying commitment to pursuing perfection. The techniques themselves and the philosophy are inherently egalitarian and open; transparency in everything is a key principle.

  3. Just-In-Time

    Just-in-time (JIT) was developed by Taiichi Ohno and his fellow workers at Toyota and is one of the pillars of TPS. It means supplying each process with what is needed, when it is needed, and in the quantity it is needed. The main objective of JIT manufacturing is to reduce manufacturing lead times, which is primarily achieved through drastic reductions in work-in-process (WIP). The result is a smooth, uninterrupted flow of small lots of products throughout production. These stock reductions are accompanied by improvements in quality and production great enough to yield unheard-of cost reductions. There are three main kinds of stockholding: incoming material, work-in-process, and finished goods. JIT aims to reduce each of them through a holistic principle, which is illustrated below for each kind of stockholding.

    1. Incoming material

      Incoming material control is closely tied to a firm's purchasing policy and its suppliers. In reality, incoming material is often unreliable or unpredictable, and excessive stocks or stock-outs of incoming material are common. What a firm should do is involve its suppliers in its own manufacturing instead of simply telling them what to do.

    2. Work-in-process

      In the factory, buffer stocks exist everywhere in several forms. WIP has always been a key industrial measure: its total value forms part of the balance sheet, and industrial managers are under intense pressure to keep the figure as low as possible. However, many causes contribute to high WIP. These causes include:

      • Production scheduling
      • Machine capability
      • Operator capability
      • Product mix
      • Product modification
      • Changing product priorities
      • Cross-functioned organization
      • Machine breakdown

      In order to achieve low WIP, JIT provides principles for dealing with the above obstacles. The main principles are as follows:

      1. Level out the workload and pace production.

        JIT techniques work to level production, spreading it evenly over time to create a smooth flow between processes. Varying the mix of products produced on a single line sometimes provides an effective means of producing the desired production mix in a smooth manner.

      2. Pull production.

        With a pull system, Kanban is typically used. To meet JIT objectives, the process relies on signals, or Kanbans, between different points in the process, which tell production when to make the next part. Implemented correctly, JIT can improve a manufacturing organization’s return on investment, quality, and efficiency. Its effective application cannot be independent of the other key components of a lean manufacturing system.

    3. Finished goods

      In an ideal JIT production system operating a pull system, there will be no finished goods in stock. Although the stockholdings have been illustrated separately, JIT should be designed as a whole principle, reducing the stockholdings at a holistic level. To summarize, JIT pulls work forward from one process to the next “just in time”. One benefit of manufacturing JIT is reduced work-in-process inventory, and thus reduced working capital. An even greater benefit is reduced production cycle times, since materials spend less time sitting in queues waiting to be processed. However, the greatest benefit of manufacturing JIT is that it forces a reduction in flow variation, thus contributing to continuous, ongoing improvement.

Selecting the physical signaling for the Kanban

When people think of Kanban, most think of Kanban cards. In fact, several different types of physical Kanban can be applied in a production system, and each company can innovate on its own physical Kanban to suit its unique production system.

  1. Kanban cards

    Kanban cards are essentially pieces of paper that travel with the production item and identify the part number and the quantity in the container. Kanban cards serve as both a transaction and a communication device. The following figure shows a Kanban card used between processes.


    Kanban systems using card signals often follow the routine below:

    • A card is placed with the completed production container;
    • The container with its Kanban card is then moved into a staging area to wait for use;
    • When the container is moved to a production work center for use, the Kanban card is pulled from the container to signal consumption;
    • The Kanban card is then placed in a cardholder, or Kanban post, to await transit back to the production line;
    • When the Kanban card returns to the production line, it is placed in a cardholder that has been set up to provide a visual signal for operation of the line;
    • The Kanban card sits in the cardholder waiting to be attached to a completed production container.

    The Kanban cards illustrated here reflect the concept used in the Toyota production system; an individual company can make any modification suited to its own conditions. Note, however, that this type of Kanban card is more useful on assembly lines than on other types of production line.

  2. Kanban boards

    Kanban boards simply use magnets, plastic chips, colored washers, etc. as signals. The objects represent the items in inventory: backlog and in-process inventory. The board helps to visualize the workflow, limit WIP, and measure lead time. A sample Kanban board is shown below; each firm can develop the column details according to its own production conditions.

    The two outer columns show the product backlog and the finished products, and the columns in between illustrate the sequence of processes. The sticky notes are moved by operators from the backlog toward finished products. To determine what gets produced next, operators simply look at the board and follow its rules.
    Kanban boards work best when two conditions exist in the relationship of inventory storage and the production process:

    • The board can be positioned in the path of the flow of all the material to the customer;
    • The board can be positioned so that the production process can see it and follow the visual signals.
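Such a board can be sketched in a few lines of code. The column names, items, and WIP limit below are illustrative assumptions, not part of any actual system described here:

```python
# Minimal Kanban-board sketch: pulls are allowed only while the
# receiving column is under its WIP limit.
board = {"backlog": ["A", "B", "C"], "in_process": [], "finished": []}
wip_limit = {"in_process": 2}  # assumed limit for illustration

def pull(src, dst):
    """Move the oldest item from src to dst, or return None if dst is at its limit."""
    limit = wip_limit.get(dst)
    if limit is not None and len(board[dst]) >= limit:
        return None                  # visual signal: no capacity, do not pull
    item = board[src].pop(0)
    board[dst].append(item)
    return item

pull("backlog", "in_process")         # "A" moves
pull("backlog", "in_process")         # "B" moves
print(pull("backlog", "in_process"))  # None: WIP limit of 2 reached
```

The point of the sketch is that the board itself encodes the scheduling rule: an operator (here, the `pull` function) never needs external instructions to know whether work may start.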
  3. Two-card Kanban

    Two-card Kanban is typically used for large items where flow racks are not utilized. It is a combination system of the Kanban board and the Kanban card racks. It works like this:

    • When product is produced or received from a vendor, two cards are pulled from a Kanban card rack and filled out: one Kanban card goes with the container; the second Kanban card goes into a special FIFO box.
    • Whenever a container of this product is needed, a material handler goes to the FIFO box and pulls out the bottom card.
    • The material handler then goes to the location written on the card and pulls this product for the production operation.
    • The material handler then takes both cards and places them in the Kanban card racks, which show the scheduling signals for production or record-keeping.

    This system allows pallet size items to flow while managing product rotation. It works especially well when used for floor stacked items.

  4. Look-see

    Look-see is a Kanban signal that relies on direct visual observation. It includes visual signals such as floor markings that show when to replenish the item. The basic rule with a look-see signal is that when the yellow signal shows, it is time to replenish the item. A red, or danger, signal is also integrated into this scheme. Look-see signals greatly aid the implementation of Kanban supermarkets.

Little´s Law

Little's law was first proved by John Little in 1961. In queuing theory, it states:
“The average number of customers in a stable system (over some time interval) is equal to their average arrival rate, multiplied by their average time in the system. Although it looks intuitively reasonable, it’s a quite remarkable result.”
The strength of Little's law is that it makes no assumption about the probability distribution of the arrival rate or the service rate, nor about whether customers are served first-in-first-out or in some other order. The only precondition for Little's law to hold is that it must be applied to a stable, or steady-state, system. Little's law is one of the few things that remains constant and true in the manufacturing field, where it can be expressed as: cycle time is equal to the amount of work in process, divided by the output per time unit. That is, if the total number of units in the work areas and the output per time unit are constant, the cycle time is easily obtained. It is also true that if WIP remains constant and output decreases, cycle time goes up; and if takt is constant, reducing WIP reduces cycle time. If manufacturing can maintain close control over the cycle time of its product, from the input point to the release point, it can tell customers what to expect in terms of delivery. If the process were completely under control, there would be no problem in guaranteeing delivery dates, and customer satisfaction would increase. In reality, manufacturing processes are hard to predict: problems occur everywhere, e.g. operator absence, machine breakdowns, vendor problems. If the input stays the same and the output goes down, WIP will most definitely build up at the bottleneck, and the bottleneck sets the pace until the problems are fixed. Little's law tells how much the output can be raised or lowered. Once a product is launched on the shop floor, it is crucial to do everything to keep it moving; if products are stuck somewhere, it is better to slow or stop launching new products into the production system.
According to Little's law, if a production system is expected to increase its output, the way is not to increase the input when output levels cannot be reached; the best way is to find the bottleneck and increase its output. Little's law also suggests not operating at the edge of capability or accepting orders that challenge the edge of production capacity, since doing so risks prolonging delivery dates. Little's law backs up the flow theory in manufacturing: ideally, with one-piece flow, the output is much easier to predict and the cycle time is reduced to its minimum.
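In manufacturing terms the relationship reads cycle time = WIP ÷ output per time unit, which can be sketched with illustrative numbers (the figures below are assumptions, not from the text):

```python
def cycle_time(wip_units, output_per_day):
    """Little's law for a stable (steady-state) process: average cycle time in days."""
    return wip_units / output_per_day

# 300 units of work-in-process, shipping 100 units per day:
print(cycle_time(300, 100))   # 3.0 days
# Halve the WIP at constant output and the cycle time halves too:
print(cycle_time(150, 100))   # 1.5 days
```

This is the arithmetic behind the claims above: at constant output, cutting WIP is the direct lever on cycle time.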

Determine rules of Kanban

Before developing rules for Kanban implementation, it is essential to make materials and physical Kanbans move in a continuous flow. This means determining how material and physical Kanbans move through the production process, and how the move Kanbans go back to the production process when they are released.


The rules for developing the Kanban are its driving force. The rules are the guidance to allow the operation unit staffs to control the production schedule. The rules should include:

  • The part numbers covered by the Kanban;
  • How the design works, i.e. how the cards, magnets, etc. move;
  • The meaning of the scheduling signals and how to interpret them;
  • Any scheduling rules of thumb;
  • The preferred production sequence;
  • Whom to go to, and what the “helpers” should do when contacted;
  • Any special quality or documentation requirements.

When creating rules, keep one thing in mind at all times: the rules are there to communicate how to run the Kanban and to allow the process operators to schedule the line. The only way the production operators can take over scheduling the line is if the rules provide clear direction and scheduling guidance. When drafting the scheduling rules, make them easy and unambiguous to follow. Think through possible misconceptions and correct them so they do not occur. Spell out what signals a normal changeover and what signals an emergency changeover. Seek feedback to make sure that everyone is clear about how to interpret the signals.
Additionally, the scheduling rules should contain clear-cut decision rules. The decision rules should help the production operators make consistent production scheduling decisions based on the stated priorities. The rules should provide rate information, if applicable, to allow the operators to develop production expectations. The decision rules should contain instructions on when, and whom, to call for help. The rules should also include all the “everyone knows this” items that everyone seems to forget from time to time.

Create a visual management plan

The visual management plan explains the Kanban to everyone and visually instructs everyone in how the Kanban operates. The basic goal of the visual aids should be to answer the questions that pop up on a daily basis: Where do I get this from? Where do I move that? Which color buggy contains which part? Is there a color scheme? Do we have any more of this part?
To make the visual aids colorful and easy to read, some useful tips are:

  • Keep the colors consistent with existing color schemes;
  • Avoid red, which is typically associated with safety or quality;
  • Avoid yellow, which is typically associated with safety;
  • Use large print for hanging signs and wall signs;
  • Avoid excessive words on signs; people don’t read signs, they glance at them.

After the above three steps (selecting the physical signaling for the Kanban, determining the rules of the Kanban, and creating a visual management plan), the Kanban design process is finished.

Principles for Kanban design/ implementation

This part summarizes the above Kanban implementation details and provides general guidance. Here is a minimal way to implement Kanban:

  1. Preparation stage:

    1. Review entire workflow. Look at the end-to-end process from initial concept forward through release. Analyze for any excessive time pockets. Remember to look at handoff times.
    2. Address bottlenecks. If bottlenecks are found, including upstream of the engineering phase, work to break them down and deliver their value in small increments.
    3. Switch from iterations to an SLA (Service Level Agreement). Forget about iteration time-boxes, because they encourage excess batching of planning and of work. Instead, decide on the SLA time-box for each feature/epic. The clock starts when active planning on each feature starts and ends when it is released.
    4. Classify by Cost of Delay. Classify each feature by type, e.g.: is it a fixed date or a rush job? Then have all stakeholders in a meeting use this classification to help prioritize a limited queue that the team can pull from. Update this queue weekly or however often you want but allow the team to continue on features they start.
    5. Set WIP Limits. With the team and the managers together, decide on a WIP limit for any workflow phases you want to limit (minimum: the In Progress phase). This is a limit of the features that can be in progress at a time. They only pull a new feature when a slot opens by finishing a feature.
    6. Make work visible. Have a visible task/story board where the team can see it. On the board, show the workflow phases and the agreed WIP limits.
    7. Groom the queue. The team should periodically scope the features waiting in the limited queue to make sure they will fit in the agreed SLA time-box. If not, they are thrown back to the stakeholders to break down further.
  2. Implementation:

    The per-feature SLA clock starts now.

    1. Pull the next work item. When capacity is available, the team chooses a feature to pull. They will consider the Cost of Delay classification plus resource considerations when deciding which one to pull.
    2. Decompose the work items just in time. The team breaks the feature/epic into stories and/or tasks when it is pulled.
    3. Watch for flow. Everyone obeys the WIP limits. Note bottlenecks that occur. Adjust limits or other elements as needed till you achieve a smooth delivery flow.
    4. Inspect and adapt. Have daily stand-ups, periodic demos, and retrospectives (or you can deal with issues as they arise and get rid of retrospectives).
    5. Go live! Release a feature as soon as it is ready.
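The per-feature SLA clock described above can be sketched as a simple date check. The 14-day time-box below is an assumed figure for illustration, not one taken from the text:

```python
from datetime import date, timedelta

SLA_DAYS = 14  # agreed SLA time-box per feature (assumed)

def within_sla(planning_start, released):
    """The clock starts when active planning begins and stops at release."""
    return (released - planning_start) <= timedelta(days=SLA_DAYS)

print(within_sla(date(2024, 3, 1), date(2024, 3, 10)))   # True: 9 days elapsed
print(within_sla(date(2024, 3, 1), date(2024, 3, 20)))   # False: 19 days elapsed
```

Features that cannot fit the time-box are exactly the ones the grooming step throws back to stakeholders for further breakdown.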

Functions of Kanbans

The key objective of a Kanban system is to deliver the material just-in-time to the manufacturing workstations, and to pass information to the preceding stage regarding what and how much to produce.
A Kanban fulfills the following functions:

  1. Visibility function
    The information and material flows are combined as Kanbans move with their parts (work-in-process, WIP).
  2. Production function
    The Kanban detached from the succeeding stage fulfills a production control function, indicating the time, quantity, and part types to be produced.
  3. Inventory function
    The number of Kanbans actually measures the amount of inventory. Hence, controlling the number of Kanbans is equivalent to controlling the amount of inventory; i.e. increasing (decreasing) the number of Kanbans corresponds to increasing (decreasing) the amount of inventory. Controlling the number of Kanbans is much simpler than controlling the amount of inventory itself.

Auxiliary equipment

  1. Kanban box: to collect Kanbans after they are withdrawn.
  2. Dispatching board: in which Kanbans from the succeeding stage are placed in order to display the production schedule.
  3. Kanban management account: an account to manage Kanbans.
  4. Supply management account: an account to manage the supply of raw materials.

Classifications of Kanbans

According to their functions, Kanbans are classified into:

  1. Primary Kanban: It travels from one stage to another among main manufacturing cells or production preparation areas. Primary Kanbans are of two kinds: the “withdrawal Kanban” (conveyor Kanban), which is carried when going from one stage to the preceding stage, and the “production Kanban”, which is used to order production of the portion withdrawn by the succeeding stage. These two kinds of Kanbans are always attached to the containers holding parts.
  2. Supply Kanban: It travels from a warehouse or storage facility to a manufacturing facility.
  3. Procurement Kanban: It travels from outside of a company to the receiving area.
  4. Subcontract Kanban: It travels between subcontracting units.
  5. Auxiliary Kanban: It may take the form of an express Kanban, emergency Kanban, or a Kanban for a specific application.

Concepts to be used in Kanban

Before you can put the kanban system in place, you must first make your production process as efficient as possible. Two practices—production smoothing and load balancing—are helpful for doing this.
Production smoothing refers to synchronizing the production of your company’s different products to match your customer demand. Once you successfully accomplish production smoothing, daily schedules for your production processes are arranged to ensure production of the required quantity of materials at the required time; your employees and equipment are all organized toward that end as well. To do production smoothing, you first break down your required monthly production output into daily units:

  daily required output = monthly required output ÷ operating days per month

Then you compare this daily volume with your operating hours to calculate the takt time. Calculating your takt time lets you determine how much to vary the pace of the work you must do:

  takt time = daily operating time ÷ daily required output
Then you look at your capacity, which is the ability of a machine and operator to complete the work required, and determine the number of employees required to complete your production processes. Don’t calculate your takt time based on the number of employees already working on your production line; that can result in too much or too little capacity. Instead, calculate your takt time based on the number of units required per day, and then determine the number of employees needed to staff the line to produce at that rate.

Load is the volume of work that your organization needs to do, and load balancing is finding a balance between the load and your capacity. Timing and volume are critical to achieving load balancing. Although kanban systems are a very effective way to fine-tune your production levels, they work best only after you have implemented value stream mapping and one-piece flow. This is because kanban systems minimize your stocking levels and use visual management, error proofing, and total productive maintenance to ensure that quality parts and materials are delivered when a kanban triggers their flow through the production process. Perform maintenance and process-improvement activities during times of lower demand; this way, during peak demand times, every employee can be actively engaged on the production line.

The kanban system fine-tunes your production process, but it cannot make your organization able to respond quickly to sudden large changes in demand. You might not be able to rally sufficient resources to produce a very big order in time, or to find enough alternate activities to keep employees busy when there is a sudden large drop in orders.
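The production-smoothing arithmetic can be worked through with assumed demand figures (all numbers here are hypothetical, chosen only to illustrate the two formulas):

```python
# Production smoothing arithmetic (hypothetical figures).
monthly_demand = 4400            # units required per month (assumed)
operating_days = 20              # working days in the month (assumed)
daily_operating_minutes = 440    # available minutes per day (assumed)

# Break the required monthly output down into daily units:
daily_output = monthly_demand / operating_days        # 220 units/day

# Takt time: available operating time divided by required daily output.
takt_time = daily_operating_minutes / daily_output    # 2.0 minutes/unit

print(daily_output, takt_time)
```

Staffing the line then follows from the takt time and the work content per unit, not from the current headcount, which is exactly the warning in the paragraph above.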

General guidelines for using the kanban system

When using the kanban system, it’s important to follow the six general guidelines listed below.

  1. An upstream process never sends defective parts to a downstream process.
    1. Operators at a process that produces a defective product must immediately discover it.
    2. The problem(s) that created the defective product must be resolved immediately.
    3. Machines must stop automatically when a defect occurs.
    4. Employees must stop their work operation when a defect occurs.
    5. All defective products mixed with good products must be separated promptly.
    6. Suppliers who ship defective parts to your organization must send the same number of replacement parts in their next shipment. This ensures that the exact number of good parts required is available for production operations.
  2. A downstream process withdraws only what it needs from an upstream process.
    1. No withdrawal of materials from a process is allowed without a kanban.
    2. Withdraw the same number of items as kanbans (unless a kanban indicates item quantities of more than one).
    3. A kanban must accompany each item.
  3. An upstream process produces the exact quantity of products that will be withdrawn by the next process downstream.

    1. Inventory must be restricted to an absolute minimum. This is called just-in-time inventory.
    2. Do not produce more items than the number of kanbans (unless a kanban indicates item quantities of more than one).
    3. Produce units in the order in which their production kanbans are received.
  4. Synchronize your production processes by regularly maintaining your equipment and reassigning workers as needed.

  5. Remember that the kanban system is a way of fine-tuning your production amounts.

    1. The kanban system cannot easily respond to major changes in production requirements. Your company also needs to have proactive sales and operations-planning procedures in place.
    2. The principles of load balancing must be followed.
    3. Employees receive work instructions for production and transportation of materials via kanbans only. No other production information is sent to employees.
  6. Work to stabilize and improve your production processes. Variations and impractical work methods often produce defective materials. Make sure you keep all your work processes in control, and keep variation levels within the requirements of your customers.

General description of Kanban operations

There are two basic types of kanban cards: production kanbans and withdrawal kanbans. A production kanban describes how many of what item a particular operation needs to produce; once employees have a production kanban in hand, their operation can begin producing the item. A withdrawal kanban is used to pull items from a preceding operation or a marketplace, an area where materials are stocked in a supermarket system. The figure below shows the kanban system in use.

For production stage i, when parts are processed and demand from its receiving stage i + 1 occurs, the production Kanban is removed from a container and placed on the dispatching board at stage i. The withdrawal Kanban from stage i + 1 then replaces the production Kanban on the container, and the container, along with the withdrawal Kanban, is sent to stage i + 1 for processing. Meanwhile, at stage i, production takes place when a production Kanban and a container with a withdrawal Kanban are available; the withdrawal Kanban is then replaced by the production Kanban and sent back to stage i - 1 to initiate production activity there. This forms a cyclic production chain.

The Kanban pulls (withdraws) parts instead of pushing parts from one stage to another to meet the demand at each stage. The Kanban controls the movement of product, and the number of Kanbans limits the flow of products. If no withdrawal is requested by the succeeding stage, the preceding stage will not produce at all, and hence no excess items are manufactured. Therefore, by controlling the number of Kanbans (containers) circulating in a JIT system, non-stock production (NSP) may be achieved.


Withdrawal and Production Kanban Steps

  1. An operator from the downstream process brings withdrawal kanbans to the upstream process’s marketplace. Each pallet of materials has a kanban attached to it.
  2. When the operator of the downstream process withdraws the requested items from the marketplace, the production kanban is detached from the pallets of materials and is placed in the kanban receiving bin.
  3. For each production kanban that is detached from a pallet of materials, a withdrawal kanban is attached in its place. The two kanbans are then compared for consistency to prevent production errors.
  4. When work begins at the downstream process, the withdrawal kanban on the pallet of requested materials is put into the withdrawal kanban bin.
  5. At the upstream process, the production kanban is collected from the kanban receiving bin. It is then placed in the production kanban bin in the same order in which it was detached at the marketplace.
  6. Items are produced in the same order that their production kanbans arrive in the production bin.
  7. The actual item and its kanban must move together when processed.
  8. When a work process completes an item, it and the production kanban are placed together in the marketplace so that an operator from the next downstream operation can withdraw them. A kanban card should be attached to the actual item it goes with so that it can always be accurately recognized.
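The card exchange at the marketplace can be modeled as a rough sketch: a pallet's production kanban is detached (authorizing replenishment) and a withdrawal kanban takes its place. All names and structures below are assumptions for illustration, not an actual implementation:

```python
from collections import deque

marketplace = deque()      # pallets stocked upstream, production kanbans attached
receiving_bin = deque()    # detached production kanbans, i.e. replenishment orders

def stock(item):
    """Upstream places a finished pallet in the marketplace with its production kanban."""
    marketplace.append({"item": item, "kanban": "production"})

def withdraw():
    """Downstream pulls a pallet: detach the production kanban, attach a withdrawal kanban."""
    pallet = marketplace.popleft()
    receiving_bin.append(pallet["item"])   # detached card authorizes replenishment
    pallet["kanban"] = "withdrawal"
    return pallet

stock("gear housing")
pallet = withdraw()
print(pallet["kanban"], list(receiving_bin))
```

Note how production upstream is triggered only by the card appearing in the receiving bin, never by a forecast, which is the pull behavior the steps above describe.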

Kanban control

Toyota considered its system of external and internal processes as connected by invisible conveyor lines (Kanbans). The information flow (Kanban flow) acts like an invisible conveyor running through the entire production system, connecting all the departments together.


1. The production line.

Due to different types of material handling systems, there are three types of control:

(1) Single Kanban system (using production Kanbans)


The single Kanban (single-card) system uses production Kanbans only, blocking material handling based on part type. Production is blocked at each stage based on the total queue size. In a single-card system, the size of a station's output buffer and the part mix may vary; multiple containers hold the batches to be produced, as long as the total number of full containers in the output buffer does not exceed the output buffer capacity. The following conditions are essential for proper functioning of the single Kanban system:

  • small distance between any two subsequent stages;
  • fast turnover of Kanbans;
  • low WIP;
  • small buffer space and fast turnover of WIP; and
  • synchronization between the production rate and speed of material handling.

(2) Dual Kanban system (using two Kanbans simultaneously)


The dual Kanban system (two-card system) uses production and withdrawal Kanbans to implement both station and material-handling blocking by part type. There is a buffer for WIP while the finished parts are transported from a preceding stage to its succeeding stage, and the withdrawal Kanbans are kept in the buffer area. This system is appropriate for manufacturers who are not prepared to adopt strict control rules for the buffer inventory. The following conditions are essential for the dual Kanban system:

  • moderate distance between two stages;
  • fast turnover of Kanbans;
  • some WIP in a buffer is needed;
  • external buffer to the production system; and
  • synchronization between the production rate and speed of material handling.

(3) Semi-dual Kanban system (changing production Kanbans and withdrawal Kanbans at intermediate stages)


The semi-dual Kanban system has the following characteristics:

  • large distance between two stages;
  • slow turnover of Kanbans;
  • large WIP is needed between subsequent stages;
  • slow turnover of WIP; and
  • synchronization between the production rate and speed of material handling is not necessary.

2. The receiving area.

Based on different types of receiving, three types of Kanban operations are performed:
(1) receiving from a preceding stage in the same facility;
(2) receiving from storage; and
(3) receiving from a vendor.

The optimal number of Kanbans.


The number of Kanbans is determined based on the amount of inventory. It is important to have an accurate number of Kanbans so that the WIP is minimized and simultaneously the out-of-stock situation is avoided.
In the Toyota Kanban system:
number of Kanbans = (maximum daily production quantity) × (production waiting time + production processing time + withdrawal lead time + safety factor) ÷ standard number of parts (SNP)
In the figure above, the cycle time of Kanbans for parts {A, B, C} =
0.1 + 0.5 + 0.5 + 0.2 + 0.1 + 0.1 = 1.5 (days).
The number of Kanbans for parts {A, B, C} =
1000 × 1.5 ÷ 100 = 15 Kanbans, where Qmax = 1000 and SNP = 100.


  1. The maximum daily production quantity is the maximum output based on the daily production plan. Note that the production quantity should not vary too much on a daily basis, which is one of
    the necessary conditions to implement the Kanban production concept.
  2. Production waiting time is the idle interval between two production commands (for example, 0.5 day in the figure above).
  3. Production processing time is the interval between receiving a production command and completing the lot.
  4. Withdrawal lead time is the interval between withdrawing a Kanban from the preceding stage and issuing a production command.
  5. The safety factor is based on time unit, e.g. day. It allows avoidance of an interruption of the production line due to unexpected conditions.
  6. SNP represents the standard number of parts. Each Kanban indicates the standard number of parts. The number of Kanbans between adjacent stations impacts the inventory level between these two stations. Several methods have been developed for determining the optimal number of Kanbans.
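The formula above can be written as a short function. The worked example is reproduced below; note that the six intervals in the figure are folded here into the four formula terms (an assumption about how they group), and rounding up is assumed because a fractional Kanban cannot circulate.

```python
import math

def number_of_kanbans(max_daily_qty, waiting, processing, withdrawal_lead, safety, snp):
    """Toyota formula: Kanbans = Qmax x (cycle time) / SNP, rounded up."""
    cycle_time = waiting + processing + withdrawal_lead + safety  # in days
    return math.ceil(max_daily_qty * cycle_time / snp)

# Worked example from the text: cycle time = 1.5 days, Qmax = 1000, SNP = 100.
print(number_of_kanbans(1000, waiting=0.5, processing=0.5,
                        withdrawal_lead=0.3, safety=0.2, snp=100))  # → 15
```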

Adjustment of the Kanban system

(1) Insertion maintenance action

Insertion maintenance takes place when the number of Kanbans used in a current planning period is larger than the number of Kanbans used in the previous period. Additional Kanbans are introduced to the system immediately after withdrawing the production Kanbans and placing them on the dispatching board.

(2) Removal maintenance action

Removal maintenance takes place when the number of Kanbans used in the current planning period is smaller than the number of Kanbans used in the previous planning period. The surplus Kanbans are removed immediately after the production Kanbans are withdrawn: an equivalent number of Kanbans is taken off the dispatching board.
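Both maintenance actions reduce to comparing the Kanban counts of two successive planning periods; a minimal sketch (function and label names are mine):

```python
def kanban_adjustment(previous_count, current_count):
    """Return the maintenance action and how many Kanbans to add or remove."""
    if current_count > previous_count:
        return ("insert", current_count - previous_count)  # insertion maintenance
    if current_count < previous_count:
        return ("remove", previous_count - current_count)  # removal maintenance
    return ("none", 0)

print(kanban_adjustment(12, 15))  # → ('insert', 3)
print(kanban_adjustment(15, 12))  # → ('remove', 3)
```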

Supermarket system

Lean enterprises use a supermarket system to achieve just-in-time inventory. The concept of a supermarket system is similar to that of shopping at a supermarket. When you go to a supermarket, you do the following:

  • Select the type and quantity of food you need, taking into account the number of people in your family, the space you have available to store goods, and the number of days the supply must last.
  • Put the food items into a shopping cart and pay for them.

When you use a supermarket system for your organization’s manufacturing operations, the following steps occur:

  • The process that manufactures parts keeps them in a marketplace.
  • When the marketplace is full, production stops.
  • A downstream process requests parts from an upstream process only when it needs them.
  • The responsibility for transporting materials from one process to another belongs to the downstream process that uses them.
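The four steps above amount to a capacity-bounded pull loop. A minimal sketch, assuming a single part type and a made-up capacity of three containers:

```python
class Marketplace:
    """A bounded storage area: the upstream process stops when it is full,
    and only the downstream process withdraws parts."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.parts = 0

    def upstream_produce(self):
        if self.parts < self.capacity:   # production stops when the marketplace is full
            self.parts += 1
            return True
        return False

    def downstream_withdraw(self, qty):
        taken = min(qty, self.parts)     # downstream pulls only what it needs
        self.parts -= taken
        return taken

m = Marketplace(capacity=3)
while m.upstream_produce():
    pass
print(m.parts)                   # → 3: marketplace full, upstream blocked
print(m.downstream_withdraw(2))  # → 2: withdrawal frees space for production
```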

A storage area for parts is called a marketplace because it is the place where downstream processes go to get the parts and materials they need. For a supermarket system to work as efficiently as possible, the following must occur:

  • No defective items are sent from a marketplace to downstream processes.
  • Marketplaces are assigned the smallest space possible to fit the materials they must hold. A marketplace is clearly defined by a line or divider, and no materials are stored beyond its boundaries.
  • A minimum number of items is placed in each marketplace.
  • Marketplaces are maintained with visual management techniques.

The kanban system for an automated assembly line

To implement the kanban system in an assembly line where no human operators oversee the production equipment, you must make some technical modifications. Automatic limit switches must be installed on your equipment to keep the machines from producing too many units. In addition, all your production processes should be interconnected so that they have the required quantity of standard stock on hand. A fully automatic kanban system is known as an electric kanban.

The kanban system for producing custom orders

A kanban system is an effective way of controlling the production of specialized parts or products that your organization makes. Using the kanban system for special parts or products ensures the following:

  • Your starting and transporting procedures are conducted in the right sequence and on a constant basis.
  • You can keep your stocking levels constant. This enables you to reduce your overall stocking levels.

Because companies do not ordinarily produce specialized parts on a regular basis, it’s important for employees to share information about their production in a timely manner. Information delays can result in increases or decreases in the number of units you have on hand. Circulating your kanbans more frequently enables you to produce specialized parts in smaller batches, more often.



Error Proofing

Error proofing is a structured approach to ensuring quality all the way through your work processes. This approach enables you to improve your production or business processes to prevent specific errors—and, thus, defects—from occurring. Error-proofing methods enable you to discover sources of errors through fact-based problem solving. The focus of error proofing is not on identifying and counting defects. Rather, it is on the elimination of their cause: one or more errors that occur somewhere in the production process. The distinction between an error and a defect is as follows:

  • An error is any deviation from a specified manufacturing or business process. Errors cause defects in products or services.
  • A defect is a part, product, or service that does not conform to specifications or a customer’s expectations. Defects are caused by errors.

The goal of error proofing is to create an error-free production environment. It prevents defects by eliminating their root cause, which is the best way to produce high-quality products and services.


Shigeo Shingo is widely associated with the Japanese concept of poka-yoke (pronounced POH-kah YOH-keh), which means to mistake-proof the process. Shingo recognized that a human error does not necessarily have to create a resulting defect. The key to poka-yoke is to provide some intervention device or procedure that catches the mistake before it is translated into nonconforming product. Shingo lists the following characteristics of poka-yoke devices:
– They permit 100% inspection
– They avoid sampling for monitoring and control
– They are inexpensive
Poka-yoke devices can be combined with other inspection systems to obtain near-zero-defect conditions.



Error proofing in Lean organization

For your organization to be competitive in the marketplace, you must deliver high-quality products and services that exceed your customers’ expectations. You cannot afford to produce defective products or services.

A lean enterprise strives for quality at the source. This means that any defects that occur during one operation in a manufacturing or business process should never be passed on to the next operation. This ensures that your customers will receive only defect-free products or services.

In a “fat” system, any defects that are found can simply be discarded while operations continue. These defects are later counted, and if their numbers are high enough, root-cause analysis is done to prevent their recurrence. But in a lean enterprise, which concentrates on producing smaller batch sizes and producing to order versus adding to inventory, a single defect can significantly impact performance levels. When a defect occurs in a lean enterprise, operations must stop while immediate action is taken to resolve the situation. Obviously, such pauses in operations can be costly if defects occur often. Therefore, it is important to prevent defects before they can occur.

Your organization can achieve zero errors by understanding and implementing the four elements of error proofing. These are as follows:

  1. General inspection.
  2. 100% inspection.
  3. Error-proofing devices.
  4. Immediate feedback.
  1. General inspection

    The first, and most important, element of error proofing is inspection. There are three types of inspections that organizations commonly use.

    1. Source inspections. Source inspections detect errors in a manufacturing process before a defect in the final part or product occurs. The goal of source inspections is to prevent the occurrence of defects by preventing the occurrence of errors. In addition to catching errors, source inspections provide feedback to employees before further processing takes place. Source inspections are often the most challenging element of error proofing to design and implement.
    2. Judgment inspections. Often referred to as end-of-the-line inspections, final inspections, or dock audits, these are inspections during which a quality inspector or operator compares a final product or part with a standard. If the product or part does not conform, it is rejected. This inspection method has two drawbacks. First, it might not prevent all defects from being shipped to customers. Second, it increases the delay between the time an error occurs and the time a resulting defect is discovered. This allows the production process to continue to make defective products and makes root-cause analysis difficult. If you rely on judgment inspections, it’s important to relay inspection results to all the earlier steps in your production process. This way, information about a defect is communicated to the point in the process at which the problem originated.
    3. Informative inspections. Informative inspections provide timely information about a defect so that root-cause analysis can be done and the production process can be adjusted before significant numbers of defects are created. Typically, these inspections are done close enough to the time of the occurrence of the defect so that action can be taken to prevent further defects from occurring.
      There are two types of informative inspections. They are as follows:

      • Successive inspections. These inspections are performed after one operation in the production process is completed, by the employees who perform the next operation in the process. Defects can be reported as soon as they are detected (which is preferable) or simply tracked and reported later; immediate reporting is always better.
      • Self-inspections. Operators perform self-inspections at their own workstations. If an operator finds a defect in a product or part, he/she sets it aside and takes action to ensure that other defective products or parts are not passed on to the next operation. The root cause of the defect is then determined and corrected. Often this involves putting error-proofing measures and devices in place to prevent the problem from recurring. Industrial engineering studies have shown that human visual inspection is only about 85% effective. Similar inaccuracies occur when humans directly measure physical properties, such as pressure, temperature, time, and distance; use electronic or mechanical inspection devices to achieve better accuracy. Operator self-inspection is the second most effective type of inspection. It is much more effective and timely than successive inspection, although the number of errors detected depends on the diligence of the operator and the difficulty of detecting the defect. Wherever practical, empower operators to stop the production line whenever a defect is detected. This creates a sense of urgency that focuses employees’ energy on preventing the defect’s recurrence. It also creates the need for effective source inspections and self-inspections.
  2. 100% inspection

    The second element of error proofing is 100% inspection, the most effective type of inspection. During these inspections, a comparison of actual parts or products to standards is done 100% of the time at the potential source of an error. The goal is to achieve 100% real-time inspection of the potential process errors that lead to defects. It is often physically impossible and too time-consuming to conduct 100% inspection of all products or parts for defects. To help you achieve zero defects, use low-cost error-proofing devices to perform 100% inspection of known sources of error. When an error is found, you should halt the process or alert an operator before a defect can be produced.

    Zero defects is an achievable goal! Many organizations have attained this level of error proofing. One of the largest barriers to achieving it is the belief that it can’t be done. By changing this belief among your employees, you can make zero defects a reality in your organization.

    Statistical process control (SPC) is the use of mathematics and statistical measurements to solve your organization’s problems and build quality into your products and services. When used to monitor product characteristics, SPC is an effective technique for diagnosing process-performance problems and gathering information for improving your production process. But because SPC relies on product sampling to provide both product and process characteristics, it can detect only those errors that occur in the sample that you analyze. It gives a reliable estimate of the number of total defects that are occurring, but it cannot prevent defects from happening, nor does it identify all the defective products that exist before they reach your customers.

  3. Error-proofing devices

    The third element of error proofing is the use of error proofing devices: physical devices that enhance or substitute for the human senses and improve both the cost and reliability of your organization’s inspection activities. You can use mechanical, electrical, pneumatic, or hydraulic devices to sense, signal, or prevent existing or potential error conditions and thus achieve 100% inspection of errors in a cost-effective manner. Common error-proofing devices include the following:

    • Guide pins of different sizes that physically capture or limit the movement of parts, tooling, or equipment during the production process.
    • Limit switches, physical-contact sensors that show the presence and/or absence of products and machine components and their proper position.
    • Counters, devices used to count the number of components, production of parts, and availability of components.
    • Alarms that an operator activates when he/she detects an error.
    • Checklists, which are written or graphical reminders of tasks, materials, events, and so on.

    Such industrial sensing devices are the most versatile error-proofing tools available for work processes. Once such a device detects an unacceptable condition, it either warns the operator of the condition or automatically takes control of the function of the equipment, causing it to stop or correct itself. These warning and control steps are known as regulatory functions. Sensing devices can detect object characteristics by using both contact and non-contact methods. Contact sensors include micro-switches and limit switches; non-contact sensors include transmitting and reflecting photoelectric switches. Setting functions describe the specific attributes that sensing devices need to inspect. All four of the setting functions listed below are effective error-detection methods:

    1. Contact methods involve inspecting for physical characteristics of an object, such as size, shape, or color, to determine if any abnormalities exist.
      Example: A sensor receives a reflective signal (sparks) only when the flint wheel is installed correctly.
    2. Fixed-value setting functions inspect for a specific number of items, events, and so on, to determine if any abnormalities exist. This technique is often used to ensure that the right quantity of parts has been used or the correct number of activities has been performed.
      Example: All materials must be used to assemble a case, including eight screws. A counter on the drill keeps track of the number of screws used. Another method is to package screws in groups of eight.
    3. Motion-step setting functions inspect the sequence of actions to determine if they are done out of order.
      Example: Materials are loaded into a hopper in a predetermined sequence. If the scale does not indicate the correct weight for each incremental addition, a warning light comes on.
    4. Information-setting functions check the accuracy of information and its movement over time and distance to determine if any gaps or errors exist. Here are some tips for using information-setting functions:
      • To capture information that will be needed later, use work logs, schedules, and action lists.
      • To distribute information accurately across distances, you can use e-mail, bar-coding systems, radio frequency devices, voice messaging systems, and integrated information systems, such as enterprise resource planning (ERP).

      Example: Inventory placed in a temporary storage location must be accurately entered into the storeroom system for later retrieval during the picking operation. Bar-coding is used to identify part numbers and bin locations. This data is transferred directly from the bar-code reader to the storeroom system. Customers access the storeroom system through the internet.
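Two of the setting functions above, fixed-value and motion-step, reduce to simple comparisons; a sketch with illustrative values (eight screws, a made-up loading sequence):

```python
def fixed_value_ok(count, required=8):
    """Fixed-value check: flag an error unless exactly the required
    number of items (e.g. screws) was used."""
    return count == required

def motion_step_ok(actions, required_sequence):
    """Motion-step check: the operations must occur in the prescribed order."""
    return list(actions) == list(required_sequence)

print(fixed_value_ok(8))   # → True
print(fixed_value_ok(7))   # → False: a screw is missing
print(motion_step_ok(["load", "weigh", "mix"], ["load", "weigh", "mix"]))  # → True
print(motion_step_ok(["weigh", "load", "mix"], ["load", "weigh", "mix"]))  # → False
```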

  4. Immediate feedback

    The fourth element of error proofing is immediate feedback. Because time is of the essence in lean operations, giving immediate feedback to employees who can resolve errors before defects occur is vital to success. The ideal response to an error is to stop production and eliminate the source of the error, but this is not always possible, especially in continuous, batch, or flow operations. You should determine the most cost-effective scenario for stopping production in your work process when an error is detected. It is often better to use a sensor or other error-proofing device to improve feedback time rather than relying on human intervention. Methods for providing immediate feedback that use sensing devices are called regulatory functions. When a sensing device detects an error, it either warns an operator of the condition or makes adjustments to correct the error. There are two types of regulatory functions.

    1. The warning method: It does not stop operations but provides various forms of feedback for the operator to act upon. Common feedback methods include flashing lights or unusual sounds designed to capture an operator’s attention.
      Example: A clogged meter sets off a warning light on a control panel. However, the operator can still run the mixer and produce bad powder.
    2. The control method: This method is preferred for responding to error conditions, especially where safety is a concern. However, it can also be a more frustrating method for the operator if a machine continually shuts itself down.
      Example: A mixer will not operate until the water meter is repaired. The preventive maintenance program should have “meter visual inspections” on its schedule, and spare nozzles should be made available.

    Warning methods are less effective than control methods because they rely on the operator’s ability to recognize and correct the situation. If the operator does not notice or react to the error quickly enough, defective parts or products will still be produced. However, warning methods are preferred over control methods when the automatic shutdown of a line or piece of equipment is very expensive.

Some common sources of errors

Common sources of error include humans, methods, measurements, materials, machines, and environmental conditions. These are examined in detail below. Any one of these factors alone, or any combination of them, might be enough to cause errors, which can then lead to defects.

  1. Humans.

    Unfortunately, human error is an unavoidable reality. The reasons are many.

    • Lack of knowledge, skills, or ability. This happens when employees have not received proper training to perform a task and their skill or knowledge level is not verified.
    • Mental errors. These include slips and mistakes. Slips are subconscious actions. They usually occur when an experienced employee forgets to perform a task. Mistakes are conscious actions. They occur when an employee decides to perform a task in a way that results in an error.
    • Sensory overload. A person’s ability to perceive, recognize, and respond to stimuli is dramatically affected by the sharpness of the five senses. When an employee’s senses are bombarded by too many stimuli at once, sensory overload results, and his/her senses are dulled. This increases the chance for error.
    • Mechanical process errors. Some tasks are physically difficult to do and are thus prone to error. They can result in repetitive-strain injuries and physical exhaustion, which are both known to cause errors.
    • Distractions. There are two types of distractions: internal and external. External distractions include high-traffic areas, loud conversations, and ringing phones. Emotional stress and daydreaming are examples of internal distractions. Both types can lead to errors.
    • Loss of memory. Many work tasks require employees to recall information that can be forgotten. In addition, aging, drug or alcohol use, and fatigue can all cause memory loss and lead to errors.
    • Loss of emotional control. Anger, sorrow, jealousy, and fear often work as emotional blinders, hampering employees’ ability to work effectively
  2. Measurements.

    Measurements must be accurate, repeatable, and reproducible if they are to successfully locate a problem. Unfortunately, measurement devices and methods are just as prone to error as the processes and products that they measure. Inspection measurement practices, measurement graphs and reports, and measurement definitions are all potential sources of misinterpretation and disagreement. For instance, a measurement scale that is out of calibration can cause errors. Don’t be surprised if a root-cause analysis points to measurement as the source of an error. An accurate measurement is the product of many factors, including humans, machines, and methods.

  3. Methods.

    Industry experts believe that nearly 85% of the errors that occur in a work process are caused by the tasks and technology involved in the process. The sources of error in a work process are as follows:

    • Process steps. These are the physical or mental steps that convert raw materials into products, parts, or services.
    • Transportation. This refers to the movement of materials, information, people, and technology during a work process.
    • Decision making. This is the process of making a choice among alternatives. Make sure that all your employees’ decisions address six basic questions: Who? What? When? Where? How? Why?
    • Inspections. These are activities that compare the actual to the expected. As noted above, they are prone to error.

    The area of work processes is the one where lean enterprises make the largest gains in error reduction and quality improvement. Concentrate your organizational efforts on this area.

  4. Materials.

    This factor can contribute to error in the following ways:

    • Use of the wrong type or amount of raw materials or use of incompatible raw materials, components, or finished products.
    • Inherent product, tool, or equipment designs. A root-cause analysis typically leads back to faulty manufacturing, materials handling, or packaging practices.
    • Missing or ill-designed administrative tools (e.g., forms, documents, and office supplies) that do not support performance requirements.
  5. Machines.

    Machine errors are classified as either predictable or unpredictable. Predictable errors are usually addressed in a preventive or scheduled maintenance plan. Unpredictable errors, which are caused by varying machine reliability, should be considered when your organization purchases equipment. If satisfactory machine reliability cannot be achieved, then you must plan other ways to prevent and catch machine-related errors.

  6. Environmental conditions.

    Poor lighting, excessive heat or cold, and high noise levels all have a dramatic effect on human attention levels, energy levels, and reasoning ability.

In addition, unseen organizational influences—such as pressure to get a product shipped, internal competition among employees, and pressure to achieve higher wage levels—all affect quality and productivity. Error-proofing devices and techniques can be used for some, but not all, sources of environmentally caused errors. Often an organization’s operating and personnel policies must be revised to achieve a goal of zero defects.

Red-Flag Conditions

The probability that errors will happen is high in certain types of situations. These so-called red-flag conditions include the following:

  1. Lack of an effective standard. Standard operating procedures (SOPs) are reliable instructions that describe the correct and most effective way to get a work process done. Without SOPs, employees cannot know the quality of the product or service they produce or know with certainty when an error has occurred. In addition, when there are no SOPs, or if the SOPs are complicated or hard to understand, variations can occur in the way a task is completed, resulting in errors.
  2. Symmetry. This is when opposite sides of a part, tool, material, or fixture are, or seem to be, identical. The identical sides of a symmetrical object can be confused during an operation, resulting in errors.
  3. Asymmetry. This is when opposite sides of a part, tool, material, or fixture are different in size, shape, or relative position. Slight differences are difficult to notice in asymmetrical parts, leading to confusion, delays, or errors.
  4. Rapid repetition. This is when the same action or operation is performed quickly, over and over again. Rapidly repeating a task, whether manually or by machine, increases the opportunity for error.
  5. High or extremely high volume. This refers to rapidly repeated tasks that have a very large output. Pressure to produce high volumes makes it difficult for an employee to follow the SOPs, increasing the opportunity for errors.
  6. Poor environmental conditions. Dim lighting, poor ventilation, inadequate housekeeping, and too much traffic density or poorly directed traffic can cause errors. The presence of foreign materials (e.g., dirt or oils), overhandling, and excessive transportation can also result in errors or damaged products and parts.
  7. Adjustments. These include bringing parts, tooling, or fixtures into a correct relative position.
  8. Tooling and tooling changes. These occur when any working part of a power-driven machine needs to be changed, either because of wear or breakage or to allow production of different parts or to different specifications.
  9. Dimensions, specifications, and critical conditions. Dimensions are measurements used to determine the precise position or location for a part or product, including height, width, length, and depth. Specifications and critical conditions include temperature, pressure, speed, tension coordinates, number, and volume. Deviation from exact dimensions or variation from standards leads to errors.
  10. Many or mixed parts. Some work processes involve a wide range of parts in varying quantities and mixes. Selecting the right part and the right quantity becomes more difficult when there are many of them or when they look similar.
  11. Multiple steps. Most work processes involve many small operations or sub-steps that must be done, often in a preset, strict order. If an employee forgets a step, does the steps in an incorrect sequence, or mistakenly repeats a step, errors occur and defects result.
  12. Infrequent production. This refers to an operation or task that is not done on a regular basis. Irregular or infrequent performance of a task leads to the increased likelihood that employees will forget the proper procedures or specifications for the task. The risk of error increases even more when these operations are complicated.

Always use data as a basis for making adjustments in your work processes. Using subjective opinion or intuition to make adjustments can result in errors, and eventually defects. Any change in conditions can lead to errors that in turn lead to defects. For instance, wear or degradation of production equipment produces slow changes that occur without the operator’s awareness and can lead to the production of defective parts.


A Review of Human Error

A brief review of human error will be useful. It has been studied extensively by cognitive psychologists, whose findings provide concepts and language that are vital to this discussion.

Errors of Intent vs. Errors in Execution

The process humans use to take action has been described in several ways. One description divides the process into two distinct steps:

  1. Determining the intent of the action.
  2. Executing the action based on that intention.

Failure in either step can cause an error.

Norman divided errors into two categories, mistakes and slips. Mistakes are errors resulting from deliberations that lead to the wrong intention. Slips occur when the intent is correct, but the execution of the action does not occur as intended. Generally, error-proofing requires that the correct intention be known well before the action actually occurs. Otherwise, process design features that prevent errors in the action could not be put in place.

Rasmussen and Reason divide errors into three types, based on how the brain controls actions: skill-based, rule-based, and knowledge-based. Their theory is that the brain minimizes effort by switching among different levels of control, depending on the situation. Common activities in routine situations are handled using skill-based actions, which operate with little conscious intervention. These are actions that are done on “autopilot.” In cooking, for example, skill-based actions allow you to focus on the creativity of the dish rather than the mechanics of how to turn on the stove.

Rule-based actions utilize stored rules about how to respond to situations that have been previously encountered. When a pot boils over, the response does not require protracted deliberation to determine what to do: you remove the pot from the heat and lower the temperature setting before returning the pot to the burner.

When novel situations arise, conscious problem solving and deliberation are required. The result is knowledge-based actions: actions that use the process of logical deduction to determine what to do on the basis of theoretical knowledge. Every skill- and rule-based action was a knowledge-based action at one time. Suppose you turn a burner on high but it does not heat up. That is unusual. You immediately start to troubleshoot by checking rule-based contingencies. When these efforts fail, you engage in knowledge-based problem solving and contingency planning. Substantial cognitive effort is involved.

Knowledge in the Head vs. Knowledge in the World

Norman introduces two additional concepts that will be employed throughout this book. He divides knowledge into two categories:

  1. Knowledge in the head is information contained in human memory
  2. Knowledge in the world is information provided as part of the environment in which a task is performed

Historically, organizations have focused on improving knowledge in the head. A comprehensive and elaborate quality manual is an example of knowledge in the head. A significant infrastructure has been developed to support this dependence on memory, including lengthy standard operating procedures that indicate how tasks are to be performed. These procedures are not intended to be consulted during the actual performance of the task, but rather to be committed to memory for later recall. Retaining large volumes of instructions in memory so that they are ready for use requires significant ongoing training efforts. When adverse events occur, organizational responses also tend to involve attempts to change what is in the memory of the worker. These include retraining the worker who errs, certifying (i.e., testing) workers regularly, attempting to enhance and manage worker attentiveness, and altering standard operating procedures. The passage of time will erase any gains made once the efforts to change memory are discontinued.

Putting “knowledge in the world” is an attractive alternative to trying to force more knowledge into the head. Knowledge can be put in the world by providing cues about what to do. This is accomplished by embedding the details of correct actions into the physical attributes of the process. In manufacturing, for example, mental energies that were used to generate precise action and monitor compliance with procedures stored in memory are now freed to focus on those critical, non-routine deliberations required for the best possible customer satisfaction. How do you recognize knowledge in the world when you see it? Here is a crude rule of thumb: if you can’t take a picture of it in use, it probably is not knowledge in the world. Error-proofing involves changing the physical attributes of a process, and error-proofing devices can usually be photographed. Error-proofing is one way of putting knowledge in the world. The rule is crude because there are gray areas, such as work instructions. If the instructions are visible and comprehensible at the point in the process where they are used, then they would probably be classified as knowledge in the world. Otherwise, work instructions are a means of creating knowledge in the head.

Error-Proofing Approaches

There is no comprehensive typology of error-proofing. The approaches to error reduction are diverse and evolving. More innovative approaches will evolve, and more categories will follow as more organizations and individuals think carefully about error-proofing their processes. Tsuda lists four approaches to error-proofing:

  1. Mistake prevention in the work environment.
  2. Mistake detection (Shingo’s informative inspection).
  3. Mistake prevention (Shingo’s source inspection).
  4. Preventing the influence of mistakes.

  1. Mistake Prevention in the Work Environment

    This approach involves reducing complexity, ambiguity, vagueness, and uncertainty in the workplace. An example from Tsuda is having only one set of instructions visible in a notebook rather than having two sets appear on facing pages. When only one set of instructions is provided, workers are unable to accidentally read inappropriate or incorrect instructions from the facing page. In another example, similar items with right-hand and left-hand orientations can sometimes lead to wrong-side errors. If the design can be altered and made symmetrical, no wrong-side errors can occur; whether the part is mounted on the left or right side, it is always correct. The orientation of the part becomes inconsequential. Likewise, any simplification of the process that leads to the elimination of process steps ensures that none of the errors associated with that step can ever occur again. Norman suggests several process design principles that make errors less likely. He recommends avoiding wide and deep task structures. The term “wide structures” means that there are lots of alternatives for a given choice, while “deep structures” means that the process requires a long series of choices. Humans can perform either moderately broad or moderately deep task structures relatively well. Humans have more difficulty if tasks are both moderately broad and moderately deep, meaning there are lots of alternatives for each choice, and many choices to be made. Task structures that are very broad or very deep can also cause difficulties.

  2. Mistake Detection

    Mistake detection identifies process errors found by inspecting the process after actions have been taken. Often, immediate notification that a mistake has occurred is sufficient to allow remedial actions to be taken in order to avoid harm. The outcome or effect of the problem is inspected after an incorrect action or an omission has occurred. Informative inspection can also be used to reduce the occurrence of incorrect actions. This can be accomplished by using data acquired from the inspection to control the process and inform mistake prevention efforts. Another informative inspection technique is Statistical Process Control (SPC). SPC is a set of methods that uses statistical tools to detect if the observed process is being adequately controlled. SPC is used widely in industry to create and maintain the consistency of variables that characterize a process. Shingo identifies two other informative inspection techniques: successive checks and self-checks. Successive checks consist of inspections of previous steps as part of the process. Self-checks employ mistake-proofing devices to allow workers to assess the quality of their own work. Self-checks and successive checks differ only in who performs the inspection. Self-checks are preferred to successive checks because feedback is more rapid.
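The 3-sigma control-limit check at the core of SPC can be sketched in a few lines. This is a minimal illustration with hypothetical baseline measurements; a real SPC implementation would add run rules and use a dedicated charting tool.

```python
# Hypothetical baseline measurements from a stable process run.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]

# Estimate the process center and spread from the baseline.
mean = sum(baseline) / len(baseline)
variance = sum((x - mean) ** 2 for x in baseline) / (len(baseline) - 1)
sigma = variance ** 0.5

# Upper and lower control limits at mean +/- 3 sigma.
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

def out_of_control(measurement):
    """Return True if a new measurement signals special-cause variation."""
    return measurement > ucl or measurement < lcl
```

A point outside the limits (e.g., a reading of 11.0 against this baseline) would be flagged as special-cause variation, while ordinary common-cause scatter would not.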

    Setting functions

    Whether mistake prevention or mistake detection is selected as the driving mechanism in a specific application, a setting function must be selected. A setting function is the mechanism for determining that an error is about to occur (prevention) or has occurred (detection). It differentiates between safe, accurate conditions and unsafe, inaccurate ones. The more precise the differentiation, the more effective the mistake-proofing can be. Chase and Stewart identify the four setting functions described in the table below.

    Table: Setting functions

    Physical (Shingo’s contact): Checks that the physical attributes of the product or process are correct and error-free.
    Sequencing (Shingo’s motion-step): Checks the precedence relationships of the process to ensure that steps are conducted in the correct order.
    Grouping or counting (Shingo’s fixed-value methods): Facilitates checking that matched sets of resources are available when needed or that the correct number of repetitions has occurred.
    Information enhancement: Ensures that information required in the process is available at the correct time and place and that it stands out against a noisy background.

    Control functions

    Once the setting function determines that an error has occurred or is going to occur, a control function (or regulatory function) must be utilized to indicate to the user that something has gone awry. Not all mistake-proofing is equally useful. Usually, mistake prevention is preferred to mistake detection. Similarly, forced control, shutdown, warning, and sensory alert are preferred, in that order. The preferred devices tend to be those that are the strongest and require the least attention and the least discretionary behavior by users.

     Table: Control (or regulatory) functions

     Forced control
       Mistake prevention: The physical shape and size of an object, or electronic controls, detect mistakes as they are being made and stop them from resulting in incorrect actions or omissions.
       Mistake detection: The physical shape and size of an object, or electronic controls, detect incorrect actions or omissions before they can cause harm.
     Shutdown
       Mistake prevention: The process is stopped before mistakes can result in incorrect actions or omissions.
       Mistake detection: The process is stopped immediately after an incorrect action or omission is detected.
     Warning
       Mistake prevention: A visual or audible warning signal indicates that a mistake or omission is about to occur. Although the error is signaled, the process is allowed to continue.
       Mistake detection: A visual or audible warning signal indicates that a mistaken action or omission has just occurred.
     Sensory alert
       Mistake prevention: A sensory cue signals that a mistake is about to be acted upon or an omission made. The cue may be audible, visible, or tactile; taste and smell have not proved as useful. Sensory alerts signal mistakes but allow the process to continue.
       Mistake detection: A sensory cue signals that a mistake has just been acted upon or an omission has just occurred.
  3. Mistake Prevention

    Mistake prevention identifies process errors found by inspecting the process before taking actions that would result in harm. The word “inspection” as it is used here is broadly defined. The inspection could be accomplished by physical or electronic means without human involvement. The 3.5-inch disk drive is an example of a simple inspection technique that does not involve a person making a significant judgment about the process. Rather, the person executes a process and the process performs an inspection by design and prevents an error from being made. Shingo called this type of inspection “source inspection.” The source or cause of the problem is inspected before the effect—an incorrect action or an omission—can actually occur.

  4. Preventing the Influence of Mistakes

    Preventing the influence of mistakes means designing processes so that the impact of errors is reduced or eliminated. This can be accomplished by facilitating correction or by decoupling processes.

    1. Facilitating correction.

      This could include finding easy and immediate ways of allowing workers to reverse the errors they commit. While doing things right the first time is still the goal, effortless error corrections can often be nearly as good as not committing errors at all. This can be accomplished through planned responses to error or the immediate reworking of processes. Typewriters have joined mimeograph machines and buggy whips as obsolete technology because typing errors are so much more easily corrected on a computer. Errors that once required retyping an entire page can now be corrected with two keystrokes. Software that offers “undo” and “redo” capabilities also facilitates the correction of errors. Informal polls suggest that people use these features extensively. Some users even become upset when they cannot “undo” more than a few of their previous operations. Also, computers now auto-correct errors like “thsi” one. These features significantly increase the effectiveness of users. They did not come into being accidentally but are the result of intentional, purposeful design efforts based on an understanding of the errors that users are likely to make. Automotive safety has been enhanced by preventing the influence of mistakes. Air bags do not stop accidents. Rather, they are designed to minimize injuries experienced in an accident. Antilock brakes also prevent the influence of mistakes by turning a common driving error into the correct action. Prior to the invention of antilock brakes, drivers were instructed not to follow their instincts and slam on the brakes in emergencies. To do so would increase the stopping distance and cause accidents due to driver error. Pumping the brakes was the recommended procedure. With anti-lock brakes, drivers who follow their instincts and slam on the brakes are following the recommended emergency braking procedure. What once was an error has become the correct action.

    2. Decoupling

      “Decoupling” means separating an error-prone activity from the point at which the error becomes irreversible. Software developers try to help users avoid deleting files they may want later by decoupling. Pressing the delete button on an unwanted e-mail or computer file does not actually delete it. The software merely moves it to another folder named “deleted items,” “trash can,” or “recycling bin.” If you have ever retrieved an item that was previously “deleted,” you are the beneficiary of decoupling. Regrettably, this type of protection is not yet available when saving work. The files can be overwritten, and the only warning may be a dialogue box asking, “Are you sure?” Sometimes the separation of the error from the outcome need not be large. Stewart and Grout suggest a decoupling feature for telephoning across time zones. The first outward manifestation of forgetting or miscalculating the time difference is the bleary-eyed voice of a former friend at 4:00 a.m. local time instead of the expected cheery voice at a local time of 10:00 a.m. One way to decouple the chain would be to provide an electronic voice that tells the caller the current time in the location being called. This allows the caller to hang up the phone prior to being connected and thus avoid the mistake.


Attributes of Error-Proofing

  1. Error-Proofing Is Inexpensive

    The cost of error-proofing devices is often the fixed cost of the initial installation plus minor ongoing calibration and maintenance costs. A device’s incurred cost per use can be zero, as it is with the 3.5-inch diskette drive. The cost per use can even be negative in cases in which the device enables the process to proceed more rapidly than before. In manufacturing, where data are available, mistake-proofing has been shown to be very effective. There are many management tools and techniques available to manufacturers; however, many manufacturers are unaware of error-proofing. The TRW Company reduced its defect rate from 288 parts per million (ppm) defective to 2 ppm. Federal Mogul achieved 99.6 percent fewer customer defects than its nearest competitor and a 60 percent productivity increase by systematically thinking about the details of its operations and implementing mistake-proofing. DE-STA-CO manufacturing reduced omitted parts from 800 ppm to 10 ppm; across all defect modes, it reduced defects from 40,000 ppm to 200 ppm and, once again, productivity increased as a result. These are very good results for manufacturing. They would be phenomenal results in health care. Patients should be the recipients of processes that are more reliable than those in manufacturing. Regrettably, this is not yet the case.

  2. Error-Proofing Can Result in Substantial Returns on Investment

    Even in manufacturing industries, however, there is a low level of awareness of error-proofing as a concept. In an article published in 1997, Bhote stated that returns of 10 to 1, 100 to 1, and even 1,000 to 1 are possible, but he also stated that awareness of error-proofing was as low as 10 percent and that implementation was “dismal” at 1 percent or less. Exceedingly high rates of return may seem impossible to realize, yet Whited cites numerous examples. The Dana Corporation reported employing one device that eliminated a defect mode costing $500,000 a year. The device, which was conceived, designed, and fabricated by a production worker in his garage at home, cost $6.00. That is an 83,333 to 1 rate of return for the first year. The savings occur each year that the process and the device remain in place. A worker at Johnson & Johnson’s Ortho-Clinical Diagnostics Division found a way to use “Post-It® Notes” to reduce defects and save time that was valued at $75,000 per year. If the “Post-It® Notes” cost $100 per year, then the return on investment would be 750 to 1. These are examples of savings from a single device. Lucent Technologies’ Power System Division implemented 3,300 devices over 3 years. Each of these devices contributed a net savings of approximately $2,545 to the company’s bottom line. The median cost of each device was approximately $100. The economics in medicine are likely to be at least as compelling. A substantial amount of mistake-proofing can be done for the cost of settling a few malpractice suits out of court.
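The return ratios cited above are simple savings-to-cost quotients, which can be checked directly:

```python
# First-year return = annual savings / device cost, using the figures
# reported in the text.
dana_ratio = 500_000 / 6      # Dana Corporation: $500,000 saved by a $6 device
postit_ratio = 75_000 / 100   # Ortho-Clinical: $75,000 saved, ~$100 of notes

print(round(dana_ratio))      # 83333 -- the "83,333 to 1" first-year return
print(int(postit_ratio))      # 750   -- the "750 to 1" return
```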

  3. Error-Proofing Is Not a Stand-Alone Technique

    It will not obviate the need for other responses to error.

  4. Error-Proofing Is Not Rocket Science

    It is detail-oriented and requires cleverness and careful thought, but once implementation has been completed, hindsight bias will render the solution obvious.

  5. Error-Proofing Is Not a Panacea

    It cannot eliminate all errors and failures from a process. Perrow points out that no scheme can succeed in preventing every event in complex, tightly-linked systems. He argues that multiple failures in complex, tightly-linked systems will lead to unexpected and often incomprehensible events. Observers of these events might comment in hindsight, “Who would have ever thought that those failures could combine to lead to this?” Perrow’s findings apply to error-proofing as they do to any other technique. Error-proofing will not work to block events that cannot be anticipated. Usually, a good understanding of the cause-and-effect relationship is required in order to design effective error-proofing devices. Therefore, the unanticipated events that arise from complex, tightly-linked systems cannot be mitigated using error-proofing.

  6. Error-Proofing Is Not New

    It has been practiced throughout history and is based on simplicity and ingenuity. Error-proofing solutions are often viewed post hoc as “common sense.” Bottles of poison are variously identified by their rectangular shape, blue-colored glass, or the addition of small spikes to make an impression on inattentive pharmacists. Most organizations will find that examples of error-proofing already exist in their processes. The implementation of error-proofing, then, is not entirely new but represents a refocusing of attention on certain design issues in the process.

  7. Creating Simplicity Is Not Simple

    In hindsight, error-proofing devices seem simple and obvious. A good device will lead you to wonder why no one thought of it before. However, creating simple, effective error-proofing devices is a very challenging task. Significant effort should be devoted to the design process. Organizations should seek out multiple approaches to the problem before proceeding with the implementation of a solution. Each organization’s error-proofing needs may be different, depending on the differences in their processes. Consequently, some error-proofing solutions will require new, custom-made devices designed specifically for a given application. Other devices could be off-the-shelf solutions. Even off-the-shelf devices will need careful analysis, one that requires substantial process understanding, given the often subtly idiosyncratic nature of an organization’s own processes.

Some of the Error-Proofing tools

  1. Just Culture

    Just culture refers to a working environment that is conducive to “blame-free” reporting but also one in which accountability is not lost. Blame-free reporting ensures that those who make mistakes are encouraged to reveal them without fear of retribution or punishment. A policy of not blaming individuals is very important to enable and facilitate event reporting, which, in turn, enables mistake-proofing. The concern with completely blame-free reporting is that egregious acts, for which punishment would be appropriate, would go unpunished. Just culture divides behavior into three types: normal, risk-taking, and reckless. Of these, only reckless behavior is punished.

  2. Event Reporting

    Event reporting refers to actions undertaken to obtain information about events and near-misses. The reporting reveals the type and severity of events and the frequency with which they occur. Event reports provide insight into the relative priority of events and errors, thereby enabling the mistake-proofing of processes. Consequently, events are prioritized and acted upon more quickly according to the seriousness of their consequences.

  3. Root Cause Analysis

    Root cause analysis (RCA) is a set of methodologies for determining at least one cause of an event that can be controlled or altered so that the event will not recur in the same situation. These methodologies reveal the cause-and-effect relationships that exist in a system. RCA is an important enabler of mistake-proofing, since mistake-proofing cannot be accomplished without a clear knowledge of the cause-and-effect relationships in the process. Care should be taken when RCA is used to formulate corrective actions, since it may only consider one instance or circumstance of failure. Other circumstances could also have led to the failure. Other failure analysis tools, such as fault tree analysis, consider all known causes and not just a single instance. Anticipatory failure determination facilitates inventing new circumstances that would lead to failure given existing resources.

  4. Corrective Action Systems

    Corrective action systems are formal systems of policies and procedures to ensure that adverse events are analyzed and that preventive measures are implemented to prevent their recurrence. Normally, the occurrence of an event triggers a requirement to respond with counter-measures within a certain period of time. Error-proofing is an effective form of counter-measure. It is often inexpensive and can be implemented rapidly. It is also important to look at all possible outcomes and counter-measures, not just those observed. Sometimes, mistake-proofing by taking corrective action is only part of the solution. For example, removing metal butter knives from the dinner trays of those flying in first class effectively eliminates knives from aircraft, but does not remove any of the other resources available for fashioning weapons out of materials available on commercial airplanes. This is mistake-proofing but not a fully effective counter-measure. Corrective action systems can also serve as a resource to identify likely mistake-proofing projects. Extensive discussion and consultation in a variety of industries, including health care, reveal that corrective actions are often variations on the following themes:

    1. An admonition to workers to “be more careful” or “pay attention.”
    2. A refresher course to “retrain” experienced workers.
    3. A change in the instructions, standard operating procedures, or other documentation.

    All of these are essentially attempts to change “knowledge in the head”. Chappell states that “You’re not going to become world class through just training, you have to improve the system so that the easy way to do a job is also the safe, right way. The potential for human error can be dramatically reduced.” Error-proofing is an attempt to put “knowledge in the world.” Consequently, corrective actions that involve changing “knowledge in the head” can also be seen as opportunities to implement mistake-proofing devices. These devices address the cause of the event by putting “knowledge in the world.” Not all corrective actions deserve the same amount of attention. Therefore, not all corrective actions should be allotted the same amount of time in which to formulate a response. Determining which corrective actions should be allowed more time is difficult because events occur sequentially, one at a time. Responding to outcomes that are not serious, common, or difficult to detect should not consume too much time. For events that are serious, common, or difficult to detect, additional time should be spent in a careful analysis of critical corrective actions.

  5. Specific Foci

    Substantial improvement efforts have been focused on specific events such as customer complaints, internal rejections, external rejections, accidents, and near-miss incidents. These specific foci provide areas of opportunity for the implementation of error-proofing.

  6. Simulation

    In aviation, simulation is used to train pilots and flight crews. Logically enough, simulators have also begun to be employed in other industries, such as automotive, IT, and medicine. In addition to training, simulation can provide insights into likely errors and serve as a catalyst for exploring the psychological or causal mechanisms of errors. After likely errors are identified and understood, simulators can provide a venue for experimenting with and validating new mistake-proofing devices.

  7. Facility Design

    The study of facility design complements error-proofing and sometimes is error-proofing. Adjacency, proper handrails and affordances, standardization, and the use of Failure Modes and Effects Analysis (FMEA) as a precursor are similar to error-proofing. Ensuring non-compatible connectors and pin-indexed medical gases is mistake-proofing.

  8. Revising Standard Operating Procedures

    When adverse events occur, it is not uncommon for standard operating procedures (SOPs) to be revised in an effort to change the instructions that employees refer to when providing care. This approach can either improve or impair patient safety, depending on the nature of the change and the length of the SOP. If SOPs become simpler and help reduce the cognitive load on workers, that is a very positive step. If the corrective response to adverse events is to lengthen the SOPs with additional process steps, then efforts to improve patient safety may actually result in an increase in the number of errors. Evidence from the nuclear industry suggests that changing SOPs improves human performance up to a point but then becomes counterproductive. Chiu and Frick studied the human error rate at the San Onofre Nuclear Power Generation Facility since it began operation. They found that beyond a certain point, increasing procedure length or adding procedures resulted in an increase in the number of errors instead of reducing them as intended; their facility is operating beyond that point of minimum error, where additional rules produce additional errors. Consequently, they state that they “view with a jaundiced eye an incident investigation that calls only for more rules (i.e., procedure changes or additions), and we seek to simplify procedures and eliminate rules whenever possible.” Simplifying processes and providing clever work aids complement mistake-proofing and in some cases may be mistake-proofing. When organizations eliminate process steps, they also eliminate the errors that could have resulted from those steps.

  9. Attention Management

    Substantial resources are invested in ensuring that workers, in general, are alert and attentive as they perform their work. Attention management programs range from motivational posters in the halls and “time-outs” for safety to team-building “huddles.” Eye-scanning technology can determine whether workers have had enough sleep during their off hours to be effective during working hours. When work becomes routine and is accomplished on “autopilot” (skill-based), error-proofing can often reduce the amount of attentiveness required to accurately execute detailed procedures. The employee performing these procedures is then free to focus on higher-level thinking. Error-proofing will not eliminate the need for attentiveness, but it does allow attentiveness to be used more effectively to complete tasks that require deliberate thought.

  10. Crew Resource Management

    Crew resource management (CRM) is a method of training team members to “consistently use sound judgment, make quality decisions, and access all required resources, under stressful conditions in a time-constrained environment.” It grew out of aviation disasters where each member of the crew was problem-solving, and no one was actually flying the plane. This outcome has been common enough that it has its own acronym: CFIT—Controlled Flight Into Terrain. Error-proofing often takes the form of reducing ambiguity in the work environment, making critical information stand out against a noisy background, reducing the need for attention to detail, and reducing cognitive content. Each of these benefits complements CRM and frees the crew’s cognitive resources to attend to more pressing matters.

  11. FMEA (Failure Modes and Effects Analysis): see the separate FMEA section.

  12. Fault Trees

    FMEA is a bottom-up approach in the sense that it starts at the component or task level to identify failures in the system. Fault trees are a top-down approach. A fault tree starts with an event and determines all the component (or task) failures that could contribute to that event. A fault tree is a graphical representation of the relationships that directly cause or contribute to an event or failure.


     The top of the tree indicates the failure mode, the “top event.” At the bottom of the tree are the causes, or “basic failures.” Causes that act independently are combined using an “OR” symbol; causes that must co-exist for the event to occur are combined using an “AND” symbol. The tree can have as many levels as needed to describe all the known causes of the event. The tree can then be analyzed to determine the sets of basic failures that can cause the top event to occur, called cut sets. A minimal cut set is the smallest combination of basic failures that produces the top event: it leads to the top event if, and only if, all of the events in the set occur. Minimal cut sets can be used to assess the performance of mistake-proofing device designs. Fault trees also allow one to assess the probability that the top event will occur by first estimating the probability of each basic failure and then combining those probabilities up the tree. Suppose, for example, that basic failures 1 and 2 each have a 20 percent probability of occurring within a fixed period of time, while basic failure 3 has only a 4 percent probability. If basic failures 1 and 2 must both occur (an AND gate) before the top event results, their joint probability is 0.20 × 0.20, or 4 percent. Basic failure 3 is far less likely to occur than either basic failure 1 or 2, but since it can cause the top event by itself, the top event is equally likely to be caused by either minimal cut set. Two changes can be made to the tree to reduce the probability of the top event:

    1. Reduce the probability of basic failures.
    2. Increase redundancy in the system.

    That is, design the system so that more basic failures are required before a top event occurs. If one nurse makes an error and another nurse double-checks it, then two basic failures must occur; one is not enough to cause the top event. The ability to express the interrelationships among contributory causes of events using AND and OR symbols provides a more precise description than is usually found in the “potential cause” column of an FMEA. Potential causes in an FMEA are usually described using only the conjunction OR. It is the fault tree’s ability to link causes with AND, in particular, that makes it more effective in describing causes. Gano suggests that events usually occur due to a combination of actions and conditions; therefore, fault trees may prove very worthwhile. FMEA and fault trees are not mutually exclusive. A fault tree can provide significant insight into potential failure causes in an FMEA.
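The probability arithmetic in the example above can be sketched directly. The gate structure (basic failures 1 and 2 behind an AND gate, basic failure 3 as an independent OR branch) and the independence of the basic failures are taken from the example:

```python
# Probabilities of the basic failures within a fixed period of time.
p1, p2, p3 = 0.20, 0.20, 0.04

p_cut_1 = p1 * p2   # AND gate: both failures required -> 4 percent
p_cut_2 = p3        # single-failure cut set           -> also 4 percent

# Top event = OR of the two minimal cut sets; for independent cut sets,
# inclusion-exclusion gives the combined probability.
p_top = p_cut_1 + p_cut_2 - p_cut_1 * p_cut_2
print(round(p_top, 4))  # 0.0784
```

The two minimal cut sets contribute equally (4 percent each), which is why the text notes the top event is equally likely to be caused by either one.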

FMEA and fault trees are useful in understanding the range of possible failures and their causes. The other tools—safety culture, just culture, event reporting, and root cause analysis—lead to a situation in which the information needed to conduct these analyses is available. These tools, on their own, may be enough to facilitate the design changes needed to reduce medical errors. Only fault tree analysis, however, comes with explicit prescriptions about what actions to take to improve the system: increase component reliability or increase redundancy. Fault trees are also less widely known and used than the other existing tools.


Designing Mistake-Proofing Devices

  1. Select an undesirable failure mode for further analysis.

    In order to make an informed decision about which failure mode to analyze, the RPN or the criticality number of the failure mode must have been determined in the course of performing FMEA or FMECA.

  2. Review FMEA findings and brainstorm solutions.

    Most existing mistake-proofing has been done without the aid of a formal process. This is also where designers should search for existing solutions. Common sense, creativity, and adapting existing examples are often enough to solve the problem. If not, continue to Step 3.

  3. Create a detailed fault tree of the undesirable failure mode.

    This step involves the traditional use of fault tree analysis. Detailed knowledge regarding the process and its cause-and-effect relationships, discovered during root cause analysis and FMEA, provides a thorough understanding of how and why the failure mode occurs. The result of this step is a list of minimal cut sets and their contents. Since the severity and detectability of the failure mode could be the same for all of the minimal cut sets, the probability of occurrence will most likely be the deciding factor in determining which causes to focus on initially.

  4. Select one or more benign failure modes that would be preferred to the undesirable failure.

    The FMEA precedes multiple fault trees and provides information about other failure modes and their severity. Ideally, the benign failure alone should be sufficient to stop the process; the failure that would normally lead to the undesirable event causes the benign failure instead.

  5. Using a detailed fault tree, identify “resources” available to create the benign failure.

    These resources, basic events at the bottom of the benign fault tree, can be employed deliberately to cause the benign failure to occur.

  6. Generate alternative mistake-proofing device designs that will create the benign failure.

This step requires individual creativity and problem-solving skills. Creativity is not always valued by organizations and may be scarce. If brainstorming alone does not result in solutions, employ creativity training, methodologies, and facilitation tools like TRIZ.

7. Consider alternative approaches to designed failures.

Some processes have very few resources. If creativity tools do not provide adequate options for causing benign process failures, consider using cues to increase the likelihood of correct process execution. Changing focus is another option to consider when benign failures are not available. If you cannot solve the problem, change it into one that is solvable. Changing focus means, essentially, exploring changes to the larger system or smaller subsystem that change the nature of the problem so that it is more easily solved. For example, change to a computerized physician order entry (CPOE) system instead of trying to error-proof handwritten prescriptions. There are very few resources available to stop the processes associated with handwritten paper documents. Software, on the other hand, can thoroughly check inputs and easily stop the process.

8. Implement a solution.

Some basic tasks usually required as part of the implementation are listed below:

    • Select a design from among the solution alternatives:
      • Forecast or model the device’s effectiveness.
      • Estimate implementation costs.
      • Assess the training needs and possible cultural resistance.
      • Assess any negative impact on the process.
      • Explore and identify secondary problems (side effects or new concerns raised by the device).
      • Assess device reliability.
    • Create and test the prototype design:
      • Find sources who can fabricate, assemble, and install custom devices, or find manufacturers willing to make design changes .
      • Resolve technical issues of implementation.
      • Undertake  trials if required.
    • Trial implementation:
      • Resolve nontechnical and organizational issues of implementation.
      • Draft a maintenance plan.
      • Draft process documentation.
    • Broad implementation:
      • Consensus building.
      • Organizational change.

The eight steps to creating error-proofing devices can be initiated by a root cause analysis or FMEA team, an organization executive, a quality manager, or a risk manager. An interdisciplinary team of 6 to 10 individuals should execute the process steps. An existing FMEA or root cause analysis team is ideal because its members would already be familiar with the failure mode. Help and support from others with creative, inventive, or technical abilities may be required during the later stages of the process. A mistake-proofing device is designed using the eight steps just discussed in the application example that follows.

Some hints on POKA-YOKE

Some Examples of POKA-YOKE

  1. Preventing wrong jig fixing at the time of jig change

  2. Preventing missing cooling water for high-induction heating

  3. Preventing missing and wrong caulking

  4. Missing process step on a workpiece

  5. Wrong process step on a workpiece

  6. Workpiece setting mistake

  7. Missing parts

  8. Mixing with foreign parts


Error-Proofing Caveats

Error-Proof the Error-Proofing

Error-proofing devices should be error-proofed themselves. They should be designed with the same rigor as the processes the devices protect. The reliability of error-proofing devices should be analyzed, and if possible, the device should be designed to fail in benign ways. Systems with extensive automatic error detection and correction mechanisms are more prone to a devious form of failure called a latent error. Latent errors remain hidden until events reveal them and are very hard to predict, prevent, or correct. They often “hide” inside automatic error detection and correction devices. An error that compromises an inactive detection and recovery system is generally not noticed, but when the system is activated to prevent an error, it is unable to respond, leaving a hole in the system’s security. This is an important design issue, although it is quite likely that the errors prevented by the automatic error detection and correction systems would have caused more damage than the latent errors induced by the systems.

Avoid Moving Errors to Another Location

When designing error-proofing devices, it is important to avoid the common problem of moving errors instead of eliminating or reducing them. For example, in jet engine maintenance, placing the fan blades in the correct position is very important. The hub where the blade is mounted has a set screw that is slightly different in size for each blade so that only the correct blade will fit. This solves numerous problems in assembly and maintenance throughout the life of the engine. It also produces real problems for the machine shop that produces the hubs; it must ensure that each set screw hole is machined properly.

Prevent Devices from Becoming Too Cumbersome

How error-proofing devices affect processes is another design issue that must be considered. The device could be cumbersome because it slows down a process while in use or because the process, once stopped, is difficult to restart.

Avoid Type I Error Problems

If error-proofing is used in an error detection application and replaces an inspection or audit process in which sampling was used, changing to the 100 percent inspection provided by an error-proofing device may have unintended consequences. Specifically, significantly more information will be collected about the process than when only sampling is used. Suppose the error of inferring that something about the process is not correct when, in fact, the process is normal (a Type I error) occurs only a small percentage of the time. Under 100 percent inspection, the number of opportunities for a Type I error increases dramatically. The relative frequency of Type I errors is unchanged, but the number of Type I errors per hour or day increases. It is possible that too many instances requiring investigation and corrective action will occur, and properly investigating and responding to each may not be feasible.
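The arithmetic behind this caveat is simple to demonstrate. In the sketch below, the false-alarm rate and inspection volumes are illustrative assumptions, not figures from the text: the per-check rate stays constant, but the absolute number of false alarms scales with the number of checks.

```python
# Sketch: the per-check (relative) Type I error rate stays the same, but
# moving from sampling to 100 percent inspection multiplies the number of
# checks, and therefore the expected false alarms per day.
# All numbers below are illustrative assumptions.

alpha = 0.01            # chance any single check raises a false alarm
checks_sampling = 20    # checks per day under a sampling audit
checks_full = 2000      # checks per day under 100 percent inspection

expected_alarms_sampling = alpha * checks_sampling  # about 0.2 per day
expected_alarms_full = alpha * checks_full          # about 20 per day

print(round(expected_alarms_sampling, 1))  # 0.2
print(round(expected_alarms_full, 1))      # 20.0
```

With these assumed numbers, a false alarm that once appeared every few days now appears many times per day, which is why the investigation workload can become infeasible.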

Prevent Workers from Losing Skills

Bainbridge and Parasuraman et al. assert that reducing workers’ tasks to monitoring and intervention functions makes their tasks more difficult. Bainbridge asserts that workers whose primary tasks involve monitoring will see their skills degrade from lack of practice, so they will be less effective when intervention is called for. Workers will tend not to notice when usually stable process variables change and an intervention is necessary. Automatic features, like mistake-proofing devices, will isolate the workers from the system, concealing knowledge about its workings that is necessary during an intervention. And, finally, automatic systems will usually make decisions at a faster rate than they can be checked by the monitoring personnel. Parasuraman, Molloy, and Singh looked specifically at the ability of the operator to detect failures in automated systems. They found that the detection rate improved when the reliability of the system varied over time, but only when the operator was responsible for monitoring multiple tasks.



Total Productive Maintenance


Total productive maintenance (TPM) is a series of methods that ensures every piece of equipment in a production process is always able to perform its required tasks so that production is never interrupted. It is a comprehensive, team-based, continuous activity that enhances normal equipment-maintenance activities and involves every worker. TPM helps you focus on and accelerate the equipment improvements required for you to implement methods such as one-piece flow, quick changeover, and load levelling as part of your company’s lean initiative. TPM also helps to improve your first-time-through, or FTT, quality levels. Its benefits include:

  1. Improved equipment performance. Equipment operators and maintenance workers prevent poor performance by conducting maintenance inspections and preventive maintenance activities. They also capture information about poor machine performance, enabling teams to diagnose declines in performance and their causes. By preventing and eliminating these causes, these employees can improve performance efficiency.
  2. Increased equipment availability. TPM enables operators and maintenance workers alike to help prevent equipment failures by performing maintenance inspections and preventive maintenance activities. These employees also capture information regarding machine downtime, enabling your improvement team to diagnose failures and their causes. When you are able to prevent and eliminate the causes of failures, your asset availability improves.
  3. Increased equipment FTT quality levels. Process parameters that have a direct effect on product quality are called key control characteristics. For example, if a thermocouple in a furnace fails and an incorrect measurement is sent to the heating elements, this causes temperatures to fluctuate, which might significantly affect product quality. The goal of a TPM program is to identify these key control characteristics and the appropriate maintenance plan to ensure the prevention of failure or performance degradation.
  4. Reduced emergency downtime and less need for “firefighting” (i.e., work that must be done in response to an emergency).
  5. An increased return on investment, or ROI, in equipment.
  6. Increased employee skill levels and knowledge.
  7. Increased employee empowerment, job satisfaction, and safety.

Types of maintenance:

  1. Breakdown maintenance:
    In this type of maintenance, no care is taken of the machine until the equipment fails. Repair is then undertaken. This type of maintenance could be used when the equipment failure does not significantly affect the operation or production or generate any significant loss other than the repair cost. However, an important consideration is that the failure of a component in a large machine may be injurious to the operator. Hence breakdown maintenance should be avoided.
  2. Preventive maintenance:
    It is daily maintenance (cleaning, inspection, oiling, and re-tightening) designed to retain the healthy condition of equipment and prevent failure through the prevention of deterioration, periodic inspection, or equipment condition diagnosis to measure deterioration. It is further divided into periodic maintenance and predictive maintenance. Just as human life is extended by preventive medicine, the equipment service life can be prolonged by preventive maintenance.
  3. Periodic maintenance (Time based maintenance – TBM):
    Time-based maintenance consists of periodically inspecting, servicing and cleaning equipment and replacing parts to prevent sudden failure and process problems. E.g. Replacement of coolant or oil every 15 days.
  4. Predictive maintenance: This is a method in which the service life of an important part is predicted based on inspection or diagnosis, in order to use the part up to the limit of its service life. Compared to periodic maintenance, predictive maintenance is condition-based maintenance. It manages trend values, by measuring and analyzing data about deterioration, and employs a surveillance system designed to monitor conditions through an on-line system. E.g. Replacement of coolant or oil if there is a change in colour, since a change in colour indicates the deteriorating condition of the oil. As this is condition-based maintenance, the oil or coolant is replaced only when its condition warrants it.
  5. Corrective maintenance:
    It improves equipment and its components so that preventive maintenance can be carried out reliably. Equipment with a design weakness must be redesigned to improve reliability or maintainability. This happens at the equipment user level. E.g. Installing a guard to prevent burrs from falling into the coolant tank.
  6. Maintenance prevention:
    This program addresses the design of new equipment. The weaknesses of current machines are sufficiently studied (on-site information leading to failure prevention, easier maintenance, prevention of defects, safety, and ease of manufacturing). The observations and the study made are shared with the equipment manufacturer, and necessary changes are made in the design of the new machine.
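The difference between time-based and condition-based triggers in the list above can be sketched in code. The 15-day interval and the colour-index threshold are illustrative assumptions drawn from the oil change examples; the function names are hypothetical.

```python
# Sketch contrasting time-based (periodic) and condition-based
# (predictive) maintenance triggers. Thresholds are illustrative
# assumptions based on the oil/coolant examples above.

def time_based_due(days_since_change, interval_days=15):
    """Periodic (TBM): replace on a fixed schedule, e.g. every 15 days,
    regardless of the part's actual condition."""
    return days_since_change >= interval_days

def condition_based_due(oil_colour_index, limit=0.7):
    """Predictive: replace only when a measured condition (here, an
    assumed 0.0-1.0 colour deterioration index) crosses a limit."""
    return oil_colour_index >= limit

print(time_based_due(16))        # True  (interval has elapsed)
print(condition_based_due(0.3))  # False (oil still healthy)
```

The design trade-off is visible in the two signatures: TBM needs only a calendar, while predictive maintenance needs instrumentation and trend data, but lets the part run to the limit of its service life.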

What is Total Productive Maintenance (TPM)?

Total Productive Maintenance (TPM) is a maintenance program, which involves a newly defined concept for maintaining plants and equipment. The goal of the TPM program is to markedly increase production while, at the same time, increasing employee morale and job satisfaction. TPM brings maintenance into focus as a necessary and vitally important part of the business. It is no longer regarded as a non-profit activity. Downtime for maintenance is scheduled as a part of the manufacturing day and, in some cases, as an integral part of the manufacturing process. The goal is to hold emergency and unscheduled maintenance to a minimum. Each letter in the acronym of TPM is subtle yet critical.

  • Total implies a comprehensive look at all activities that relate to maintenance of equipment and the impact each has upon availability.
  • Productive relates to the end goal of the effort i.e. efficient production not merely efficient maintenance as is often mistakenly assumed.
  • Maintenance signifies the directional thrust of the program in ensuring reliable processes and maintaining production.

Operational availability has long been recognized as critical in many process-intensive industries. Oil-drilling and petroleum companies, airlines, and chemical process plants, for example, along with many other asset-intensive industries, simply cannot afford downtime. Each minute an oil well is down represents lost barrels of output and a tremendous amount of foregone revenue to the parent company. Airlines also cannot afford downtime, for obvious reasons of passenger safety as well as revenue. It is not surprising, therefore, that these industries are often the benchmarks in terms of operational availability, although this is often accomplished via extensive redundant systems as well as excellence in maintenance methods. Other companies, however, should also examine the benefits of TPM for additional reasons. First, TPM is critical as a precondition for many elements of lean manufacturing to flourish, and second, there are financial benefits as well.

There are four key points regarding TPM implementation.  These points are critical for long term success of the program. These points include a total life cycle approach, total pursuit of production efficiency, total participation, and a total systems approach.

  1. Total life cycle approach
    A total life cycle approach recognizes that, much like humans, equipment requires different levels of resources and types of attention during its life cycle. During production, start-up is when initial trouble is most likely to occur, and significant time is spent debugging equipment and learning to fix and maintain processes. This learning process starts long before the equipment ever reaches the production floor, by extensively researching previous processes, continuing what worked well, and improving weak points in the machine design. After machine installation, different maintenance techniques are employed in order to efficiently maintain production. As a last resort, breakdown maintenance (BM) is employed when all else fails, until the root cause is thoroughly identified and the problem can be prevented from recurring. During most of the equipment life cycle, time-, frequency-, or condition-based preventive maintenance (PM) methods are employed to stop problems before they occur. PM intervals and contents are adjusted as experience is gained about the equipment over the life cycle. Daily maintenance (DM) is practised by the operators of the equipment. Occasionally, equipment reliability problems arise that require the time and attention of the original equipment manufacturer or specialists to resolve. In these instances, involving changes to fixtures, jigs, tooling, etc., corrective maintenance (CM) is practised and fundamental improvements to the design of the process are implemented. Lastly, all processes are studied at length over the entire life cycle to see where time, spare parts, and money are being consumed. When future equipment is ordered, a list of required improvements is identified for the vendor and analyzed jointly in terms of maintenance prevention (MP) activities.
  2. The total pursuit of production efficiency
    The total pursuit of production efficiency relates to the goal of eliminating all of the aforementioned six types of production losses associated with a piece of equipment. Different situations and types of equipment require different improvement activities. For example, during the 1950s the primary source of production loss in a stamping department was the changeover process from one stamping die to another. Frequently this changeover from one die to the next might require anywhere from one to two shifts. Over time, however, by studying the changeover and identifying the waste in the process, teams were able to reduce this loss over a ten-year period to a few minutes at worst. In some cases, the changeover can now be done in seconds. Today, in other processes such as machining lines, the predominant equipment losses are machine breakdown time and minor stops, which are often hard to identify.
  3.  Total participation
    The total participation aspect of TPM is often much trumpeted by consultants and displayed in articles as a team-based event where a single piece of equipment is cleaned and checked from top to bottom to improve availability. The projects are noble and excellent learning activities. They should not be mistaken however as the primary way to implement participation.
  4. Total systems approach
    Like a chain composed of multiple links, the total strength of the system is only as good as the weakest link in the chain. Constant effort and management attention are placed upon improving the described aspects of the equipment life cycle, the pursuit of efficiency, and participation by all in accordance with their responsibilities. A total systems approach also means effectively linking and improving all support activities such as employee training and development, spare parts and documentation management, maintenance data collection and analysis, and feedback with equipment vendors.

TPM – History:

TPM is an innovative Japanese concept. The origin of TPM can be traced back to 1951, when preventive maintenance was introduced in Japan. However, the concept of preventive maintenance was taken from the USA. Nippondenso was the first company to introduce plant-wide preventive maintenance, in 1960. Preventive maintenance is the concept wherein operators produce goods using machines and the maintenance group is dedicated to the work of maintaining those machines. However, with the automation of Nippondenso, maintenance became a problem, as more maintenance personnel were required. So the management decided that the operators would carry out the routine maintenance of equipment. (This is autonomous maintenance, one of the features of TPM.) The maintenance group took up only essential maintenance works.
Thus Nippondenso, which already followed preventive maintenance, also added autonomous maintenance done by production operators. The maintenance crew took up equipment modification to improve reliability. The modifications were made or incorporated in new equipment. This led to maintenance prevention. Thus preventive maintenance, along with maintenance prevention and maintainability improvement, gave birth to productive maintenance. The aim of productive maintenance was to maximize plant and equipment effectiveness.
By then, Nippondenso had formed quality circles, involving the employees’ participation. Thus all employees took part in implementing productive maintenance. Based on these developments, Nippondenso was awarded the distinguished plant prize for developing and implementing TPM by the Japanese Institute of Plant Engineers (JIPE). Thus Nippondenso of the Toyota group became the first company to obtain the TPM certification.

TPM Targets:

  1. Obtain Minimum 90% OEE (Overall Equipment Effectiveness)
  2. Run the machines even during lunch. (Lunch is for operators and not for machines!)
  3. Operate in a manner, so that there are no customer complaints.
  4.  Reduce the manufacturing cost by 30%.
  5.  Achieve 100% success in delivering the goods as required by the customer.
  6.  Maintain an accident-free environment.
  7. Increase the suggestions from the workers/employees by 3 times.
  8. Develop multi-skilled and flexible workers.

Motives of TPM

  1. Adoption of the life cycle approach for improving the overall performance of production equipment.
  2.  Improving productivity by highly motivated workers, which is achieved by job enlargement.
  3.  The use of voluntary small group activities for identifying the cause of failure, possible plant and equipment modifications.

Uniqueness of TPM

The major difference between TPM and other concepts is that operators are also involved in the maintenance process. The concept of “I (production operators) operate, you (maintenance department) fix” is not followed.

TPM Objectives

  1. Achieve Zero Defects, Zero Breakdown and Zero accidents in all functional areas of the organization.
  2. Involve people in all levels of the organization.
  3. Form different teams to reduce defects and carry out self-maintenance.

Direct benefits of TPM

  1. Increase in productivity and OEE (Overall Equipment Efficiency)
  2. Reduction in customer complaints.
  3. Reduction in the manufacturing cost by 30%.
  4. Satisfying the customers’ needs by 100 % (Delivering the right quantity at the right time, in the required quality.)
  5.  Reduced accidents.

Indirect benefits of TPM

  1.  Higher confidence level among the employees.
  2.  A clean, neat and attractive workplace.
  3.  Favorable change in the attitude of the operators.
  4.  Achieve goals by working as a team.
  5.  Horizontal deployment of a new concept in all areas of the organization.
  6. Sharing knowledge and experience.
  7. The workers get a feeling of owning the machine.

TPM Basic Concepts And Structures:

TPM is defined as “Total Productive Manufacturing is a structured, equipment-centric continuous improvement process that strives to optimize production effectiveness by identifying and eliminating equipment and production efficiency losses throughout the production system life cycle through active team-based participation of employees across all levels of the operational hierarchy.” The key elements of TPM are:

  • Structured Continuous Improvement Process.
  • Optimized Equipment (Production) Effectiveness.
  • Team-based Improvement Activity.
  • Participation of employees across all levels of the operational hierarchy

One of the most significant elements of the structured TPM implementation process is that it is a consistent and repeatable methodology for continuous improvement.

OEE (Overall Equipment Effectiveness):

The basic measure associated with Total Productive Maintenance (TPM) is the OEE. This OEE highlights the actual “Hidden capacity” in an organization. OEE is not an exclusive measure of how well the maintenance department works. The design and installation of equipment as well as how it is operated and maintained affect the OEE. It measures both efficiency (doing things right) and effectiveness (doing the right things) with the equipment. It incorporates three basic indicators of equipment performance and reliability. Thus OEE is a function of the three factors mentioned below.

  1. Availability or uptime (downtime: planned and unplanned, tool change, tool service, job change etc.)
  2. Performance efficiency (actual vs. design capacity)
  3. Rate of quality output (Defects and rework)
Overall Equipment Effectiveness Model


Thus OEE = A x PE x Q
A – Availability of the machine. Availability is the proportion of time the machine is actually available out of the time it should be available:

Availability = Production time / Planned production time
Production time = Planned production time – Downtime
Gross available hours for production include 365 days per year, 24 hours per day, 7 days per week. However, this is an ideal condition. Planned downtime includes vacation, holidays, and not enough loads. Availability losses include equipment failures and changeovers indicating situations when the line is not running although it is expected to run.
PE – Performance Efficiency. The second category of OEE is performance. The formula can be expressed in this way:

Performance efficiency = Net production time / Production time

Net production time is the time during which the products are actually produced. Speed losses, small stops, idling, and empty positions in the line indicate that the line is running, but it is not providing the quantity it should.
Q – Refers to the quality rate, which is the percentage of good parts out of the total produced, sometimes called “yield”. Quality losses refer to the situation when the line is producing, but there are quality losses due to in-progress production and warm-up rejects. We can express a formula for quality like this:

Quality rate = Good parts / Total parts produced

A simple example of how OEE is calculated is shown below.

  • Running 70 percent of the time (in a 24-hour day)
  • Operating at 72 percent of design capacity (flow, cycles, units per hour)
  • Producing quality output 99 per cent of the time

When the three factors are considered together (70% availability x 72% efficiency x 99% quality), the result is an overall equipment effectiveness rating of 49.9 per cent.
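The calculation above can be reproduced in a few lines. The 24-hour day split into planned time and downtime is an illustrative assumption consistent with the 70 percent availability figure in the example.

```python
# Sketch of the OEE arithmetic from the example above; the shift split
# is an illustrative assumption matching the 70% availability figure.

planned_time = 24.0                      # planned production hours in a day
downtime = 7.2                           # hours lost, so the line runs 70%
production_time = planned_time - downtime

availability = production_time / planned_time  # 0.70
performance = 0.72                             # 72% of design capacity
quality = 0.99                                 # 99% good output

oee = availability * performance * quality
print(round(oee * 100, 1))  # 49.9
```

Because the three factors multiply, a line that looks respectable on each factor individually can still hide half its capacity, which is the “hidden capacity” the OEE metric is meant to expose.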

The Pillars of TPM

The principal activities of TPM are organized as ‘pillars’. Depending on the author, the naming and number of the pillars may differ slightly, however, the generally accepted model is based on Nakajima’s eight pillars

Focused Improvement Pillar (Kobetsu Kaizen)

Focused Improvement includes all activities that maximize the overall effectiveness of equipment, processes, and plants through the uncompromising elimination of losses and improvement of performance. Losses may be either a function loss (inability of equipment to execute a required function) or a function reduction (reduced capability without complete loss of a required function). The objective of Focused Improvement is for equipment to perform as well every day as it does on its best day. The fact is that machines do virtually 100 percent of the product manufacturing work. The only thing we people do, whether we’re operators, technicians, engineers, or managers, is tend to the needs of the machines in one way or another. The better our machines run, the more productive our shop floor, and the more successful our business. The driving concept behind Focused Improvement is Zero Losses. Maximizing equipment effectiveness requires the complete elimination of failures, defects, and other negative phenomena – in other words, the wastes and losses incurred in equipment operation.

Leflar identifies a critical TPM paradigm shift that is the core belief of Focused Improvement.

  • Old Paradigm – New equipment is the best it will ever be.
  • New Paradigm – New equipment is the worst it will ever be.

“The more we operate and maintain a piece of equipment, the more we learn about it. We use this knowledge to continuously improve our maintenance plan and the productivity of the machine. We would only choose to replace a machine should its technology become obsolete, not because it has deteriorated into a poorly performing machine.”

Focused Improvement methodologies have led to short-term and long-term improvements in equipment capacity, equipment availability, and production cycle time. Focused Improvement has been, and still is, the primary methodology for productivity improvement. Overall Equipment Effectiveness (OEE) is the key metric of Focused Improvement. Focused Improvement is characterized by a drive for Zero Losses, meaning a continuous improvement effort to eliminate any effectiveness loss. Equipment losses may be either chronic (the recurring gap between the equipment’s actual effectiveness and its optimal value) or sporadic (the sudden or unusual variation or increase in efficiency loss beyond the typical and expected range).


The loss causal factors may be:

  • Single – a single causal factor for the effectiveness loss.
  • Multiple – two or more causal factors combined result in the effectiveness loss.
  • Complex – the interaction between two or more causal factors results in the effectiveness loss.

Focused Improvement includes three basic improvement activities. First, the equipment is restored to its optimal condition. Then equipment productivity loss modes (causal factors) are determined and eliminated. The learning that takes place during restoration and loss elimination then provides the TPM program with a definition of the optimal equipment condition that will be maintained (and improved) through the life of the equipment. Equipment restoration is a critical first step in Focused Improvement, yet maintaining basic equipment conditions is a maintenance practice that is ignored in most companies today. When the maintenance group is occupied with capacity-loss breakdowns and trying to keep the equipment running, basic tasks like cleaning, lubricating, adjusting, and tightening are neglected. Equipment failure is eliminated by exposing and eliminating hidden defects (fuguai). The critical steps of equipment restoration are to expose the hidden defects, deliberately interrupt equipment operation prior to breakdown, and resolve minor defects promptly. The first aim of attaching importance to minor defects is to cut off the synergistic effects due to the accumulation of minor defects. Even though a single minor defect may have a negligible impact on equipment performance, multiple minor defects may stimulate another factor, combine with another factor, or cause chain reactions with other factors. The elimination of minor defects should be one of the highest priorities of continuous improvement. It is important to realize that even in large equipment units or large-scale production lines, overall improvement comes as an accumulation of improvements designed to eliminate slight defects. So instead of ignoring them, factories should make slight defects their primary focus. Minor defects are the root cause of many equipment failures and must be completely eliminated from all equipment. Machines with minor defects will always find new ways to fail.
Minor or hidden defects result from a number of causal factors such as:

  • Physical Reasons.
    • Contamination (dust, dirt, chemical leaks, etc.).
    • Not visible to the operator.
    • Excessive safety covers.
    • Equipment not designed for ease of inspection.
  • Operator Reasons.
    • Importance of visible defects not understood.
    • Visible defects not recognized.

Tracking OEE provides a relative monitor of equipment productivity and of the impact of improvement efforts. Understanding efficiency losses drives the improvement effort. Typically, productivity losses are determined through analysis of equipment and production performance histories. The impact of productivity losses should be analyzed from two perspectives:

  1. The frequency of loss (the number of occurrences during the time period),
  2. The impact of the loss (the number of lost hours, lost revenue, cost, etc.).
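
The OEE figure referred to above is conventionally computed as Availability × Performance × Quality, each factor derived from the loss data. A minimal sketch, in which the function name and the sample shift figures are illustrative assumptions:

```python
def oee(planned_minutes, downtime_minutes, ideal_cycle_minutes,
        total_count, good_count):
    """Compute Overall Equipment Effectiveness as a 0.0-1.0 fraction."""
    run_time = planned_minutes - downtime_minutes                 # availability loss
    availability = run_time / planned_minutes
    performance = (ideal_cycle_minutes * total_count) / run_time  # speed loss
    quality = good_count / total_count                            # defect loss
    return availability * performance * quality

# Example: a 480-minute shift with 60 minutes of downtime, a 1-minute
# ideal cycle, 350 units produced, 330 of them good.
print(round(oee(480, 60, 1.0, 350, 330), 4))  # 0.6875
```

Decomposing OEE this way ties each factor back to a loss perspective: availability captures downtime losses, performance captures speed losses, and quality captures defect losses.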

Companies differ in their approaches to systematic improvement, but all incorporate roughly the same basic elements: planning, implementing, and checking results. A number of tools are commonly used to analyze productivity losses in the Focused Improvement pillar.

  • Pareto Charts.
  • 5-Why Analysis.
  • Fishbone Diagrams.
  • P-M Analysis.
  • Fault Tree Analysis (FTA).
  • Failure Mode and Effects Analysis (FMEA).
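
The Pareto analysis listed first can be sketched in a few lines: losses are ranked by impact, and the cumulative share of the total identifies the ‘vital few’ loss modes to attack first. The loss names and figures below are illustrative assumptions:

```python
def pareto(losses):
    """Rank loss modes by impact and add the cumulative share of total impact."""
    ranked = sorted(losses.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(losses.values())
    running = 0.0
    table = []
    for cause, impact in ranked:
        running += impact
        table.append((cause, impact, round(100 * running / total, 1)))
    return table

# Example: lost hours per loss mode over one month.
losses = {"breakdowns": 42, "setup": 18, "minor stops": 25, "defects": 5}
for cause, hours, cum_pct in pareto(losses):
    print(f"{cause:12s} {hours:3d} h  {cum_pct:5.1f}%")
```

In this sample, breakdowns and minor stops together account for roughly three-quarters of the lost hours, which is exactly the prioritization a Pareto chart makes visible.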

It is important to note that Focused Improvement and equipment restoration are not one-time activities. Usage results in wear and potential deterioration. Restoring equipment from normal wear is a process that continues for the entire life of the equipment.

Autonomous Maintenance Pillar (Jishu Hozen):

Autonomous maintenance is the process by which equipment operators accept and share responsibility (with maintenance) for the performance and health of their equipment. The driving concept of Autonomous Maintenance (AM) is the creation of ‘expert equipment operators’ for the purpose of ‘protecting their own equipment’. The paradigm shift that AM addresses is a transition in the operator’s perception from ‘I run the equipment, Maintenance fixes it’ to ‘I own the performance of this equipment’. In this environment, the first and greatest requirement for operators is the ability to ‘detect abnormalities’ with regard to quality or equipment, based on a feeling that ‘there is something wrong’. Autonomous Maintenance is closely linked with Focused Improvement in that both TPM pillars support equipment restoration and sustaining basic equipment conditions. Through autonomous activities – in which the operator is involved in daily inspection and cleaning of his or her equipment – companies discover the most important asset in achieving continuous improvement: their people. Autonomous Maintenance has two aims:

  1. To foster the development and knowledge of the equipment operators, and
  2. To establish an orderly shop floor, where the operator may detect departure from optimal conditions easily.

Autonomous Maintenance offers a significant departure from Taylorism, where operators are required to repeat simple structured work tasks with little understanding of or knowledge about the equipment they run or the products they manufacture. Autonomous Maintenance involves the participation of each and every operator, each maintaining his or her own equipment and conducting activities to keep it in proper condition and running correctly. It is the most basic of the eight pillars of TPM: if autonomous maintenance activities are insufficient, the expected results will not materialize even if the other pillars of TPM are upheld. Autonomous Maintenance empowers (and requires) equipment operators to become knowledgeable managers of their production activities, able to:

  • Detect signs of productivity losses.
  • Discover indications of abnormalities (fuguai).
  • Act on those discoveries.

JIPM (Japan Institute of Plant Maintenance) describes the critical operator Autonomous Maintenance skills as:

  • Ability to discover abnormalities.
  • Ability to correct abnormalities and restore equipment functioning.
  • Ability to set optimal equipment conditions.
  • Ability to maintain optimal conditions.

The operator skill levels required to support Autonomous Maintenance can be defined as:

Level 1

Recognize deterioration and improve equipment to prevent it.

  • Watch for and discover abnormalities in equipment operation and components.
  • Understand the importance of proper lubrication and lubrication methods.
  • Understand the importance of cleaning (inspection) and proper cleaning methods.
  • Understand the impact of contamination and be able to make localized improvements.

Level 2

Understand the equipment structure and functions.

  • Understand what to look for when checking mechanisms for normal operation.
  • Clean and inspect to maintain equipment performance.
  • Understand the criteria for judging abnormalities.
  • Understand the relationship between specific causes and specific abnormalities.
  • Confidently judge when equipment needs to be shut off.
  • Some ability to perform breakdown diagnosis.

Level 3

Understand the causes of equipment-induced quality defects.

  • Physically analyze problem-related phenomena.
  • Understand the relationship between the characteristics of quality and the equipment.
  • Understand tolerance ranges for static and dynamic precision and how to measure such precision.
  • Understand the causal factors behind defects.

Level 4

Perform routine repair on equipment.

  • Be able to replace parts.
  • Understand the life expectancy of parts.
  • Be able to deduce the causes of breakdown.

The specific goals of Autonomous Maintenance include:

  • Prevent equipment deterioration through correct operation and daily inspections.
  • Bring equipment to its ideal state through restoration and proper management.
  • Establish the basic conditions needed to keep equipment well maintained.

Four significant elements of the Autonomous Maintenance effort are:

  1. Initial Clean,
  2. 5-S,
  3. Manager’s Model and Pilot Teams, and
  4. Visual Controls and One Point Lessons.
  1. Initial Clean:

    Cleaning equipment is typically the first phase in Autonomous Maintenance. Known as the Initial Clean within the AM program, cleaning really means inspection of the equipment: the philosophy is that in the process of cleaning, the operator discovers fuguai. From the TPM perspective, cleaning is aimed at exposing and eliminating hidden defects. Before starting the Initial Clean, the team should receive training in equipment operation and safety precautions so that the Initial Clean can proceed at no risk to the equipment or the team members. The TPM Initial Clean is part of the early TPM training and is performed by a small team that includes the operator responsible for the area, maintenance personnel who work on the tool, the area production supervisor, and others with a vested interest in the performance of the production area. A qualified TPM trainer should act as a facilitator for the Initial Clean activity. Seven types of abnormalities should be the focus of the Initial Clean activity:

    • Minor Flaws.
      • Contamination – dust, dirt, powder, grease, rust, paint.
      • Damage – cracking, crushing, deformation, chipping.
      • Play – shaking, falling out, tilting, eccentricity, wear, distortion, corrosion.
      • Slackness – belts, chains.
      • Abnormal phenomena – unusual noise, overheating, vibration, strange smells, discolouration, incorrect pressure or other parameters.
      • Adhesion – blocking, hardening, accumulation of debris, peeling, malfunction.
    • Unfulfilled Basic Conditions.
      • Lubrication – insufficient, dirty, unidentified, unsuitable, leaking.
      • Lubricant supply – dirty, damaged, deformed inlets, faulty lubricant pipes.
      • Oil level gauges – dirty, damaged, leaking, no indication of correct level.
      • Tightening – nuts and bolts: slackness, missing, cross-threaded, too long, crushed, corroded, unsuitable washer, backwards.
    • Inaccessible Places.
      • Cleaning – machine construction, covers, layout, footholds, access space.
      • Checking – covers, construction, layout, instrument position and orientation, operating-range display.
      • Lubricating – position of lubricant inlet, construction, height, footholds, lubricant outlet, space.
      • Operation – machine layout, position of valves/switches/levers, footholds.
      • Adjustment – position of pressure gauges/thermometers/flow meters/etc.
    • Contamination Sources.
      • Product – leaks, spills, spurts, scatter, overflow.
      • Raw materials – leaks, spills, spurts, scatter, overflow.
      • Lubricants – leaking, spilt, seeping.
      • Gases – leaking.
      • Liquids – leaking, spilt, spurting.
      • Scrap – flashes, cuttings, packaging materials, scrap/rework product.
      • Other – contaminants brought in by people/equipment.
    • Quality Defect Sources.
      • Foreign matter – inclusion, infiltration, entrainment.
      • Shock – dropping, jolting, collision, vibration.
      • Moisture – control (too little/too much), infiltration, defective elimination.
      • Filtration – abnormalities in filter mechanisms.
    • Unnecessary and Non-Urgent Items.
      • Machinery – excessive or unused.
      • Piping – pipes, hoses, ducts, valves, etc.
      • Measurement instruments – temperature, pressure, vacuum, etc.
      • Electrical equipment – wiring, switches, plugs, etc.
      • Jigs and tooling – general tools, jigs, moulds, dies, frames, etc.
      • Spare parts – equipment spares, process spares, etc.
      • Repairs in progress – components and maintenance tooling.
    • Unsafe Places.
      • Floors – uneven, projections, cracking, peeling, wear.
      • Steps – too steep, irregular, peeling, corrosion, missing handrails.
      • Lights – dim, out of position, dirty, broken covers.
      • Rotating machinery – displaced, broken/missing covers, no emergency stops.
      • Lifting gear – hooks, brakes, cranes, hoists.
      • Others – special/dangerous substances, danger signs, protective clothing/gear.

    The TPM jingle associated with the Initial Clean – ‘cleaning is inspection’ – summarizes the driving concept.

    The purpose of the Initial Clean is threefold.

    1. Small Work Groups (also known as Small Group Activity – SGA) join together to accomplish a common goal: the cleaning of a particular piece of equipment or area.
    2. Promote a better understanding of, and familiarity with, the equipment or process area.
    3. Uncover hidden defects that, when corrected, have a positive effect on equipment performance.
  2. 5-S:

    For 5-S, please see my post on 5-S at https://isoconsultantkuwait.com/5S

  3. Manager’s Model and Pilot Teams:

    A common approach to proliferating Autonomous Maintenance is through the Manager’s Model and Pilot Teams. These develop individual Autonomous Maintenance skills, train leaders for Autonomous Maintenance teams, demonstrate the effectiveness of Autonomous Maintenance implementation, and refine the Autonomous Maintenance implementation process.

    The objectives for the Manager’s Model are:

    1. Change employee attitudes (foster positive attitudes) about TPM.
    2. Demonstrate the power of TPM implementation.
    3. Prove and improve the TPM implementation process.
    4. Show the results of effective teamwork.
    5. Test the water – experiment with TPM methodologies.
    6. Identify and address initial barriers to TPM implementation.
    7. Build local TPM policies and procedures.
    8. Plan further TPM rollout and supporting infrastructure.
    9. Take academic TPM and turn it into results.
    10. Customize TPM activities to fit the organization.
    11. Prove that TPM can be implemented successfully.
    12. Develop and provide tools, procedures, and infrastructure for further TPM activity.

    Continuous learning is the heart of continuous improvement. Machines do only what people make them do – right or wrong – and can only perform better if people acquire new knowledge and skills regarding equipment care. The proliferation of Autonomous Maintenance can be viewed as a series of cascading activities starting with the Manager’s Model.

The key to the establishment and development of the basic TPM plan is ensuring that the plan’s priorities and activities are supported by the top management who drive it forward. The most important point is how well the top and middle managers recognize the necessity for, and future value of, TPM activities. During the Manager’s Model, the site management team engages in an Autonomous Maintenance project. Managers trained during the Manager’s Model become the leaders of the subsequent Autonomous Maintenance Pilot Teams that continue Autonomous Maintenance proliferation in specific work areas. Depending on the size of the operation, there may be a number of Pilot Teams operating within a work area. Many times a company will embark on a TPM journey only to have it fail, because it was not supported at a high enough organizational level or because management failed to follow the Manager’s Model of experiential, top-down management involvement and participation. Likewise, the Pilot Teams spawn work-area Autonomous Maintenance teams and provide training and experience for the leaders of those teams. Candidate equipment for Manager’s Model and Pilot Team Autonomous Maintenance deployment should be selected with the following criteria in mind:

    • The equipment and the results of the AM activity are visible to the employees.
    • There is a high probability that AM activity will improve the performance of the equipment and the improvement will be meaningful to the operation.
    • Improving equipment performance through AM activity presents sufficient challenge to validate the Autonomous Maintenance improvement process.

4. Visual Controls:

Visual controls can be defined as visual or automated methods that indicate deviation from optimal conditions, indicate what to do next, display critical performance metrics, or control the movement and/or location of product or operation supplies. Visual controls present to the manufacturing operator:

  • WHAT the user needs to know.
  • WHEN the user needs to know it.
  • WHERE the user needs to see it.
  • In a format that is CLEARLY UNDERSTOOD by the user.

Visual controls are varied and may be specific to a particular production environment. Some examples of visual controls include the following.

  • Graphic Visual Controls.
    • Gauges and meters.
    • Kanban systems.
    • Slip marks.
    • Labels.
    • Storage or location identification.
    • Color-coding.
  • Audio Visual Controls.
    • Alarms (sirens, buzzers, etc.).
    • Verbal (commands, warnings, etc.).
  • Automated Visual Controls.
    • Closed-loop automation (detect and respond).

Activity boards are a specific type of visual control that is commonly utilized in TPM. JIPM refers to activity boards as a guide to action: they present the TPM team with “a visual guide to its activities that makes the improvement activities so clear that anyone can immediately understand them”. JIPM suggests that the activity board include the following components.

  1. The team name, team members, and team roles (pictures).
  2. Company policy and/or vision.
  3. Ongoing results from team activities (charted by month).
  4. The improvement theme addressed by the team activity. The current problems being solved.
  5. The current situation and the causes.
  6. Actions to address the causes and the effects of specific actions (annotated graphs where appropriate).
  7. Improvement targets.
  8. Remaining problems or issues for the team.
  9. Future planned actions.

Activity boards, used as a visual control for Autonomous Maintenance, provide the following functions.

  • A visual guide to team improvement activities.
  • Scorecard for improvement activity goals and activity effectiveness.
  • Translate and present the company vision to employees.
  • Encourage, support, and motivate the team members.
  • Share learning between improvement teams.
  • Celebrate team successes.

Activity boards are posted so that employees can easily access them. They are typically located in the work area or in common areas where employees meet.

Another common visual control tool used in Autonomous Maintenance is the One Point Lesson. A one-point lesson is a 5-to-10-minute self-study lesson drawn up by team members and covering a single aspect of equipment or machine structure, functioning, or method of inspection. In many cases, sufficient time cannot be secured to educate operators all at once, and operators cannot acquire such learning unless it is repeated through daily practice. Study during daily work – such as during morning meetings or other times – is therefore highly effective, and one-point lessons are a learning method frequently used during ‘Jishu-Hozen’ (Autonomous Maintenance) activities. One-point lessons are:

  • Tools to convey information related to equipment operation or maintenance knowledge and skills.
  • Designed to enhance knowledge and skills in a short period of time (5-10 minutes) at the time they are needed.
  • A tool to upgrade the proficiency of the entire team.

The basic principle is for individual members to personally think, study, and prepare a sheet [one-point lesson] with originality, to explain its content to all the other circle members, to hold free discussions on the spot, and to make the issue clearer and surer. One-point lessons are one of the most powerful tools for transferring skills. The teaching technique helps people learn a specific skill or concept in a short period of time through the extensive use of visual images. The skill being taught is typically presented, demonstrated, discussed, reinforced, practised, and documented in thirty minutes or less. One-point lessons are especially effective in transferring the technical skills required for a production operator to assume minor maintenance responsibilities. Some key concepts of the one-point lesson are:

  • The OPL is visual in nature. Pictures, charts, and graphics are emphasized rather than words.
  • The OPL discusses a single topic or action being shared.
  • The OPL is developed and researched by the employee doing the work to share learning with other employees doing the work.
  • OPLs are presented by the creating employee at the workstation or during team meetings.

The significant themes for the effective development and use of one-point lessons are:

  1. One-point lessons contain a single theme to be learned.
  2. The information being shared should fit on one page.
  3. OPLs contain more visual information than text.
  4. Any text should be straightforward, easy to understand, and to the point.
  5. When delivering the OPL, explain the need for the knowledge (what problem is being solved).
  6. Design OPLs to be read and understood by the intended audience in 5-10 minutes.
  7. Those who learn the OPLs continue to teach others.
  8. OPLs are delivered at the workstation.
  9. OPLs are retained for reference.

One-point lessons can share information on basic knowledge (fill in knowledge gaps and ensure people have the knowledge needed for daily production), examples of problems (communicate knowledge or skills needed to prevent and resolve problems), or discussion of improvements to equipment or methods (communicate how to prevent or correct equipment abnormalities). After delivery, the one-point lessons become part of the operator training documentation. One-point lessons can also be included as attachments to equipment operating or maintenance specifications.

Planned Maintenance Pillar (PM):

The objective of Planned Maintenance is to establish and maintain optimal equipment and process conditions. Devising a planned maintenance system means raising output (no failures, no defects) and improving the quality of maintenance technicians by increasing plant availability (machine availability). Implementing these activities efficiently can reduce input to maintenance activities and build a fluid integrated system, which includes:

  • Regular preventive maintenance to stop failures (Periodic maintenance, predictive maintenance).
  • Corrective maintenance and daily MP (maintenance prevention) to lower the risk of failure.
  • Breakdown maintenance to restore machines to working order as soon as possible after failure.
  • Guidance and assistance in ‘Jishu-Hozen’ (Autonomous Maintenance).

Like Focused Improvement, Planned Maintenance supports the concept of zero failures: planned maintenance activities put a priority on the realization of zero failures. The aim of TPM activities is to reinforce corporate structures by eliminating all losses through the attainment of zero defects, zero failures, and zero accidents. Of these, the attainment of zero failures is of the greatest significance, because failures directly lead to defective products and a lower equipment operation ratio, which in turn becomes a major factor for accidents.

  1. Breakdown Maintenance (BM): Breakdown Maintenance refers to maintenance activity where repair is performed following equipment failure/stoppage or upon a hazardous decline in equipment performance. TPM strives for zero equipment failures, and thus considers any event that requires breakdown maintenance to be a continuous improvement opportunity.
  2. Time-Based Maintenance: Time-Based Maintenance, also known as Periodic Maintenance, refers to preventive maintenance activity that is scheduled based on an interval of time (for instance daily, weekly, monthly, etc.). Preventive maintenance keeps equipment functioning by controlling equipment components, assemblies, subassemblies, accessories, attachments, and so on. It also maintains the performance of structural materials and prevents corrosion, fatigue, and other forms of deterioration from weakening them.
  3. Usage-Based Maintenance: Usage-Based Maintenance refers to preventive maintenance activity that is scheduled based on some measure of equipment usage (for example number of units processed, number of production cycles, operating hours, etc.). Usage-Based Maintenance is significantly different from Time-Based Maintenance in that it is scheduled based on the stress and deterioration that production activity places on equipment rather than just a period of time. Since equipment may run different levels of production from one time period to another, Usage-Based Maintenance allows preventive maintenance to be aligned with the actual workload placed on the equipment.
  4. Condition-Based Maintenance: Condition-Based Maintenance is a form of preventive maintenance that is scheduled by actual variation or degradation that is measured on the equipment. Condition-Based Maintenance expands on the concept of Usage-Based Maintenance by scheduling maintenance based on observed (or measured) wear, variation, or degradation caused by the stress of production on equipment. Examples of monitored equipment parameters include vibration analysis, ultrasonic inspection, wear particle analysis, infrared thermography, video imaging, water quality analysis, motor-condition analysis, jigs/fixtures/test gauges, and continuous condition monitoring. To execute Condition-Based Maintenance, the user must determine observation points or parameters to be measured that accurately predict impending loss of functionality for equipment. Observations and measurements are taken during scheduled inspection cycles. Visual controls play a role in Condition-Based Maintenance by providing graphic indications for out-of-specification measurements or conditions.
    Two types of equipment degradation should be considered when developing the site Planned Maintenance TPM pillar.

    1. Graceful Deterioration: Degradation is gradual and the thresholds of acceptable performance can be learned and failures projected within scheduled inspection cycles. Since the deterioration progresses slowly, the pre-failure degradation is identifiable within the scheduled Condition-Based Maintenance inspection cycles.
    2. Non-graceful Deterioration: Deterioration progresses rapidly (from normal measurement to failure in less than the inspection cycle) and may not be detected within the inspection cycle of Condition-Based Maintenance. Non-graceful deterioration may still be learned, which allows the life expectancy of the component or function to be projected. In this case, Time-Based or Usage-Based preventive maintenance scheduling will be effective.
  5. Predictive Maintenance: Predictive Maintenance takes Condition-Based Maintenance to the next level by providing real-time monitors for equipment parameters (for example voltages, currents, clearances, flows, etc.). The objective of predictive maintenance is to prevent the function of equipment from stopping. This is done by monitoring the function or loss of performance of the parts and units of which equipment is composed, to maintain the normal operation. Predictive Maintenance can be considered the ‘crystal ball’ of Planned Maintenance. Predictive Maintenance measures physical parameters against a known engineering limit to detect, analyze, and correct equipment problems before capacity reductions or losses occur. The key to the predictive method is finding the physical parameter that will trend the failure of the equipment. Preventive maintenance is then scheduled when a monitored parameter is measured out-of-specification. The flow of predictive maintenance is divided into three broad elements,
    1. Establishment of diagnostic technologies (monitoring techniques),
    2. Diagnosis (comparing actual to target readings), and
    3. Maintenance action (responding to variation).

    Where Condition-Based Maintenance occurs as the result of scheduled inspections, Predictive Maintenance identifies variation or degradation as it occurs and initiates maintenance activity.

  6. Closed-Loop Automation: Simple Closed-Loop Automation describes an advanced automation capability in which equipment performance variation or degradation is monitored in real time and automated corrective input is made to the equipment (when possible within acceptable performance conditions) to adjust for the variation or degradation and continue normal in-specification processing.
    Advanced Closed-Loop Automation looks beyond equipment performance and monitors production flow as well, including the following functionality:

    1. Sense changes.
    2. Execute real-time decision logic acting on all data available to factory automation.
      1. Work in Progress (WIP).
      2. Maintenance Repair Operations (MRO).
      3. Production inventory.
      4. Resource capacity.
    3. Issue work directives according to enterprise goals.
    4. Coordinate equipment and material processing.
    5. Continuously monitor and report the status of equipment, material, and other factory resources.
  7. Corrective Maintenance: Corrective Maintenance is planned maintenance that makes permanent continuous improvement changes (versus repair activity) to equipment. Within the TPM framework, identification of desirable corrective action activity occurs within the Focused Improvement, Autonomous Maintenance, and Planned Maintenance TPM pillar activity. Corrective Maintenance may reduce/eliminate failure modes, improve variation/degradation identification (visual controls), or simplify scheduled or unscheduled maintenance activity. The key to effective Planned Maintenance is to have a PM plan for every tool. The PM plan is based on the history and analysis of failure modes to determine preventive practices. The PM plan consists of five elements.
    1. A set of checklists for PM execution.
    2. A schedule for every PM cycle.
    3. Specifications and part numbers for every checklist item.
    4. Procedures for every checklist item.
    5. Maintenance and parts log (equipment maintenance history) for every machine.

    The PM plan is then executed with precision, meaning that it is implemented 100% of the time, completed 100% as specified, and carried out without variation by knowledgeable people. The PM plan is continually improved to make it easier, faster, and better. Equipment failures suggest the need for further improvement of the PM plan. To this end, two questions must be answered in every equipment-failure post-mortem:
    1. Why did we not see the failure coming?
    2. Why did the PM plan not prevent the failure?
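
The diagnosis step shared by Condition-Based and Predictive Maintenance – finding a physical parameter that trends toward failure and projecting when it will cross its engineering limit – can be sketched with a simple linear trend. The readings, the alarm limit, and the linear model below are illustrative assumptions, not a prescribed method:

```python
def cycles_until_limit(readings, limit):
    """Fit a least-squares line through inspection readings and estimate
    how many more inspection cycles remain before the trend crosses the
    engineering limit. Returns None if the parameter is not trending up."""
    n = len(readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # no deterioration trend detected
    return (limit - readings[-1]) / slope

# Example: vibration velocity (mm/s) rising over four inspection cycles,
# checked against an assumed alarm limit of 2.6 mm/s.
remaining = cycles_until_limit([1.0, 1.2, 1.4, 1.6], limit=2.6)
print(f"schedule maintenance within {remaining:.1f} cycles")
```

This models graceful deterioration, where degradation is slow enough to be caught within scheduled inspection cycles; a real Predictive Maintenance system would monitor such parameters continuously, but the comparison of actual to target readings is the same.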

Maintenance Prevention Pillar (MP)

Maintenance Prevention refers to design activities carried out during the planning and construction of new equipment that impart to the equipment high degrees of reliability, maintainability, economy, operability, safety, and flexibility, while considering maintenance information and new technologies, thereby reducing maintenance expenses and deterioration losses. Maintenance Prevention is also known as Early Management, Initial Phase Management, or Initial Flow Control. The objective of MP is to minimize the Life Cycle Cost (LCC) of equipment.

In TPM, the concept of MP design is expanded to include design that aims not only at no breakdowns (reliability) and easy maintenance (maintainability) but also at prevention of all possible losses that may hamper production-system effectiveness, in pursuit of ultimate system improvement. To be specific, MP design should satisfy reliability, maintainability, ‘Jishu-Hozen’, operability, resource-saving, safety, and flexibility. Effective Maintenance Prevention supports the reduction of vertical start-up lead time by improving the initial reliability and reducing the variability of equipment and processes. In large part, MP improvements are based on learning from the existing equipment and processes within the Focused Improvement, Autonomous Maintenance, and Planned Maintenance TPM pillar activities. MP design activity minimizes future maintenance costs and deterioration losses of new equipment by taking into account (during planning and construction) maintenance data on current equipment and new technology, and by designing for high reliability, maintainability, economy, operability, and safety. Ideally, MP-designed equipment must not break down or produce non-conforming products… The MP design process improves equipment and process reliability by investigating weaknesses in existing equipment [and processes] and feeding the information back to the designers. One of the goals of MP design is to break free of an equipment-centred design mentality by adopting a human-machine system approach: in addition to equipment/process reliability and performance attributes, the systems approach also looks at the man-machine interface as it relates to operability, maintainability, and safety.

Quality Maintenance Pillar:

Quality maintenance, in a nutshell, is the establishment of conditions that will preclude the occurrence of defects and the control of such conditions to reduce defects to zero. Quality Maintenance is achieved by establishing conditions for ‘zero defects’, maintaining conditions within specified standards, inspecting and monitoring conditions to eliminate variation, and executing preventive actions in advance of defects or equipment/process failure. The key concept of Quality Maintenance is that it focuses on preventive action ‘before it happens’ (cause-oriented approach) rather than reactive measures ‘after it happens’ (results-oriented approach). Quality Maintenance, like Maintenance Prevention, builds on the fundamental learning and structures developed within the Focused Improvement, Autonomous Maintenance, Planned Maintenance, and Maintenance Prevention TPM pillars, and supports a key objective of TPM – ensuring that equipment and processes are so reliable that they always function properly.

Quality Maintenance occurs during equipment/process planning and design, production technology development, and manufacturing production and maintenance activity. The precondition for implementation of quality maintenance is to put the equipment, jigs, and tools for ensuring high quality in the manufacturing process, as well as processing conditions, human skills, and working methods, into their desired states. Pre-conditions for successful Quality Maintenance implementation include the abolishment of accelerated equipment deterioration, elimination of process problems, and the development of skilled and competent users.

Training and Education pillar:

The objective of Training and Education is to create and sustain skilled operators able to effectively execute the practices and methodologies established within the other TPM pillars. The Training and Education pillar establishes the human systems and structures needed to execute TPM. It focuses on establishing appropriate and effective training methods, creating the infrastructure for training, and proliferating the learning and knowledge of the other TPM pillars. Training and Education may be the most critical of all TPM pillars for sustaining the TPM program in the long term. A test of TPM success is to look at organizational learning: TPM is about continual learning. The aim is to have multi-skilled, revitalized employees whose morale is high and who are eager to come to work and perform all required functions effectively and independently. Education is given to operators to upgrade their skills. It is not sufficient to know only the “know-how”; they should also learn the “know-why”. Through experience, operators gain the know-how to overcome a problem, but they often do so without knowing the root cause of the problem or why a particular remedy works. Hence it becomes necessary to train them in the know-why. Employees should be trained to achieve the four phases of skill; the goal is to create a factory full of experts. The four phases of skill are:
Phase 1: Do not know.
Phase 2: Know the theory but cannot do.
Phase 3: Can do but cannot teach.
Phase 4: Can do and also teach.
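These four phases are commonly tracked per operator and per skill in a skill matrix, which makes training gaps visible. A minimal sketch of that idea (the operator names, skills, and helper function are illustrative assumptions, not from the source):

```python
# Hypothetical skill-matrix sketch: each operator is rated on each skill
# using the four TPM phases (1 = do not know ... 4 = can do and also teach).
SKILL_PHASES = {
    1: "Do not know",
    2: "Know the theory but cannot do",
    3: "Can do but cannot teach",
    4: "Can do and also teach",
}

def training_gaps(matrix, target_phase=4):
    """Return (operator, skill, phase) entries still below the target phase."""
    return [
        (op, skill, phase)
        for op, skills in matrix.items()
        for skill, phase in skills.items()
        if phase < target_phase
    ]

matrix = {
    "Operator A": {"lubrication": 4, "inspection": 2},
    "Operator B": {"lubrication": 3, "inspection": 4},
}

for op, skill, phase in training_gaps(matrix):
    print(f"{op}: train '{skill}' (currently phase {phase}: {SKILL_PHASES[phase]})")
```

Lowering `target_phase` to 3 would list only the operators who cannot yet perform a skill at all, which is one way to prioritize the training calendar.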

The objectives of the Training and Education pillar are to:

  1. Achieve and sustain zero downtime on critical machines due to a shortage of skilled manpower.
  2. Achieve and sustain zero losses due to lack of knowledge/skills/techniques.
  3. Aim for 100% participation in the suggestion scheme.

While conducting training and education, the focus should be on improving knowledge, skills, and techniques; creating a training environment for self-learning based on felt needs; making the training curriculum, tools, and assessments conducive to employee revitalization; and designing training to remove employee fatigue and make work enjoyable.

The steps in education and training activities are:

  1. Setting policies and priorities and checking the present status of education and training.
  2. Establishment of a training system for operation and maintenance skill up-gradation.
  3. Training the employees for upgrading the operation and maintenance skills.
  4. Preparation of training calendar.
  5. Kick-off of the system for training.
  6. Evaluation of activities and study of future approach.

Administrative TPM Pillar

Administrative TPM applies TPM activities to continuously improve the efficiency and effectiveness of logistic and administrative functions. These logistic and support functions may have a significant impact on the performance of manufacturing production operations. Consistent with the view of a ‘production system’ that includes not only manufacturing but also manufacturing support functions, TPM must embrace the entire company, including administrative and support departments. Manufacturing is not a stand-alone activity but is fully integrated with, and dependent on, its support activities. These departments increase their productivity by documenting administrative systems and reducing waste and loss. They can help raise production-system effectiveness by improving every type of organized activity that supports production. Like equipment effectiveness improvement, Administrative TPM focuses on identifying and eliminating effectiveness losses in administrative activities. Implementing Administrative TPM is similar to equipment- and process-related TPM continuous improvement. The methodologies used in Focused Improvement, Autonomous Maintenance, Planned Maintenance, Maintenance Prevention, and Quality Maintenance are applied to administrative and support tasks and activities. Training and Education, of course, supports Administrative TPM as well.

Safety and Environmental Pillar

Although it is the last pillar of TPM, the Safety and Environmental pillar is equally, if not more, important than the seven others. No TPM program is meaningful without a strict focus on safety and environmental concerns. Ensuring equipment reliability, preventing human error, and eliminating accidents and pollution are the key tenets of TPM. Examples of how TPM improves safety and environmental protection are shown below.

  • Faulty or unreliable equipment is a source of danger to the operator and the environment. The TPM objective of Zero-failure and Zero-defects directly supports Zero-accidents.
  • Autonomous Maintenance teaches equipment operators how to properly operate equipment and maintain a clean and organized workstation. 5S activity eliminates unsafe conditions in the work area.
  • TPM-trained operators have a better understanding of their equipment and processes and are able to quickly detect and resolve abnormalities that might result in unsafe conditions.
  • Operation of equipment by unqualified operators is eliminated through effective deployment of TPM.
  • Operators accept responsibility for safety and environmental protection at their workstations.
  • Safety and environmental protection standards are proliferated and enforced as part of the TPM Quality Maintenance pillar.

Implementing the TPM Safety and Environmental pillar focuses on identifying and eliminating safety and environmental incidents. According to the Heinrich Principle, for every 500,000 safety incidents there are 300 ‘near misses’, 29 injuries, and 1 death. Investigating industrial accidents, Heinrich found that 88% of accidents were caused by unsafe acts of people, 10% were the result of unsafe physical conditions, and 2% he considered ‘acts of God’.
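The 300 : 29 : 1 ratio quoted above can be used as a rough scaling rule, for example to argue that a rise in reported near misses forecasts a proportional rise in injuries. A hedged sketch (the function name and the sample figure of 600 near misses are illustrative assumptions, not from the source):

```python
# Heinrich's ratio as quoted above: 300 near misses : 29 injuries : 1 death.
RATIO = {"near_miss": 300, "injury": 29, "death": 1}

def project_from_near_misses(near_misses):
    """Scale the 300:29:1 ratio from an observed near-miss count (illustrative only)."""
    scale = near_misses / RATIO["near_miss"]
    return {outcome: count * scale for outcome, count in RATIO.items()}

# A doubling of near misses (600) implies roughly 58 injuries under the ratio.
print(project_from_near_misses(600))
```

This is only the arithmetic of the ratio; the TPM point is the converse: attacking the near-miss base (via Why-Why Analysis) shrinks the whole pyramid.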

TPM uses Why-Why Analysis to probe for the root causes (incidents in the Heinrich model) that result in safety or environmental near misses.

There are six phases that an operation passes through during an industrial accident.

  • Phase 1 – Normal operation, stable state.
  • Phase 2 – Signs of abnormality, the system becomes more and more disordered.
  • Phase 3 – Unsteady state, difficult to restore to normal.
  • Phase 4 – Obvious danger as a result of failure or abnormality. Damage and injury can still be contained and minimized.
  • Phase 5 – Injury and severe damage occur.
  • Phase 6 – Recovery after the situation is under control.

TPM practices, such as those listed below, allow quick operator intervention and prevent incidents from approaching Phase 3.

  1. Monitor equipment and processes and quickly correct abnormalities.
  2. Install and check safety equipment.
  3. Identify and eliminate hidden equipment abnormalities and defects.

Environmental safety is becoming an increasing point of focus for TPM implementation. Manufacturing management in the 21st century will not be effective if environmental issues are ignored; manufacturing management that does not take environmental issues into consideration will be rejected by society. One cause of environmental problems is that industries, academic institutions, and government agencies have specialized in the research, development, promotion, and diffusion of design technologies for producing more artificial products, with very little concern for keeping equipment in its most favourable condition after it is put into operation, or for the diagnostic techniques needed to maintain that condition. Environmental safety goes beyond simply eliminating accidents. In today’s manufacturing environment, environmental safety includes reduction of energy consumption, elimination of toxic waste, and reduction of raw material consumption. Ichikawa proposes that TPM address the following key environmental objectives within the Safety and Environmental pillar.

  1. Construct an Environmental Management System (EMS) that integrates environmental issues as a system. This objective is consistent with ISO 14001.
  2. Implement activities, through the TPM program, to reduce the environmental impact of manufacturing operations.
  3. Create systems to reduce the environmental impact of manufacturing product and process development.
  4. Enhance the environmental awareness and education of all employees.

Ichikawa emphasizes that the Environmental Management System is part and parcel of the work and that its implementation should be done through TPM. In concrete terms, this consists of environmental education and of product and equipment development that reduces environmental impact and gives consideration to environmental load; it is considered appropriate to develop these themes along the conventional TPM pillars.

Twelve-Step TPM Implementation Process

The twelve steps are grouped into four phases: Preparation, Introduction, Implementation, and Consolidation and Sustaining. Each step below lists its key points and actions.

TPM Implementation Phase: Preparation

TPM Step 1: Formally announce the decision to introduce TPM.

Key Points: Top management announcement of TPM introduction at a formal meeting and through the newsletter.

Actions:
  • Top management receives TPM overview training.
  • TPM case studies or pilot team results.
  • TPM readiness assessment.
  • Top management buy-in.
  • Top management commitment to TPM implementation.

TPM Step 2: Conduct TPM introductory education and publicity campaign.

Key Points:

  • Senior management group training.
  • Slide-show overview presentation for remaining employees.


Actions:

  • Management training.
  • TPM philosophy promotion to employees.
  • TPM Overview and management responsibility presentation to all management levels.
  • Presentation of TPM overview to all employees.

TPM Step 3: Create a TPM promotion organization.

Key Points:

  • TPM Steering Committee and Specialist subcommittees.
  • TPM Promotions Office.


Actions:

  • Create a TPM Steering Committee composed of top management representing all functions.
  • Identify and staff a TPM Promotion Office reporting to top management. Promotion Office to include a TPM Coordinator, TPM Facilitator(s) (1 per 12 teams), and a TPM content expert.
  • Identify TPM champion(s) and their responsibilities.
  • Determine mission and strategy.
  • Include TPM in the business plan.
  • Develop the TPM step-by-step plan.
  • Determine TPM education sourcing.
  • Establish a TPM budget.
  • Create a TPM pillar subcommittee (chairman).
  • Train the TPM trainer.
  • Pilot project training for supervisors and managers.
  • TPM facilitator training (include supervisors).

TPM Step 4: Establish basic TPM policies and goals.

Key Points: Set baselines and targets.

Actions:
  • Determine TPM initiative objectives.
  • Define TPM policies.
  • Define OEE methodology and loss category definitions.
  • Implement data collection system.
  • Create an OEE data reporting mechanism.
  • Acquire data from the current source of data.
  • Determine bottleneck (constraint) operations and equipment.
  • Determine pilot project tool(s).
  • Select sponsor(s) for pilot project(s).
  • Determine the TPM compensation, reward, and recognition system.
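Step 4 calls for defining an OEE methodology and its loss categories. OEE is conventionally computed as Availability × Performance × Quality. A minimal sketch of that calculation (the sample shift figures are invented for illustration):

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """OEE = Availability x Performance x Quality.

    Availability: run time as a fraction of planned production time.
    Performance: ideal production time for the pieces made vs. actual run time.
    Quality: good pieces as a fraction of total pieces.
    """
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Invented sample shift: 480 min planned, 60 min of downtime,
# 1.0 min ideal cycle time, 378 pieces produced, 360 of them good.
value = oee(480, 60, 1.0, 378, 360)
print(f"OEE = {value:.1%}")
```

Each factor maps to a loss category (downtime losses, speed losses, quality losses), which is why the data collection system in this step must record all three.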

TPM Step 5: Draft a master plan for implementing TPM.

Key Points: Plan the full journey from the preparation stage to the application for the TPM prize.


Actions:

  • Draft the master plan, from the preparation stage to the application for the TPM prize.
  • Create the TPM sustaining plan.
  • Define the basic skills required.
  • Training course development.
  • Create a timeline (3 to 5 years) for each planned TPM activity.

TPM Implementation Phase: Introduction

TPM Step 6: Kick off the TPM initiative.

Key Points: Formally launch TPM, presenting the master plan from the preparation stage through the application for the TPM prize.


Actions:

  • Top management presents the TPM policies, goals, and master plan to all employees.
  • Ensure long-term commitment of the management team.

TPM Implementation Phase: Implementation

TPM Step 7: Establish a system for improving production efficiency.

  1. Focused Improvement Pillar.
  2. Autonomous Maintenance Pillar.
  3. Planned Maintenance Pillar.
  4. Education and Training Pillar.

Key Points:

  • Conduct Focused Improvement activities.
  • Establish and deploy the Autonomous Maintenance program.
  • Implement the Planned Maintenance program.
  • Conduct operation and maintenance skill training.


Actions:

  • Team skills training.
  • Problem-solving skills training.
  • Communication skills training.
  • Business meeting skills training.
  • Project management skills training.
  • TPM process training.
  • TPM activity board training.
  • Establish cross-team communications.
  • Structure team communication to management.
  • OEE training.
  • Launch team projects.
  • Establish the TPM process audits.
  • Execute mid-project project progress reviews (progress, problems, plans, learning).
  • Establish and execute periodic team reports to management.
  • Establish a cost savings analysis (ROI) for team projects.
  • Identify, demonstrate, and communicate contribution to customer success.
  • Share success stories with other teams and management.
  • Establish end-of-project reviews.
  • Implement standard procedures and methodologies for Visual Controls and One Point Lessons.
  • Renew and repeat the cycle.

TPM Step 8: Establish and deploy Maintenance Prevention activities – Maintenance Prevention Pillar

Key Points: Develop optimal vertical startup for products, processes, and equipment.

Action: TPM team training.

TPM Step 9: Establish Quality Maintenance systems – Quality Maintenance Pillar

Key Points: Establish, maintain, and control conditions for zero failures, zero defects, zero accidents.

Action: TPM team training.

TPM Step 10: Create systems for eliminating efficiency losses in administrative and logistic functions – Administrative TPM Pillar

Key Points:

  • Increase production support efficiency.
  • Improve and streamline administrative and office functions.


Actions:

  • TPM team training.
  • Proliferate throughout the company.

TPM Step 11: Create systems for managing health, safety, and the environment – Safety and Environmental Pillar

Key Points: Create systems to ensure zero safety and environmental accidents.

Action: TPM team training.

TPM Implementation Phase: Consolidation and Sustaining

TPM Step 12: Sustain full TPM implementation and continually improve the TPM process.

Key Points:

  • Raise TPM team goals.
  • Establish ongoing audits.


Actions:

  • Review and raise the TPM team goals.
  • Audit the TPM process.


5S – Sort, Shine, Set in Order, Standardize, and Sustain

Visual Management

Visual management is a set of techniques that

  1.  expose waste so you can eliminate it and prevent it from recurring in the future,
  2. make your company’s operation standards known to all employees so they can easily follow them, and
  3. improve workplace efficiency through good workplace organization.

Creating an organized, efficient, cleaner workplace that has clear work processes and standards helps your company lower its costs. Also, employees’ job satisfaction improves when their work environment makes it easier for them to get the job done right. Implementing these techniques involves three steps:

  • Organizing your workplace by using a method known as the 5 S’s (sort, shine, set in order, standardize, and sustain);
  • Ensuring that all your required work standards and related information are displayed in the workplace.
  • Controlling all your workplace processes by exposing and stopping errors—and preventing them in the future.

Using visual management techniques enables your company to do the following:

  1. Improve the “first-time-through” quality of your products or services by creating an environment that:
    • Prevents most errors and defects before they occur.
    • Detects the errors and defects that do occur and enables rapid response and correction.
    • Establishes and maintains standards for zero errors, defects, and waste.
  2.  Improve workplace safety and employee health by:
    • Removing hazards.
    • Improving communication by sharing information openly throughout the company.
    • Creating compliance with all work standards, reporting deviations, and responding quickly to problems.
  3. Improve the overall efficiency of your workplace and equipment, enabling your organization to meet customer expectations.
  4. Lower your total costs.

You can effectively gain control over your company’s manufacturing or business processes by focusing on the following areas:

  • Value-added activities. These are activities that change the form or function of your product or service.
  • Information sharing. This is the distribution of the right information to the right people at the right time, in the most useful form possible.
  • Source inspections. The goal of these inspections is to discover the source of errors that cause defects in either your products or
    business processes.
  • Material quantities and flow. All work operations should result in the correct quantities of materials or process steps moving as required for downstream operations.
  • Health and safety. All work processes, facilities, and equipment design and procedures should contribute to the maintenance of a safe and healthy workplace.

It is most effective to focus on the areas listed above as they relate to six aspects of your production or business processes:

  1. The quality of incoming, in-process, and outgoing materials.
  2. Work processes and methods of operation.
  3.  Equipment, machines, and tools.
  4.  Storage, inventory, and supplies.
  5. Safety and safety training.
  6. Information sharing.

To gain control over your processes, you must understand the “three actuals”:

  • The actual place or location in which a process occurs.
  • The actual employees working in that location.
  • The actual process occurring in that location.

Mapping the process will help you understand all three actuals.

5S  Workplace Organization

Implementing 5S is a fundamental first step for any manufacturing company wishing to call itself world-class. The presence of a 5S program is indicative of the commitment of senior management to workplace organization, lean manufacturing, and the elimination of Muda (Japanese for waste). The 5S program mandates that resources be provided in the required location, and be available as needed to support work activities. The five Japanese “S” words for workplace organization are:

  • Seiri (proper arrangement)
  • Seiton (orderliness)
  • Seiso (cleanup)
  • Seiketsu (standardize)
  •  Shitsuke (personal discipline)

The translated English equivalents are:

  • Sort: Separate out all that is unneeded and eliminate it
  • Straighten: Put things in order, everything has a place
  • Scrub (or shine): Clean everything, make the workplace spotless
  • Standardize: Make cleaning and checking routine
  • Sustain: Commit to the previous 4 steps and improve on them

The 5S approach exemplifies a determination to organize the workplace, keep it neat and clean, establish standardized conditions, and maintain the discipline that is needed to do the job. Numerous modifications have been made to the 5S structure: it can be reduced to 4S, or extended to a 5S + 1S or 6S program, where the sixth S is safety. The 5S concept requires that discipline be installed and maintained. There is a story of a Japanese team’s initial site visit to a prospective supplier. Before allowing the supplier to unveil their grand presentation, the Japanese visitors insisted on a tour of the Gemba (the shop floor). After just a few minutes in the factory, the visitors knew that the plant was not committed to the highest level of manufacturing and terminated the visit. It is very easy to tell whether a plant is practising a 5S program. In day-to-day operations, it is possible to have some dirt around the plant, but the visual signs of a 5S-committed facility are obvious. Details of a 5S program are itemized below in a step-by-step approach.

Step 1: Sort (Organize):

    • Set up a schedule to target each area
    • Remove unnecessary items in the workplace
    • Red tag unneeded items, record everything that is thrown out
    • Keep repaired items that will be needed
    • Major housekeeping and cleaning is done by area
    • Inspect the facility for problems, breakages, rust, scratches and grime
    • List everything which needs repair
    • Deal with causes of filth and grime
    • Red tag grime areas and prioritize conditions for correction
    • Perform management reviews of this and other steps

Step 2: Straighten:

    • Have a place for everything and everything in its place to ensure neatness
    • Analyze the existing conditions for tooling, equipment, inventory and supplies
    • Decide where things go, and create a name and location for everything
    • Decide how things should be put away, including the exact locations
    • Use labels, tool outlines, and colour codes
    • Obey the rules. Determine everyday controls and out-of-stock conditions
    • Define who does the reordering and reduce inventories
    • Determine who has missing items or if they are lost
    • Use aisle markings, placement for dollies, forklift, boxes
    • Establish pallet zones for work in process (WIP)

Step 3: Scrub (Shine and Clean)

    • This is more than keeping things clean, it includes ways to keep things clean
    • Establish a commitment to be responsible for all working conditions
    • Clean everything in the workplace, including equipment
    • Perform root cause analysis and remedy machinery and equipment problems
    • Complete training on basics of equipment maintenance
    • Divide each area into zones and assign individual responsibilities
    • Rotate difficult or unpleasant jobs
    • Implement 3-minute, 5-minute and 10-minute 5S activities
    • Use inspection checklists and perform white glove inspections

Step 4: Standardize

    • Make 5S activities routine so that abnormal conditions show up
    • Determine the important points to manage and where to look
    • Maintain and monitor facilities to ensure a state of cleanliness
    • Make abnormal conditions obvious with visual controls
    • Set standards, determine necessary tools, and identify abnormalities
    • Determine inspection methods
    • Determine short-term countermeasures and long-term remedies
    • Use visual management tools such as colour-coding, markings and labels
    • Provide equipment markings, maps, and charts

Step 5: Sustain

    • Commit to the 4 previous steps and continually improve on them
    • Acquire self-discipline through the habit of repeating the 4 previous steps
    • Establish standards for each of the 5S steps
    • Establish and perform evaluations of each step

Management commitment will determine the control and self-discipline areas for an organization. A 5S program can be set up and operational within 5 to 6 months, but the effort to maintain world-class conditions must be continuous. A well-run 5S program will result in a factory that is in control.

Steps in implementing 5 S

  1. Getting started

    Before you begin to implement 5s techniques, make sure you do the following:

    • Elect an employee from each work team to lead the program and remove any barriers his or her team encounters along the way.
    • Train all involved employees about the 5s techniques outlined below.
    • Tell everyone in the areas of your plant or office that will be involved in the program. Also, give a “heads up” to other employees or departments that might be affected by it.
    • Create storage (“red tag”) areas for holding materials you will remove from work sites in your plant or building.
    • Create a location for supplies you will need as you progress through your visual management programs, such as tags, cleaning materials, paint, labels, masking tape, and sign materials.
    • Coordinate the program with your maintenance department and any other departments that you might need to call on for help.
    •  Make sure that all employees understand and follow your company’s safety regulations and procedures as they make changes.
  2.  Sort.

    Sort through the items in your work area, following the steps below. Your goal is to keep what is needed and remove everything else.

    • Reduce the number of items in your immediate work area to just what you actually need.
    •  Find appropriate locations for all these items, keeping in mind their size and weight, how frequently you use them, and how urgently you might need them.
    •  Find another storage area for all supplies that you need but do not use every day.
    •  Decide how you will prevent the accumulation of unnecessary items in the future.
    • Tape or tie red tags to all the items you remove from your work area. Place the items in a temporary “red-tag storage” area for five days. Either use the Sorting Criteria chart as shown below as a guide for disposing of items or develop your own criteria.
    •  After five days, move any item that you haven’t needed to a central red-tag storage area for another thirty days. You can then sort through all items stored there to see if they might be of any use and throw away everything else, remembering to follow your company policy. Use a logbook to track what you do with all red-tag items.
    • If employees disagree about what to do with some of the materials, try to resolve the conflict through discussion. They can also consult their managers about the materials’ value, current and potential use, and impact on workplace performance.
  3. Shine.

    Clean and “shine” your workplace by eliminating all forms of contamination, including dirt, dust, fluids, and other debris. Cleaning is also a good time to inspect your equipment to look for abnormal wear or conditions that might lead to equipment failure. Once your cleaning process is complete, find ways to eliminate all sources of contamination and to keep your workplace clean at all times. Keeping equipment clean and “shiny” should be a part of your maintenance process. Your company’s equipment maintenance training should teach the concepts of “cleaning as inspection” and “eliminating sources of contamination.” Remember that your workplace includes not just the plant floor, but your administrative, sales, purchasing, accounting, and engineering areas as well. You can clean these areas by archiving project drawings when they are completed and properly storing vendor catalogues and product information. Decide what methods (local or shared hard drives, floppy disks, or CDs) are the best for storing your electronic files.

  4. Set in order.

    During this step, you evaluate and improve the efficiency of your current workflow, the steps and motions employees take to perform
    their work tasks.

    1. Create a map of your workspace that shows where all the equipment and tools are currently located. Draw lines to show the steps that employees must take to perform their work tasks.
    2.  Use the map to identify wasted motion or congestion caused by excessive distances travelled, unnecessary movement, and improper placement of tools and materials.
    3. Draw a map of a more efficient workspace, showing the rearrangement of every item that needs to be moved.
    4. On your map, create location indicators for each item. These are markers that show where and how much material should be kept in a specific place. Once you create your new workspace, you can hang up location indicators within it. Then make a plan for relocating items that need to be moved so you can make your new, efficient workspace a reality. As you do this step, ask yourself the following questions:
      • Who will approve the plan?
      • Who will move the items?
      • Are there any rules, policies, or regulations that affect the location of these items? Will employees be able to adhere to these rules?
      • When is the best time to relocate these items?
      • Do we need any special equipment to move the items?

      As a team, brainstorm your ideas for new ways to lay out your workspace. If it is impractical or impossible to move an item the way you would like, redesign the rest of the workspace around this item’s location.

    5. Post the drawing of the new workplace layout in your area.
  5. Standardize.

    Make sure that team members from every work area follow the sort, shine, and set-in order steps. Share information among teams so that there is no confusion or errors regarding:

    • Locations
    • Delivery
    • Destinations
    • Quantities
    •  Schedules
    •  Downtime
    • Procedures and standards

    As you begin to use your newly organized workplace, have everyone write down their ideas for reducing clutter, eliminating unnecessary items, organizing, making cleaning easier, establishing standard procedures, and making it easier for employees to follow the rules. Once you have standardized your methods, make your standards known to everyone so that anything out of place or not in compliance with your procedure will be immediately noticed.

  6. Sustain.

    The gains you make during the above four steps are sustained when:

    • All employees are properly trained.
    • All employees use visual management techniques.
    • All managers are committed to the program’s success.
    • The workplace is well ordered and adheres to the new procedures all your employees have agreed upon.
    • Your new procedures become a habit for all employees.

    Reevaluate your workspace using the Sustain Evaluation Form (see the figure below) as needed. Encourage and recognize the achievement of all work areas that are able to sustain their visual management efforts. This helps your company to maintain a cycle of continuous improvement.
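The five-day local hold and thirty-day central hold described in the Sort step can be captured as simple red-tag logbook logic. A minimal sketch (the function name, field names, and dates are illustrative assumptions, not from the source):

```python
from datetime import date, timedelta

def disposition(tagged_on, today, location):
    """Apply the 5-day local hold and 30-day central hold for a red-tagged item."""
    age = (today - tagged_on).days
    if location == "local" and age >= 5:
        return "move to central red-tag storage"
    if location == "central" and age >= 30:
        return "sort: reuse or dispose per company policy"
    return "hold"

today = date(2024, 6, 1)
print(disposition(today - timedelta(days=6), today, "local"))    # past the 5-day local hold
print(disposition(today - timedelta(days=10), today, "central")) # still within the 30-day hold
```

Recording each tag's date and location in the logbook, as the Sort step suggests, is what makes a rule like this auditable.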

5S Implementation Guide

5S Implementation Checklist

The purpose of this checklist is to provide reliable steps to preparing for and performing 5S activities in the work area. Included in this checklist is a preferred sequence of events and corresponding “how-to” guides for each step.

Task and corresponding 5S guide:

Develop your implementation plan (guide: “Develop Implementation Plan”):

  • Create a 5S documentation system
  • Determine the pace of implementation
  • Draft “straw man” 5S Map
  • Determine “before 5S” photo logistics
  • Establish visible ways to communicate 5S activities
  • Coordinate and schedule services required from support organizations
  • Make a list of internal arrangements to be made
  • Draft timeline
  • Communicate your plan to upper management

Photograph the work area (guide: “Take Area Photograph”)
Educate workgroup (guide: 5S Overview – Lean Training Module)
Finalize 5S Map (guide: “Finalize 5S Map”)
Perform Work Area Evaluation (guide: “Perform Area Evaluation”)
Perform Sorting (guides: “Perform Tagging Technique”, “Conduct Sorting Auction”, “Prepare for Simplifying”)
Perform Simplifying (guides: “Using Labeling Technique”, “Using Outlining Technique”, “Using Shadow Board Technique”)
Perform Sweeping (guide: “Perform Sweeping”)
Perform Standardizing (guide: “Perform Standardizing”)
Perform Self-Discipline (guides: “Perform Team Self-Discipline”, “Perform Individual Self-Discipline”)
Measure Results (guide: “Measure Results”)

Develop a 5S Implementation Plan

Purpose: To help work-group leaders plan for 5S implementation in their areas.
When: Prior to beginning implementation.

Materials required:

  • Paper
  • Pen or pencil

Steps to implement 5S

1. Create a 5S documentation system to organize and store pertinent data.

  • Determine the type of file.
  • Determine the file location.
  • Inform workgroup of location.


  • The purpose is to have one location, accessible to all, for organizing miscellaneous 5S materials.
  • Pertinent data may include, but is not limited to:
    • 5S map
    • Area check sheets
    • Historical Action Item Lists
    • 5S agreements
    • Measures, goals and progress against them
  • Examples: 3-ring binder, file folder, etc.
  • The location should be accessible to team members.

2. Determine the pace of implementation


  • The purpose is to help you understand the impacts of implementation, determine the pace that best supports your needs, and clarify expectations.
  • Consider the following questions:
    • How much time can we allocate? (Be innovative and realistic.)
    • Where will we begin – which area, group, etc.? (Be sensitive to personal work areas versus common work areas.)
    • How many shifts are involved?
    • How will we coordinate cross-shift activities?
    • What will make sense for us? (Pilot small area; use lessons learned to proceed?)
If you allocate a full day for implementation, then: Schedule an entire day to conduct a 5S overview and initiate 5S activities. Continue on a weekly or monthly basis until 5S methods become the norm.

If you desire implementation through team meetings, then: Schedule one team meeting to conduct the 5S overview, then proceed with implementation in subsequent team meetings until 5S methods become the norm.

If you desire an all-shift kick-off implementation meeting, then:
  • Schedule a 5S overview meeting for employees from all shifts.
  • Deliver the 5S overview.
  • Finalize the 5S map.
  • Conduct the work area evaluation.
  • Proceed with implementation in shifts until 5S methods become the norm.

3. Draft “straw man” 5S map.

a. Obtain approved layout of the entire work area.  Verify all relevant dimensions.
b. Coordinate area boundaries.
c. Divide the map into workable sections.
d. Determine the number of people per team (1 team per section).


  • The purpose is to take a proposed map to the workgroup for finalizing after you have coordinated the external boundaries with adjacent organizations.  The map will be used throughout 5S activities to clarify boundaries, assign responsibilities, and divide tasks into bite-size pieces.
  • This 5S map will be finalized with the workgroup following the 5S Overview.
If the 2nd-level manager has prepared a 5S map, then: Use the boundaries identified.
If a 5S map has not been prepared, then: Obtain an area map from Facilities or draw a map (get boundary approval from your 2nd-level manager).
  • Communicate with organizations adjacent to your assigned areas to ensure agreement on boundaries.
  • Define who is responsible for common aisle-ways, stairways, etc.
  • Sizes of sections should require equal amounts of effort to organize and maintain.
  • Label each section (a,b,c,d, etc.)
  • Be sensitive to ownership of files and personal workspaces in office areas, when considering boundaries and team assignments.
  • Optimum team size: 4-5 people.
    Assess the size of your workgroup. Divide the total number of employees by the number of sections of the 5S map.
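As a rough sketch, the team-sizing arithmetic above might look like this (the function names and the example numbers are illustrative, not part of the 5S guide):

```python
def average_team_size(total_employees: int, num_sections: int) -> float:
    """One team per map section: average people per team."""
    return total_employees / num_sections

def within_optimum(team_size: float, low: int = 4, high: int = 5) -> bool:
    """Check against the guide's suggested optimum of 4-5 people per team."""
    return low <= team_size <= high

# Example: a 22-person workgroup and a map divided into 5 sections.
size = average_team_size(22, 5)
print(size, within_optimum(size))  # 4.4 True
```

If the result falls outside the 4-5 range, adjust the number of map sections rather than the team assignments.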

4. Determine “before 5S” photograph logistics (who, how, and when).


  • The purpose is to prepare yourself and your group for who will take photos, and how and when they will be taken, considering time and budget.
  • Based on your implementation pace (see step #2 above):

Consider TIME:

If you are planning a full day of implementation, then: Have photos taken before conducting the 5S overview.
If you are planning implementation in steps, and you want to have the team present, then: Wait until after the 5S overview, and before initial Sorting activities.

Consider BUDGET:

If you plan to arrange for support from Photography, then: Contact the photographer at least two (2) days prior to the date needed.
If you plan to have the team take photographs, then: Obtain a camera and a camera permit (available from Security).

5. Establish visible ways to communicate 5S activities.


The purpose is to serve as a communication tool. Communicate your plans to the workgroup, especially if you are taking photographs prior to the 5S Overview, so they will understand and feel comfortable with the process.

Examples (consider having enough room to post):
  • Bulletin board: 5S map
  • Visibility wall: area photographs
  • Notebook/binder: Workgroup 5S Action Item Log, area evaluations, etc.

6. Coordinate and schedule services required from the support organization(s).


  • The purpose is to build collaborative relationships within and across functional lines and to help ensure a smooth implementation with no surprises.
  • Consider cross-shift schedules.
  • Acquaint yourself with procedures of support organizations.
  • Establish a contact person in each support organization to let them know IN ADVANCE:
    • What you are doing
    • When
    • How they might be affected
    • What you need from them
  • Find out what the support organization needs from you.
    Examples of support organizations include, but are not limited to: Surplus, Material, Tool Rooms, Office Supply, Transportation, Photography, Lean Implementation, etc.

7. Make a list of internal arrangements to be made.


  • The purpose is to identify all the arrangements needed to proceed with 5S implementation and to help ensure that nothing slips through the cracks.
  • Read the “Sorting” 5S Guides: “Tagging Technique” and “Auction”.
  • Your list of arrangements includes, but is not limited to:
    • Research, obtain and review any documentation about facility standards, filing guidelines, etc.
    • Determine the date for your work group’s initial Sorting Activity.
    • Schedule 5S Overview with your group.
    • Order materials for 5S Overview.
    • Practice 5S Overview delivery.
    • Schedule upper-level manager to conduct the auction, if appropriate.
    • Designate a holding area for items to be auctioned.
    • Determine which organizations you might be sending surplus items to (for example, Tool Rooms, Material, Office Supply, Salvage, etc.).
    • Consult with Facilities/ Maintenance/ Housekeeping to help determine (and provide) types of containers needed to transport unnecessary items (for example, boxes, dollies, tub skids, etc.).

8. Draft a timeline for 5S planning and implementation activities.


  • The purpose is to schedule all planning and implementation activities and notify all those involved (internally and externally) so everyone can be prepared.
  • Include estimated dates for completing all planning activities.
  • Include estimated dates for performing all other 5S activities.
  • Post in the 5S communication area.

9. Communicate your plan to upper management.


  • The purpose is to get buy-in and support from your management.
  • Solicit feedback.
  • Gain agreement.

10. Check your work. You will know you have completed this task when:


  • A 5S documentation system has been created, storage location determined, and the team notified of its location.
  • The pace of 5S implementation plan for the work area has been determined.
  • You have determined who, how, and when photos will be taken, and have communicated this to your group.
  • A “straw man” 5S map has been drafted.
  • A visible method for communicating 5S activities has been established.
  • All services required from support organizations have been coordinated and scheduled.
  • All the necessary internal arrangements have been made.
  • A timeline for 5S planning and implementation activities has been created and posted.
  • You have communicated your plan to upper management.

Take Area Photographs

Purpose: To provide the team with a photographic record of their work area, serving as a baseline from which to measure improvements.
When: Do this per the group’s previously determined implementation plan.

Materials required to take Area Photograph

  • 5S Implementation Plan
  • “Straw man” 5S map
  • Camera, camera permit and photographer from the workgroup, or approved arrangements made with Photography.

Steps Required to take Area Photograph

1. Visually survey the work area.

Workgroup/individuals have agreed, and are aware their area is being photographed.

2. Determine the best photo angle for each section.

Try to show as much of each section as possible – widest angle.

3. Mark agreed-upon photo angles on the 5S map.

You will be taking “after 5S” photos from the same angle.

4. Have photos taken.

Open doors of cabinets, desks, etc.
5. Have photos developed.
6. Post in the 5S communication area.
7. Check your work.
You will know this task is complete when:

  • Photos have been taken for each section according to angles identified on the 5S map.
  • Photos are posted in the 5S communication area.

Finalize a 5S Map

Purpose: To assist workgroup and 5S Leader in laying out boundaries and determining team responsibilities for 5S activities.
When: Do this immediately following the 5S Overview.
Materials required to finalize a 5S map

  • “Straw man” 5S Map
  • Flip chart stand/pad, if applicable
  • Pen or pencil

Steps to finalize the 5S map

1. Assemble

2. Post the “straw man” 5S map where it can be viewed.

For example: flip chart, wall, or whiteboard.

3. Agree on how the work area is divided.


  • The 5S leader explains how the sections were determined.
  • The group provides input – discussing any suggestions.
  • Any changes are agreed upon by the team.

4. Identify a place on the 5S map to write in the names of members for each team.


  • Leave enough room for all team members.
  • Office operations might consider limiting team activities to common work areas like conference rooms, coffee areas, etc.
  • Areas can be identified for individual activities, (cubes, desks, files, etc.).

5. Record team members’ names for each section on the 5S map.


  • Team members volunteer and/or are assigned to a section of the map.
  • Ensure at least 1 person assigned to each section works in that section.
  • It is helpful to have a fresh pair of eyes (someone not normally working in that area).  Sometimes we can’t see the forest for the trees.

6. Post 5S map in the communication area.

7. Check your work.

You will know you have completed this task when:

  • Team members for each section have been identified and recorded.
  • At least 1 person that works in each section is on the team for that section.
  • The finalized 5S map has been posted in the communication area.

Perform Area Evaluation

Purpose: To assist workgroup in assessing their work area’s current condition.
When: Do this after 5S Overview and immediately before Sorting.

Materials required

  • Blank “Area Check sheet” – (1 per team)
  • Pen or pencil – 1 per team
  • Blank “Levels of Excellence” form (1 per work area)
  • 5S Map
  • Flip chart stand or pad, if applicable

Steps to perform area evaluation

1. Review the “Area Check sheet” for additions and/or deletions needed for the team’s work area.

2. Assemble in the work area.

3. Complete the “Area Check sheet.”

  • Give 1 blank “Area Check sheet” to each team.
  • Each team selects a scribe to read check sheet and place check marks in appropriate boxes.
  • Team members go to the area assigned (see 5S map).
  • “Area Check sheet” is completed per instructions on the form.
  • The team returns to the meeting area when done.
  • After each team has completed an “Area Check sheet” and reassembled in the meeting area, continue with Step #4, below.

4. Determine “Levels of Excellence” for the work area.

  • Post the blank “Levels of Excellence” form on the flip chart stand.
  • Discuss Area Check sheets completed by each team to help determine the “Levels of Excellence” for the work area.
  • Fill out the “Levels of Excellence” form for the entire area.

5. File Area Check sheets in the 5S documentation system.

6. Post “Levels of Excellence” in the 5S communication area.

7. Check your work.

You will know this task is complete when:

  • Area Check sheets are completed per instructions for each section.
  • Work area “Levels of Excellence” form is completed and posted in the 5S communication area.

5S Area Check sheet


  1. The scribe reads each statement out loud and records the team’s response in the appropriate box. A consensus of the team members is needed for each response.
  2. If team members respond “yes,” place a checkmark in the “yes” column for that statement.
  3. If team members respond “no,” place a checkmark in the “no” column for that statement.

NOTE: The team can add or delete items from the checklist as appropriate for their area.

Sorting Yes No
Do employees know why these 5S activities are taking place?
Have criteria been established to distinguish necessary from unnecessary items?
Have all unnecessary items been removed from the area? Examples:  Excess materials, infrequently used tools, defective materials, personal items, outdated information, etc.
Do employees understand the procedure for disposing of unnecessary items?
Do employees understand the benefits to be achieved from these activities?
Has a reliable method been developed to prevent unnecessary items from accumulating?
Is there a process for employees to pursue and implement further improvements?
Simplifying Yes No
Is there a visually marked specified place for everything?
Is everything in its specified place?
Is storage well organized and items easily retrievable?
Are items like tools, materials, and supplies conveniently located?
Do employees know where items belong?
Has a process been developed to determine what quantity of each item belongs in the area?
Is it easy to see (with visual sweep) if items are where they are supposed to be?
Are visual aids in use? (For example, signboards, color-coding, or outlining.)
Sweeping Yes No
Are work/break areas, offices and conference rooms clean and orderly?
Are floors/carpets swept and free of oil, grease and debris?
Are tools, machinery, and office equipment clean and in good repair?
Is trash removed on a timely basis?
Are manuals, labels, and tags in good condition?
Are demarcation lines clean and unbroken?
Are cleaning materials easily accessible?
Are cleaning guidelines and schedules visible?
Do employees understand expectations?
Standardizing Yes No
Are current processes documented?
Do employees have access to the information they require?
Is there a method in place to remove outdated material?
Do employees understand the processes that pertain to them?
Does a process exist that enables employees the opportunity to improve existing processes?
Self-Discipline (Sustaining) Yes No
Are safety and housekeeping policies followed?
Is safety data posted in appropriate locations?
Are safety risk areas identified?
Are employees wearing appropriate safety apparel?
Are fire extinguishers and hoses in working order?
Is general cleanliness evident?
Are break areas cleaned after use?
Do employees know and observe standard procedures?
Do employees have the training and tools that are necessary to make this program work?
Is there a confident understanding of and adherence to the 5S’s?

5S Levels of Excellence


  1. The team discusses the results of the “5S Area Check sheet”(s) completed for all sections of the work area.
  2. The team uses the check sheets as a basis for determining the level of excellence for each of the 5S categories. There is no one-to-one correspondence between the number of marks in the “yes” column on the check sheet(s) and the level of excellence.  The check sheet(s) provides additional information on which to base the team’s subjective opinion.
  3. As levels are determined, write the date in the appropriate column for that level (one level per category).

NOTE:  The “Levels of Excellence” form pertains to the entire work area.  Work area sections are probably at different levels.  When this happens, the entire work area defers to the lowest level. This applies to the area’s overall rating also.

Level Sorting Date
1 Necessary and unnecessary items are mixed together in the work area.
2 Necessary and unnecessary items separated (includes excess inventory).
3 All unnecessary items have been removed from the work area.
4 A method has been established to maintain the work area free of unnecessary items.
5 Employees are continually seeking improvement opportunities.
Level Simplifying
1 Tools, supplies and materials randomly located.
2 Designated location established for all items.
3 Designated locations are marked to make the organization more visible (for example, color-coding or outlining).
4 A method has been established to recognize, with a visual sweep, if items are out of place or exceed quantity limits.
5 Process in place to provide continual evaluation and to implement improvements.
Level Sweeping
1 Factory/Offices and machinery/office equipment are dirty and/or disorganized.
2 Work/break areas are cleaned on a regularly scheduled basis.
3 Work/break areas, machinery and office equipment are cleaned daily.
4 Housekeeping tasks are understood and practiced continually.
5 Area employees have devised a method of preventative cleaning and maintenance.
Level Standardizing
1 No attempt is being made to document or improve current processes.
2 Methods are being improved but changes haven’t been documented.
3 Changes are being incorporated and documented.
4 Information on process improvements and reliable methods is shared with employees.
5 Employees are continually seeking the elimination of waste with all changes documented and information shared with all.
Level Self-Discipline (Sustaining)
1 Minimal attention is spent on housekeeping and safety and standard procedures are not consistently followed.
2 A recognizable effort has been made to improve the condition of the work environment.
3 Housekeeping, safety policies, and standard procedures have been developed and are utilized.
4 Follow-through of housekeeping, safety policies, and standard procedures is evident.
5 A general appearance of confident understanding and adherence to the 5S program is evident.
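The “defer to the lowest level” rule above can be sketched in a few lines (a minimal illustration; the function names and the section/category labels are hypothetical, not part of the form):

```python
def area_levels(section_levels: dict[str, dict[str, int]]) -> dict[str, int]:
    """Roll up per-section 5S levels: for each category, the work
    area defers to the lowest level found in any section."""
    categories = next(iter(section_levels.values())).keys()
    return {
        cat: min(levels[cat] for levels in section_levels.values())
        for cat in categories
    }

def overall_rating(levels_by_category: dict[str, int]) -> int:
    """The area's overall rating is likewise its lowest category level."""
    return min(levels_by_category.values())

# Example: two sections at different levels.
sections = {
    "a": {"Sorting": 3, "Simplifying": 2, "Sweeping": 4},
    "b": {"Sorting": 4, "Simplifying": 3, "Sweeping": 2},
}
area = area_levels(sections)  # {'Sorting': 3, 'Simplifying': 2, 'Sweeping': 2}
print(overall_rating(area))   # 2
```

The point of the rule is that a single lagging section holds the whole area's rating down, which keeps attention on the weakest spot.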

Sorting Activity – Tagging Technique

Purpose: To assist workgroup in identifying unnecessary items in the work area.

When: Do this after area evaluation and before you conduct the Sorting Auction

Materials Required

  • 5S map
  • Pen or pencil
  • Post-It notes or other methods for tagging items
  • Paper for listing auction items

Steps for Tagging Technique  

1. Assemble in the work area.

Clarify criteria for tagging.  Refer to step #2 and expand if necessary.

2.  Team members individually identify unnecessary items in the assigned work area.

Caution: Focus on company-owned items rather than personal property.


Every individual walks through the assigned area and physically touches everything.  As each item is touched, do the following:

If: Then:
Item has a defined purpose and is used often enough to be considered necessary. Do not tag item.
Item has no defined purpose or is not needed. Tag item.
Item is determined unsafe and needed. Tag item to be repaired or replaced.
Item is unsafe and not needed. Tag item to be removed from the work area.
Unsure about the item’s purpose. Tag item for discussion at the “Sorting Auction.”

3.  Remove all tagged items to the designated holding area for auction.


If: Single shift. Then: The auction can take place immediately following the tagging activity, according to the team’s plan.
If: Single shift. And: A tagged item is too large for the team to move. Then: Determine the appropriate process for disposition of the item during the auction walk-through.
If: Multiple shifts. Then: Hold tagged items for the predetermined period before conducting the auction. List all tagged items and post the list in the communication area for all shifts to preview prior to the auction.
If: Multiple shifts. And: A tagged item is too large for the team to move. Then: Determine the appropriate process for disposition of the item during the auction walk-through.

4. Check your work.

You will know you have completed this task when:

  • All items determined to be unnecessary or unsafe are tagged.
  • Provisions have been made for cross-shift viewing of tagged items, if applicable.
  • All easily removed, tagged items have been taken to the designated holding area.
  • Plan for the disposition of all other tagged items has been determined.

Sorting Activity – Sorting Auction

Purpose: To assist auctioneer in conducting Sorting Auction to dispose of tagged items.
When: Do this after “Perform Tagging Technique”.
Materials Required:

  • Blank Surplus Items Form (attachment at end of this guide)
  • Workgroup 5S Action Item Log (attachment at end of this guide)
  • All tagged items

Steps for Sorting Auction

1. Assemble in the auction area.

2.  Designate 2 scribes.


  • 1 scribe to document surplus items.
  • 1 scribe to document action items.

3.  Distribute blank surplus form and action item log.


To designated scribes.

4.  Hold up each item for auction.


One item is handled at a time.

5.  Ask criteria questions for each item.


  • Who needs it?
  • What is it used for?
  •  How often do you use it?
  •  How much of it do you need?
  • Is it safe?

6. Dispose of each tagged item.


If the item is: Claimed. Then: The claimant determines a location for the item. The scribe records the action taken on the 5S Action Item Log.
If the item is: Unclaimed. And: Still usable. Then: Record the unnecessary item on the surplus form and place it in an appropriate container for removal.
If the item is: Unclaimed. And: Unusable by anyone. Then: Discard immediately. Do not record on the form.
If the item is: Too large for the team to move. Then: Conduct a walk-through of the area and develop a plan for the disposition of large tagged items. The scribe records the action to be taken on the 5S Action Item Log.

7. Collect Action Item Log


Post in 5S communication area to be followed up during the next 5S meeting.

8. Distribute copies of the surplus form as appropriate.


Others may have a need for your surplus items. Note on the surplus form the date items will be removed from the work area.

9. Remove all unnecessary items from the work area.

On date determined.

10. Check your work.


You will know you have completed this task when:

  • All tagged items have been dispositioned.
  • The surplus form has been routed to other organizations that may have use for the listed items.
  • Unnecessary items are prepared for return to the appropriate organizations (Tool Rooms, Office Supply, etc.).

Simplifying Activity – Prepare for Simplifying

Purpose: To assist the workgroup in preparing to organize the work area.
When: Do this after all unnecessary items have been removed from the workplace.
Materials Required

  • Workgroup 5S Action Item Log.
  • Pen or pencil.
  • “Outlining Techniques,” “Labeling Techniques,” and “Shadow Board Technique” (1 set per team).

Steps to prepare for Simplifying

1.  Assemble in the work area.

2. Review section boundaries.

Notes: Refer to 5S Map

3. Review Simplifying criteria.

Consider the following criteria:

  • Items used daily: store close at hand.
  • Apply the 45-degree rule: minimize reach.
  • Use the strike-zone rule: store items above the knees and below the chest.
  • One is best: reduce the number of duplicate items and storage locations whenever possible.

4. Distribute 5S Guides for performing Labeling, Outlining, and Shadow Boards.


  • 1 set per section team.
  •  Review techniques outlined in the guides.

5. Designate a coordinator to order required labels for the entire workgroup.

Notes: Labeling is one of the most common techniques. It is generally best to have one label coordinator for the entire workgroup.

6. Teams are prepared to go to assigned sections.

Notes: Refer to the 5S map.

7. Check your work.


You will know you have completed this task when:

  • You have reviewed assigned sections on 5S map.
  • You have reviewed Simplifying guidelines.
  • You have reviewed Simplifying techniques.
  • A label order coordinator has been designated.
  • Each section team has a set of “Simplifying” 5S Guides.
  • Section teams are prepared to go to assigned areas.

Simplifying Activity – Using Outlining Technique

Purpose: To assist workgroup in outlining all appropriate items/areas in the workplace.

When: Do this after locations for all items have been designated according to their use.

Materials Required

  • Floor tape or masking tape
  • Marking pen

Steps to use the outlining technique

1. Assemble in the work area.

2. Identify and agree on the items or areas that require outlining.

Examples: (may not be appropriate in all areas)

  •  External work area boundaries.
  •  Movable carts
  •  The positioning of overhead projectors on tables
  •  Location of garbage cans
  •  Walkways
  • Stationery items in cabinets.
  • Designated receiving area.

3.  Outline the items or areas identified.


  • Use masking tape for outlining.
  • If using floor tape, contact Facilities/Maintenance for list, availability, and proper usage of approved materials.

4.  Label each item or outlined area.

Notes: Legibly print the name of the outlined item or area on the tape.

5. Check your work.

You will know that you have completed this task when:

  • All items and areas identified by the team are outlined to show a specific location.
  • All appropriate outlined items and areas are labeled.

Simplifying Activity – Using Labeling Technique

Purpose: To assist workgroup in labeling all appropriate items in the workplace.
When: Do this after locations for all items have been designated according to their use.
Materials Required:

  • Masking tape for temporary labels
  • Marking pens
  • Computer-generated labels (as needed)
  •  Workgroup 5S Action Item Log
  • Label machine, if available
  • Blank notebook, paper and pen or pencil

Steps for using Labeling Technique

1. Assemble in the work area and designate a scribe.

2. Apply temporary labels to ALL items and locations deemed necessary.

Use masking tape as a temporary label to identify ALL items determined to be necessary for the work area (may not be appropriate in all areas).

  • File cabinets  & Files
  • Drawers & Shelves
  • Tools & Boxes
  • Garbage cans
  • Books
  • Chairs
  • Computers
  • Supplies
  • Stationery
  • Cleaning Supplies

3. Mark each label

Notes: Print legibly:

  • The name of the item.
  • The minimum/maximum number of items (only applicable to multiple items).

4.  Identify items that require restocking.

Notes: Record on the label when an item should be reordered (by date or by item count).
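The min/max restocking convention printed on the labels can be checked mechanically. A minimal sketch (the function names and example label values are hypothetical, not part of the guide):

```python
def needs_reorder(on_hand: int, minimum: int) -> bool:
    """Flag an item for restocking once the count on hand falls
    to or below the minimum printed on its label."""
    return on_hand <= minimum

def reorder_quantity(on_hand: int, maximum: int) -> int:
    """Order enough to bring the item back up to its labeled maximum."""
    return max(0, maximum - on_hand)

# Example: a label reads "min 2 / max 10" and 2 items remain.
print(needs_reorder(2, 2), reorder_quantity(2, 10))  # True 8
```

Whoever performs the visual sweep applies the same check by eye: count on hand at or below the label minimum means reorder.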

5.  Prepare a list for ordering permanent labels.

Notes: Print each label legibly and exactly as it should read.

6.  Add the label order to the workgroup 5S Action Item Log.

Notes: Add to Action Item Log.

7.  Forward list of label names to label coordinator.

Notes: One person coordinates label generation and/or ordering for the entire work area.

8.  Check your work.


You will know you have completed this task when:

  •  All appropriate items have visible labels.
  • A list for ordering labels has been prepared.
  • Label list has been forwarded to label coordinator.
  •  Action item has been recorded on 5S Action Item Log.

Simplifying Activity – Using Shadow Board Technique

Purpose:  To assist workgroup in making a shadow board for organizing supplies and tools in the workplace.  This technique may not be applicable to all areas.

When: Do this after locations for all items have been designated according to their use.

Materials required:

Possible construction materials you might need: pegboard, Styrofoam, hooks, label maker, plywood, cardboard, foam board, markers for outlining, Plexiglas case, hangers, masking tape.

Steps for using shadow board technique

1.  Assemble in the work area.

2.  Identify what supplies/tools require a shadow board.

Examples: (may not be appropriate in all areas)

  • Small hand tools
  • Copier supplies
  • Desk supplies

3.  Have team draft on paper the design of the shadow board.

Include in your design:

  • An outline of the supplies/tools to be put in the display.
  • The layout of how they are to be organized.
  • What materials will be required to build the display.
  • Where the display will be located when complete.

4.  Post the mock-up of the design in the communication area for everyone to see.


  • Leave design posted for the predetermined period (two days is generally sufficient) to allow viewing by all shifts.
  • Provide name and phone number of a contact person to receive feedback on the mockup.

5.  Gather materials to build the board.

6.  Lay out all supplies and tools on the board per the design.

7.  Outline supplies and tools as they will be placed on the board.


  • Use a pencil to outline the initial placement of each item on the board.
  • Use a more permanent marker when satisfied.

8.  Label each outlined item and its location with its name.


  • Use masking tape for the labels.
  • Write the item name legibly on tape.
  • Use the same name for the item and its location on the board.
  • Make/order permanent labels (see “Labeling Techniques”).

9.  Place shadow board in the work area.

Notes: Refer to design for pre-determined location.

10.  Check your work.

Notes: You will know you have completed this task when:

  •  All items to be displayed on the board are arranged per the design.
  • All supplies and tools have been outlined.
  • All displayed supplies and tools and their locations are labelled.

Sweeping Activity – Perform Sweeping

Purpose: To assist workgroup in developing daily visual and physical sweeping activities to assess and maintain the work area.
When: Do this after the Simplifying activities have been completed.
Materials Required:

  • Tape
  •  Paper and pencil

Steps required to perform sweeping

1. Assemble in the meeting area and designate a scribe.

2.  Prepare a list of “Visual Sweeping” activities that need to occur in the work area.


  • The list shows frequency and responsibility for individual and common areas.
  •  Activities on the list should support “Visual Sweeping” of the work area.
  •  Examples of what to check:
    • Items are orderly and safe
    • Equipment is in its designated location
    • Supplies/tools are in designated locations
    • Supplies/tools are in stock
    • Labels
    • Item location outline(s)
    • Shadow board

3.  Post the finished “Visual Sweeping” list in the 5S communication area.  
Notes: Identify the list as the “Visual Sweeping” list.
4.  Prepare a list of “Physical Sweeping” activities.

  •  The list shows frequency and responsibility for individual and common areas.
  • Activities on the list should support “Physical Sweeping” of the work area to maintain cleanliness and order of the work environment.
  •  Examples:
    • Dust cabinets
    • Clean computer
    • Empty hole punch
    • Clean tools
    • Clean trash can
    • Sweep floor

5.  Post finished “Physical Sweeping” list in the communication area.
Notes: Identify list as “Physical Sweeping” list.

6.  Check your work.

You will know you have completed this task when:

  •  A list of “Visual Sweeping” activities and responsibilities is posted for both individual and common areas.
  •  A list of “Physical Sweeping” activities and responsibilities is posted for both individual and common areas.

Standardizing Activity – Perform Standardizing

Purpose: To assist workgroup in documenting agreements made during 5S activities and to develop a plan for periodic repetition of 5S activities.

When: Do this after Sweeping Activities

Materials required:

  • Paper and pen
  • Visual and Physical Sweeping lists from the Sweeping activity

Steps to perform standardizing activity

1.  Assemble in the meeting area and designate a scribe.

 2.  Review and document Sorting activity. Do the following:


  • Ask, “What criteria did we establish for sorting?”
  • Write down on paper the criteria identified.
  • Ask, “Are the criteria acceptable?”
    If yes: No change is required.
    If no: Ask, “What improvements are needed?” Document the change in criteria and place the agreement in the 5S file.
  • Examples to consider:
    • “Is there an area designated as ‘holding’?”
    • “Do we tag items and hold in the area until auction?”
    • “Do we bring the ‘Surplus Items’ list to crew meetings weekly for disposition?”
  • Document process and place agreement in the 5S file.

3.  Review and document Simplifying activity.


Document agreements, including but not limited to those made for:

  • Labeling
  • Outlining
  • Shadow boards
  • Storage and stock quantities of supplies and tools
  • Safety

4.  Review and file Sweeping activity lists.


  • Obtain Visual and Physical Sweeping lists
  • Ask, “Are the activity lists rigorous enough to maintain a safe, clean, and orderly work area?”
    • If yes: No change is required.
    • If no: Ask, “What improvements are needed?” Document the change in criteria and place the agreement in the 5S file.

5.  Establish a schedule for periodic repetition of 5S activities.


  • Document agreed-upon schedule on note-paper.
  • Post in the 5S communication area.

6.  Check your work.

You will know you have completed this task when:


  • Documented agreements have been placed in the 5S file.
  • The schedule is in place for periodic repetition of 5S activities.

Self-Discipline Activity – Team

Purpose: To assist the workgroup in following through on all 5S agreements made for the work area.

When: Do this 1-2 weeks after the Standardizing Activity has been completed in your area. Repeat on a regular basis.

Materials required: 

  • Paper and pen
  • Workgroup 5S Action Item Log
  • Documented 5S agreements
  •  Individual Self-Discipline 5S Guide

Steps for team self-discipline activity

1.  Assemble in the work area for visual assessment and designate a scribe.

 2.  Determine if the 5S agreements are being followed in the work area.


Ask, “Are we following the agreements we put in place as a result of our 5S activities?”

If yes: Acknowledge and congratulate the team.

If no: List the agreements that are not being followed. Ask, “Why not?” and “How can we fix it?” Document the agreed-upon solutions and place the agreement(s) in the 5S file.

3.  Develop a plan to address the needed improvements.


  • Be specific.
  • Identify responsibilities.
  •  Record on 5S Action Item Log.
  • Post in the communication area.

4.  Review Individual Self-Discipline 5S Guide.


  • Point out its location.
  • Review purpose.

5.  Check your work.


You will know you have completed this task when:

  • A group assessment has been performed on what has and what has not been followed through in the work area.
  •  A written plan has been prepared to detail issues that need to be addressed.
  • Any action items have been added to the 5S Action Item Log.

Self-Discipline Activity – INDIVIDUAL

Purpose:  To assist individuals in applying 5S agreements to the personal work area.

When:  Do this after Standardizing Activity has been completed in your area.

Materials required:

  •  Paper and pen
  • 5S agreement

Steps for self-discipline activity

1.  Go to your work area.

Notes: The immediate area where you perform most of your daily activities, e.g., a cubicle, bench, etc.

2.  Determine the effectiveness of your individual organizing methods in support of the 5S agreement.


Considering your own personal work style, ask yourself:

  •  “Am I following the guidelines put in place as a result of 5S efforts?”
  • “Is my work area safe?”
  • “Is it neat and organized?”

Examples (add/delete as appropriate):

  •  Notebooks neatly stacked and labelled?
  •  In basket cleaned daily?
  •  Posted items neat and organized on the wall?
  •  Method for planning/prioritizing work assignments?
  •  Routine use of proper tools and methods?
  • Daily schedule posted?
  •  Use of in-out boards in the area?
  • Method for responding to phone messages?
  • Respond to the following:
    • If you are following 5S agreements, ask: “How will I maintain and improve?”
    • If you are not following 5S agreements, ask: “What steps can I take to improve?”

3.  Prepare a personal 5S plan.


  • This is your own personal plan.
  • Be realistic as you decide what improvements you want to make.
  • Revisit this plan frequently and make adjustments accordingly.

4.  Check your work.


You will know you have completed this task when:

  •  A self-assessment has been performed on what is and what is not being followed.
  • You have prepared a written plan to improve your area using 5S methods.

Measure Results

Purpose:  To assist the team in measuring improvements resulting from Implementation of 5S methods.

When: Do this after each completed repetition of 5S activities.

Materials required:

  •  “Before 5S” photographs
  • “Before” Area Check sheets
  •  “Levels of Excellence”
  •  Surplus list
  •  “Take Area Photographs” and “Perform Area Evaluation”
  • Pen or pencil

Steps to measure Results

1.  Take “after 5S” photographs.

Notes: Follow all steps in “Take Area Photographs.”

2.  Complete “after 5S” area evaluation.


Follow all steps in “Perform Area Evaluation.”
After the “after 5S” photographs have been developed and the “after 5S” area evaluation has been completed, reassemble the workgroup in the communication area and continue with Step #3 below.

3.  Analyze results following evaluation.
a)  Review “before” and “after” photos.
b)  Review “before” and “after” evaluations.
c)  Review the list of surplus items.


  • Observe improvements (such as organization, cleanliness).
  • Compare “before 5S” and “after 5S” evaluations.
  • Estimate the value of surplus inventory items.
  • Additional measures to consider:
    • Safety (number of injuries/time away from the job).
    • Cycle time.
    • Reduced inventory.
    • Increased usable floor space.

4. Acknowledge improvements in your area.

5.  Establish your next “Levels of Excellence” Goal.


  • Use the above analysis and the plan established during Team Self-Discipline.
  • Refer to “Levels of Excellence” and write down the agreed-upon next Levels of Excellence Goal.

6.  Post your goal in the communication area.

7. Communicate results with upper management.

8.  Check your work.


You will know you have completed this task when:

  • “After 5S” photos are posted in the communication area.
  • “After 5S” evaluation is posted in the communication area.
  • Improvement results have been analyzed and communicated to upper management.
  • You have established your next “Levels of Excellence” Goal.


Lean Enterprise

Many companies today are becoming lean enterprises by replacing their outdated mass-production systems with lean systems to improve quality, eliminate waste, and reduce delays and total costs. A lean system emphasizes the prevention of waste: any extra time, labor, or material spent producing a product or service that doesn’t add value to it. A lean system’s unique tools, techniques, and methods can help your organization reduce costs, achieve just-in-time delivery, and shorten lead times. A lean enterprise fosters a company culture in which all employees continually improve their skill levels and production processes. And because lean systems are customer focused and driven, a lean enterprise’s  products and services are created and delivered in the right amounts, to the right location, at the right time, and in the right condition. Products and services are produced only for a specific customer order rather than being added to an inventory. A lean system allows production of a wide variety of products or services, efficient and rapid changeover among them as needed, efficient response to fluctuating demand, and increased quality.

The Philosophy:

Consider the following Venn diagram: two circles, one inside the other. The large circle represents the Value Stream (all of the activity and information streams that exist between the raw material supplier and the possession of the customer). The smaller circle represents Waste (cost without benefit).

Lean manufacturing is simply a group of strategies for identifying and eliminating waste inside the value stream. The identification and elimination of waste from the value stream is the central theme of the lean manufacturing philosophy. Lean manufacturing is a dynamic and constantly improving process dependent on the understanding and involvement of all of the company’s employees. Successful implementation requires that all employees be trained to identify and eliminate waste from their work. Waste exists in all work and at all levels in the organization. Effectiveness is the result of the integration of Man, Method, Material, and Machine at the worksite.

  • The Problem – Waste exists at all levels and in all activities
  • The Solution – The Identification and Elimination of Waste
  • Responsibility – All of the employees and departments comprising the organization

The Goals of the Lean Enterprise:

Your organization can apply lean methods and techniques to your product-production and business processes to deliver better value to your customers. A lean initiative has four main goals:

Goal #1: Improve quality.

Quality is the ability of your products or services to conform to your customers’ wants and needs (also known as expectations and requirements). Product and service quality is the primary way a company stays competitive in the marketplace. Quality improvement begins with an understanding of your customers’ expectations and requirements. Once you know what your customers want and need, you can then design processes that will enable you to provide quality products or services that will meet their expectations and requirements. In a lean enterprise, quality decisions are made every day by all employees.

Steps to improve Quality

  1. Begin your quality-improvement activities by understanding your customers’ expectations and requirements. Tools such as quality function deployment are helpful ways to better understand what your customers want and need.
  2. Review the characteristics of your service or product design to see if they meet your customers’ wants and needs.
  3. Review your processes and process metrics to see if they are capable of producing products or services that satisfy your customers.
  4. Identify areas where errors can create defects in your products or services.
  5. Conduct problem-solving activities to identify the root cause(s) of errors.
  6. Apply error-proofing techniques to a process to prevent defects from occurring. You might need to change either your product/service or your production/business process to do this.
  7. Establish performance metrics to evaluate your solution’s effectiveness.

Goal #2: Eliminate waste.

Waste is any activity that takes up time, resources, or space but does not add value to a product or service. An activity adds value when it transforms or shapes raw material or information to meet your customers’ requirements. Some activities, such as moving materials during product production, are necessary but do not add value. A lean organization’s primary goal is to deliver quality products and services the first time and every time. As a lean enterprise, you accomplish this by eliminating all activities that are waste and then targeting areas that are necessary but do not add value. To eliminate waste, begin by imagining a perfect operation in which the following conditions exist:

  • Products or services are produced only to fill a customer order—not to be added to inventory.
  • There is an immediate response to customer needs.
  • There are zero product defects and inventory.
  • Delivery to the customer is instantaneous

 By imagining a perfect operation like this, you will begin to see how much waste there is hidden in your company. Using lean initiatives will enable you to eliminate waste and get closer to perfect operation.

The seven types of waste

As you use the tools and techniques of lean production, you will work to eliminate seven types of waste, which are defined below:

  1. Overproduction.

    It can be defined as producing more than is needed, faster than needed, or before it is needed. The worst type of waste, overproduction occurs when operations continue after they should have stopped. The results of overproduction are 1) products being produced in excess quantities and 2) products being made before your customers need them.

    The characteristics of waste due to overproduction are:

    • Batch Processing
    • Building Ahead
    • Byzantine Inventory Management
    • Excess Equipment/Oversized Equipment
    • Excess Capacity/Investment
    • Excess Scrap due to Obsolescence
    • Excess Storage Racks
    • Inflated Workforce
    • Large Lot Sizes
    • Large WIP and Finished Goods Inventories
    • Outside Storage
    • Unbalanced Material Flow

    Overproduction can be caused by:

    • Automation in the Wrong Places
    • Cost Accounting Practices
    • Incapable Processes
    • Just-in-Case Reward System
    • Lack of Communication
    • Lengthy Setup Times
    • Local Optimization
    • Low Uptimes
    • Poor Planning

    An example of overproduction: units produced in anticipation of future demand are often scrapped due to configuration changes.

  2. Waiting.

    It can be defined as the idle time that occurs when codependent events are not fully synchronized. Also known as queuing, this term refers to the periods of inactivity in a downstream process that occur because an upstream activity does not deliver on time. Idle downstream resources are then often used in activities that either don’t add value or, worse, result in overproduction.

    The Characteristics of waste due to Waiting are

    • Idle Operators Waiting for Equipment
    • Lack of Operator Concern for Equipment Breakdowns
    • Production Bottlenecks
    • Production Waiting for Operators
    • Unplanned Equipment Downtime

    It can be caused by:

    • Inconsistent Work Methods
    • Lack of Proper Equipment/Materials
    • Long Setup Times
    • Low Man/Machine Effectiveness
    • Poor Equipment Maintenance
    • Production Bottle Necks
    • Skills Monopolies

    Examples of waiting: 1) an operator arrives at a workstation only to find that he must wait because someone else is using the equipment for production; 2) a production lot arrives at a processing center only to find that the only qualified operator is not available.

  3. Transport.

    Transportation waste can be defined as any material movement that does not directly support immediate production. This is the unnecessary movement of materials, such as work-in-progress (WIP) materials being transported from one operation to another. Ideally, transport should be minimized for two reasons: 1) it adds time to the process during which no value-added activity is being performed, and 2) goods can be damaged during transport.

    The characteristics of waste due to transportation are:

    • Complex Inventory Management
    • Difficult and Inaccurate Inventory Counts
    • Excessive Material Racks
    • Excessive Transportation Equipment and Shortage of Associated Packing Spaces
    • High Rates of Material Transport Damage
    • Multiple Material Storage Locations
    • Poor Storage to Production Floor Space Ratio

    It can be caused by:

    • Improper Facility Layout
    • Large Buffers and In-Process Kanbans
    • Large Lot Processing
    • Large Lot Purchasing
    • Poor Production Planning
    • Poor Scheduling
    • Poor Workplace Organization

    An example: production units are moved off the production floor to a parking area in order to gather a “full lot” for a batch operation.

  4. Extra Processing.

    This term refers to extra operations, such as rework, reprocessing, handling, and storage, that occur because of defects, overproduction, and too much or too little inventory.  It can be defined as any redundant effort (production or communication) which adds no value to a product or service.  It is more efficient to complete a process correctly the first time instead of making time to do it over again to correct errors.

    The Characteristics of waste due to Extra Processing are:

    • Endless Product/Process Refinement
    • Excessive Copies/Excessive information
    • Process Bottlenecks
    • Redundant Reviews and Approvals
    • Unclear Customer Specifications

    It can be caused by:

    • Decision Making at Inappropriate Levels
    • Inefficient Policies and Procedures
    • Lack of Customer Input Concerning Requirements
    • Poor Configuration Control
    • Spurious Quality Standards

    Examples include time spent manufacturing product features that are invisible to customers or that the customer would be unwilling to pay for, and work that could be combined into another process. Another example of extra processing is when an inside salesperson must obtain customer information that should have been obtained by the outside salesperson handling the account.

  5. Inventory.

    This refers to any excess inventory that is not directly required for your current customer orders.  It can be defined as any supply in excess of process requirements necessary to produce goods or services in a Just-in-Time manner. It includes excess raw materials, WIP, and finished goods. Keeping an inventory requires a company to find space to store it until the company finds customers to buy it. Excess inventory also includes marketing materials that are not mailed and repair parts that are never used.

    The characteristics of waste due to inventory are:

    • Additional Material Handling Resources (Men, Equipment, Racks, Storage Space)
    • Extensive Rework of Finished Goods
    • Extra space on receiving docks
    • Long Lead Times for Design Changes
    • Storage Congestion Forcing LIFO (Last In First Out) instead of FIFO (First In First Out)

    It can be caused by:

    • Inaccurate Forecasting Systems
    • Incapable Processes
    • Incapable suppliers
    • Local Optimization
    • Long Change Over Times
    • Poor Inventory Planning
    • Poor Inventory tracking
    • Unbalanced Production Processes

    An example is a large-lot purchase of raw material that must be stored while production catches up.

  6. Motion.

    It can be defined as any movement of people which does not contribute added value to the product or service. This term refers to the extra steps taken by employees and equipment to accommodate inefficient process layout, defects, reprocessing, overproduction, and too little or too much inventory. Like transport, motion takes time and adds no value to your product or service. An example is an equipment operator’s having to walk back and forth to retrieve materials that are not stored in the immediate work area.

    The characteristics of waste due to motion are:

    • Excess Moving Equipment
    • Excessive Reaching or Bending
    • Unnecessarily Complicated Procedures
    • Excessive Tool Gathering
    • Widely Dispersed Materials/Tools/Equipment.

    It can be caused by

    • Ineffective Equipment, Office and Plant Layout
    • Lack of Visual Controls
    • Poor Process Documentation
    • Poor Workplace Organization

    For example, it is not uncommon to see operators make multiple trips to the tool crib at the beginning of a job. A lack of proper organization and documentation is in fact the cause for many types of waste.

  7. Defects.

    It can be defined as repair or rework of a product or service to fulfill customer requirements as well as scrap waste resulting from materials deemed to be un-repairable or un-reworkable. These are products or aspects of your service that do not conform to specification or to your customers’ expectations, thus causing customer dissatisfaction. Defects have hidden costs, incurred by product returns, dispute resolution, and lost sales. Defects can occur in administrative processes when incorrect information is listed on a form.

    The characteristics of waste due to defects are:

    • Complex Material Flow
    • Excess Finished Goods Inventory
    • Excessive Floor Space/Tools/Equipment
    • Excessive Manpower to Rework/Repair/Inspect
    • High Customer Complaints/Returns
    • High Scrap Rates
    • Poor Production Schedule Performance
    • Questionable Quality
    • Reactive Organization

    It can be caused by:

    • Excessive Variation
    • High Inventory Levels
    • Inadequate Tools/Equipment
    • Incapable/Incompatible Processes
    • Insufficient Training
    • Poor Layouts/Unnecessary Handling (Transport Damage)

Steps to eliminate waste

As you begin your lean initiative, concentrate first on overproduction, which is often a company’s biggest area of waste. It can also hide other production-related waste. As your lean initiative progresses, your company will become able to use its assets for producing products or services to customer orders instead of to inventory.

  1. Begin your team-based waste-reduction activities by identifying a product or operation that is inefficient.
  2. Identify associated processes that perform poorly or need performance improvement. If appropriate, select the operation in your organization with the lowest production output as a starting point for your waste-reduction activities.
  3. Begin by creating a value stream map for the operation you are reviewing.
  4.  Review the value stream map to identify the location, magnitude, and frequency of the seven types of waste associated with this operation.
  5. Establish metrics for identifying the magnitude and frequency of waste associated with this operation.
  6. Begin your problem-solving efforts by using lean principles to reduce or eliminate the waste.
  7. Periodically review the metrics you have identified to continue eliminating waste associated with this operation.
  8. Repeat this process with other inefficient operations in your organization.

Goal #3: Reduce lead time.

Lead time is the total time it takes to complete a series of tasks within a process. Some examples are the period between the receipt of a sales order and the time the customer’s payment is received, the time it takes to transform raw materials into finished goods, and the time it takes to introduce new products after they are first designed. By reducing lead time, a lean enterprise can quickly respond to changes in customer demand while improving its return on investment (ROI). Reducing lead time is one of the most effective ways to reduce waste and lower total costs. Lead time can be broken down into three basic components:

  1. Cycle time. This is the time it takes to complete the tasks required for a single work process, such as producing a part or completing a sales order.
  2. Batch delay. This is the time a service operation or product unit waits while other operations or units in the lot, or batch, are completed or processed. Examples are the period of time the first machined part in a batch must wait until the last part in the batch is machined, or the time the first sales order of the day must wait until all the sales orders for that day are completed and entered into the system.
  3. Process delay. This is the time that batches must wait after one operation ends until the next one begins. Examples are the time a machined part is stored until it is used by the next operation, or the time a sales order waits until it is approved by the office manager.
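These three components can be sketched with a toy calculation. All figures below are invented for illustration:

```python
# Hypothetical illustration of the three lead-time components.
cycle_time_per_unit = 5   # minutes of actual work to machine one part
batch_size = 20           # parts processed together as one lot
process_delay = 120       # minutes the batch waits before the next operation

# Batch delay: the first machined part waits while the other 19 are machined.
batch_delay = cycle_time_per_unit * (batch_size - 1)

# Lead time experienced by the first unit in the batch:
lead_time = cycle_time_per_unit + batch_delay + process_delay
print(lead_time)  # 5 + 95 + 120 = 220 minutes
```

Note that only 5 of the 220 minutes are actual work; shrinking the batch size toward one-piece flow eliminates most of the batch delay.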

As you think about places where you can reduce lead time in your product production or business process, consider the following areas:

  • Engineering design and releases
  • Order entry
  • Production planning
  • Purchasing
  • Order fulfilment
  • Receiving
  • Production
  • Inspection/rework
  • Packaging
  • Shipping
  • Invoicing and payment collection

Below is a list of possible lead-time solutions to consider and their goals. They are divided into three categories: product design, manufacturing, and supply.

  1. Product design

    Product rationalization. This involves simplifying your product line or range of services by reducing the number of features or variations in your products or services to align more directly with your customers’ wants and needs.

  2. Manufacturing

    • Process simulations. These enable you to model your work processes to reveal waste and test the effects of proposed changes.
    • Delayed product configuration. This means waiting until the end of your production cycle to configure or customize individual products.
    • One-piece, or continuous, flow of products and information. This enables you to eliminate both batch and process delays.
    • Technology (i.e., hardware and software) solutions. These enable you to reduce cycle time and eliminate errors.
    • Quick changeover. This involves making product/service batch sizes as small as possible, enabling you to build to customer order.
    • Work process standardization. This means identifying wasteful process steps and then standardizing “best practices” to eliminate them.
  3. Supply

    Demand/supply–chain analysis.

      This reveals wasteful logistical practices both upstream and downstream in your demand/supply chain. It often reveals excess inventories being held by your customers, your organization, and/or your suppliers due to long manufacturing lead times that result in overproduction. Freight analysis sometimes reveals that overproduction occurs in an effort to obtain freight discounts. However, these discounts do not necessarily offset the costs of carrying excess inventory.

Steps to reduce Lead Time

The steps your improvement team must take to reduce lead time are similar to the ones you take to eliminate waste.

  1. Begin your team-based lead-time-reduction activities by creating a value stream map for the business process you are targeting.
  2. Calculate the time required for the value-added steps of the process.
  3. Review the value stream map to identify where you can reduce lead time. Brainstorm ways to make the total lead time equal the time required for the value-added steps that you calculated in step 2.
  4. Determine what constraints exist in the process and develop a plan to either eliminate them or manage them more efficiently.
  5. Establish metrics to identify the location, duration, and frequency of lead times within the process.
  6. Once you have established a plan for improving the process, measure the improvement.
  7. Repeat this process for other inefficient operations in your organization.
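Steps 2 and 3 above amount to comparing value-added time with total lead time. A minimal sketch, using invented process steps and times:

```python
# Hypothetical process steps: (name, minutes, adds value for the customer?)
steps = [
    ("machining",               30,  True),
    ("queue before inspection", 240, False),
    ("inspection",              15,  False),
    ("assembly",                45,  True),
    ("storage",                 480, False),
]

total_lead_time = sum(minutes for _, minutes, _ in steps)              # 810
value_added_time = sum(m for _, m, adds_value in steps if adds_value)  # 75

# Fraction of total lead time that adds value; the gap between the two
# numbers is the lead-time-reduction opportunity identified in step 3.
print(f"{value_added_time}/{total_lead_time} min = "
      f"{value_added_time / total_lead_time:.1%}")  # 75/810 min = 9.3%
```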

Goal #4: Reduce total costs.

Total costs are the direct and indirect costs associated with the production of a product or service. Your company must continually balance its products’ and services’ prices and its operating costs to succeed. When either its prices or its operating costs are too high, your company can lose market share or profits. To reduce its total costs, a lean enterprise must eliminate waste and reduce lead times. For cost management to be successful, everyone in your organization must contribute to the effort. When you implement a process to reduce total costs, your goal is to spend money wisely to produce your company’s products or services. To minimize the cost of its operations, a lean enterprise must produce only to customer demand. It’s a mistake to maximize the use of your production equipment only to create overproduction, which increases your company’s storage needs and inventory costs. Before you can identify opportunities to reduce costs, your team should have some understanding of the way that your company tracks and allocates costs and then uses this information to make business decisions. A company cost structure usually includes variable and fixed costs, which are explained below:

  • Variable costs. These are the costs of doing business. These costs increase as your company makes each additional product or delivers each additional service. In manufacturing operations, variable costs include the cost of raw materials.
  • Fixed costs. These are the costs of being in business. These costs include product design, advertising, and overhead. They remain fairly constant, even when your company makes more products or delivers more services.
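The distinction between the two cost types can be captured in a short sketch (all figures invented), which also shows the break-even volume that follows from them:

```python
# Hypothetical cost structure for a single product line (per period).
fixed_costs = 50_000.0         # design, advertising, overhead: "costs of being in business"
variable_cost_per_unit = 12.0  # raw materials, etc.: "costs of doing business"
price_per_unit = 20.0

def total_cost(units: int) -> float:
    """Total cost = fixed costs plus variable cost for each unit produced."""
    return fixed_costs + variable_cost_per_unit * units

# Break-even volume: sales at which revenue exactly covers total cost.
break_even_units = fixed_costs / (price_per_unit - variable_cost_per_unit)

print(total_cost(10_000))  # 170000.0
print(break_even_units)    # 6250.0
```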

Cost-Reduction Methods

Use one or more of the methods listed below to identify places to reduce the costs related to your company’s current processes or products/services. These methods are useful for analyzing and allocating costs during the new-product-design process.

  • Target Pricing. This involves considering your costs, customers, and competition when determining how much to charge for your new product or service. It’s important to remember that pricing has an impact on your sales volumes, and thus your production volumes. The rise and fall of production volumes impact both the variable and fixed costs of the product—and ultimately how profitable it will be for your company.
  • Target Costing. This involves determining the cost at which a future product or service must be produced so that it can generate the desired profits. Target costing is broken down into three main components, which enables designers to break down cost factors by product or service, components, and internal and external operations.
  • Value Engineering. This is a systematic examination of product cost factors, taking into account the target quality and reliability standards, as well as the price. Value engineering studies assign cost factors by taking into account what the product or service does to meet customer wants and needs. These studies also estimate the relative value of each function over the product’s or service’s life cycle.
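The target-costing relationship described above reduces to a simple formula: target cost = target price - desired profit. A sketch with invented figures and an assumed 60/40 split between internal and external cost factors:

```python
# Target costing sketch (all figures invented): the market sets the price,
# the desired profit is decided, and the allowable cost is what remains.
target_price = 100.0
desired_profit_margin = 0.25  # 25% of the selling price

target_cost = target_price * (1 - desired_profit_margin)
print(target_cost)  # 75.0

# The target cost is then broken down across cost factors; the percentages
# below (internal operations vs. purchased components) are an assumption.
allocation = {"internal_operations": 60, "purchased_components": 40}
breakdown = {name: target_cost * pct / 100 for name, pct in allocation.items()}
print(breakdown)  # {'internal_operations': 45.0, 'purchased_components': 30.0}
```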

The following techniques are useful for analyzing and improving the cost of your organization’s operations.

  • Activity-based costing (ABC). ABC systems allocate direct and indirect (i.e., support) expenses—first to activities and processes, and then to products, services, and customers. For example, your company might want to know what percentage of its engineering and procurement costs should be allocated to product families to determine product-contribution margin. In addition, you can do indirect cost allocations for each customer account, which enables you to do a customer-profitability analysis.
  • Kaizen (i.e., continuous improvement) costing. This focuses on cost-reduction activities (particularly waste reduction and lead-time reduction) in the production process of your company’s existing products or services.
  • Cost maintenance. This monitors how well your company’s operations adhere to cost standards set by the engineering, operations, finance, or accounting departments after they conduct target costing and kaizen-costing activities.
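As an illustration of the ABC idea, an indirect expense pool can be allocated to product families in proportion to the activity each family consumes. The figures and the “engineering hours” driver below are invented for the example:

```python
# Activity-based costing sketch: allocate an indirect engineering-support
# expense pool to two product families by the hours each family consumes.
expense_pool = 90_000.0                            # total indirect cost
driver_hours = {"family_A": 600, "family_B": 300}  # activity-driver usage

rate_per_hour = expense_pool / sum(driver_hours.values())  # 100.0 per hour
allocated = {fam: hours * rate_per_hour for fam, hours in driver_hours.items()}
print(allocated)  # {'family_A': 60000.0, 'family_B': 30000.0}
```

The same pattern extends to customer accounts: replace product families with customers and the driver with, say, support calls, and the result feeds a customer-profitability analysis.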

Steps to reduce Total cost

  1. Decide whether your cost-improvement efforts will begin with new or existing product lines.
  2.  If new products or services are the focus of your improvement efforts, techniques to consider using are target pricing, target costing, and value engineering.
  3. If existing products or services are your focus, begin by reviewing your company’s high-cost products and processes. Apply ABC, kaizen costing, and cost maintenance to assist your cost-improvement initiatives. If your product-production process is inherently costly, first consider applying lean manufacturing techniques, then focus your efforts on reducing total costs. This typically involves company-wide participation.

Why are these goals important?

  • Implementing lean tools and techniques will enable your company to meet its customers’ demand for a quality product or service at the time they need it and for a price they are willing to pay.
  • Lean production methods create business and manufacturing processes that are agile and efficient.
  • Lean practices will help your company manage its total costs and provide a fair ROI to its stakeholders.

Lean Metrics

Lean metrics are measurements that help you monitor your organization’s progress toward achieving the goals of your lean initiative. Metrics fall into three categories: financial, behavioral, and core-process. Lean metrics help employees understand how well your company is performing. They also encourage performance improvement by focusing employees’ attention and efforts on your organization’s lean goals. Lean metrics enable you to measure, evaluate, and respond to your organization’s current performance in a balanced way—without sacrificing the quality of your products or services to meet quantity objectives or increasing your product inventory levels to raise machine efficiency. Properly designed lean metrics also enable you to consider the important people factors necessary for your organization’s success.

Objectives of using lean metrics

  1. After you use lean metrics to verify that you are successfully meeting your company’s lean goals, you can do the following:
    • Use the data you have collected to determine existing problems. Then you can evaluate and prioritize any issues that arise based on your findings.
    • Identify improvement opportunities and develop action plans for them.
    • Develop objectives for performance goals that you can measure (e.g., 100% first-time through quality capability = zero defects made or passed on to downstream processes).
    • Evaluate the progress you have made toward meeting your company’s performance goals.
  2. Lean metrics help you analyze your business more accurately in the following areas:
    • Determining critical business issues, such as high inventory levels that drive up operational costs, poor quality levels that create customer dissatisfaction, and extended lead times that cause late deliveries and lost orders.
    • Determining whether you are adhering to lean metrics. These differ from traditional metrics, which can actually work against you. For example, adhering to traditional metrics such as machine efficiency can spur overproduction, and improving your inventory turnover can worsen your on-time-delivery performance.
    • Determining the best way to use your organization’s resources. For example, you can ask questions such as “What is our most frequent problem?” and “What is our costliest problem?”

Before your team begins to collect data, ask the following questions:

  1. What is our purpose for collecting this data?
  2. Will the data tell us what we need to know?
  3. Will we be able to act on the data we collect?

Your goal is to create an easy-to-use, high-impact measurement system. An easy-to-use system must require minimal human involvement. The higher the level of human involvement required, the lower the accuracy of the data and the more time needed for data collection. Try to find ways to automate your data collection and charting. A high-impact measurement system is one that results in information that is useful and easily interpreted.
Use a standard definition form for your metrics. The form should answer the following questions:

  • What type of metric is it (financial, behavioral, or core-process)?
  • Why was it selected?
  • Where will the data be obtained?
  • How will the data be collected?
  • What formula will be used for calculating the metric?
  • How often will it be calculated?
  • How often will the metric be used?

Revise your definition form as needed. Use basic graphs (e.g., line, bar, and pie graphs) and statistical process control (SPC) charts to display your data. These charts give you insight into data trends, reveal whether true process changes have occurred, and show if the process is capable of achieving your desired performance objectives. Other data analysis techniques might be required to conduct effective problem-solving.
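As a rough illustration of how an SPC chart reveals whether a true process change has occurred, the following Python sketch computes control limits for an individuals chart. The function name and data are illustrative, not taken from any specific charting tool:

```python
def control_limits(data):
    """Individuals-chart limits: mean +/- 3 sigma, with sigma
    estimated from the average moving range (d2 = 1.128)."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # standard constant for subgroups of size 2
    return mean - 3 * sigma, mean, mean + 3 * sigma

# Illustrative data: daily count of late line items
lcl, center, ucl = control_limits([4, 6, 5, 7, 5, 6, 4, 5])
```

Points beyond the limits suggest a special cause worth investigating; points inside them are most likely common-cause variation.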

Designing a data-collection process

When you design your data-collection process, keep the following points in mind:

  • Make sure that all employees who will collect the data are involved in the design of your data collection process.
  • Tell employees that the main driver for data collection is process improvement, not finger-pointing.
  • Tell all involved employees how the data will be used.
  • Design data-collection forms to be user-friendly.
  • When developing a data-collection procedure, describe how much data is to be collected, when the data is to be collected, who will collect the data, and how the data is to be recorded.
  • Automate data collection and charting whenever possible.
  • Involve employees in the interpretation of the data.

Avoid the following pitfalls:

  • Measuring everything. Focus instead on the few critical measures that can verify performance levels and guide your improvement efforts.
  • Misinterpreting data. Show employees why and how the data was captured. Also, tell how the data will be used in your lean enterprise initiative.
  • Collecting unused data. Data collection is time-consuming. Ensure that all the data you collect will be put to good use.
  • Communicating performance data inappropriately. Avoid creating harmful faultfinding, public humiliation, or overzealous competition.

Remember to use the appropriate tools for your analysis. Less-experienced teams can use basic tools such as Pareto Charts, Histograms, Run Charts, Scatter Diagrams, and Control Charts. More-expert teams can use advanced tools such as regression analysis, design of experiments, and analysis of variance (ANOVA). Most metrics reveal ranges of values and averages of multiple measures. However, your customers rarely experience an “average.” Each opportunity for a defect is an opportunity for failure in your customers’ eyes. As you work toward improvement, you might find that solving the smallest problems takes up most of your time. You might spend 80% of your improvement efforts fixing 20% of the things that go wrong.
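The 80/20 pattern described above is exactly what a Pareto analysis exposes. A minimal Python sketch, using hypothetical defect counts, ranks problem categories and computes each one’s cumulative share of the total:

```python
def pareto(problems):
    """Rank categories by count, with cumulative % of the total."""
    total = sum(problems.values())
    ranked = sorted(problems.items(), key=lambda kv: kv[1], reverse=True)
    result, cumulative = [], 0
    for name, count in ranked:
        cumulative += count
        result.append((name, count, round(100 * cumulative / total, 1)))
    return result

# Hypothetical defect counts (illustrative only)
rows = pareto({"scratches": 48, "misalignment": 21, "wrong label": 9,
               "missing part": 7, "other": 5})
```

The first one or two rows typically account for most of the cumulative percentage, which is where improvement effort pays off fastest.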

  1. Financial metrics

    You improve your organization’s financial performance by lowering the total cost of operations and increasing revenue. If your company can become a lower-cost producer without sacrificing quality, service, or product performance, it can strengthen its performance and market position. It’s also important to avoid cost-shifting, which is the act of moving costs from one account to another without creating any real savings. Cost shifting often hides waste rather than removing it. Your ultimate goal is to achieve both hard- and soft-cost savings for the benefit of the whole organization.

    Examples of Financial Metrics

    • Costs
      • Cash flow
      • Direct and indirect labor costs
      • Direct and indirect materials costs
      • Facility and operational costs
      • Production systems
      • Information systems
      • Inventory-carrying costs
      • Total cost of ownership
    • Revenue
      • Sales
      • Gross margins
      • Earnings before interest and taxes
      • Return on assets
      • Return on investment
      • Warranty costs
      • Product profitability
  2. Behavioral metrics

    Behavioral metrics are measurements that help you monitor the actions and attitudes of your employees. Employees’ commitment, communication, and cooperation all have a significant impact on your organization’s success. Financial and core-process metrics alone cannot show whether employees are working together in a cooperative spirit. Your company’s long-term success is possible only when employees’ behavior is aligned and everyone works for the benefit of the entire organization. Customer and employee satisfaction surveys and core-process metrics measure behavioral performance only indirectly. More effective and direct ways to measure it include project feedback, meeting evaluations, employee appraisals, and peer evaluations. Conduct teamwork and facilitation training to improve cooperation and communication within your organization. Make sure your reward-and-recognition system is aligned with your company’s lean goals.
    Behavioral Categories and Metrics

    1. Category: Commitment
      • Adherence to policies and procedures
      • Participation levels in lean improvement activities
      • Availability and dedication of the human resources department
      • Efforts to train employees as needed
    2. Category: Communication
      • Customer/employee surveys regarding the quantity and quality of company communications efforts
      • Elimination of service or production errors caused by ineffective communications
      • Error-reporting accuracy and timeliness
      • Formal recognition of employees’ communication effort
    3. Category: Cooperation
      • Shared financial risks and rewards
      • Effective efforts toward reporting and resolving problems
      • Joint recognition activities
      • Formal recognition of employees’ cooperation efforts
  3. Core-process metrics

    There are many different types of core-process metrics, which allow you to measure the performance of your core processes in different ways. Be sure to measure all your core processes for both productivity and results. Productivity, the ratio of output to input, provides data about the efficiency of your core processes. Tracking the results and then comparing them to your desired outcomes provides you with information about their effectiveness. Some general core-process metrics are shown below.

    Core-Process Metrics

    • New product launches
    • New product extensions
    • Product failures
    • Design-cycle time
    • Time to market
    • Product life-cycle profitability

    Product life-cycle metrics include the identification of market potential, product design, new product launches, model extensions, product use, and product obsolescence. Order-fulfillment-cycle metrics include activities related to sales, engineering, procurement, production planning and scheduling, the production process, inventory management, warehousing, shipping, and invoicing. Some specific core-process metrics are shown below.

    Results Metrics

    • Health and safety (HS)
    • First-time-through (FTT) quality
    • Rolled-throughput yield (RTY)
    • On-time delivery (OTD)
    • Dock-to-dock (DTD)
    • Order-fulfillment lead time (OFLT)

    Productivity Metrics

    • Inventory turnover (ITO) rate
    • Build to schedule (BTS)
    • Overall equipment effectiveness (OEE)
    • Value-added to non-value-added (VA/NVA) ratio
  4. Health and safety metrics

    Health and safety (HS) metrics measure the impact of your production processes on employees’ health and safety. A healthy and safe workplace improves the availability and performance of your organization’s human resources. Operations costs improve when insurance rates are lowered, the cost of replacing workers is reduced, and production assets are more available. In addition, improved morale and a sense of well-being increase employee productivity and participation in your company’s improvement initiatives. HS conditions can be measured in several ways. Metrics to consider when evaluating HS include days lost due to accidents, absenteeism, employee turnover, and experience modification ratio (EMR), a method used by insurance companies to set rates.

  5. First-time-through (FTT) quality

    First-time-through (FTT) quality is a metric that measures the percentage of units that go through your production process without being scrapped, rerun, retested, returned by the downstream operation, or diverted into an off-line repair area. This metric is also applicable to processes related to the services your company provides. For example, you can use it to measure the number of sales orders processed without error the first time they go through your work processes. Increased process/output quality reduces the need for excess production inventory, improving your dock-to-dock (DTD) time. It also improves your ability to maintain proper sequence throughout the process, improving the build-to-schedule (BTS) metric. Increasing quality before the constraint operation ensures that the constraint operation receives no defective parts. This enables you to increase your quality rate and reduce defects at the constraint operation, which in turn improves the overall equipment effectiveness (OEE) metric. Your organization’s total cost is improved due to lower warranty, scrap, and repair costs. FTT is calculated by dividing the number of units that complete the process with no scrap, rerun, retest, return, or repair by the total number of units entering the process. (Remember that “units” can be finished products, components, or sales orders; FTT’s use is not limited to a production environment.)
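The FTT calculation described above can be sketched in Python; the function and parameter names are illustrative, not from any particular standard:

```python
def first_time_through(units_in, scrapped=0, reworked=0, retested=0,
                       repaired=0, returned=0):
    """FTT = (units entering - defective units) / units entering."""
    defective = scrapped + reworked + retested + repaired + returned
    return (units_in - defective) / units_in

# Illustrative example: 500 sales orders enter the process;
# 12 are scrapped, 18 reworked, and 5 returned by the next operation.
ftt = first_time_through(500, scrapped=12, reworked=18, returned=5)
```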

  6. Rolled throughput yield (RTY)

    Rolled throughput yield (RTY) is a metric that measures the probability that a process will be completed without a defect occurring. Six Sigma programs use this metric either instead of or in parallel with FTT. RTY is based on the number of defects per opportunity (DPO). An opportunity is anything you measure, test, or inspect. It can be a part, product, or service characteristic that is critical to customer-quality expectations or requirements. FTT measures how well you create units of product; RTY measures how well you create quality. While FTT measures at the unit level and finds the percentage of defective parts, RTY measures at the defect level and finds how many defects a particular part has. The RTY metric is sensitive to product complexity, as well as the number of opportunities for defects present in a production process or aspect of a service. RTY can help you focus an investigation when you narrow down a problem within a complex or multi-step process. To calculate RTY, you must first calculate defects per unit (DPU), the total number of defects divided by the total number of units produced, and then defects per opportunity (DPO). The result is then used to calculate RTY.
    Defects per opportunity (DPO) is the probability of a defect occurring in any one product, service characteristic, or process step. It is calculated by dividing DPU by the number of defect opportunities per unit.
    Finally, RTY is calculated as follows: RTY = 1 – DPO
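The DPU-to-DPO-to-RTY chain described above can be sketched in Python (function names and data are illustrative):

```python
def rolled_throughput_yield(defects, units, opportunities_per_unit):
    """RTY = 1 - DPO, where DPO = DPU / opportunities per unit."""
    dpu = defects / units                 # defects per unit
    dpo = dpu / opportunities_per_unit    # defects per opportunity
    return 1 - dpo

# Illustrative example: 30 defects across 200 units, with each
# unit offering 10 opportunities for a defect.
rty = rolled_throughput_yield(defects=30, units=200,
                              opportunities_per_unit=10)
```

Note how the same 30 defects yield a higher RTY when the product is more complex (more opportunities per unit), which is why the metric is sensitive to product complexity.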

  7. On-time delivery (OTD)

    On-time delivery (OTD) is a metric that measures the percentage of units you produce that meet your customer’s deadline. For this metric, a unit is defined as a line item on a sales order or delivery ticket. OTD provides a holistic measurement of whether you have met your customer’s expectations for having the right product, at the right place, at the right time. You can use OTD to track deliveries at both the line-item and order levels. OTD alerts you to internal process issues at the line-item level and shows their effect on your customers at the order level. OTD ensures that you are meeting optimum customer-service levels. When you balance OTD with the other internally focused core-process metrics—build-to-schedule (BTS), inventory turnover (ITO) rate, and dock-to-dock (DTD)—you can meet your customer-service goals without making an excessive inventory investment. OTD is calculated on an order-by-order basis at the line-item level by dividing the number of line items delivered on time by the total number of line items ordered.
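A minimal sketch of the line-item OTD calculation (names and data are illustrative):

```python
def on_time_delivery(on_time_line_items, total_line_items):
    """OTD % = on-time line items / total line items ordered, x 100."""
    return 100 * on_time_line_items / total_line_items

# Illustrative example: 47 of 50 line items met the customer deadline.
otd = on_time_delivery(47, 50)
```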