Statistical Methods of Quality Control Research Paper


1. Introduction

Modern quality improvement by statistical methods began in the 1920s through efforts of W. A. Shewhart (see Shewhart 1931) of the Bell Telephone Laboratories. Rather than wait until manufactured items arrived at an inspection center at the end of the production line, he conceived of the idea of occasionally sampling during the manufacturing process and making corrections immediately, if needed. To monitor the need for possible adjustments, certain characteristics—called statistics—of each sample were recorded. If any of these statistics had unusual values that indicated something was wrong with the process, corrective action was taken before too many more defective items were produced. Accordingly, the quality of the final output was improved.



While Shewhart started this modern quality improvement by trying to reduce the number of defects in manufacturing, many organizations soon recognized that there was much waste throughout their operations. There were simply many activities that added nothing to satisfactory outcomes. This is true not only in manufacturing but in most service areas as well, including hospitals, banks, and educational institutions. Estimates of how much waste and worthless activity exists range from 25 to 50 percent of the total effort, although some organizations probably exceed that bound of 50 percent. In any case, most industries and businesses could make huge improvements in the quality of their products and services if they only recognized how poorly they are doing. Often such recognition comes from taking measurements, collecting data, and turning these data into information through statistics.

There are many philosophies and techniques for improving the quality of products and services. Some of these have been formalized through the ISO 9000 standards and the Baldrige award (see Ledolter and Burrill 1999). Many companies have earned such certifications and awards and have had continued success; but some have then gone on to fail because they did not have a continuous commitment by top management to maintaining quality products and services. Bob Galvin, former CEO of Motorola, truly believed that ‘quality improvement is a daily, personal, priority obligation,’ and his commitment made Motorola what it is today.




At the end of the twentieth century, it is worth noting one quality guru whose life spanned nearly that entire century. W. Edwards Deming, a statistician, was born on October 14, 1900, died on December 20, 1993, and was still giving short courses in quality improvement ten days before his death. Deming went to Japan after World War II and taught the Japanese how to make quality products. For his work there, he was awarded the Emperor’s Medal, and the Japanese established the Deming Prize, awarded each year to a company or individual contributing the most to quality improvement.

There are others too; certainly Joe Juran and Japan’s Kaoru Ishikawa (see Ishikawa 1982) must be mentioned. However, it was Deming’s approach of optimizing the entire system that was his major contribution. In addition, his thoughts on ‘profound knowledge’ are extremely valuable, particularly the part concerned with understanding variation and statistical theory.

Understanding variation is a key to quality improvement. Many of Deming’s 14 points (Deming 1986), while not mentioning statistics, centered on reducing variation. Deming believed that barriers between departments, between management and workers, and among workers, must be broken down to improve communication and the ability to work as a team.

The lines of communication, all the way from suppliers to customers, must be open to help reduce variation and improve products and services. For example, he argued that a company should not buy on price tag alone but should have a few reliable suppliers (possibly one) for a single part, because that will reduce variation; many different suppliers would obviously increase variation. Moreover, he argued that you should become partners, friends if you like, with your suppliers. Each party learns to trust the other; in that way, you can use methods like ‘just in time,’ in which your inventory is kept reasonably small, to keep costs down.

Deming also preached constancy of purpose. If management ideas change too often, employees really do not know what to do and cannot do their best. That is, they become confused, which increases variation. It is better for them to receive a constant signal from their employer, a signal that changes only when research dictates ways of improving. More training and education for everyone associated with a company also decreases variation by teaching how to make the product more uniform. Workers must know that they should not be afraid to make suggestions to improve the process.

Being team members will make it easier for workers to speak up without fear of reprisal. Deming also noted that requiring quotas does not help quality. A foreman who has a quota of 100 per day will often ship out 90 good and 10 bad items just to make the quota. Clearly, it would reduce the variation and satisfy the customer better if only 90 good ones were shipped.

This leads to the point that a final inspection of products does not really improve quality. With such a procedure, you can only eliminate the bad items and send on the good ones. Improvements in the design of the products and manufacturing processes are needed. If these are done well, often with the help of statistical methods, that final inspection can be eliminated. That is, improvements should be made early in the process rather than trying to correct things at a final inspection by weeding out the bad items.

Deming’s points are still extremely important in the quality movement today. Understanding variation and reducing it still ‘makes the doors fit better.’ However, most of Deming’s ideas also apply to service activities, ranging from market surveys to financial institutions (see Latzko 1986). Since so many more persons are employed in the service areas than in manufacturing, there are many more opportunities for quality improvement there. Why are some restaurants, clothing stores, law offices, banks, accounting firms, and hospitals better than others? Of course, those involved must know the technical aspects of their work, but the really excellent managers know that quality begins by treating their employees and customers right. The cost of a few dissatisfied persons is difficult to assess, but it is usually much greater than most think because these people will tell many others about their unpleasant experiences. Most of this quality improvement is common sense but it is not commonly used.

2. Statistical Methods For Quality Improvement

Let us begin our statistical methods with one of the Shewhart charts, the mean chart. We sample from the output of a process; how frequently we sample is determined largely by the number of items being produced. If a large number are produced in a given day, the sampling might occur every hour. On the other hand, if only a few are produced, sampling once per day may be enough. Often these samples are small, and a typical size is n = 5, particularly when the measurement taken is something like length, weight, or strength. For example, in the manufacturing of paper, there is interest in its ‘tear strength’ because most often the manufacturer does not want paper to tear easily.

The underlying process has some mean µ and standard deviation σ. Hopefully the level µ is acceptable; and, in most cases, it is desirable to reduce the spread, as measured by σ, to make the output as consistent as possible. Hence, from each sample of n = 5 items, two statistics are computed: one to measure the level (middle) and one to measure the spread. We first concentrate on the measure of the middle; that statistic is denoted by x̄ and called the sample mean. Theory shows that once µ and σ of the measurements of the underlying process are known (or estimated), the statistic x̄ will be between µ − 3σ/√n and µ + 3σ/√n unless there has been some change in the process producing these items. That is, if x̄ is not within the control limits µ ± 3σ/√n, then the process is investigated and usually some corrective action is taken. The usual measures of spread are the sample range R and the sample standard deviation s; they have control charts similar to that of x̄, called the R- and s-charts.
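To make the mean chart concrete, the following is a minimal sketch in Python. The process parameters µ = 500 and σ = 4, the sample size n = 5, and the sample data are illustrative assumptions, not values from the text.

```python
# A minimal x-bar chart check (illustrative values throughout).
import math

mu, sigma, n = 500.0, 4.0, 5          # assumed process mean, std. deviation, sample size
lcl = mu - 3 * sigma / math.sqrt(n)   # lower control limit, mu - 3*sigma/sqrt(n)
ucl = mu + 3 * sigma / math.sqrt(n)   # upper control limit, mu + 3*sigma/sqrt(n)

# hourly samples of n = 5 measurements (hypothetical tear-strength data)
samples = [
    [498.2, 501.1, 499.5, 500.3, 497.9],
    [506.0, 507.6, 505.1, 508.2, 507.3],
]

for i, sample in enumerate(samples, start=1):
    xbar = sum(sample) / len(sample)  # the statistic x-bar for this sample
    status = "in control" if lcl <= xbar <= ucl else "investigate the process"
    print(f"sample {i}: x-bar = {xbar:.2f}, limits ({lcl:.2f}, {ucl:.2f}), {status}")
```

In this made-up data, the second sample’s mean (506.84) falls above the upper limit of about 505.37, so the process would be investigated.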

There are two other Shewhart control charts that must be described. The first arises when we observe the number of flaws, say c, on a certain unit; for illustration, count the number of deaths on a certain highway each week. Here c can equal any of the nonnegative integers: 0, 1, 2, 3, 4, and so on. A Poisson probability model is used for the statistic c, and these probabilities can be determined from the mean µ of the number of flaws. Again µ is often estimated using past data. However, once it is determined, then, for the Poisson model, the standard deviation of c equals √µ. Thus, µ ± 3√µ gives the control limits for the c-chart. Future values of c are plotted sequentially on this chart, taking some action if one is outside the control limits: if above µ + 3√µ, find out why and hopefully learn how to improve the process; if below µ − 3√µ, again find out why and hopefully learn how to continue this gain in quality.
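As a small illustration of the c-chart, the sketch below assumes a mean of µ = 9 flaws per unit estimated from past data; the weekly counts are hypothetical.

```python
# A minimal c-chart check (mu and the counts are illustrative assumptions).
import math

mu = 9.0                                # estimated mean number of flaws per unit
lcl = max(0.0, mu - 3 * math.sqrt(mu))  # lower control limit, never below zero
ucl = mu + 3 * math.sqrt(mu)            # upper control limit

weekly_counts = [7, 11, 4, 19, 8]       # future values of c, plotted sequentially
for week, c in enumerate(weekly_counts, start=1):
    if c > ucl:
        note = "above UCL: find out why and improve the process"
    elif c < lcl:
        note = "below LCL: find out why and maintain the gain"
    else:
        note = "in control"
    print(f"week {week}: c = {c}, limits ({lcl:.1f}, {ucl:.1f}), {note}")
```

With µ = 9 the limits are 0 and 18, so the count of 19 in week 4 would trigger an investigation.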

The last control chart arises in a situation in which the items are classified as ‘good’ or ‘bad.’ If n of these items are observed, let D equal the number of defective items among the n items. The binomial probability model is appropriate for the distribution of D. Here n is much larger than 5, often as large as 100. Of course, p̂ = D/n is an estimate of p, the fraction of defective items being produced. Binomial theory states that p̂ = D/n has mean p and standard deviation √(p(1−p)/n).

Once some estimate of the fraction defective, p, is found from past data, the control limits p ± 3√(p(1−p)/n) are calculated for the p-chart. Future values of p̂ = D/n are plotted on this control chart, and appropriate action is taken if one value is outside the control limits. Again, very small values of D/n would indicate some sort of improvement, and we would try to maintain that gain. When the statistics are between the control limits, which are usually estimated from the data, the process is said to be in statistical control.
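A minimal p-chart sketch follows; the estimated fraction defective p = 0.04, the sample size n = 100, and the defect counts are illustrative assumptions.

```python
# A minimal p-chart check (p, n, and the counts are illustrative assumptions).
import math

p, n = 0.04, 100                           # estimated fraction defective and sample size
half_width = 3 * math.sqrt(p * (1 - p) / n)
lcl, ucl = max(0.0, p - half_width), p + half_width

defects = [3, 6, 1, 11, 4]                 # observed D in successive samples
for i, d in enumerate(defects, start=1):
    p_hat = d / n                          # the statistic D/n
    status = "in control" if lcl <= p_hat <= ucl else "outside limits: take action"
    print(f"sample {i}: D/n = {p_hat:.2f}, limits ({lcl:.3f}, {ucl:.3f}), {status}")
```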

The manufacturing team must decide whether or not the actual items from an ‘in control’ process are satisfactory. That is, they might find that the items are not satisfactory even though the process is in statistical control; then some adjustments must be made. Once this is done, however, Shewhart control charts have proved extremely valuable and have helped maintain gains in the quality of manufactured products. In addition, these control charts are now being used in many service areas, for example, in studying errors made by banks in recording the amounts on checks.

Since most major manufacturing companies are requiring their suppliers to use statistical methods that help improve their products, the need for acceptance sampling to check upon the quality of incoming products has diminished. That is, partnerships with suppliers have been established, and hence the trust among the various parties is usually high.

Nevertheless, there are enough cases in which the incoming lot of items is poor that a large number of companies still use acceptance sampling. To describe acceptance sampling, suppose a lot has N = 5,000 items, and say that these materials are acceptable if 2 percent or less of the items are defective.

This number, 2 percent = 0.02, is called the acceptable quality level. Understand that 0.02 is used only as an example, and some might insist that the incoming lot is acceptable only if there are almost zero defectives. The latter case requires 100 percent inspection, which has some disadvantages, particularly if the item must be destroyed to inspect it, like ‘blowing a fuse.’ So most manufacturers are willing to accept lots with a small percentage of defects; needless to say, this percentage varies depending upon the importance of the item under consideration. To continue with the example, say a sample of n = 200 is taken from the lot of N = 5,000, and it is decided to accept the lot if the number of defectives, D, in the sample is less than or equal to Ac = 7, the acceptance number.

Now clearly, as with any acceptance sampling plan, there are two types of errors that can be made. Say p is the fraction defective in the lot. If p = 0.02 and D > 7, then the lot is rejected when it should be accepted; this is an error of the ‘first type.’ The ‘second type’ of error occurs if p is large, say p = 0.06, and D ≤ 7, so that the lot is accepted when it should be rejected. It is desirable to have the probabilities of these errors as small as possible. Using a Poisson probability approximation, it is found in this illustration that the probabilities of the two errors are α = 0.051 and β = 0.090, respectively. If these probabilities are not small enough, then the sample size n must be increased and the acceptance number Ac changed.
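The two error probabilities just quoted can be checked with the Poisson approximation, taking the Poisson mean to be np; the short sketch below uses only the values given in the example.

```python
# Check alpha and beta for the plan n = 200, Ac = 7, using the Poisson
# approximation with mean lambda = n * p.
import math

def poisson_cdf(k, lam):
    """P(D <= k) for a Poisson random variable with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

n, ac = 200, 7
alpha = 1 - poisson_cdf(ac, n * 0.02)   # reject a good lot (p = 0.02), first type
beta = poisson_cdf(ac, n * 0.06)        # accept a bad lot (p = 0.06), second type
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")   # approximately 0.051 and 0.090
```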

Typically, the manufacturer and the customer require certain specifications associated with the product. These are usually given in terms of a target value, a lower specification limit, LSL, and an upper specification limit, USL. These ‘specs’ are set, often by the engineer or customer, so that if the product is within specs, it will work satisfactorily. One of the best ways to see whether the products are within specs is to construct a histogram of the measurement of interest. If the product is satisfactory, the lower and upper values of the histogram should be within specifications. If several items fall outside of specs, then seemingly the process, without proper adjustments, is not capable of producing satisfactory products. If adequate improvements cannot be made with the current machinery, possibly 100 percent inspection must be used to eliminate the bad items, or else new machinery should be installed.

Clearly, economic characteristics of the situation are needed to make a final decision on the appropriate action. While a histogram seems to be a satisfactory way to measure the capability of the process, a large number of companies are resorting to various measures called capability indices. Possibly the most popular one is Cpk. Say µ and σ are the mean and the standard deviation, respectively, of the measurements associated with a process which is in statistical control.

The index Cpk is equal to the distance from µ to the closest specification limit divided by 3σ. For example, suppose LSL = 48.1, USL = 49.9, µ = 49.1, and σ = 0.2; then Cpk = 0.8/[3(0.2)] = 1.33, because USL = 49.9 is the closest spec to µ = 49.1 and the distance between them is 0.8.
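The calculation can be written out directly; the following short sketch simply reproduces the worked example above.

```python
# Cpk: distance from the mean to the nearer specification limit, divided by 3*sigma.
def cpk(mu, sigma, lsl, usl):
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Values from the example: LSL = 48.1, USL = 49.9, mu = 49.1, sigma = 0.2.
print(round(cpk(mu=49.1, sigma=0.2, lsl=48.1, usl=49.9), 2))   # 0.8 / 0.6 = 1.33
```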

The reason that such a measure seems good is that if the underlying distribution is normal, then 99.73 percent of the items are within µ ± 3σ. If Cpk > 1, then the closest spec is more than 3σ away from µ. Most companies using Cpk require that it be greater than 1.33 or even 1.5, so that µ is 4σ or even 4.5σ away from the nearest spec.

However, there are some difficulties associated with the use of Cpk, or any of the capability indices. One is that the underlying distribution of measurements is assumed to be normal, when often it is skewed, possibly with a very long tail. But even worse, usually µ and σ have been estimated using some sample mean and standard deviation.

Since these are only estimates with some error structure, the computed Cpk using them may be in error by as much as 0.2 or 0.3, even with samples as large as n = 50 or 100. For illustration, the computed Cpk using estimates might be 1.19 while the real Cpk is 1.39 or 0.99.
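A small simulation illustrates this point; it assumes a normal process with the specification limits and parameters from the earlier example, and all numerical choices (seed, number of replications) are illustrative.

```python
# Variability of the estimated Cpk when mu and sigma are estimated from n = 50
# normal observations (illustrative simulation, not from the text).
import random
import statistics

random.seed(1)
true_mu, true_sigma, lsl, usl, n = 49.1, 0.2, 48.1, 49.9, 50
true_cpk = min(usl - true_mu, true_mu - lsl) / (3 * true_sigma)   # 1.33

estimates = []
for _ in range(1000):
    x = [random.gauss(true_mu, true_sigma) for _ in range(n)]
    m, s = statistics.mean(x), statistics.stdev(x)
    estimates.append(min(usl - m, m - lsl) / (3 * s))

print(f"true Cpk = {true_cpk:.2f}; estimated Cpk ranged from "
      f"{min(estimates):.2f} to {max(estimates):.2f} over 1000 samples")
```

Runs of this kind typically show estimated values spread a few tenths on either side of the true Cpk, which is the order of error described above.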

Frequently, management might be very satisfied with a computed Cpk of 1.39 and not with one of 1.28 if it is using the 1.33 cutoff, and yet there may not be any ‘real’ difference between the two values. Management should be aware of this possibility in making certain judgments and decisions.

Only some basic, but important, statistical methods for quality improvement have been touched upon here. There are many more quantitative methods that can help, and these are considered in the references. They range from simple ideas like flowcharts, Pareto diagrams, and cause-and-effect diagrams, through CUSUMs and exponentially weighted moving average charts, to more complicated techniques in the design of experiments, including Taguchi’s methods. Most of these techniques are explained in books such as those of Montgomery (1996) or Ledolter and Burrill (1999). Accordingly, only a few remarks are made about the simple ones, and the references should be consulted for the more complicated statistical techniques.

A flowchart of the total process shows the steps that are needed to complete the process. Often by constructing a flowchart, the persons involved can see where certain steps can be eliminated or worthwhile shortcuts taken.

A Pareto diagram is named for an Italian economist; it is essentially a bar graph in which the height of each bar represents the percentage of defects caused by a certain source. The possible sources are ordered so that the corresponding bars are in order of height, with the tallest on the left. Often one or two (possibly three) sources account for more than 85 percent of the defects; hence the Pareto diagram indicates clearly where the most effort should be placed to achieve improvement (a small sketch of this ordering appears after this passage).

The ‘cause and effect’ diagram is often referred to as an Ishikawa ‘fish bone’ diagram, because it looks like one. The main horizontal line points to the ‘trouble,’ which is labeled at the right end of the line. Four to six diagonal lines leave the horizontal line, slanting to the ‘northwest’ or ‘southwest,’ with labels at their ends that might be major sources of the trouble, like people, materials, machines, environment, and so on. Then little ‘bones’ are attached to each of these with appropriate labels. It is amazing how completely these can be filled in by four or five knowledgeable persons ‘brainstorming’ for 30 minutes or so. All labels on the bones should then be considered as possible sources of the trouble, and often such a diagram calls attention to sources that might otherwise have been overlooked.
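The sketch below shows the ordering behind a Pareto diagram: hypothetical defect counts by source are sorted so the largest contributor comes first, and the cumulative percentage shows where most of the trouble lies. The sources and counts are made up for illustration.

```python
# Ordering and cumulative percentages behind a Pareto diagram (illustrative data).
defects = {"solder joints": 112, "scratches": 41, "missing screws": 14,
           "misaligned labels": 9, "wrong component": 6}

total = sum(defects.values())
cumulative = 0.0
for source, count in sorted(defects.items(), key=lambda item: item[1], reverse=True):
    share = 100 * count / total
    cumulative += share
    print(f"{source:18s} {count:4d}  {share:5.1f}%  cumulative {cumulative:5.1f}%")
```

Here the two largest sources account for roughly 84 percent of all defects, which is the kind of concentration the Pareto diagram is meant to expose.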

More advanced techniques like CUSUMs and exponentially weighted moving average charts, which can be used in place of or alongside Shewhart charts, have proved most worthwhile. However, the design of new products, or troubleshooting to find factors that are causing defects, is best served through the statistical design of experiments. Some of these methods, in particular the 2^k and 2^(k-p) designs, have been exceptional tools; they are considered in the book by Box et al. (1978). Overall, quality improvement has been greatly enhanced by statistical theory and methods. The major contributors, including Shewhart, Deming, Juran, Ishikawa, Taguchi, and many more, deserve the appreciation of many in the quality movement. Clearly, from various experiences that have not been successful, much more remains to be done, and statisticians should play a major role in future efforts to improve the quality of products and services.

Bibliography:

  1. Box G E P, Hunter W G, Hunter J S 1978 Statistics for Experimenters. Wiley, New York
  2. Deming W E 1986 Out of the Crisis. Massachusetts Institute of Technology, Cambridge, MA
  3. Hogg R V, Ledolter J 1992 Applied Statistics for Engineers and Physical Scientists. Macmillan, New York
  4. Ishikawa K 1982 Guide to Quality Control. Asian Productivity Organization, Tokyo
  5. Latzko W J 1986 Quality and Productivity for Bankers and Financial Managers. Marcel Dekker, New York
  6. Ledolter J, Burrill C W 1999 Statistical Quality Control. Wiley, New York
  7. Montgomery D C 1996 Introduction to Statistical Quality Control. Wiley, New York
  8. Shewhart W A 1931 Economic Control of Quality of Manufactured Product. Van Nostrand, New York