

Calibration Explained

By Mike Gilstrap | August 13, 2014 at 05:20 PM EDT


By Ed Rocheleau in August 4, 2014 edition of Quality Magazine


Metrology comes from the Greek word “metron,” meaning “measure”; metrology, therefore, is the science and practice of measurement. Measurement is conducted using quantitative equipment, which requires calibration on a regular basis.


Calibration is a comparison of two measurement devices or systems: one of known uncertainty (the standard) and one of unknown uncertainty (the test equipment or instrument being used). Every day there are numerous applications proving how useful calibration is. Without calibration, or with incorrect calibration procedures, we may pay more at the gas pump, pay for food incorrectly weighed at the checkout counter, or even encounter problems as simple as a car door that does not shut properly.


All calibrations should be traceable: the measurement or the value of the standard must be relatable to stated references, usually national or international standards, through a valid, unbroken chain of calibrations, each with stated uncertainties. Uncertainty is an estimate of the limits, at a given confidence level, within which the true value lies.


Metrology laboratories are places where both metrology and calibration work are performed, while calibration laboratories generally specialize in calibration work only. Calibration laboratories must demonstrate accuracy, the closeness of a measured value to a known reference value, as well as repeatability.
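
As a rough illustration of these two quantities, the sketch below estimates accuracy (the bias of the mean relative to a known reference) and repeatability (the scatter of repeated readings); the readings and reference value are invented for the example.

```python
# Hedged sketch: accuracy as bias of the mean against a known reference,
# repeatability as the scatter of repeated readings. Hypothetical data.
import statistics

readings = [10.002, 10.001, 10.003, 10.002, 10.001]  # mm, repeated measurements
reference = 10.000                                   # mm, known standard value

bias = statistics.mean(readings) - reference   # systematic offset (accuracy)
repeatability = statistics.stdev(readings)     # 1-sigma spread of readings

print(f"bias = {bias:+.4f} mm, repeatability = {repeatability:.4f} mm")
```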


Both metrology and calibration laboratories must make every attempt to isolate the work performed from influences that might affect it. These influences can include temperature, humidity, vibration, electrical power supply, radiated energy and others. Generally, it is the rate of change or instability that is more detrimental than whatever value prevails.


The sensitivity and stability of our measurement instruments must also be considered if we are to make precise measurements and deliver the most accurate results to our customers. All of these influences must be reflected in the accuracy figures provided to your customers.


Stability is often expressed as the percentage change in the calibrated output of an instrument over a specified period, typically 90 days to 12 months, under normal operating conditions. These variables determine when re-calibration must be considered. It is very difficult to judge the stability and performance of an instrument without a set of calibration results. With standard certificates, these results should have been compared with the published specification by the calibration laboratory and will normally be categorized to show conformance to specification. ISO/IEC 17025 certificates often give the results in more detail, with an indication of any results that fall outside specification.
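
The notion of stability as a percentage change can be made concrete with a small sketch; the drift limit and voltage readings below are hypothetical, not published specification values.

```python
# Hedged sketch: stability as the percentage change of a calibrated
# output between two calibrations. Limit and readings are hypothetical.

def percent_drift(previous, current):
    """Percentage change between two calibration results."""
    return (current - previous) / previous * 100.0

def needs_recalibration(previous, current, limit_percent):
    """Flag the instrument when drift exceeds the allowed percentage."""
    return abs(percent_drift(previous, current)) > limit_percent

# A 10 V reference whose output, 12 months later, reads 10.012 V:
drift = percent_drift(previous=10.000, current=10.012)
flagged = needs_recalibration(10.000, 10.012, limit_percent=0.05)

print(f"drift = {drift:.3f}%, re-calibrate: {flagged}")
```

This is exactly why a set of prior calibration results matters: without the earlier value, the drift cannot be computed at all.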


Calibration laboratories that are accredited to the international standard ISO/IEC 17025:2005 must demonstrate competence both in the technical aspects of the measurements and in the quality assurance aspects that ensure you will get the service you ask for, if you have specific requirements. Accreditation also ensures that you will receive a useful and valid certificate and set of results if you prefer to leave the detailed requirements to the laboratory.


One example to review is the calibration of your coordinate measuring machine (CMM). We are primarily going to discuss two different performance tests: the international standard, ISO 10360, and the U.S. standard, ASME B89.4.1.


The volumetric performance test involves measuring the length of the ball bar as the distance between its two end spheres. This measurement is repeated in many positions throughout the machine volume, and any change in the measured length reflects machine geometry errors. Though this is a good test, repeatability errors are averaged out, and because the ball bar is not calibrated, the test provides no traceability. The ASME standard does provide tests that address these issues (the repeatability and linear displacement accuracy tests), but those tests must actually be used. In addition, any influence of probe tip calibration on size measurement must be tested for separately using the ASME bidirectional length test, as none of the other tests is sensitive to this error.
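
The idea behind the volumetric test can be sketched as follows: the same uncalibrated bar is measured in several orientations, so only the spread of the measured lengths is meaningful, never the absolute length. The positions and values below are hypothetical.

```python
# Hedged sketch of the ball bar idea: because the bar is uncalibrated,
# only the *spread* of measured lengths indicates geometry error; there
# is no traceable absolute length. Hypothetical data.

# Measured ball bar length (mm) in different orientations in the volume:
measured_lengths = {
    "x-axis":     300.012,
    "y-axis":     300.009,
    "z-axis":     300.015,
    "diagonal-1": 300.011,
    "diagonal-2": 300.008,
}

# The range of measured lengths reflects machine geometry errors:
spread = max(measured_lengths.values()) - min(measured_lengths.values())
print(f"volumetric length spread = {spread * 1000:.1f} um")
```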


The ISO 10360 standard is really a series of standards. The most important part of the series is Part 2, which was first published in 1994. The official designation is ISO 10360-2:1994. As with all standards, the part number (2) and the date (1994) are both very important, as the standard could, and will, change over time. The ISO standard is the youngest of all the CMM standards, but since it is the ISO standard, it is fast becoming the most popular, both in the U.S. and worldwide. Part 2 defines two performance tests. The first is the length measuring performance, designated “E,” and the second is the probing performance, designated “R.” The E test is a complete test of the CMM’s ability to measure length, an important fundamental characteristic of a machine. The test procedure calls for a series of measurements of either calibrated gage blocks or a step gage.
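
A rough sketch of the E test idea: the error of indication for a calibrated gage block is compared against a maximum permissible error, often specified in a form such as MPE_E = A + L/K. The constants A and K and the readings below are hypothetical, not from any manufacturer's specification.

```python
# Hedged sketch of the E test idea: error of indication on a calibrated
# gage block checked against MPE_E = A + L/K (result in micrometres,
# L in millimetres). A, K and the readings are hypothetical.

def mpe_e(length_mm, a_um=1.5, k=333):
    """Maximum permissible error of indication, in micrometres."""
    return a_um + length_mm / k

def error_of_indication(indicated_mm, calibrated_mm):
    """Measured minus calibrated length, converted to micrometres."""
    return (indicated_mm - calibrated_mm) * 1000.0

err = error_of_indication(indicated_mm=100.0021, calibrated_mm=100.0000)
limit = mpe_e(100.0)
print(f"E = {err:.1f} um, MPE_E = {limit:.2f} um, pass = {abs(err) <= limit}")
```

Unlike the ball bar, the gage blocks are calibrated, which is what makes the E test a traceable check of length measurement.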


So is this the best method of calibrating your CMM? In general, yes, this is the most practical industrial method for CMM calibration. This method shifts the burden of calibration completely to the manufacturer and makes it possible to verify that machine performance has not changed over the years. However, although it is relatively unimportant which standard is used, it is critical that the standard be used correctly and completely. This can be a big problem when using the ASME standard for calibration. As discussed before, the ASME standard is often misinterpreted as being just the ball bar test. The volumetric performance test is a good test, but it doesn’t do enough to calibrate a CMM. At a minimum, the linear displacement accuracy test is also needed, as that is the key ASME test for the purpose of providing traceability.


Finally, be wary of anyone who is calibrating your CMM. All CMMs today are software corrected at some level, which means the wrong person could really mess up your machine. Recent quality standards have started requiring calibration services to be accredited in many industries. Accreditation is your way of knowing that the work being done on your machine has been validated by external experts. Whether you hire a third party for your CMM calibration or use the original machine manufacturer, make sure they are accredited for the calibration work they are doing.






Natural gas boom yielding big benefits, Congress told

By Mike Gilstrap | June 25, 2014 at 10:52 AM EDT

Posted on June 24, 2014 @ 5:36 p.m. by Jennifer A. Dlouhy in Natural gas, Politics/Policy


WASHINGTON — The rapid growth in domestic natural gas production is sending benefits rippling across the U.S. economy, business leaders and an energy expert told Congress on Tuesday.

The uplift — including lower energy bills, increased jobs and economic growth — extends well beyond the oil patch, Daniel Yergin, vice chairman of the research firm IHS, told the House and Senate’s Joint Economic Committee.

“Because of the nature of the supply chains across the economy, we see a great impact on states that don’t have significant shale gas or oil activity,” Yergin said. About a quarter of the 2.1 million jobs supported by unconventional natural gas and oil activity are located in non-producing states, he added.

Yergin also credited domestic natural gas production — and the relatively low prices for the fossil fuel — with helping to lure foreign dollars to stand up new manufacturing facilities in the United States.

Companies have announced tens of billions of dollars in planned spending on new manufacturing facilities. At least some of that investment is being driven by lower prices for the natural gas used both as a chemical building block and to generate electricity used at the plants.

“A vibrant oil and gas industry makes other industries more productive,” said Charles Meloy, Anadarko Petroleum Corp.’s executive vice president for onshore exploration and production.

Tax dollars

It also ensures more tax revenue flows to state and federal coffers, noted Rep. Kevin Brady, R-The Woodlands.

“More American-made energy means more American tax revenues,” said Brady, chairman of the Joint Economic Committee. “More natural gas production in America helps to balance the budget and fund necessary services for families who need the services.”

UPS Vice President of Corporate Public Affairs Jim Bruce described how the company is increasingly turning to natural gas to fuel its fleet of heavy trucks.

“Natural gas is revolutionizing trucking at UPS,” he said, noting that the company’s natural gas-fueled trucks are racking up 2 million miles a week, displacing over 300,000 gallons of diesel.

For the biggest rigs and tractor trailers, Bruce said, UPS found that only liquefied natural gas “would give us the power, the torque and the distance . . . we needed to go from hub to hub and back.”

But Elgie Holstein, senior director for strategic planning at the Environmental Defense Fund, sounded a cautious note.

“There’s no question that unconventional natural gas development is lowering energy costs, creating new jobs, supporting more domestic manufacturing and even delivering some measurable environmental benefits,” Holstein told the joint House-Senate committee. “But it is also posing localized public health and environmental risks, and it is accelerating global climate change.”

Holstein cited concerns about smog near drilling activity and leaks of the greenhouse gas methane from natural gas infrastructure. The Obama administration has announced plans to broadly crack down on methane emissions, with environmental regulators expected to announce possible new rules later this year. Holstein suggested those federal mandates are necessary to encourage reductions.

While some “progressive” oil and gas companies are moving to clamp down on emissions of methane and smog-forming volatile organic compounds themselves, “they are to some extent constrained by competitors who may not be so anxious to move in the same direction,” Holstein said.

Financial incentive

Anadarko’s Meloy noted that oil and gas companies have a vested financial interest in plugging methane leaks and stopping gas flaring.

“It is in our best interest to put every molecule into the pipeline,” he said. “We’ve worked hard with EDF and others to make sure that’s the standard (in the places) where we operate.”

Colorado regulators imposed stiff controls on those emissions earlier this year, after Anadarko, Noble Energy and Encana Corp. agreed on a strategy with the Environmental Defense Fund.

Holstein said that collaborative Colorado approach could be a model for other states.


Ten Mistakes CEOs Make About Quality

By Mike Gilstrap | April 14, 2014 at 01:47 PM EDT

By Willard I. Zangwill

Quality Progress, June 1994


Fifth in a five-part series:


THE WINNERS OF THE MALCOLM BALDRIGE National Quality Award and other organizations noted for excellent quality programs think differently about quality than most organizations.  A professor of management science and students at the University of Chicago interviewed executives at a number of organizations that have excellent quality programs.  These interviews revealed 10 mistakes that many corporate executive officers (CEOs) make that might prevent their companies from developing excellent quality programs.



Mistake 9:  Failing to follow the best practices


Near the end of the 1970s, Xerox was confronted by formidable competition from Japanese organizations.  The Japanese were selling copiers at a cost comparable to Xerox’s manufacturing cost.  Xerox’s market share was plunging.  After years of disregarding its Japanese competition, Xerox had to confront reality.


Although Xerox launched several programs in its counterattack, perhaps the most crucial was benchmarking.  With benchmarking, employees in the organization determined the best practices in the industry.  They learned about these best practices, implemented them and became the best at them.  All functions of the organization, not just manufacturing, were required to benchmark, including shipping, internal auditing, treasury and training.  Xerox, like most manufacturing organizations, had most of its costs not in manufacturing, but in overhead and general administration.  Therefore, all parts of the organization had to benchmark and learn how to become the best at what they did.


An important aspect of benchmarking is to look for the best practices not just inside one’s industry but also outside.  For warehousing, Xerox benchmarked against L.L. Bean.  Going outside one’s industry might, in fact be easier because direct competitors are less likely to share information.  IBM Rochester, a Baldrige Award winner, for example, benchmarks itself against 200 other leading organizations from both inside and outside its industry.


Benchmarking is a powerful concept.  But despite the obvious value of learning from the best, many managers will perceive the process as threatening and deny its value.  They will object that other organizations are different and not comparable and argue that what another organization does is not relevant.  At Xerox, people gave repeated rationalizations and justifications for not benchmarking such as the fact that the Japanese have a different culture, get government support and have a better school system.  Denial of the situation was rampant and Xerox leadership had to convince people that they could learn and improve by studying other organizations.  The most difficult part of benchmarking is not the process itself, but in getting people to do it.


Benchmarking requires leadership to help people face the fact that they are not the best and must therefore improve.  It also requires hard work to know one’s own process thoroughly and to understand and learn from the benchmarking process.  Most of all, it requires a CEO who knows that world-class success cannot be achieved with second-class operations.


Mistake 10:  Believing Baldrige Award examiners are stupid


The Baldrige Award is given for outstanding quality to companies applying in three categories: manufacturing, service and small business (no more than 500 employees).  To apply, companies must complete the Baldrige Award examination process, which requires detailed descriptions of the company’s total quality systems.  Many organizations, however, make the mistake of submitting material that resembles a public relations piece.  This might occur because of the company’s natural enthusiasm for its achievements.  More likely, it is because the company lacks well-defined, well-documented and measurable quality systems.


A rule of thumb for determining whether a good quality system exists is to audit the process.  An audit will determine whether the company is making progress in a particular area and what to do if it is not.  A poor system, for example, will likely have one or more of the following problems: no clearly established goals, no means to measure progress toward the goals, and no well-defined process of identifying and correcting problems.


Most Baldrige Award applicants have fairly good systems in the processes that directly generate the products and services.  The weak systems are generally seen in leadership, planning, product development and administrative activities.  The lack of systems in these areas often becomes apparent in applicants’ responses to questions asking how these processes are improved.  The companies might respond by indicating that they have meetings or that a particular person, such as the division manager, has responsibility for the activity.  Or they might try a public relations approach such as, “We at the XYZ Company are always looking for ways to improve our strategic quality planning process.  We believe that quality is one of the most important components of our business plan and thus, quality is an integral part of our strategic planning process.”  The company, however, never directly describes how the process is systematically improved.


Another example is an organization’s description of a “process that ensures that customer service requirements are understood and responded to throughout the company.”  An inadequate response would be:  “We believe that a customer focus is a key element in our continued success.  Every customer contact department has signs posted stating ‘Service and courtesy are our business.’  In addition, all internal stationery for memos has ‘Treat the Customer Right’ imprinted prominently in the letterhead.”  Again, there is no discussion of the system.


Virtually every issue addressed in the Baldrige Award criteria asks for a description of the management system or process the company uses for that issue.  This means how the process is monitored, how it is improved and what the results have been, both in terms of improvements over time and in comparison to competitors and world-class companies.  When no system exists, it is tempting to resort to public relations statements about the importance of quality and customer satisfaction.  The Baldrige Award examiners, however, are not stupid enough to be fooled.



Ten Mistakes CEOs Make About Quality

By Mike Gilstrap | April 04, 2014 at 08:38 AM EDT

By Willard I. Zangwill

Quality Progress, June 1994


Fourth in a five-part series:


THE WINNERS OF THE MALCOLM BALDRIGE National Quality Award and other organizations noted for excellent quality programs think differently about quality than most organizations.  A professor of management science and students at the University of Chicago interviewed executives at a number of organizations that have excellent quality programs.  These interviews revealed 10 mistakes that many corporate executive officers (CEOs) make that might prevent their companies from developing excellent quality programs.



Mistake 7:  Using misguided incentives and developing a distorted culture


CEOs often sincerely try to institute beneficial changes, such as launching a quality program.  They allocate resources, train people and establish goals, but after a year of waiting they have gained little in return for their time and money.  Why does this occur?


Obtaining a sizable change in an organization requires a major revision in the culture, and a CEO’s program to improve quality will have little impact if the incentives are wrong for the culture.  This frequently occurs, for example, when managers are encouraged to attain monthly production quotas even if it means shipping poor-quality products.


Another common incentive is the promotion of managers who are deemed good crisis or fire-fighting managers.  The manager who is considered a star is the one who marshals the resources, gets everyone to work overtime and resolves crises.  Top management hears of these heroics and promotion occurs.  Rarely does anyone ask, however, why this manager permitted so many crises to happen.


One important tenet of quality is to keep the system under control so that defects or crises rarely occur.  The best executives follow this philosophy and work to reduce the number of crises.  Without the crises to get top management’s attention, however, these people might work with little honor or promotion.


Trouble might also ensue if the overriding culture is based on cost reduction.  Some managers will cut back on training, maintenance and new product development.  Those actions cut costs, but they have no chance to positively influence the company.  This manager will keep costs very low, look good to top management and receive a promotion in 18 to 24 months.  The manager after him or her, however, is left with depleted resources and the inability to compete.  Implementing a quality program often means making a careful investigation of the cultural aspects of the organization that might defeat it.


Mistake 8:  Changing targets each year


Most CEOs have an annual planning process, similar to management by objectives, in which goals are established for the next year.  The overall goals are set by top management and then lower levels get their goals after some debate and discussion in a cascading process.  In theory, if managers have reasonable goals and incentives, then most goals will be accomplished.  In actual practice, however, high levels of accomplishment rarely occur.


To achieve any really important and challenging goal requires training, investment, management reviews, incentives, worker involvement and cultural changes.  All of this takes time and effort to implement.  If new goals are instituted annually, management gets involved with new goals and directions before it can get fully underway on the old goals.


How can a CEO overcome this?  Although some of the annual goals can change yearly, a few should be so fundamental and crucial that they persist into the foreseeable future.  Motorola has instituted three such goals for the entire organization:  quality, cycle time and cost reduction.  It took years to develop and implement the systems to ensure progress on these goals and for some people to realize that the CEO was really serious about them.


Although some goals will have to be revised annually, changing too many goals too fast will confuse middle management and employees. Instead, top management should identify the fundamental factors that underpin the organization’s success.  To implement these factors, top management should include a few cardinal goals as well as the systems to achieve them (training, management reviews, and incentives).



…to be continued…




Ten Mistakes CEOs Make About Quality

By Mike Gilstrap | March 26, 2014 at 01:56 PM EDT

By Willard I. Zangwill

Quality Progress, June 1994


Third in a five-part series:


THE WINNERS OF THE MALCOLM BALDRIGE National Quality Award and other organizations noted for excellent quality programs think differently about quality than most organizations.  A professor of management science and students at the University of Chicago interviewed executives at a number of organizations that have excellent quality programs.  These interviews revealed 10 mistakes that many corporate executive officers (CEOs) make that might prevent their companies from developing excellent quality programs.





Mistake 5:  Believing that quality improvement is too expensive


Many executives believe that quality improvement is too expensive when, in fact, the opposite is true: quality cuts costs.  Quality requires doing the right job right the first time, and doing the right job is cheaper than doing the wrong job.  Any task that must be redone or product that must be reworked adds cost.  Any information that is incorrect and must be revised adds cost.  Any waste of people’s time, such as having to wait an excessively long time for top management to make a decision, adds cost.  The more right things that are done right the first time, the more money is saved.  That is why quality saves money, and all of the Baldrige Award winners have documented proof of this.


One of the curious facts about quality is that costs tend to go down more rapidly than expected.  This is because quality improvement in one area often cuts costs in other areas, thereby reaping multiple savings.  For example, an organization decided to improve the quality of the information generated by its computerized inventory system.  Its computers would show that a product was in inventory when, in fact, there was no such product on the shelf, or they would show that a product was unavailable when there were still several in stock.  Even though this happened only a small percentage of the time, people often had to call the warehouse to check whether an item was in stock.


To improve this system, the organization set up a special team to get rid of the defects.  Any time a problem arose, the team would count the inventory and check out what had gone wrong.  Many problems were found and resolved.  Part numbers were simplified and corrected, storage was rearranged and the computer software was improved.  After a couple of months of effort, the computerized inventory system was made reliable, dependable and accurate.  An immediate savings was the elimination of phone calls checking whether an item was in stock.


Then came an unexpected twist.  Twice a year the organization took physical inventory and counted everything in stock.  Soon after the computerized inventory system was corrected, a physical count was made.  The numbers from the physical count, however, were different from those that the computer reported.  The physical count showed the computer count to be wrong.  The quality team was upset and felt demoralized.  After all of the effort to improve the computerized inventory system, the team members thought that it had failed.


The quality team, however, decided to check the physical count.  Because everyone in the organization helped do the physical count, many people made errors; they did not understand the part number system or the storage system.  The physical count, therefore, was wrong, and the computer count of the inventory was more accurate.  The biannual physical inventory was stopped since it was less accurate than the computer system, and the computer data were then used for all financial reporting.


In addition, the accounting department had two computer programmers developing software for the physical count, which was necessary because of the organization’s changing product mix, consisting of a variety of electronic test and computer equipment.  Now, with the physical count eliminated, the programmers were no longer needed for that task.


The people who had started the project of correcting the errors in the computer system had no concept that other savings would result. Phone calls from people trying to find out what was in stock, physical counts of inventory and the need for revisions to the inventory computer program were eliminated.


Most systems consist of many parts or steps in which one part feeds information or material to the next.  As the quality of one part of the system improves, it sends higher-quality information or material to the subsequent steps in the process.  That higher-quality input produces a cost reduction in those steps.  Since the interconnection of systems is often complex, sometimes, as in the inventory example, it is difficult to foresee exactly where the cost reductions will occur.  But as quality improves in one operation costs almost always drop not only in that operation, but also in other operations.


Mistake 6:  Managing by intuition and not by fact


Most CEOs strongly believe in their judgment.  After all, that is the essence of being a CEO: having the background, judgment and intuition to make good decisions.  Research, however, tells a different story.  Many behavioral science studies have verified that intuition and judgment are not nearly as sound as we are led to believe.  In effect, people’s brains “lie” to them and tell them their judgment is much better than it really is.


The book Decision Traps by Edward Russo and Paul H. Schoemaker, for example, details some of the fallacies that people’s brains tell them.  In December of one year, executives were asked to predict sales for the following year.  More than a year later and after the actual annual sales figures were shared with everyone, the executives were asked to recall their predictions and they remembered them as being much closer to the actual outcome than they really were.  In essence, once the actual figures were known the brain subconsciously distorted and recalled the predictions as closer to the actual outcome.


Once a person knows an outcome his or her brain adjusts its memory.  The person then thinks that his or her judgment was far better than it actually was.  Judgment also gets distorted because the brain tends to pay more attention to recent, unusual or emotional events.


Management by fact, not intuition, strives to surmount this predicament.  One of the most common areas for misjudgment is in assuming to know the needs of customers.  Almost all presumptions about customers are wrong.  Whitman Corporation in Chicago, IL, was concerned that its customers were upset because of damage to goods during shipment.  It launched a project to reduce the shipping damage and succeeded after some effort.  Only later did it learn that the customers were disappointed and thought the old shipping method was better.  Although the new packaging protected the contents well, it was extremely hard for the customers to open.


At First National Bank of Chicago in Illinois, the managers thought that the most important thing to customers was fast, courteous service.  When the customers were surveyed, that item ranked fourth in importance.  The customers’ biggest concern was employees who said they would get back to them on an issue, but then did not.


The need for management by fact extends not just to customers, but to any action or decision that can be made, as demonstrated in the following examples.  An organization was proud that it had reduced its defect rate to 5% from 15%.  When asked for the facts, it produced some charts that showed a 5% defect rate for several months.  There was no indication, however, that the defect rate was previously 15%.  When pushed further, someone recalled that someone else had said the defect rate was at least 15% at some point last year.  Someone else then recollected that when new machines were installed, the defect rate shot up briefly, and perhaps that unique occurrence accounted for the 15%.


Another organization was proud of the new sales techniques its salespeople were using.  When questioned, however, they could produce no proof of increased sales and justified this by replying that the salespeople had only recently been trained, so it was too early for proof.  When the training was investigated, it was discovered that only 60% of the salespeople had attended the training session.  When those attendees were questioned, most thought the training was useless and too theoretical, so they never implemented it.


Compaq Computer also searches for facts and, according to Vice President Hugh Barnes, continually uses sanity checks and cross-checks.  Just because an executive says something does not mean the statement is gospel.  Suppose an executive predicts that sales for a product will be 5,000 units.  That statement is questioned and the facts are sought.  Is it the person’s surmise?  Is it based on market surveys?  Is it based on orders?  The degree of validity of the statement is thereby determined.


The mind, behavioral scientists know, searches for evidence to confirm its beliefs and denies the validity or existence of contrary or additional evidence.  The best antidote to these distortions is management by fact.  The facts are usually easier to obtain and considerably more useful than most people believe.  Baldrige Award winners tend to collect a great deal of information and use it extensively in their decision processes.



…to be continued…



Ten Mistakes CEOs Make About Quality

By Mike Gilstrap | March 19, 2014 at 10:01 AM EDT | No Comments

By Willard I. Zangwill

Quality Progress, June 1994


Second in a 5-part series:


THE WINNERS OF THE MALCOLM BALDRIGE National Quality Award and other organizations noted for excellent quality programs think differently about quality than most organizations.  A professor of management science and students at the University of Chicago interviewed executives at a number of organizations that have excellent quality programs.  These interviews revealed 10 mistakes that many chief executive officers (CEOs) make that might prevent their companies from developing excellent quality programs.




Mistake 3:  Believing that being close to the customer and planning for customer satisfaction is sufficient


The top executives at many companies believe that their organizations have a strong customer focus.  They might have oriented their planning systems to better satisfy the customer.  They might also maintain complaint hotlines, have extensive warranties and conduct customer satisfaction surveys.  These techniques, while helpful, still do not constitute a “total customer focus” because customer satisfaction is not something that concerns only some parts of the organization.  Each and every group in the organization should have goals and incentives that are tied into enhancing customer satisfaction.  This requires a carefully conceived management system that involves all parts of the organization in improving customer satisfaction.


Why is this done so infrequently?  Perhaps the most pervasive reason is that many companies lack a systematic approach to customer satisfaction.  The executives believe that a system isn’t needed and that they already understand the customers and know what they want.  Almost always, however, that belief is false.  Motorola, for example, is well aware of this fact and requires its executives to visit its customers’ organizations.  The executives are required to speak not just to the organizations’ executives, but also to the workers who actually use the Motorola product.  Experience has shown that almost everyone has distorted ideas about what customers truly think, and a systematic approach is needed to overcome this.


Steps to overcome the problem


How should such an approach be developed?  The first step is to conduct an analysis of all of the interactions a customer might have with the organization.  Most organizations collect customer satisfaction information on the products or services they provide, but an organization provides far more than just a product or service.  It also provides a complicated set of interactions with the customer, all of which should be top notch.  For example, an organization must have knowledgeable salespeople, on-time delivery, accurate information, error-free invoices, courteous and helpful employees who quickly answer phones and accurate and understandable technical manuals. Data should be collected on all customer interactions because total customer satisfaction means meeting or exceeding customer expectations in all areas.


Once the data are collected, the information must be used to improve the system.  For example, the complaint department might handle a customer’s complaint well, but once the customer’s specific problem is resolved a deeper issue arises about what happens next.  Many organizations do not try to find the cause of the complaint and change the system to prevent the problem from recurring.  Using customer information merely to resolve the immediate problem or error is not sufficient; the underlying system must be improved.


The failure to properly use customer information often occurs in the design of a new product.  The marketing department might collect a great deal of customer-related information, but the design engineers might not use it.  For instance, marketing might discover that customers want a car with good acceleration, but the engineers need to know whether that means acceleration for passing on a highway, acceleration from 0 to 60 miles per hour or acceleration to make the tires squeal.  Each situation requires different engineering design choices, but marketing often does not obtain information in a form that the engineers can use.


One way to overcome this problem is to give the right people direct access to the customers.  Ingersoll Rand in Pittstown, NJ, did this in the design of a new hand-held air grinder.  A cross-functional new product development team was formed consisting of people from the marketing, engineering and manufacturing departments.  This team conducted focus groups with customers throughout the country.  It was able to develop the new grinder in one-third the usual development cycle time.  The grinder has sold well and won an award from the Industrial Design Society of America.


While exposing decision makers to customers is a good first step, more formal techniques should be developed to drive the customer information throughout the company.  Quality function deployment (QFD) is used by a number of companies, including Hewlett-Packard, Ford and General Motors.  QFD obtains detailed lifestyle information from the customer.  This information is then deployed throughout the product design process to ensure that the final product fits the lifestyle of the customer who then will feel comfortable with it.


A good customer satisfaction system does far more than obtain information.  It gets the right information to the right people and ensures that the information is used not just to correct a specific error, but to improve the underlying process.


When the system breaks down


Even a very good system might not be sufficient in times of stress and strain.  The system is often abandoned and customer-oriented goals are sacrificed to achieve other business objectives.  To meet end-of-period goals, for example, a great rush often takes place in which defective products are shipped or services are cut.  This happens because executives are driven to reach their numerical goals, to ship a specified amount of product, or to make a certain profit.  When a crunch comes and the numbers might not be reached, the customer-oriented standards are likely to be abandoned first.


What can be done about this?  Robert Galvin, chairman of Motorola’s executive committee, said one of his most important roles was to stand up for quality.  He served as the guardian and maintained the status of quality and customer focus even in times of stress and pressure when others would have sacrificed them.  That is a role that top management cannot delegate and it is the foundation of any successful total customer satisfaction system.  Only the CEO can ensure, even in times of great pressure, that quality and customer satisfaction are preserved.


Mistake 4:  Believing that quality means inspection


Many executives view quality narrowly and believe that it refers only to manufacturing process control and inspection.  Inspection, however, is the antithesis of quality.  In fact, quality’s ultimate goal is to eliminate inspection.  Inspection should not be needed if the process is successful in producing the product without defects.  Inspection is only necessary if the production process is faulty and producing defects.  In this circumstance, final inspection might be necessary, but it should be viewed only as an interim procedure.


There are three problems with inspection.  The first is that inspection eliminates only a percentage of the defects.  Joseph M. Juran, W. Edwards Deming and others suggest that inspection will, as a rule of thumb, eliminate 80% of the defects, meaning 20% of the defects will still get through to the customer.
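This rule of thumb can be put in numbers with a quick sketch.  The 80% catch rate is the figure quoted above; treating stacked inspection stages as independent is an assumption made here purely for illustration:

```python
def escaped_defects(incoming_ppm, catch_rate=0.80, stages=1):
    """Defects per million that slip past `stages` rounds of inspection,
    assuming each stage independently catches `catch_rate` of defects."""
    escape_rate = (1 - catch_rate) ** stages
    return incoming_ppm * escape_rate

# A process producing 10,000 defects per million (1%):
print(round(escaped_defects(10_000, stages=1)))  # 2000 ppm still reach the customer
print(round(escaped_defects(10_000, stages=2)))  # 400 ppm even with double inspection
```

Even doubling up on inspection leaves hundreds of defects per million, which is why the article argues for removing error causes instead.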


Second, an inspector might be able to find defects when the defect rate is at a few percent, but when there is one defect per 25,000, he or she cannot hope to find a defect.  Today’s marketplace demands such high quality levels, often a few defects per million, that final inspection is not a practical method for achieving those levels.


The third major problem with final inspection is that it is expensive; the cost of inspectors, equipment and correcting the defects at the final stage is high.  At worst, the defective product must be scrapped, totally wasting the item.  Even when the item can be salvaged, the rework and repair adds substantially to the cost.


Quality improvement efforts in many companies have shown that inspection is an inadequate approach.  It is much better and less expensive to produce the product correctly in the first place.  The key to this is error cause removal, which means identifying the cause of the defect or error and then eliminating it.  Once the cause is eliminated the defect cannot occur.  Systematically done, this approach is far less expensive and is the best way to achieve virtually zero defects.


For example, billing invoices for domestic pagers from Motorola had 450 errors out of 22,000 total invoices. The errors included wrong or omitted serial numbers, freight amounts or carriers. In just one year, the error rate was cut to nine out of 20,000. This would have been impossible to do by inspection.  Instead, the causes of the defects were rooted out one by one.  The team working on this problem noticed that some of the information was already in the computer and did not need to be recopied, which eliminated the possibility of error in that step.  Other information was available from the bar coding system, and form simplification cut the remaining errors.
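The scale of that improvement follows directly from the figures quoted.  A quick check of the arithmetic:

```python
# Error rates computed from the invoice figures quoted above.
before = 450 / 22_000   # roughly 2% of invoices had errors
after = 9 / 20_000      # well under a tenth of a percent
improvement = before / after

print(f"{before:.2%} -> {after:.3%}, roughly {improvement:.0f}x fewer errors")
```

A reduction of roughly 45-fold in a single year, which inspection alone could not plausibly have delivered.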


Another example is from the People’s Bank of Bridgeport, CT, which had a high error rate for its tellers in the proofing department.  For each error found, a special correction notice had to be issued, which was expensive.  To solve the problem, extra inspection, training, and management encouragement were attempted, but that did not help the situation.  Finally, the bank tried to eliminate the root causes of the errors.  It discovered that the tellers had to know how to do 78 operations.  The bank standardized documents and reorganized the system so that the tellers needed to do only 12 operations.  The errors virtually disappeared.


Whether in manufacturing or service activities, defects are rarely cut by increasing inspection.  Getting rid of the root causes of the errors is usually cheaper and more effective.



…to be continued…

Ten Mistakes CEOs Make About Quality

By Mike Gilstrap | March 12, 2014 at 04:58 PM EDT | No Comments

By Willard I. Zangwill

Quality Progress, June 1994


First in a 5-part series:


THE WINNERS OF THE MALCOLM BALDRIGE National Quality Award and other organizations noted for excellent quality programs think differently about quality than most organizations.  A professor of management science and students at the University of Chicago interviewed executives at a number of organizations that have excellent quality programs.  These interviews revealed 10 mistakes that many chief executive officers (CEOs) make that might prevent their companies from developing excellent quality programs.


Mistake 1:  Failing to lead


When CEOs strive to be leaders and to inspire their employees to excel, many adopt an approach that almost always fails.  They recall movie scenes in which great leaders give powerful and highly motivating speeches, such as Knute Rockne inspiring his football team or General George Patton spurring his troops to success.  Leadership to these CEOs becomes management by exhortation and inspiration.  Speeches, high goals, slogans and campaigns are supposed to motivate employees and propel the organization toward competitive victory.


This misguided Hollywood style of leadership almost always fails.  Jack Stack, CEO of Springfield Remanufacturing, which remanufactures engines in Springfield, MO, recalls when he took over a division and gave a powerful speech designed to rouse and inspire the workers.  At the end of the speech, he asked if there were any questions, and one worker in the back yelled out, “How old are you, anyhow?”  The workers had often been exhorted with motivational speeches and campaigns, slogans and goals.  They were cynical because of repeated failures and knew from experience that little real change would occur.


Although speeches and exhortations might produce a brief flurry of activity, soon people go back to the same management, systems and procedures as before.  Repeatedly, CEOs make speeches, set goals, wage a big campaign and then wonder why this leadership produces little lasting change.


Movies, in an effort to create a dramatic effect, confuse the issue.  Prior to their speeches, Rockne and Patton thoroughly planned, organized, equipped, trained and prepared their men.  That was the main substance of their leadership and created the success.  The motivational speech was the final encouragement and perhaps, was not even necessary.


Good leadership must produce results, which means that the actual work done in the offices or factory must change.  Exhortations rarely accomplish that.  Change requires a new infrastructure in which the organization’s steps, procedures and techniques are improved.  This requires new systems, planning, incentives and training.  Improvement in the way work is done is what quality systems accomplish and is the substance of real leadership.


Mistake 2:  Thinking that planning devolves from financial or marketing goals


Planning in many organizations starts with top management setting goals for financial growth (profit, earnings per share, and return on investment) and market growth (sales).  These overall financial and sales goals then get broken down into specific goals and budgets for each department or area.


What is often neglected in these goals, however, is the customer, who makes the purchase and pays the bills.  Planning should start with the customer, and the centerpiece of planning should be customer satisfaction.  This is true of the Baldrige Award winners.  Westinghouse Commercial Nuclear Fuels Division, for example, has quality goals that directly relate to customer satisfaction issues.  The senior management level has eight goals, which are divided into sub-goals at lower levels of the organization.  Even the individual worker or work team has goals that relate to the overall customer satisfaction goals.  Each month, Xerox Business Systems sends out 40,000 surveys to its customers and to people who bought from the competitors, and the data from these surveys strongly influence corporate goals.  Motorola has overarching goals for improving quality and cycle time, which are directly derived from customer satisfaction issues.  All of the Baldrige Award winners examined, such as Globe Metallurgical, IBM Rochester and Federal Express, have systems to ensure that customer satisfaction drives their goals.


Peterson Products, a metal stamping organization near Chicago, IL, was about to launch a marketing campaign to elevate sales.  Instead, it decided to improve on-time delivery to the customer.  When the percentage of on-time delivery went up, the salespeople were ecstatic because for the first time they could promise delivery.  Sales rose 25% because the customers were getting what they wanted and Peterson was able to drop the plan for the marketing campaign.


Another example is Cooper Tire in Findlay, OH.  Building a reputation as a reliable supplier with modern plants, it aimed its marketing directly at the customers.  It has doubled its market share and enjoys a profit growth rate of 22% per year.  According to the New York Times, it is the envy of giants like Goodyear, Bridgestone, and Michelin.


In most organizations, planning is a vertical process that is driven from the top down or from the bottom up.  Planning should be a horizontal process starting from the customer and working inward.  Financial goals are also necessary, but the customer should drive the goal-setting process, and every department and functional group should have goals that positively affect customer satisfaction.  As Bob LaBant, IBM vice president, said, “My goal is to make our customers successful.  If I had one measure, it would be their success.”


…to be continued…

Injection wells seen as possible cause of earthquakes

By Mike Gilstrap | January 15, 2014 at 11:55 AM EST | No Comments

By Jim Fuquay

Star Telegram


The small earthquakes that shook the Azle area late last year have put a spotlight on another aspect of the oil and gas drilling boom in North Texas — injection wells that get rid of millions of gallons of water used and polluted in the process.


The state has about 35,000 active injection wells, according to the Texas Railroad Commission. Crude oil wells typically produce tons of salt water along with oil, and injection wells pump that water back down into the formation to help extract more oil. Injecting water into a depleting formation is rarely the cause of a seismic event, experts say.


But about 7,000 of the state’s injection wells are being used for disposal. The widespread use of hydraulic fracturing to extract natural gas and oil from shale formations has increased the need for disposal wells, which are used to send wastewater deep underground.


And there’s some evidence that they can cause the Earth to quiver.


“In a way, Texas has been a vast experiment in injection wells,” some of which are used to dispose of oil field waste, said seismologist Cliff Frohlich, associate director of the Institute for Geophysics at the University of Texas at Austin.


Millions of gallons of water are typically used to fracture, or frack, a well, and much of it eventually returns to the surface. Some is recycled, but most is pumped down disposal wells. And the extra fluid can migrate far from the well.


Disposal wells usually don’t produce seismic events, but sometimes they do, said Frohlich, who has studied the link between energy production and earthquakes. In a 2012 study, Frohlich found that “injection-triggered earthquakes are more common than is generally recognized.”


There are five active disposal wells in northern Parker County and southern Wise County, the site of more than 20 quakes that shook the Azle area in November and December. Those events prompted the Texas Railroad Commission to hold a public meeting in Azle on Jan. 2.


After hearing a litany of complaints about disruption and property damage from residents who packed the hearing at Azle High School, the three-member Railroad Commission, which regulates oil and gas production, voted to hire an in-house seismologist.


But some answers could be forthcoming even before that position is filled.


Researchers from Southern Methodist University and the U.S. Geological Survey have installed a network of seismic monitors around Azle and Reno, in northern Parker County, with the goal of collecting better data on the quakes.


Art McGarr, an earthquake researcher at the Geological Survey who is working on the Azle project, said Thursday that researchers expect to present their findings in late April. But they could come to a determination earlier than that and don’t necessarily need additional quakes to occur to do their job.


“We already have a lot of data in hand” from previous quakes, McGarr said. “We’re chewing through it.”


The wild card


Faults, or breaks in the Earth that typically formed millions of years ago in underground strata, are the big unknown that can influence whether an injection well might cause an earthquake. Faults aren’t always known before drilling takes place, and even if they were, McGarr said, it’s not certain that they will produce an earthquake if an injection well is drilled nearby.


Still, as Ken Morgan, director of the TCU Energy Institute, put it: “There are better places and worse places for disposal wells. That is common sense. If you have faults and a cluster of quakes, you’ve rounded up some suspects” by looking at nearby injection wells.


McGarr, Morgan and Frohlich said it can be hard to identify a single injection well as the cause of a particular quake. But a swarm of seismic events like the Azle quakes is certainly grounds for suspicion.


“Evidence would be if earthquakes started not too long after an injection well began operation,” McGarr said. “If they started within one or two months, that’s pretty good evidence. Even better evidence is if injection is stopped and the earthquakes stop.”


Scientists have actually controlled earthquakes by starting and stopping underground fluid injection. In what Morgan said is still the gold standard of such studies, researchers at the Rocky Mountain Arsenal near Denver in 1966 produced earthquakes by beginning or increasing injection. The quakes stopped when injections ceased.


In 1962, the well started disposing of wastewater from chemical weapons production. By 1966, more than 700 quakes had occurred within 5 miles of the well.


Disposal wells can also produce seismic events after years of operation. McGarr’s research shows that the total volume of fluid injected in a well can be the biggest factor in triggering quakes, not how fast it is injected.


Narrowing the field

The five disposal wells around Azle went into operation between 2005 and 2009, according to Railroad Commission data. Three are permitted to inject up to 25,000 barrels a day (or 1.05 million gallons, at 42 gallons per barrel). One well is limited to 15,000 barrels and another to 10,000 barrels. All are injecting considerably less than their allowed maximums at depths of 9,000 to 11,000 feet.
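For reference, the barrel-to-gallon conversion used in these figures (42 U.S. gallons per oil barrel) can be checked directly:

```python
GALLONS_PER_BARREL = 42  # U.S. oil barrel

def barrels_to_gallons(barrels):
    """Convert barrels of fluid to U.S. gallons."""
    return barrels * GALLONS_PER_BARREL

# The permitted daily maximums quoted above:
print(barrels_to_gallons(25_000))  # 1050000 -- the 1.05 million gallons cited
print(barrels_to_gallons(15_000))  # 630000
print(barrels_to_gallons(10_000))  # 420000
```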


According to filings with the Railroad Commission, the largest well, operated by Foxborough Energy Co. of Oklahoma City, injected nearly 3.4 million barrels, or 142 million gallons, in the first nine months of 2013, the latest data available. The smallest, run by Strata Operating, injected 618,000 barrels, or nearly 26 million gallons, in the same nine months.


The additional seismic monitors that SMU is installing will allow researchers to locate new earthquakes much more accurately, researchers said. Earthquakes are tagged two ways: the focus, which is the depth underground where the quake originated, and the epicenter, which is its position on the surface.


Frohlich said all of Texas has about a dozen active seismic monitors at any time. That limits the accuracy of the epicenter to several miles. And Morgan said the estimated depth can be stated only as one of three broad ranges: shallow, moderate or deep.


McGarr said that with half a dozen monitors in just the Azle area, researchers can pinpoint the epicenter to within 200 to 300 meters and the depth to within about 500 meters.


Red light, green light

Fort Worth lawyer Jim Bradbury, who has followed the environmental issues of energy production, said state regulators should adopt a standard proposed by the U.S. Geological Survey called the traffic light system.


If earthquakes above a certain level occur near a disposal well, it could get a yellow light, requiring a reduction in the amount it’s injecting. “If seismicity continued or escalated, operations could be suspended” — the red light, the agency says.
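The traffic-light rule described above amounts to a simple threshold check. In this sketch the magnitude thresholds are illustrative placeholders, not values from the USGS proposal:

```python
def traffic_light(max_magnitude, yellow_threshold=2.0, red_threshold=3.0):
    """Map the largest recent quake magnitude near a disposal well to a
    decision. Thresholds are hypothetical examples, not regulatory values."""
    if max_magnitude >= red_threshold:
        return "red"     # suspend injection operations
    if max_magnitude >= yellow_threshold:
        return "yellow"  # require a reduction in injection volume
    return "green"       # continue normal operation

print(traffic_light(1.5))  # green
print(traffic_light(2.4))  # yellow
print(traffic_light(3.1))  # red
```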


The Railroad Commission has inspected all of the wells in the Azle area over the last two months, including three last week, according to reports emailed to the Star-Telegram.


“When earthquakes are reported, our staff will determine if saltwater disposal wells are nearby and then inspect the facilities to ensure that they are in compliance with their Railroad Commission permit conditions,” said spokeswoman Ramona Nye.


API Spec Q1 certification

By Mike Gilstrap | October 07, 2013 at 08:49 AM EDT | No Comments

Congratulations to everyone on a job well done in achieving API Spec Q1 and ISO/TS 29001:2010 certification.  With some of the toughest requirements in manufacturing, achieving API Spec Q1 certification shows Waples Manufacturing’s commitment to Quality and commitment to our Customers.  In addition, we have maintained our AS9100C:2009 Quality Management System certification in support of our Aerospace Customers.


Waples Manufacturing’s Quality Management System is now certified to API Spec Q1, AS9100C:2009, ISO/TS 29001:2010 and ISO 9001:2008 and we strive to continuously improve.


Our Quality Slogan is:  Commitment to Quality = Commitment to Our Customer!


We are Customer Focused & Quality Driven!


Seize the Opportunities

By Mike Gilstrap | June 05, 2013 at 02:40 PM EDT | No Comments

By Jim L. Smith from Quality Magazine, online


I was reading about Dr. Maxwell Maltz, the American surgeon who wrote several articles and books on the power of our self-image. Dr. Maltz’s system of ideas was a forerunner of the popular self-help books. I began thinking about what causes some people, more than others, to take advantage of opportunities.



Have you given much thought to the opportunities that exist in your life? You’ve heard the old adage about opportunity knocking on your door, but I believe it’s more the other way around. Although we are surrounded by more opportunities for advancement in life than we could ever act on, we typically don’t live life from the perspective that we have unlimited options, but we really do.



While it is relatively easy to get people to agree that there is an abundance of opportunities for success of all kinds in the world today, their actions often tell a completely different story. They tend to wait until a sure thing comes looking for them. Just how many times does that happen?



Most people live life much too timidly or defensively and won’t step forward to grasp an opportunity. They can’t bring themselves to take actions that prepare them for the opportunities. They somehow think that the purpose of life is to make it safely to retirement and then death; however, nothing could be further from the truth.



Let’s think about this concept in relation to a sporting event such as hockey, football, or basketball. What if there were no offense, only defense? Certainly the game of life is not much different from these sporting events. A strong defense is definitely required to protect us from all the challenges that come our way, but we can never hope to win big in life without a commitment to a good offense.



When I think about opportunity, I often think of the opposing forces of offense and defense as they relate to sports. Certainly there are opportunities to take advantage of when playing defense, but again, the big wins in life are more often gained when playing offense. Seizing an opportunity, whether it be in sports or in life, usually requires taking bold new steps into the unknown, which many of us don’t wish to confront.



Most experts say the primary reason why many people don’t pursue their dreams is fear: fear of the unknown and what might happen if they take advantage of an opportunity and it doesn’t work out. If we are totally honest, we’re all a little bit frightened of the unknown, but I find it interesting that the most successful, happy, and fulfilled people we meet in life have transformed their fear into excitement for what they can build for themselves.



Achieving all of our goals and dreams for the future requires us to confront the unknown and take advantage of the opportunities that come our way. I believe it’s best to be bold rather than timid as we make these steps toward confrontation.



Being bold definitely doesn’t mean being careless or acting without intelligent thinking and planning. However, don’t over-analyze your life, either personally or professionally, or try to protect yourself from every little thing. The truth is that you are bigger than anything that could ever happen to you, so don’t pass up a great opportunity out of plain fear. Sometimes in life you’ve got to feel the fear and do it anyway, when the opportunity is right for you. Bottom line: don’t fear the unknown, but confront it and seize the opportunities that await you!



Dr. Maltz said, “What is opportunity, and when does it knock? It never knocks. You can wait a whole lifetime, listening, hoping, and you will hear no knocking. None at all. You are the opportunity, and you must knock on the door leading to your destiny. You prepare yourself to recognize opportunity, to pursue and seize opportunity as you develop the strength of your personality, and build a self-image with which you are able to live, with your self-respect alive and growing.”


Think about it.


The ISO Standard

By Mike Gilstrap | April 24, 2013 at 03:10 PM EDT | No Comments

Management system standards aren’t going away.

From Quality Magazine, online (April 18, 2013)

ISO 9001, the international standard for quality management systems, has been growing in popularity since 2010. There are no signs that this trend will subside. This news comes from The ISO Survey of Certifications and was relayed in The Journal for Quality and Participation. The top 10 countries totaled nearly 79,000 certifications. At the top of the list was China, accounting for 39,961 certifications—more than the total of the next nine countries combined.

And there is another management system standard (MSS) certification growing in popularity: ISO 14001. What is the difference between the two management systems?

The ISO 9000 family of standards addresses quality management, i.e., what the organization does to fulfill:

  • the customer’s quality requirements
  • applicable regulatory requirements, while aiming to
  • enhance customer satisfaction, and
  • achieve continual improvement of its performance in pursuit of these objectives.

The ISO 14000 family addresses environmental management, i.e., what the organization does to:

  • minimize harmful effects on the environment caused by its activities, and to
  • achieve continual improvement of its environmental performance.

The reasons for earning and maintaining a standard certification vary. Some customers ask that suppliers maintain certification. Some governments mandate that specific industries carry certification. Other companies seek certification as proof that organizational operations are effectively controlled, which can open new global markets for the companies. In other words, some companies need to comply with certain standards to stay in business, while others see a real competitive advantage in having a certificate.

Earning an MSS certification is not an easy task. It is encouraging to see that so many companies derive enough benefit from the standard certificate to apply and take the time and effort to comply. But what are the benefits of an MSS, and how does your company reap them?

An MSS is an internationally agreed upon model organizations follow to ensure proper handling of day-to-day operations. Your MSS includes a structure for operations detailing tasks including the processes for:

  • purchasing material
  • maintaining accounting records
  • training employees
  • processing payroll
  • implementing pollution prevention

This can be seen as a daunting task. The good news is there is plenty of assistance along your ISO journey. The ASQ Knowledge Center hosts Standards Central, which gives you the basics—how to get started and how to link standards with other initiatives.

The benefits for earning a quality management system certification are numerous. And the best way to earn one is to get started and fully commit to the procedures. While there needs to be buy-in throughout the organization, there is no hope without the persistent guidance of quality practitioners.


A common MSS operating principle is the plan-do-check-act (PDCA) model. Popularized by Walter Shewhart and W. Edwards Deming, the PDCA model is synonymous with continuous improvement. A working knowledge of PDCA is helpful in connecting the ins and outs of standards implementation.

Use PDCA when:

  • starting a new improvement project.
  • developing a new or improved design of a process, product or service.
  • defining a repetitive work process.
  • planning data collection and analysis in order to verify and prioritize problems or root causes.
  • implementing any change.

Here is a brief overview of the PDCA procedures.

  1. Plan. Recognize an opportunity and plan a change.
  2. Do. Test the change. Carry out a small-scale study.
  3. Check. Review the test, analyze the results and identify what you’ve learned.
  4. Act. Take action based on what you learned in the study step: If the change did not work, go through the cycle again with a different plan. If you were successful, incorporate what you learned from the test into wider changes. Use what you learned to plan new improvements, beginning the cycle again.
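As a toy illustration, the four steps above can be sketched as a loop. Everything here is hypothetical (the function bodies, the defect-rate figures and the target), a sketch of the cycle rather than a prescription:

```python
# Illustrative sketch of the PDCA cycle as a loop. The step functions and
# the defect-rate scenario below are invented for the example.

def pdca(plan, do, check, act, max_cycles=3):
    """Run Plan-Do-Check-Act until check() reports success."""
    for cycle in range(1, max_cycles + 1):
        change = plan()               # Plan: recognize an opportunity, plan a change
        result = do(change)           # Do: test the change in a small-scale study
        ok, lessons = check(result)   # Check: review and analyze the results
        act(ok, lessons)              # Act: adopt the change or revise the plan
        if ok:
            return cycle
    return None

# Toy usage: "improve" a measured defect rate until it meets a target.
state = {"defect_rate": 0.08}

def plan():
    return 0.02                       # planned reduction per cycle (hypothetical)

def do(change):
    state["defect_rate"] -= change    # run the change as a small-scale trial
    return state["defect_rate"]

def check(result):
    return result <= 0.05, f"rate now {result:.2f}"

def act(ok, lessons):
    pass                              # adopt wider changes or re-plan (omitted here)

cycles_needed = pdca(plan, do, check, act)
```

The point of the sketch is the shape, not the content: a failed check sends you back through the cycle with a revised plan, just as step 4 describes.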


In-Line Gaging Systems Can Impact More Than Just Part Quality

By Mike Gilstrap | January 31, 2013 at 05:06 PM EST | No Comments

Learn more about the world of in-line gaging.

By Frank Powell and Roger Zeoli

From Quality Magazine online.


The rapid growth of in-line gaging over the last few years has been focused primarily on the use of this approach as a tool for improving quality. In a typical application an in-line gage is located physically within the cell, line or other system configuration to inspect each individual part and provide both tool-wear compensation (feedback) for the control, and data for SPC, tracking and other purposes.


In this role, the in-line gage is normally loaded and unloaded by the same automation used to transfer the part within the process. The gage, or gages, may be located between operations, used for final inspection, or both, depending on the process.


In most cases, the in-line gages are used as part of a system that includes both in-process gages and off-line gages, coordinate measuring machines (CMMs) or other inspection technologies. Depending on the application, in-line gages may use any of the common gaging technologies including air, contact, laser and optical, or a combination of technologies at different points within the manufacturing system. Each technology has unique advantages and application limitations.


Air gages are very tolerant of environmental contaminants and variations in part finish. They also tend to be very economical. This technology is a good solution for parts that cannot be touched.


On the other hand, although air gaging is initially less expensive, it has a higher lifetime cost because of the need to produce compressed air and to contain the coolant mist that can be atomized into the atmosphere when a wet part is measured. The technology also requires a very small gap between the gage and the part, which can complicate part loading.


Contact gages are precise, robust, flexible, easy to apply, and offer a wide range of measurement capabilities. They are tolerant of environmental contaminants and variations in part finish. They also have a lower lifetime cost over air gages. However, since they must touch the part, it is possible for contact gages to leave unacceptable marks on highly finished surfaces.


Optical gages are extremely flexible and do not touch the part, making them the preferred solution for certain medical and aerospace components with highly polished surfaces. However, they generally require clean, dry surfaces which can impact their applicability in certain in-line gaging applications.


While this description covers the bulk of common in-line gaging applications, there are many variations on the theme. The gages, for example, can be fully dedicated, flexible, modular or re-toolable, depending on the nature of the application.


Briefly, a flexible gage is one that can inspect a range of part types without having to be reconfigured. A modular gage is made up of standard components such as pencil probes, measuring armsets and work rests mounted on a base that allows mounting details to be randomly located and re-located as necessary to inspect a wider range of parts. A re-toolable gage requires a change of mechanical elements to handle different, but similar, parts.


As noted, the system may use more than one gaging technology. It may also include other functions such as laser marking or pin-stamping, part sorting and classification, and a whole range of data storage and analysis tasks.


What these systems tend to have in common is that they are applied to stable, controlled processes as a quality assurance tool. In-line gaging is a proven performer in that role, and as a result, other potential applications for the technology receive less attention.


Extending the Useful Life of Veteran Equipment


For example, in many cases, adding in-line gaging to an existing manufacturing system can be a low-cost way to extend its useful life. No matter how well maintained, veteran equipment eventually reaches a point at which it becomes less stable, and the process of which it is a part becomes increasingly difficult to control.


Rather than re-build or replace the equipment, it’s often more cost-effective to add an in-line gaging capability to the system and use the virtually real-time feedback it provides to monitor and compensate the individual operations on a part-by-part basis. The compensation capability can help ensure ongoing quality, while the monitoring data can help identify problem processes and machines for remedial action and/or eventual replacement.


Nothing lasts forever, of course, so aging equipment will have to be re-built or replaced eventually. However, that doesn’t necessarily mean that the in-line gages used to extend the useful life of that equipment will also need to be scrapped. By specifying flexible, modular or re-toolable gages initially, it is frequently possible to re-purpose in-line systems for new equipment and even new processes with minimal additional capital investment.


The relative advantages of each system are highly application dependent and should be evaluated with the assistance of the gaging supplier. As a general rule, however, specifying some degree of flexibility, modularity or re-toolability in an in-line gaging system will result in significant long-term cost savings.


An automotive brake disk supplier used this approach in specifying an innovative flexible and re-toolable bench gage to provide 100% in-line inspection capabilities for a family of eight different parts. The parts vary in overall height, braking surface thickness and hat OD.


The gage dynamically measures rotor thickness; thickness variation; lateral runout of the braking surface; pilot bore ID; hat OD; brake surface parallelism; and mounting face flatness. Because it was designed for both flexibility and re-toolability, the gage can be retooled in 15 minutes as opposed to the three to four hours that would be required for a conventional gage.


Increasing Productivity Outside of the Process


In-line does not necessarily mean inside. There are many applications where an in-line gage can be used effectively to increase productivity by measuring parts before they are processed.


It’s often necessary, for example, to locate a part feature in relation to another feature. The traditional solution is to fixture the part in the machine, and then use a gage, probe or some other device to locate the reference feature and communicate the necessary offset to the machine control.


This is all done within the machine cycle, but it doesn’t have to be. Using an in-line gage to locate the reference feature before the part is fixtured moves this operation outside the machine cycle and results in a direct reduction in per-part cycle time. The same principle can be applied to a wide range of operations by using in-line gage data to either locate features, or to pre-position cutting tools or grinding wheels to minimize rapid-traverse to part time within the machining cycle.


A small engine manufacturer adopted this approach for an automated crankshaft bearing grinding operation and cut three to five seconds out of the process cycle time. An initial pilot test using a gage stand proved so successful that the manufacturer re-configured the entire system to optimize the advantage gained by the in-line gaging application.


In-line gaging is a rapidly maturing technology with a broad range of potential applications. While the traditional quality-focused uses of in-line gage systems will undoubtedly continue to predominate, it would be a mistake to overlook the broader applications of this technology to extend the life of capital investments and directly improve process productivity.


The Cost of Quality

By Mike Gilstrap | November 08, 2012 at 10:22 AM EST | No Comments

By Michelle Bangert, editor of Quality Magazine

November 2, 2012

Learn why it matters, how to calculate it, and what to do with the results.


It can affect everything from profits to supplier selection and manufacturing location. Calculating the cost of quality can help improve quality, save money and reduce outsourcing. However you calculate it, the end result should be lower costs; the idea is to minimize the entire cost of production.


When Quality spoke to Douglas C. Wood recently, he was in the middle of editing the fourth edition of ASQ’s Cost of Quality manual, which should be out later this year. Wood, principal of DC Wood Consulting (Fairway, KS), says the ASQ cost of quality courses have seen more interest recently. He says the language of cost of quality is the biggest change he’s seen over the years. Though he uses the “ancient” term cost of quality, he says, “It’s really about the finance improvement.”

“When you do something correctly the first time, that’s the cost of doing business,” Wood says. “But anytime you have to go back and do something again, that’s the cost of quality.”


Cost of Quality Explained

In 1943, Armand Feigenbaum developed a quality costing model. The Prevention-Appraisal-Failure concept appeared in 1956 with his article in the Harvard Business Review. This is not the only model, however; others exist, such as activity-based costing (ABC).


Tech Tips

  • Measuring and tracking the cost of quality can improve decision making.
  • Research has shown that smaller companies that didn’t track cost of quality often had higher costs.
  • Cost of quality measures are one way for businesses to set improvement goals.

Let’s examine the classic cost of quality model and its focus on prevention, appraisal and failure. Prevention costs, as the name implies, are those that aim to prevent or reduce the risk of defects. Appraisal costs come from the evaluation or inspection process. Failure costs, which can be internal or external, result from scrap, rework or warranty issues. These costs can all be measured—though they do require some calculation—and then companies can aim to eliminate or reduce them. Though some companies track cost of quality, experts say many more could benefit from it.
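The prevention-appraisal-failure roll-up is simple arithmetic once the categories are tallied. Here is a minimal sketch; all the category figures and the sales number are hypothetical:

```python
# Minimal sketch of a prevention-appraisal-failure (P-A-F) cost-of-quality
# roll-up. All dollar figures below are invented for illustration.

costs = {
    "prevention":       25_000,   # training, process design, error-proofing
    "appraisal":        40_000,   # inspection, test, calibration
    "internal_failure": 60_000,   # scrap and rework caught in-house
    "external_failure": 75_000,   # warranty claims, returns, recalls
}

total_coq = sum(costs.values())
failure_share = (costs["internal_failure"] + costs["external_failure"]) / total_coq

sales = 1_000_000                 # hypothetical annual sales
coq_pct_of_sales = 100 * total_coq / sales

print(f"Total cost of quality: ${total_coq:,}")
print(f"Failure costs: {failure_share:.0%} of CoQ")
print(f"CoQ as % of sales: {coq_pct_of_sales:.0f}%")
```

Tracked over time, the interesting number is usually not the total but the mix: whether spending is shifting from the failure categories toward prevention and appraisal.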


“I wish I could provide evidence that more organizations are using cost of quality and initiating programs to systematically reduce those costs,” says Victor E. Sower, an author, quality management consultant and distinguished professor emeritus of management at Sam Houston State University. “However, I do not find that evidence.” Sower and his coauthors published their cost of quality research in 2007 in the International Journal of Quality & Reliability Management. In surveying members of the Quality Management Division of ASQ, they found that about one third of organizations systematically tracked cost of quality. “We have seen no evidence since then that this percentage is substantially higher today,” Sower says.


This is a shame, because tracking cost of quality has proven to be a useful tool.


“For those organizations tracking cost of quality, we found that both internal and external failure costs decreased as prevention costs increased and that external failure costs decreased as appraisal costs increased,” Sower says.


In other words, prevention costs can prevent failure costs, and failure costs are the ones that can take down a business, hurting its reputation and relationship with customers. And even if you still have to invest in quality, spending on prevention is always better than spending on the cure.


Gary Cokins, founder and CEO of Analytics-Based Performance Management LLC (Cary, NC) and cost management author, notes that “If you do more prevention and appraisal, it may still be the same million dollars. But it’s a better million dollars because it’s internal and not affecting customers.”


But just reducing failure costs is not enough, notes Praveen Gupta, president of Accelper Consulting (Schaumburg, IL). He says that the goal should be to reduce cost of appraisal and failure. To achieve this, just measuring the cost of quality is not enough. “Measurement’s main purpose, including cost of quality, is to drive improvements,” Gupta says.


Why Track Cost of Quality? 

As with many other programs, Sower says management support can be the biggest hurdle. But managers will pay attention if asked, “Are you aware that we are spending 20% of sales on quality?” Sower says. This will most likely be greeted with incredulity, followed by “What can we do about it?” The idea is to then present a few potential projects and expected results. But management should also know that implementing a cost of quality study will first raise the cost of quality, as investing in prevention activities may take time to see a return.


“It is better to think of prevention and assessment costs as investments to assure that things are done right,” Sower says. “We should not be averse to increasing our investment in these cost of quality categories. Internal and external failure costs are incurred when things are not done right.”


Smaller organizations should especially take note. Sower found that larger organizations tended to have more developed quality systems and better tracking of cost of quality. The smaller companies that didn’t track cost of quality often had higher costs.


“On the brighter side,” Sower says, “more organizations are adopting information systems such as ERP and ABC that facilitate the tracking of cost of quality.  As the power of these systems is utilized to track cost of quality, one may expect to see decreases in the cost of quality.”


Knowing the cost of quality can improve decision making, and it may also factor into other initiatives, such as ISO registration. In a Quality article on preparing for ISO registration, Mike Ryer recommends doing a root cause analysis (RCA) to determine the factors influencing cost of quality. Ryer writes, “Not doing RCA is like being really good at putting out fires, but not finding the guy with the matches setting the fires in the first place.”


But, as David M. Anderson, consultant with Build-to-Order Consulting and fellow of the American Society of Mechanical Engineers, explains, some efforts to lower costs can actually be counterproductive: “There is usually pressure to reduce ‘cost,’ but if the cost system only quantifies parts and labor, then it will encourage (maybe pressure) the engineers to specify cheap parts, whose ‘savings’ will be more than cancelled out by various costs of quality. Similarly, emphasizing labor cost may encourage offshoring, which is legendary for raising quality costs.”


Harry Moser, founder of the Reshoring Initiative and Quality’s 2012 Professional of the Year, espouses this idea of monitoring the total cost of production with his Total Cost of Ownership Estimator.


Finding the Cost of Quality

Wood teaches an ASQ cost of quality course and says the first step in tracking cost of quality is to designate a point person. He also recommends reading up on the topic and taking an in-house class involving a cross-functional team. The team could then go on to build a program to reduce these quality costs.

Data collection methods vary: it could be a quality engineer with a spreadsheet or a company using activity-based costing. Either way, establishing the initial cost of quality baseline is time-consuming—but also eye-opening. Sower says the important thing is to track the cost of quality and monitor changes over time. It can be done on a yellow notepad or with more sophisticated software. Even if you don’t have numbers to the last decimal, reasonably accurate estimates will still help, he says.


In a report released this June, LNS Research (Cambridge, MA) found that investments in the costs of good quality are more than offset by reductions in the costs of poor quality. In many cases, this investment may involve new tools. Matthew Littlefield, president and principal analyst of LNS Research, cautions that buying new software should not be the first step in tracking cost of quality, but it can help. “One of the trends I’ve seen that’s new in the past year or two is that some of the software companies are embedding cost of quality calculators into solutions,” Littlefield says.


But the metric isn’t embraced by everyone. In his twenty-five years of business, Douglas Hicks, a certified public accountant at D. T. Hicks & Co. (Bloomfield Hills, MI), says he has never seen company finance people zealous about the topic. The finance staff or chief operating officers didn’t consider it high on their agenda, Hicks says. “Quality was, but calculating the cost of quality wasn’t,” he says. For some, it isn’t important to track this subset of the business, but rather costs as a whole.


Though the concept has been around for decades, it has evolved. Mariela Koenig, research director, manufacturing at the Aberdeen Group (Boston), has noticed some differences over the years. “Quality is becoming early in the process, companywide, and the tack on cost of quality has shifted,” Koenig says, “from how much money we are spending to how ready we are to handle events.”

Prepare to Succeed

By Mike Gilstrap | October 17, 2012 at 02:53 PM EDT | No Comments

Jim’s Gems: Prepare to Succeed

October 15, 2012

By Jim L. Smith from online Quality Magazine


Diligent, persistent preparation leads to outstanding achievement. Alexander Graham Bell, the great scientist and inventor, said, “Before anything else, preparation is the key to success.” What are you preparing for right now?


In every moment, in every action, you are preparing yourself for one thing or another. It sounds simple, but if you prepare yourself for what you truly desire, it will come to be. Your will and actions will simply not allow otherwise.


Opportunities are constantly coming your way. Yet in order for them to be of any value, you must be prepared to take advantage of them. Tony Robbins, the self-help writer and motivational speaker, said, “The meeting of preparation with opportunity generates the offspring we call luck.”

With that understanding, luck is what you choose to make it. The sooner you begin to prepare, the luckier you will become and the more you can achieve. Being prepared gives you considerably more options and puts time on your side.


If you choose to put in the effort today, you can feel good that you are preparing for tomorrow.  By putting in the effort today, you are preparing yourself for the times to come.  Build one day upon another, with focus, positive intention and purpose, and you’ll build a personal and professional life that will be spectacular.


The power of preparation is the belief that each moment in your life is an opportunity to prepare yourself for reaching higher and higher levels of fulfillment. Prepare well, and your life will reap many great rewards. As Colin Powell said, “There are no secrets to success. It is the result of preparation, hard work, and learning from failure.”


There are no shortcuts to achieving what you really desire. Success requires preparation, and while much of that preparation goes unnoticed, the outcome will be noticed. One outcome of preparation is self-confidence, and self-confidence breeds success. If you want to be more successful, start preparing for success.

What Do You Have to Offer?

By Mike Gilstrap | August 30, 2012 at 05:23 PM EDT | No Comments

Jim’s Gems: What Do You Have to Offer?
by Jim L. Smith
August 27, 2012
From Quality Magazine online


We spend so much of our lives focusing on ourselves and the things we think we deserve. So often, we are disappointed. However, we can have most things we desire when we have something to offer in return. The path to lasting success is traveled by continuously creating something new and valuable.

If all we focus on is getting, taking and acquiring, we are generally met with much resistance. Such an approach makes it nearly impossible to accomplish the goals we’ve set for ourselves.

We should focus instead on finding ways to give, to create value and to make that real and meaningful value available to others. When we seek to give of ourselves, the possibilities for doing so will have no limit.

Many people are, by their very nature, quite thoroughly self-centered. That represents quite an incredible opportunity.

By giving people what they want, and helping them achieve their goals, we can readily achieve whatever we want. When we work to advance the interests of others, our own interests are even more profoundly advanced.

Look carefully at those around you and ask yourself what you’re able to offer that can bring value to the moment. Answer that question honestly and you’ve figured out the recipe for moving ahead.

Calibration Can Be Risky Business

By Mike Gilstrap | August 13, 2012 at 12:03 PM EDT | No Comments

Management: Calibration Can Be Risky Business
by Harry C. Spinks in Quality Magazine (online)
October 7, 2011

When it comes to calibration requirements, make sure you get what you need.

Risk management is critical in every business, particularly in today’s volatile economy. Failure to assess potential risks and take action to mitigate them can result in financial loss, harm to people or the environment, and ultimately failure of the business.

Calibration is one area where risk management is needed to reduce liability and expenses, whether you have an internal calibration department or outsource calibration. In either case, one must determine the risks associated with the equipment and its calibration. This is more critical in some industries, such as pharmaceuticals, medical devices, aerospace and defense.

Depending on the company, improperly calibrated equipment could affect the quality of the product or service it provides, resulting in potential harm to people or the environment. Product may need to be recalled or services performed again, resulting in a financial impact to the company.

Calibration of test, measurement and inspection equipment is critical to manufacturing. Measurement equipment may be embedded in processing equipment and is used to verify that processes are running within acceptable parameters. Calibrated inspection equipment is used to ensure that materials, components and end items are within specifications after a process is completed.

What are the risks associated with calibration, and what can be done to mitigate or reduce them?

Calibration Risks

  • Overdue calibration—condition unknown
  • Failure—out of tolerance (impact to product/process)
  • Used outside specs/range—unintended purpose
  • Incorrectly calibrated—wrong calibration standard, procedure, specifications or technician training
  • Over-calibration—calibrated too often or too much
  • False failure—fails calibration, but is actually in tolerance

And the consequences:

  • Business risk—increased scrap and expenses, reduced productivity, missed schedules
  • Process risk—defective product is produced, enters the supply chain and impacts the customer
  • Safety risk—potential harm to people or the environment

As an auditor with a calibration background, I have found many instances where the business did not understand what they received when their equipment was calibrated. What was performed was different from what they needed, but they did not know it. They had a calibration certificate and assumed that the equipment was properly calibrated—in some cases it wasn’t.

Auditors have a basic motto:

Say what you do (documents).

Do what you say (action).

Prove that you did it (data records).

When it comes to calibration, the motto is:

Say what you need (document requirements).

Ask for it (action).

Prove that you got it (data records).

Sounds rather simplistic, but it is really very important. Most companies ask for what they want—their equipment calibrated. They leave the requirements up to an internal or external calibration provider. Many manufacturers outsource some or all of their calibration activities since this is not one of their core competencies. They receive a calibration certificate and a bill for the services and assume that the calibration and the calibration certificate are what they need. Often this is not the case.

For companies with an internal calibration department, one of the engineering groups may take their process requirements and divide by four to obtain a 4:1 test accuracy ratio (TAR). If the process control requires ±1.0 psi, they assign a calibration accuracy of ±0.25 psi (“wishful thinking” specs). This would be fine if the gage were rated by the manufacturer for this accuracy. When it is not, it leads to false failures and extra work when the gage fails to meet the assigned specification but is within the manufacturer’s specification.
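A minimal sketch of that arithmetic follows; the ±1.0 psi process requirement comes from the example above, while the manufacturer's ±0.5 psi rating is a hypothetical figure chosen to show the mismatch:

```python
# Sketch of a 4:1 test accuracy ratio (TAR) check. The process tolerance
# mirrors the ±1.0 psi example in the text; the gage rating is hypothetical.

def required_calibration_accuracy(process_tolerance, ratio=4.0):
    """Divide the process tolerance by the desired test accuracy ratio."""
    return process_tolerance / ratio

process_tol = 1.0    # process control requires ±1.0 psi
mfr_accuracy = 0.5   # manufacturer rates the gage at ±0.5 psi (assumed)

needed = required_calibration_accuracy(process_tol)   # ±0.25 psi
# A spec tighter than the manufacturer's rating is "wishful thinking":
wishful = needed < mfr_accuracy

print(f"Assigned calibration accuracy: ±{needed} psi")
print(f"Tighter than manufacturer's rating: {wishful}")
```

When the check flags a spec as tighter than the gage's rating, the options are a better gage or a looser (realistic) calibration spec, not repeated "failures" of a gage that is performing as designed.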

What’s Happening?

Few companies have metrology engineers, metrologists or senior calibration technicians on staff. They rely on others to determine the calibration specifications. Who are these others? It depends on the company. In some companies they may be inspectors, technicians or engineers.

Very few engineers are trained in metrology, and even fewer are trained in calibration. The result can be calibration specifications that are unrealistic for the equipment (better than the manufacturer’s specifications), or that do not take into account all of the components that make up the measurement system. Normally, they look only at range and accuracy specifications and do not consider the measurement uncertainty of the system.

When outsourcing calibration, a company is relying on the supplier to perform the calibration correctly. But what is correct? The calibration supplier will probably use their own procedure, since most calibrations do not have standardized procedures. They may use the same calibration process as the manufacturer, if one is available. Or they may obtain procedures from the Government-Industry Data Exchange Program (GIDEP), which are contributed by the government (military services) and commercial industry.

Have you documented your equipment’s calibration specifications and linked them to the product or process? Have you communicated them to the calibration supplier? If not, how does the calibration supplier (internal or external) know what you need?

While there are many risks and risk management processes, we are just going to look at calibration specifications and what can be done to reduce the risk.

Calibration Customer

As a customer of a calibration supplier you need to document your equipment’s calibration specifications and link them to the product or process. At the least, ensure that the manufacturer’s specifications meet your needs and document it.

To determine the calibration requirements, form a team. The roles of the team are engineering (manufacturing, quality, and/or R&D), customer (production or manufacturing) and metrology (calibration).

If all of the equipment is commercial, off-the-shelf, then you should be able to use the manufacturer’s specifications. You still need to verify that the equipment specifications meet your process needs before purchasing it, not after.

If the equipment is custom-built, there is more work to do. Identify the measurement components of the system and their specifications, from the manufacturer. Determine how these devices will be calibrated. Be sure to design calibration capabilities into the equipment before you build it or you will delay implementation when the equipment has to be modified so it can be calibrated.

How to combine the accuracies of the components in the system is more complicated. You’ll need a metrologist or someone with measurement systems assurance experience to help with this.
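One common approach, shown here as a simplified sketch with hypothetical component figures, is root-sum-square (RSS) combination of independent component accuracies; the plain sum gives the worst case. A real uncertainty budget also weighs distributions and coverage factors, which is why the metrologist's help is worthwhile:

```python
# Two simple ways to combine independent component accuracies.
# The component values below are hypothetical. This is a sketch only --
# a full uncertainty budget (per the GUM) involves more than this.
import math

def combine_rss(accuracies):
    """Root-sum-square combination (assumes independent components)."""
    return math.sqrt(sum(a * a for a in accuracies))

def combine_worst_case(accuracies):
    """Worst-case combination: every component errs in the same direction."""
    return sum(accuracies)

# Hypothetical three-component measurement chain (units: % of reading):
components = [0.5, 0.3, 0.2]
print(f"RSS:        ±{combine_rss(components):.2f}%")
print(f"Worst case: ±{combine_worst_case(components):.2f}%")
```

RSS gives a smaller, statistically motivated figure than the worst-case sum; which is appropriate depends on how the components actually behave, another judgment call for the metrologist.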

Create a controlled form for recording the equipment calibration specifications whether it is in a database or on paper. Be sure to have a process for change control that includes all the members of the risk assessment team.

Finally, communicate the requirements to your calibration supplier if they were not a part of the team.

Calibration Supplier

As a provider of calibration services, you need to request that your customers provide their requirements for each piece of equipment. If you cannot do this electronically, then use paper. You can request it once and maintain it in your system, or request that the customer provide this information with each request for calibration.

This can be a challenge for a third-party calibration supplier, as some customers just want their equipment calibrated. Sometimes all the supplier receives is a box of gages with no purchase request or identification. The supplier calibrates the gages and expects the customer to verify that they meet their needs.

When the services do not meet the customer’s requirements, it can result in rework and loss to the service provider. Documented requirements may take more effort to obtain, but will save time and money in the long run. They also may improve customer relations and enable you to keep the customer. Too many issues and the customer may find another supplier.

Rules for Calibration Requirements

Rule 1. Gages cannot be calibrated to an accuracy better than that specified by the manufacturer. Over time you may collect sufficient data to show that the gage was consistently within a lesser tolerance than the manufacturer specified. However, it is not good practice to use the gage as if it has a better accuracy (smaller tolerance). Just because the operator needs the gage to be ±1% does not mean it will consistently meet that accuracy when the manufacturer gave it a ±5% accuracy.

Rule 2. Precision of the gage is built into the manufacturer’s specifications. Do not try to prove precision. You are really determining the repeatability of the measurement process. Think gage repeatability and reproducibility (gage R&R). This has value for critical measurement systems and for measurement systems analysis if you require that level of detail.

Rule 3. Do not trust the specifications found on the Internet. They may be catalog or marketing specs, not the real specifications. Get the specification sheet from the manufacturer; call them if you have to. For example, a temperature chart recorder may be advertised as having a 1 C accuracy. It will meet it if you use signal injection to provide the input instead of a temperature probe. However, the paper chart will usually have an accuracy of 2 C due to the mechanics of the pen and chart. Once a temperature probe is connected to the chart recorder, the measurement system (probe + chart recorder electronics + paper chart) will have a combined accuracy worse than 1 C, depending on each component in the system. So the measurement system’s accuracy will never equal 1 degree.
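The arithmetic can be made concrete with a simple worst-case sum. The 1 C and 2 C values come from the example; the probe accuracy is an assumed figure for illustration:

```python
# Worked version of the chart-recorder example: in the simplest (worst-case)
# view, the system accuracy is the sum of the component accuracies.
# The probe figure is assumed; the 1 C and 2 C values are from the text.

probe = 0.5        # temperature probe accuracy, deg C (assumed)
electronics = 1.0  # recorder electronics, deg C (advertised spec)
chart = 2.0        # paper chart (pen and chart mechanics), deg C

system_accuracy = probe + electronics + chart   # worst-case combination
assert system_accuracy > electronics            # never better than 1 C alone
print(f"System accuracy: ±{system_accuracy} C")
```

Whatever combination method is used, the system can never be better than its single advertised component, which is the point of the rule.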

Rule 4. Establish a written agreement with the calibration provider. This may be a purchase agreement, service level agreement or a contract. Put the agreement in writing and make sure that you and the supplier understand your respective responsibilities. Include an out of tolerance notification procedure in the event your equipment or the supplier’s calibration standard is found to be out of tolerance.

Rule 5. Document the equipment’s calibration specifications in a controlled system. Identify the parameter, range, accuracy, etc. Provide this information to your calibration supplier and verify their calibration certificate meets the requirements you provided.

For example, consider an environmental chamber (oven). The customer has an oven and needs to monitor temperature. They use a chart recorder to continuously monitor the temperature. The chart recorder has a digital indicator and a paper chart (circular in this case). The calibration provider gives the customer a calibration certificate with multiple temperatures. What the customer does not know is that the provider disconnected the chamber’s temperature probe and signal-injected the chart recorder, then recorded the temperature from the digital indicator on the recorder.

What’s missing? The calibration data for the temperature probe and the chart. The customer received a calibration certificate and thought their chamber was calibrated, but it wasn’t.

Rule 6. Verify you received what you asked for. One concern with providing the supplier with your requirements is getting those requirements to the technician that performs the work. Talk to your supplier and find out how those requirements will be communicated to the technician and verified by the supplier’s quality person before returning the gages to you. Compare the calibration certificate to the calibration requirements—they should match. If not, contact your supplier and find out what went wrong. Be sure to keep a document trail of the issue in case it happens again.

Bottom Line

If you do not know what calibration specifications you need, get help figuring it out or your financial bottom line could be adversely affected. It is possible to save money if you find out that you have been calibrating equipment that does not need it.

You may be thinking that if you have not had a problem with calibration before, everything must be great. It is, right up until you find out it isn’t from one of your customers, a regulatory agency, or when a product has to be recalled because of a measurement system error—one that could have been avoided if you had worked with your calibration provider to ensure they know what you need and you make sure that you receive it.

Want Quality? Keep it Simple

By Mike Gilstrap | July 23, 2012 at 02:28 PM EDT | No Comments

Quality Observations: Management: Control Quality Assurance by Keeping it Simple
by Ashley Osgood
July 18, 2012

If you are a seasoned quality professional, you understand that quality can’t be managed, but processes can. People are of vital importance to the identification, maintenance and improvement of business processes. Simple principles and methods of process control provide a solid foundation for continuous improvement, and they can be readily implemented by any organization.

Most of us are familiar with Philip Crosby’s definition of quality as “conformance to requirements.” While this is a simple definition, it’s powerful in that it enables all quality characteristics to be judged in absolute terms as good or bad. This also permits the development and management of processes that result in the desired quality characteristics of a product or service being provided by an organization.

Perception and reality often differ when it comes to quality, so an effective quality improvement strategy must account for the differences between the two. Both must be recognized, understood and managed in order to move forward. When we think about processes, the foundation of lean manufacturing is discipline to a defined process, and business performance is the result of many such processes operating both concurrently and sequentially. While it’s common to consider processes independently, it’s important to note that they are always interrelated.

Management must recognize quality as its first priority, regardless of the industry. Management itself is a process, and you can only manage processes with facts supported by data gathered from your quality management software. Take advantage of every opportunity to make this quality data management “visual”—especially when applying the four fundamental elements of quality: 5S, problem solving, poka-yoke, and continuous improvement.

So, where do you start the process of continuous improvement? Ask the question: do we treat our people well, or not? Take a look at your work environment; be sure you have a clear and consistent direction, listen and hear your employees, educate and train them, keep them informed, share goals and performance data and be sure to provide them with the time and resources necessary to do the job asked of them.

Next, create an improvement plan. You should adhere to the following rules when structuring your plan:

1) Don’t receive bad product, data, or services;
2) Don’t produce bad product, data, or services; and
3) Don’t send bad product, data, or services.

To assure your commitment to quality, conduct weekly quality assurance management meetings to ask where the organization is on quality performance, and see the status of each action plan item. Be sure to include managers responsible for the quality function of the business. Gather your inputs (external, internal and supplier management performance metrics, along with the current action plans) and outputs (resource assignments, action to remove obstacles, escalation of unresolved issues, and published meeting minutes).

As I’ve noted before in talking quality to your CEO, the language of management is dollars. It is important, as quality professionals, for you to bridge the gap between quality and the financial metrics used to drive the business. Define a process to identify, capture, and report the cost of poor quality. Your inputs will be time and material, and outputs will be financial justification for proposed changes. (If you struggle with talking quality to your CEO, you may want to check out this whitepaper: How to Sell Quality to Management to help you get started.)
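A cost-of-poor-quality report really is just the time and material inputs rolled up into dollars. A minimal sketch, with entirely made-up labor rate and rework events:

```python
# Hypothetical cost-of-poor-quality roll-up: time and material converted
# into dollars, the language of management. All figures are illustrative.
labor_rate = 45.00  # dollars per hour (assumed)
rework_events = [
    {"hours": 2.5, "material": 120.00},  # scrapped bracket re-machined
    {"hours": 1.0, "material": 35.00},   # mislabeled lot re-inspected
    {"hours": 4.0, "material": 310.00},  # returned assembly rebuilt
]

# Each event costs its labor hours at the shop rate plus the material lost.
copq = sum(e["hours"] * labor_rate + e["material"] for e in rework_events)
print(f"Cost of poor quality this period: ${copq:,.2f}")
```

Even a simple roll-up like this turns a vague "we had some rework" into a dollar figure management can weigh against the cost of a proposed fix.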

Understanding these simple but key concepts will help you to successfully manage your quality improvement initiatives. Some takeaway tips for you: listen to and support the people, assure quality data integrity, identify systematic problems, rationalize the feasibility of improvement targets, make your action plans very specific and follow up daily to assure optimum performance.

Accept Responsibility

By Mike Gilstrap | July 10, 2012 at 12:28 PM EDT | No Comments

Jim’s Gems: Accept Responsibility
by Jim L. Smith from Quality Magazine
July 2, 2012

If you are willing to accept full responsibility, there is absolutely no limit to how far you can go.

It would be a safe bet that most everyone would agree that being responsible is a good thing. However, I would also bet that not everyone would agree about what being responsible actually means. How many of us have actually thought much about it? Have you ever asked yourself these questions? Am I a responsible person? What does it mean to me to be responsible? Does being responsible mean doing what is expected of me? Does being responsible just mean keeping my promises? Does it mean being a good provider for my family?

All the above questions will give us insight, but they don’t go far enough to address what responsibility means. In addition to the obvious answers to the above questions, it is my belief that we are responsible for everything in our lives. Of course, we accept responsibility for all the successes we experience. However, I’ll wager most of us are reluctant to accept responsibility for all our failures as well.

If we don’t accept responsibility for what happens in our life, we are likely to shrug off our failures thinking we have nothing to learn from them. But, I’m sure you’ve heard before, we learn more from failures than we do from our successes. If we don’t take full responsibility for what happens in our life, we will never be truly happy, because no one can make us happy but ourselves.

You might wonder, then, if we’re responsible for illness and adversity, too. Well, let’s think about this for a moment. We are certainly not responsible for natural disasters like hurricanes, tornadoes and earthquakes (we’ll ignore our responsibility for being adequately prepared, and following evacuation directions, because that’s beyond my point). We are also not responsible for the drunk driver who runs a stop sign and sideswipes our cars. However, we are, without a doubt, responsible for how we respond to these things—and whether we choose to use them as experiences from which to learn and grow.

Taking responsibility for your life gives you the freedom to take risks and make mistakes, and that’s a great feeling. Of course, it also means you need to be prepared to take the consequences of your risks and mistakes as well as the rewards. It is my belief that you can’t grow very much unless you are willing to do these things.

If you are willing to accept full responsibility, there is absolutely no limit to how far you can go.

Tony Robbins, the author and motivational speaker, said “Whatever happens, take responsibility.” We must take personal responsibility. We may not be able to change the seasons or the wind, but we can put on a coat or adjust our sails—we can transform ourselves. Our willingness to accept responsibility for our own life is the source from which self-respect springs.



It’s not our motivation that will produce results – it’s our actions!

By Mike Gilstrap | June 27, 2012 at 09:37 AM EDT | No Comments

Jim’s Gems: The Road to Positive Change is Paved with Persistence
by Jim L. Smith from Quality Magazine
June 26, 2012

Any change is difficult, but the road to positive change is especially challenging. It takes more than new-year resolutions or simply writing down a few goals to achieve positive change. It takes persistence and determination. Calvin Coolidge said “Nothing in the world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent. The slogan ‘press on’ has solved and always will solve the problems of the human race.”

Persistence is the ability to maintain action regardless of our feelings or what’s going on around us. Even when we feel like quitting, we must continue to press on.

When we work on any big goal, our motivation may waver. At times we’ll feel motivated; other times to a lesser degree. But it’s not our motivation that will produce results—it’s our actions. Persistence allows us to keep taking action even when we don’t feel motivated to do so. Persistence is a matter of hanging on while others have let go; therefore, persistence is the ingredient that allows us to keep accumulating results.

Persistence will ultimately provide its own motivation. If we simply keep taking action, we’ll eventually get results, and results can be very motivating. For example, we may become a lot more enthusiastic about dieting and exercising once we’ve lost those first 10 pounds and feel our clothes fitting more loosely.

Sooner or later, however, we’re probably going to be tempted to slip back into old ways, especially when we’re feeling tired, lonely or sad. It is then that we need to have a plan ready that will help keep us on track.

What is the plan? One thing I’ve learned is to be ready with a list of alternative activities to engage in until the temptation passes: engaging in positive self-talk; talking with someone who can give me positive reinforcement or encouragement; engaging one of my classes in meaningful discussion; writing an article; focusing on helping someone with a challenge; reading; or just watching a good movie.

If you get through the temptation successfully, give yourself a ‘pat on the back’ and push on. If you don’t do so well, forgive yourself as the world is not going to come to an end. Grant yourself the right to be human, get right back on track, and move forward. We are not talking perfection here; we are talking about fulfilled potential for creating positive change.

Persistence, therefore, becomes a powerful ally. Elbert Hubbard, an American writer, said “A little more persistence, a little more effort, and what seemed hopeless failure may turn to glorious success.”



Sharp Focus

By Mike Gilstrap | June 20, 2012 at 12:40 PM EDT | No Comments

From Quality Magazine:

Jim’s Gems: Sharp Focus
by Jim L. Smith
June 18, 2012


In photography, a viewfinder is what the photographer looks through to compose, and in many cases to focus, the picture. When taking pictures, we want to have a sharp, clear focus through the viewfinder. We want, and need, the same clear focus for our everyday life.

Most of us live in a culture that presents us with a bewildering array of options. But the confusion of too many options will largely disappear once we learn how to focus. In this sense, focusing means to concentrate all our attention on one particular thing, and, much as we do with a camera, bring it into sharp relief in order to clarify our relationship with it.

When we concentrate our attention on a particular endeavor, problem or person, we bring all of our energy to it, shutting out irrelevant details. Even if we find ourselves caught in a crisis where our attention seems to be demanded everywhere at the same time, when we choose to focus our attention on one aspect of the problem, a solution becomes much easier. A natural progression then begins to unfold, making it possible for us to arrive eventually at an overall resolution.

Remember, your experience in life is determined by where and upon what you choose to focus your attention and energy, just as a photographer must decide what to focus the lens on and what to leave out. If you let your attention wander all over the place, you will end up feeling muddled and blurry, just like the picture that results from a lens not held still.

Manage your attention, focus on what’s important and you will be taking charge of the situation.




Choosing a certification body

By Mike Gilstrap | June 11, 2012 at 05:14 PM EDT | No Comments

From the International Organization for Standardization website:


When choosing a certification body to carry out ISO 9001:2000 (or ISO 9001:2008) or ISO 14001:2004 certification, these are the aspects the organization needs to take into account.

  • The first point is that an organization can implement ISO 9001:2000 (or ISO 9001:2008) or ISO 14001:2004 without seeking certification. The best reason for wanting to implement the standards is to improve the efficiency and effectiveness of company operations. Certification of the management system is not an ISO 9001:2000 (or ISO 9001:2008) or ISO 14001:2004 requirement.
  • Deciding to have an independent audit of the system to confirm that it conforms to ISO 9001:2000 (or ISO 9001:2008) or ISO 14001:2004 is a decision to be taken on business grounds: for example
    • if it is a contractual or regulatory requirement
    • if it is a market requirement or to meet customer preferences
    • if it falls within the context of a risk management programme
    • or if the organization thinks it will motivate staff by setting a clear goal for the development of its management system.
  • Criteria to consider include:
    • evaluate several certification bodies
    • bear in mind that the cheapest might prove to be the most costly if its auditing is below standard, or if its certificate is not recognized by the organization’s customers
    • establish whether the certification body has auditors with experience in the organization’s sector of activity
    • establish whether the certification body implements, or is migrating to, ISO/IEC 17021:2006, Conformity assessment – Requirements for bodies providing audit and certification of management systems
  • Another point to clarify is whether or not the certification body has been accredited and, if so, by whom. Accreditation, in simple terms, means that a certification body has been officially approved as competent to carry out certification in specified business sectors by a national accreditation body.

In most countries, accreditation is a choice, not an obligation and the fact that a certification body is not accredited does not, by itself, mean that it is not a reputable organization. For example, a certification body operating nationally in a highly specific sector might enjoy such a good reputation that it does not feel there is any advantage for it to go to the expense of being accredited. That said, many certification bodies choose to seek accreditation, even when it is not compulsory, in order to be able to demonstrate an independent confirmation of their competence.

The list of accreditation bodies with their contact information and links to their Web sites can be found on the Internet site of the International Accreditation Forum, under “Members” > “Accreditation members”. In general, accreditation bodies’ Web sites contain an up-to-date list of the certification bodies they have accredited, which can be used for selecting a certification body.




Management: The AS9100 Approach

By Mike Gilstrap | June 07, 2012 at 11:43 AM EDT | No Comments

Article written by Vlad Di Natale from Quality Magazine

Manufacturers supplying the aerospace industry face the decision of whether to become certified to AS9100, the international quality management system standard that builds on ISO 9001:2000 and adds requirements specific to the aircraft, space and defense industry. For those new to this standard, it basically combines and harmonizes AS9000, ISO 9001 and Europe’s prEN9000-1 standards. In addition to providing a single standard for all suppliers, its stated benefits are greater focus on key customer requirements, improved product and process quality, reduced quality variation, increased efficiency, potential reduction of second-party audits, and precise traceability throughout the supply chain.

AS9100’s core concept is the Plan-Do-Check-Act cycle that focuses the organization on its key processes, planning, reviewing, and continual improvement. From its inception, one of the tenets of AS9100 has been to mandate what a quality management system must achieve, but not how to achieve it, leaving this to the supplier. As a result, the way the requirements of the standard are met varies dramatically from supplier to supplier.

For example, some manufacturers who are ISO 9001 certified create their own systems that meet AS9100 requirements. The AS9100 standard’s flexibility allows these companies to fine-tune existing quality management systems and avoid investing the considerable time and money required to purchase and implement new enterprise-level software. However, even though many companies keep electronic files, demonstrating compliance still requires the production of paper documents. Therefore, organizations must ensure that their quality manual is extremely comprehensive and well-written, and that their documentation is exceptionally thorough.

Companies may be able to modify their existing quality system, even if it was not originally designed for lot tracking from parts procurement through final build and shipping, by scrupulously maintaining the process. For example, when a part arrives, it would be entered into the system, which then issues a transaction number that begins the process of recording all subsequent information and becomes a new lot number. When the part gets kitted, a tracking number is assigned. Tracking continues as the component becomes part of larger and larger assemblies. The end result is a build package that includes every routing document created throughout the process. This package is used to generate the “as-built” list. Even though this process is only minimally automated, it provides the ability to find what lot tracking number was given to every part and thus trace it back to its supplier. It also offers the ability to produce any associated certificates, test reports and other supporting documentation at a moment’s notice.
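The lot-tracking flow described above can be sketched as a simple parent-linked record per lot. Everything here is illustrative—the function names, the lot-number format, and the data are assumptions, not any particular MRP system's API—but it shows how issuing a transaction number at receipt and linking lots at each kitting step lets you trace any component both back to its supplier and forward to the end item it shipped in.

```python
# lot number -> {"part", "supplier", "parent" (lot of the enclosing assembly)}
lots = {}
counter = 0

def new_lot(part, supplier):
    """Issuing a transaction number creates a new lot record."""
    global counter
    counter += 1
    lot = f"LOT-{counter:05d}"
    lots[lot] = {"part": part, "supplier": supplier, "parent": None}
    return lot

def receive(part, supplier):
    """When a part arrives, it is entered into the system and gets a lot number."""
    return new_lot(part, supplier)

def assemble(assembly_name, component_lots):
    """Kitting components into a larger assembly links their lots to its lot."""
    lot = new_lot(assembly_name, "in-house")
    for c in component_lots:
        lots[c]["parent"] = lot
    return lot

def final_assembly(lot):
    """Follow parent links upward to find the end item a component went into."""
    while lots[lot]["parent"] is not None:
        lot = lots[lot]["parent"]
    return lots[lot]["part"]

# Usage: a resistor is received, built into a board, then into a chassis.
r = receive("resistor", "Acme Components")
board = assemble("control board", [r])
chassis = assemble("chassis", [board])

# If the resistor fails years later, trace it back to its supplier
# and forward to the end item it shipped in.
print(lots[r]["supplier"])   # Acme Components
print(final_assembly(r))     # chassis
```

Even this minimally automated structure reproduces the article's "as-built" capability: every lot knows where it came from and what it became part of.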


Going Above and Beyond the Standard

For manufacturers supplying hardware that will be used in space, the requirements for “hi-rel” (high-reliability) and space qualification vastly exceed those of AS9100. Considering that repairs to equipment orbiting the Earth are generally impossible, component failure is not an option. For these companies, meeting the AS9100 standard is considerably less difficult—more like dotting i’s and crossing t’s. Companies must maintain extraordinary levels of traceability, including the serial and lot numbers for every component in an assembly. Traceability must also be maintained from the materials level through plating and a broad array of other functions that are well beyond what is required in AS9100 as it applies to the aerospace community as a whole.

For example, a manufacturer required to meet the MIL-PRF-38534 QML for hybrid microcircuits must meet specifications that demand extraordinary traceability, require extensive accountability for manufacturing control, worst-case analysis, shock, vibration, thermal cycling, and many other factors. Serial numbers, part numbers and date codes must be present on every product. Traceability must be provided all the way back to original materials and components, such as the wafer number in the case of a semiconductor or the lot number for a packaged part. This requires strict controls on materials procurement, kitting and record retention during manufacturing—an expensive, labor-intensive process. The benefit to the customer is that if a problem develops even five years after the product was delivered, the manufacturer can trace the individual failed component, in which products it was used, as well as the customers who received them. Many terrestrial platforms also require this high level of detail and testing, especially in military and mission-critical applications, but this is usually flowed down contractually regardless of what certifications a subcontractor may hold.


AS9100C Compliance Beckons


Following the 2008 revision of the standard (AS9100 Rev. B), all AS9100-certified companies must now be certified to the newer revision (AS9100 Rev. C) by July 1, 2012. Although the new standard contains many clarifications to its predecessor, its most significant and broad-based enhancements are in risk mitigation. In fact, language specifically dedicated to risk management is present throughout.

In general, AS9100C increases requirements for demonstrating compliance in more detail, from internal auditing to corrective action, while also mandating the ability to provide objective evidence of compliance, whether a document, chart or diagram, thus eliminating some gaps in the previous version. It is generally acknowledged among quality assurance managers that the changes within AS9100C were driven by the aerospace industry rather than the aircraft industry. Thus risk analysis, assessment and mitigation, processes that were only minimally covered in the previous revision, are now essential. In addition to taking a major step forward in risk mitigation, AS9100C expands sections of its predecessor to better define compliance requirements. In short, it calls for more detailed documentation and thus potentially greater traceability.

As the AS9100C standard requires significantly higher levels of effort to remain compliant, enterprise software and the automated accountability it provides can be a significant benefit. For example, rather than simple tooling documentation, a procedure must be in place and records kept as to when a tool will be replaced and whether it in fact was replaced. Enterprise software can automate this and many other functions. In larger companies, this could become a necessity. Another benefit of enterprise software is that it can make companies more productive without adding hordes of new employees who can actually complicate, rather than streamline, the process.
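The tool-replacement example above—a procedure with records of when a tool is due for replacement and whether it actually was replaced—is the kind of check enterprise software automates. A minimal sketch, with hypothetical tool IDs and dates:

```python
import datetime as dt

# Hypothetical tool-replacement log (illustrative; not any specific QMS product).
tools = [
    {"id": "END-MILL-07", "replace_by": dt.date(2012, 6, 1),
     "replaced_on": dt.date(2012, 5, 28)},   # replaced on time, record kept
    {"id": "DRILL-113", "replace_by": dt.date(2012, 6, 15),
     "replaced_on": None},                   # no replacement recorded
]

def overdue(tools, today):
    """Flag tools past their replacement date with no replacement recorded."""
    return [t["id"] for t in tools
            if t["replaced_on"] is None and today > t["replace_by"]]

print(overdue(tools, dt.date(2012, 7, 1)))   # ['DRILL-113']
```

The list this check produces is exactly the objective evidence an AS9100C auditor asks for: not just a schedule, but a record showing whether the schedule was followed.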


Customers Influence Certification Decisions

The Department of Defense, Federal Aviation Administration (FAA) and NASA endorse AS9100 certification, but do not demand it. As suppliers serving the defense and aerospace markets are not required to be certified, in practical terms, the decision is often determined by whether or not a major customer requires it. For example, if a manufacturer’s customers are prime contractors such as Boeing, Rolls Royce, United Technologies or Raytheon, certification becomes a necessity. In addition, if a manufacturer is supplying a component to a contractor further up the food chain and the ultimate recipient of the end product requires certification to AS9100, the manufacturer may find itself in a situation that much resembles a full-blown certification audit, with the need to prove it is compliant with key aspects of the standard.

For many manufacturers, the customer with the most strenuous requirements dictates whether ISO 9000 alone or AS9100 (which incorporates ISO 9000) is necessary. At the very least, AS9100 is beneficial because it forces manufacturers to pay strict attention to quality. At its best, the standard provides a very high level of accountability, especially in the case of first article inspection—the standard’s furthest reach into the domain of military specifications.


Tech Tips


As AS9100 certification is not a mandatory requirement of many prime contractors, every manufacturer should evaluate whether or not ISO 9001 certification alone will suffice.

The decision is often determined by whether or not a major customer requires it.

Subcontractors facing compliance challenges stand to benefit from a solid relationship with their customer’s quality representatives.