One of the biggest scams in the world

An inspector once explained that it didn’t matter that UKAS inspectors’ demands are always changing, and that what a lab does as a result is different every year.  Because you have accreditation, that proves you always have quality.

Very conveniently, this is a quality that can’t be measured.  If it could be measured it would be a quantity, not quality at all.

Quantity inspection, quantity improvement, quantity assurance, quantity officer…those couldn’t be sold so easily – specifically because they are measurable.  That’s the genius of accreditation – even the smart are too dull to realise the scam they are participating in.

They have neither the wit nor the courage to say what is really happening in the alternative reality of the inspection cartel.

Maybe this explanation of similar plays in the world of finance will help you understand:

This is one of the biggest scams in the world of finance

Editor’s note: Bill Bonner is the New York Times bestselling author of Empire of Debt. He just published a new book that’s part history, part business, and part self-help. The following is an excerpt “debunking” the kinds of numbers you see in the media every day. As you’ll see, if you’re taking these figures at face value, you’re making a big mistake…

From Bill Bonner, editor, The Bill Bonner Letter:

Numbers are a good thing. Economics is full of numbers.

It is perfectly natural to use numbers to count, to weigh, to study, and to compare. They make it easier and more precise to describe quantities.

Instead of saying, “I drank a bucket of beer,” you say, “I drank two 40s.” Then instead of saying, “I threw up all over the place,” you say, “I threw up on an area four feet square.”

But in economics, we reach the point of diminishing returns with numbers very quickly. They gradually become useless. Later, when they are used to disguise, pervert, and manipulate, they become disastrous.

At exactly what point does the payout from numbers in the economics trade become a nuisance? Probably as soon as you see a decimal point or a Greek symbol.

I’m not above eponymous vanity either. So I give you Bonner’s Law: In the hands of economists, the more precise the number, the bigger the lie.

For an economist, numbers are a gift from the heavens. They turn them, they twist them, they use them to lever up and screw down. They also use them to scam the public. Numbers help put nonsense on stilts. Numbers appear precise, scientific, and accurate. By comparison, words are sloppy, vague, subject to misinterpretation.

But words are much better suited to the economist’s trade.

The original economists understood this. Just look at Wealth of Nations – there are a lot of words. We understand the world by analogy, not by digits.

Besides, the digits used by modern economists are almost always fraudulent. “Math makes a research paper look solid, but the real science lies not in math but in trying one’s utmost to understand the real workings of the world,” says Professor Kimmo Eriksson of Sweden’s Mälardalen University.

He decided to find out what effect complicated math had on research papers. So, he handed out two abstracts of research papers to 200 people with graduate degrees in various fields. One of the abstracts contained a mathematical formula taken from an unrelated paper, with no relevance whatsoever to the matter being discussed. Nevertheless, the abstract with the absurd mathematics was judged more impressive by participants. Not surprisingly, the further from math or science the person’s own training, the more likely he was to find the math impressive.

Whereas the classical economist – before Keynes and econometrics – was a patient onlooker, the modern, post-Keynes economist has ants in his pants. He has not the patience to watch his flock, like a preacher keeping an eye on a group of sinners, or a botanist watching plants. Instead, he comes to the jobsite like a construction foreman, hardhat in hand, ready to open his tool chest immediately – to take out his numbers.

If you are going to improve something, you must be able to measure it. Otherwise, how do you know that you have made an improvement?

But that is the problem right there. How do you measure improvement? How do you know that something is “better”? You can’t know. “Better” is a feature of quality. It can be felt. It can be sensed. It can be appreciated or ignored. But it can’t actually be measured.

What can be measured is quantity. And for that, you need numbers. But when we look carefully at the basic numbers used by economists, we first find that they are fishy. Later, we realize that they are downright fraudulent. (emphasis added)

These numbers claim to have meaning. They claim to be specific and precise. They are the basis of weighty decisions and far-reaching policies that pretend to make things better. They are the evidence and the proof that led to thousands of PhD awards, thousands of grants, scholarships, and academic tenure decisions. More than a few Nobel Prize winners also trace their success to the numbers arrived at on the right side of the equal sign.

1… 2… 3… 4… 5… 6… 7… 8… 9… There are only nine cardinal numbers. The rest are derivative or aggregates. These numbers are useful. In the hands of ordinary people, they mean something. “Three tomatoes” is different from “five tomatoes.”

In the hands of scientists and engineers, numbers are indispensable. Precise calculations allow them to send a spacecraft to Mars and then drive around on the Red Planet. But a useful tool for one profession may be a danger in the hands of another. Put a hairdresser at the controls of a 747, or let a pilot cook your canard à l’orange, and you’re asking for trouble.

Similarly, when an economist gets fancy with numbers, the results can be catastrophic.

On October 19, 1987, for example, the bottom dropped out of the stock market. The Dow went down 23%. “Black Monday,” as it came to be called, was the largest single-day percentage drop in stock market history. The cause of the collapse was quickly traced to an innovation in the investment world called “portfolio insurance.”

The idea was that if quantitative analysts – called “quants” – could accurately calculate the odds of a stock market pullback, they could sell insurance – very profitably – to protect against it. This involved selling index futures short while buying the underlying equities. If the market fell, the index futures would make money, offsetting the losses on stock prices.
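The offsetting arithmetic described above can be sketched in a few lines. The position size and the 5% move below are hypothetical numbers chosen for illustration, not figures from the crash:

```python
# Stylized portfolio-insurance hedge; all numbers are hypothetical.
portfolio = 10_000_000        # $ long stock position
hedge_ratio = 1.0             # fully hedged with short index futures
market_move = -0.05           # a 5% market fall

stock_pnl = portfolio * market_move                     # loss on the stocks
futures_pnl = -portfolio * hedge_ratio * market_move    # gain on the short futures
print(stock_pnl, futures_pnl, stock_pnl + futures_pnl)
```

On paper the short futures leg gains what the stocks lose. The trouble in 1987, as discussed below, was that the funds following this strategy all sold futures into the same falling market.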

The dominant mathematical pricing guide at the time was the Black-Scholes model, named after Fischer Black and Myron Scholes, who described it in a 1973 paper, “The Pricing of Options and Corporate Liabilities.” Later, Robert C. Merton added some detail and he and Scholes won the 1997 Nobel Prize in Economics for their work. (Black died in 1995.)
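For the curious, the 1973 call-option formula itself is short enough to sketch. This is a minimal textbook implementation, not the quants’ production pricing code:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes (1973) price of a European call option.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualised volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Textbook example: at-the-money call, one year out, 5% rate, 20% volatility
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.20), 2))  # ≈ 10.45
```

Note the model’s key input: a single volatility number, which assumes normally distributed returns – exactly the assumption the 1987 crash violated.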

Was the model useful? It was certainly useful at getting investors to put money into the stock market and mathematically driven hedge funds. Did it work? Not exactly.

Not only did it fail to protect investors in the crash of ’87, but it also held that such an equity collapse was impossible. According to the model, it wouldn’t happen in the life of the universe.
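The “life of the universe” claim can be made concrete. Assuming, as the model family does, normally distributed daily returns – and taking an illustrative daily volatility of 1%, which is our assumption, not a figure from the text – a 23% fall is a 23-sigma event:

```python
from math import erfc, sqrt

# Illustrative assumption (ours, not the text's): daily returns are
# Normal with a standard deviation of 1%, so a 23% drop is 23 sigma.
sigma_daily = 0.01
drop = 0.23
z = drop / sigma_daily   # 23 standard deviations

# One-sided tail probability P(Z <= -23) for a standard normal
p = 0.5 * erfc(z / sqrt(2.0))
print(p)   # on the order of 1e-117
```

At any plausible volatility the Gaussian model puts the event far beyond once in the universe’s lifetime – which is the excerpt’s point: it was the distributional assumption, not the arithmetic, that failed.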

That it happened only a few years after the model became widely used on Wall Street was more than a coincidence. Analysts believe the hedging strategy of the funds that followed the model most closely – selling short index futures – actually caused the sharp sell-off.

“Beware of geeks bearing formulas,” said Warren Buffett in 2009.

– Adapted from Hormegeddon: How Too Much of a Good Thing Leads To Disaster. Copyright © 2014 by Bill Bonner.

[A scam within a scam? Readers should be aware that Brian Deer argued that the publisher of this source is also a financial scammer.]

We pointed out previously an unusual paper that attempted to unveil the “quality” scam by quantifying its effects, which were clearly negligible:

[Fig. 5 of Wilson et al.]

Their figure shows that audits performed by professional UKAS inspectors and those performed by staff in a victim organisation were similarly ineffective at improving the quality of laboratory results or service.  All they did was generate a quantity of non-compliances which the gullible take as a measure of quality improvement.

Who has added up all a lab’s non-compliances over the years and asked: how bad, then, was the quality UKAS accredited at the start?

The authors suggested it would take a large and thorough examination of EQA results from many labs, or international ring trials, to prove whether accreditation makes any difference in proportion to its cost.  Despite the increasing proportion of laboratory work now spent on EQA to provide fodder for UKAS and EC fussing, the paper of Wilson et al. noted the lack of publicly available data to give statistical substance to the cartel’s claims – which perhaps suggests an institutional unwillingness to investigate further.

Is all this work really worthwhile to show that most of the labs, most of the time, more or less conform to the normal distribution curve that is assumed in the first place?  Would less achieve just as much?

Has much of EQA become just another part of the obsessive, parasitic “quality” cartel –  done for profit rather than quality?
