Brand clinic: Statistics and lies
Published: 26 March, 2010

Don Williams, CEO of brand and design consultancy Pi Global, bewails bad statistics

"There are three kinds of lies: lies, damned lies and statistics." Damned right! Everyone knows that statistics are usually unreliable, agenda-ridden and often total bobbins. Of course, if a baker stocks 100 brown loaves and 100 white loaves and, every day, he sells all the white ones and half the brown ones, he knows that the white loaves are twice as popular as the brown.

The problem arises when researchers and statisticians try to create a science out of hugely complex and often intangible situations, and the resulting nonsense is used to support weak arguments or obfuscate the truth. Research is compelling for one very simple reason: it is a huge facilitator in the decision-making process and a career safety net. If you make a bad decision based on research results, you can blame the research. If you make a bad decision based on your experience and nous, there's nowhere to hide, so research becomes a convenient crutch.

There's nothing wrong with research per se; it can be very valuable. But there is something wrong with bad research. Asking consumers dumb questions about likes and dislikes, probing them and forcing them to rationalise the irrational is not just daft, it's irresponsible, and can result in major brand damage and financial disaster.

I have sat through groups where consumers have been encouraged to design packaging; there are even research techniques that revolve around this concept. Does anyone in their right mind believe consumers understand how a piece of packaging has to function on the battlefield that is a supermarket: at three metres from the fixture, at 1.5 metres in the hand, in the home? But ask them whether they like red, or what they think of the picture, or whether they like the logo, or 200 other inane questions, and you can be certain they will have answers. After all, they've been fed and watered in a cosy little room with a few bob thrown in for good measure, and they don't want to look stupid.

In my view, the resulting data from exercises like this is all but useless in the real world; unless you get as close as possible to real-world environments, you cannot hope to gain any worthwhile knowledge. Look at the 1985 New Coke debacle, for example: Pepsi was winning the 'cola war' in taste test after taste test, so Coke decided to launch a new, sweeter formulation to get closer to the Pepsi taste. What the company didn't take into account was that the short-term sweetness experienced in a sip-test or focus-group situation was very different from the long-term, real-world experience of living with the product every day. The research was fundamentally flawed.

When you consider that well over 90% of new product launches fail, most of which are presumably researched to death, you have to question the standard of research methodology.

We need a sea change to provide us with a more pragmatic, common-sense approach to consumer understanding: one based on what consumers do, rather than what they say they do.
