The words 'usually' and 'always' are in regular usage. When it comes to science and research, there are certain epistemological issues that people are often unaware of, or choose to ignore. Strictly speaking, we do not know for certain that the next time someone conducts a simple physics experiment, they will get the results that accepted theories predict, even in the absence of experimental error. Such failures are rare. The MichelsonMorleyExperiment is one stand-out example, and in part led to the rise of the modern theory of relativity. But physics is so well tested that we can, for most purposes, assume that the laws of physics, as we understand them, always hold. When it comes to studying the behaviour of humans, or of animals, it is rather different. We cannot test theories about human behaviour with simple experiments in the way physicists can test theirs: humans are just too complex. Nor can we take simple, well-tested theories such as those of physics and chemistry and work out questions of human behaviour with pen, paper, and a bit of mathematical reasoning. The problem, then, is that researchers will often conclude that a theory for which they know of no counter-evidence is a theory that always holds, in the same way the physicist's does.

The 'No Prime Minister' fallacy

Consider the following experiment: take 1000 people at random from the human population and interview them, discarding those who are unavailable. Establish whether or not any of them is the current prime minister of the United Kingdom. Quite obviously we will either always, or nearly always, find that none of them are. Indeed, if you tried to arrange a meeting with the prime minister on the grounds that you were conducting such an experiment, you would be regarded as mad, and told that the prime minister has more important things to do with his time! In any case, you would end up with quantitative evidence along the lines that, in n repetitions of this experiment, less than 0.00001% of the time did the sample contain somebody who was the prime minister. Thus, dismissing this discrepancy as likely experimental error, you would conclude, with a(n apparently) high degree of confidence, that there is no person at all who is the prime minister of the U.K.!
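For comparison, one can work out the base rate such a sample would actually have. A minimal sketch, assuming a population of roughly 67 million (an illustrative figure, not given in the text) and sampling without replacement:

```python
# Probability that a uniform random sample of 1000 people, drawn without
# replacement from a population of ~67 million, contains the one individual
# who is the prime minister. The population figure is an assumption.
population = 67_000_000
sample_size = 1_000

# Exactly one person in the population is the PM, so the chance the sample
# contains them is simply sample_size / population.
p_contains_pm = sample_size / population
print(f"Chance a single sample contains the PM: {p_contains_pm:.7f}")
```

So even under ideal conditions the event shows up only about once in every 67,000 samples of this size, which illustrates how easily a rare-but-real phenomenon can vanish from a study of this design.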

Clearly there is a prime minister nearly all the time (one could point out that, when an incumbent party is defeated in a general election, there is a gap between one prime minister's resignation and the next prime minister's acceptance of office, for example...). Clearly there are easy ways to find and check evidence that there is one. But the problem, and the point of writing this, is that it is straightforward to conduct a scientific-looking study which produces an outcome like the above. This is why independent scrutiny is so important in science: with all the pressure to produce results so as to gain continued funding, there are incentives to publish even if that means cutting corners. The side effect is that much published research, perhaps even most of it, is wrong to a greater or lesser extent. Thus one has to approach the scientific literature with the same skepticism with which one must approach Wikipedia, or a blog. You cannot trust accounts of published research until you understand the limitations of that research. You certainly cannot just flick through to the statement of the conclusion and take it as incontrovertible truth by virtue of its being published in a peer-reviewed journal.

Rolling large dice

Consider a hypothetical fair billion-sided die. Roll it a few times, and you will almost certainly get different sides coming up each time. Furthermore, the probability of any particular side coming up is near zero. If you categorise the sides by colour, so that, say, 99% are red, 99% of the remainder are blue, and the rest are green, you will almost never roll a green side. Usually such a (hypothetical) die will not show green, but it is not true that it never will. The most practical example of this, of course, is a national lottery: you have, say, a 1 in 14 million chance of hitting the jackpot. This is statistically negligible for any one ticket, yet in most draws at least one person hits the jackpot. Such situations are the source of the saying '99% right, but 100% wrong'.
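The arithmetic above can be sketched directly. The 1-in-14-million jackpot odds come from the text; the number of tickets sold per draw (20 million) is an illustrative assumption:

```python
# Fraction of the die's sides that are green: 1% of the 1% that are not red.
p_green = 0.01 * 0.01  # one roll in ten thousand, rare but not impossible

# Lottery: negligible odds per ticket, yet a winner is likely somewhere.
p_jackpot = 1 / 14_000_000   # per-ticket odds, from the text
tickets_sold = 20_000_000    # assumed number of tickets in a draw
p_someone_wins = 1 - (1 - p_jackpot) ** tickets_sold
print(f"Green side per roll:      {p_green:.4%}")
print(f"Someone wins this draw:   {p_someone_wins:.0%}")
```

The per-ticket probability rounds to zero for most practical purposes, yet with enough independent trials the 'negligible' event becomes the expected outcome, which is exactly the point of '99% right, but 100% wrong'.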