The 'No Prime Minister' fallacy
Consider the following experiment: take 1000 people at random from the human population and interview them, discarding those who are unavailable. Establish whether or not any of them is the current prime minister of the United Kingdom. Quite obviously we will either always find that none of them are, or nearly always find that none of them are. Seriously, if you tried to arrange a meeting with the prime minister on the grounds that you were conducting such an experiment, you would be regarded as mad, and told that the prime minister has more important things to do with his time! In any case, you would end up with quantitative evidence along the lines that, in n repetitions of this experiment, less than 0.00001% of the time did the sample contain somebody who was the prime minister. Thus, dismissing this discrepancy as likely experimental error, you would conclude, with an (apparently) high degree of confidence, that there is no person at all who is the prime minister of the U.K.!
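The arithmetic behind this can be sketched in a few lines. The figures below are assumptions for illustration only: a world population of roughly 8 billion, and a single prime minister at any given moment.

```python
# A sketch of the sampling arithmetic in the thought experiment.
# Assumed figures (not from the text): world population ~8 billion.
population = 8_000_000_000
sample_size = 1_000

# With one prime minister in the population, the chance that a uniform
# random sample of 1000 people contains them is sample_size / population.
p_pm_in_sample = sample_size / population
print(f"P(PM in one sample) = {p_pm_in_sample:.2e}")

# Even over many repetitions, the PM will almost never be sampled.
repetitions = 1_000
p_never_seen = (1 - p_pm_in_sample) ** repetitions
print(f"P(PM never seen in {repetitions} samples) = {p_never_seen:.6f}")
```

The per-sample probability comes out at about 1.25 × 10⁻⁷, which is why the experiment so reliably "proves" that no prime minister exists.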
Clearly there is a prime minister nearly all the time (one could point out that, when an incumbent party is defeated in a general election, there is a gap between one prime minister's resignation and the next prime minister's acceptance of office, for example...). Clearly there are easy ways to find and check evidence that there is one. But the problem, and the point of writing this, is that it is straightforward to conduct a scientific-looking study which would produce an outcome like the above. This is why independent scrutiny is so important in science: with all the pressure to produce results so as to gain continued funding, there are incentives to get results published even if that means cutting corners. The side effect is that much published research, perhaps even most of it, is wrong to a greater or lesser extent. Thus one has to approach the scientific literature with the same skepticism with which one must approach Wikipedia, or a blog. You cannot trust accounts of published research until you understand the limitations of that research. You certainly cannot just flick through to the statement of the conclusion and take it as incontrovertible truth simply because it appears in a peer-reviewed journal.
Rolling large dice
Consider a hypothetical fair billion-sided die. Roll it a few times, and you will almost certainly get different sides coming up each time. Furthermore, the probability of any particular side coming up is near zero. If you categorise the sides by colour, so that, say, 99% are red, 99% of the remainder are blue, and the rest are green, you will almost never roll a green side. Usually such a (hypothetical) die will not land on green, but it is not true that it never will. The most practical example of this, of course, is a national lottery: you have, say, a 1 in 14 million chance of hitting the jackpot. This is statistically negligible to many, yet in most draws at least one person hits the jackpot. Such situations are the source of the saying '99% right, but 100% wrong'.
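Both the green-side odds and the lottery point can be worked out directly. The ticket-sales figure below is an assumption chosen for illustration; only the 1-in-14-million jackpot odds come from the text.

```python
# Green sides of the billion-sided die: 99% red, 99% of the remainder
# blue, and the rest green, so green is 1% of 1% of the faces.
p_green = 0.01 * 0.01
print(f"P(rolling green) = {p_green:.4f}")

# The lottery version of the same effect: each ticket's jackpot chance
# is negligible, but across many tickets a winner is likely.
# Assumed figure (not from the text): ~20 million tickets per draw.
p_jackpot = 1 / 14_000_000
tickets_sold = 20_000_000

# P(at least one winner) = 1 - P(nobody wins)
p_at_least_one = 1 - (1 - p_jackpot) ** tickets_sold
print(f"P(at least one jackpot winner) = {p_at_least_one:.3f}")
```

Under these assumed sales figures the jackpot is won in roughly three draws out of four, which is the sense in which an individually negligible probability is collectively near-certain.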