
Most of the studies I review for this site fall into two categories:
1) Statistically significant, but practically useless: This is the equivalent of saying that a nickel is TEN TIMES more than half a penny. On the face of it, that's a massive proportional difference. But practically speaking, you can't do much more with a nickel than you can with half a penny.
2) Practically useful, but statistically coincidental: This one is a little trickier. The whole purpose of using statistics is to determine whether the result in a study is plausibly a real effect or simply a fluke. You can successfully use statistics to show that a result probably ISN'T a fluke, but if a result fails that test, you haven't proven that it WAS a fluke either. All you can say is that it hasn't been shown not to be a fluke. So basically, the result lies in the twilight world of possible fluke-dom. Results that are "promising" or "approaching significance" are examples of this type of study (if you want to see a statistician fly off the handle, tell them that "approaching significance" is a meaningful statement). Both flavors show up in the quick simulation below.
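To make this concrete, here's a minimal sketch that simulates both categories. The numbers are entirely made up (they don't come from any study I've reviewed), and it assumes you have Python with numpy and scipy handy:

```python
# A minimal sketch of both categories, with made-up numbers (nothing here
# comes from any actual study). Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Category 1: statistically significant, but practically useless.
# A true effect of 0.2 units on a baseline of 100 (a 0.2% bump -- the
# nickel vs. half-penny situation) comes out "significant" because the
# sample is enormous.
control = rng.normal(loc=100.0, scale=10.0, size=50_000)
treated = rng.normal(loc=100.2, scale=10.0, size=50_000)
_, p = stats.ttest_ind(treated, control)
print(f"Category 1: effect = {treated.mean() - control.mean():.2f}, p = {p:.4f}")
# Almost certainly p < 0.05, yet the effect is a rounding error in real life.

# Category 2: practically useful, but statistically coincidental.
# A true effect of 5 whole units, but with only 12 people per group, will
# often land above p = 0.05 -- "approaching significance" territory.
control = rng.normal(loc=100.0, scale=10.0, size=12)
treated = rng.normal(loc=105.0, scale=10.0, size=12)
_, p = stats.ttest_ind(treated, control)
print(f"Category 2: effect = {treated.mean() - control.mean():.2f}, p = {p:.4f}")
# Failing to show it's NOT a fluke is not the same as showing it IS one.
```

The punchline: crank the sample size high enough and a half-penny's worth of effect becomes "significant"; shrink it and even a big, useful effect gets stuck in possible fluke-dom.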
I don't know about you, but I'd rather spend my hard-earned cash and irreplaceable time on something I KNOW I'm going to enjoy (say, a well-dressed bison burger with yam fries) than on something that might or might not be a fluke.