Often I come across ‘scientific studies’ making outrageous and downright ridiculous claims across many disciplines, including the social sciences.
One would claim that a woman’s sexual preferences can be predicted from her nail polish colors!
And the father of all this is Sigmund Freud, who blamed everything under the Sun for one’s behavior, especially one’s parents.
If you are aggressive, it is because of gregarious parents; if you are timid, it is because of retiring parents.
If you are a pervert, your parents are responsible.
But never You, for your actions or for what you are!
So much for these ‘scientific studies’!
I have posted some articles on this before. Now read more news on the subject.

“
A preliminary investigative report issued on Monday by Tilburg University has concluded that dozens of research papers authored and co-authored by Stapel contain fabricated data.
“We have some 30 papers in peer-reviewed journals where we are actually sure that they are fake, and there are more to come,” says Pim Levelt, chair of the committee that investigated Stapel’s work. If all of these papers are withdrawn, Stapel’s will become one of the worst cases of scientific misconduct in history.
Stapel is the researcher behind a number of eye-catching studies which, prima facie, seem to offer provocative insights into human nature. His research topics range from the effects of beauty product ads on consumer self-esteem, to how urban decay (like littered streets) promotes stereotyping and discrimination — the latter being a study we reported on here at io9.
Whether these studies are included in the 30+ papers known to contain fraudulent data remains to be seen. Tilburg University has yet to provide a list of which studies contain fudged results, though Stapel’s paper on the tie between urban decay and discrimination, published in April in the journal Science, has already been flagged with an expression of concern by the journal’s publishers.
Stapel is believed to have acted alone, deceiving colleagues, collaborators, and even PhD candidates for years by providing them with fictitious data. Given Stapel’s prominence within the field of social psychology (not to mention the sheer volume of publications already identified as tainted), it’s safe to say that the effects of his outing will be far-reaching…
”
Notably, none of these mention anything about science, fact-finding, or statements about converging upon truth. (Note, in the past I’ve gone so far as to suggest that even the process of citing specific papers is biased and flawed, and that we would be better off giving aggregate citations of whole swathes of the literature.)
The second article takes an almost entirely economic, cost-benefit perspective on peer review, again focused on publishing results in journals. Only toward the end does the author directly address peer review’s purpose in science by saying:
…[T]he most important question is how accurately the peer review system predicts the longer-term judgments of the scientific community… A tentative answer to this last question is suggested by a pilot study carried out by my former colleagues at Nature Neuroscience, who examined the assessments produced by Faculty of 1000 (F1000), a website that seeks to identify and rank interesting papers based on the votes of handpicked expert ‘faculty members’. For a sample of 2,500 neuroscience papers listed on F1000, there was a strong correlation between the paper’s F1000 factor and the impact factor of the journal in which it appeared. This finding, albeit preliminary, should give pause to anyone who believes that the current peer review system is fundamentally flawed or that a more distributed method of assessment would give different results.
I strongly disagree with his final conclusion here. A perfectly plausible explanation for this result is that scientists rate papers in “better” journals more highly simply because they are published in journals perceived to be better. That would itself be a source of bias and a major flaw of the current peer-review system. Rather than giving me pause as to whether the system is flawed, one could easily interpret that result as proof of the flaw.
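To make the point concrete, here is a minimal toy simulation of my own (not anyone’s real data, and the numbers are made up): papers carry no intrinsic quality signal at all, and raters score them purely from the prestige of the journal plus noise. A strong F1000-style correlation with impact factor still emerges, so the correlation alone cannot distinguish genuine quality from prestige bias.

```python
import random
import statistics

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Toy model: each paper has a journal "impact factor" but NO intrinsic
# quality term; raters' scores are driven purely by journal prestige
# plus noise.  Any correlation that emerges is pure prestige bias.
impact_factors = [random.uniform(1, 30) for _ in range(2500)]
ratings = [0.5 * jif + random.gauss(0, 3) for jif in impact_factors]

r = pearson(impact_factors, ratings)
print(f"correlation: {r:.2f}")  # strong positive, despite zero quality signal
```

A strong correlation is exactly what prestige bias alone would produce, which is why the F1000 result cannot, by itself, vindicate the current system.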
The most common response that I encounter when speaking with other scientists about what they think peer review is for, however, is some form of the following:
Peer-review improves the quality of published papers.
I’m about to get very meta here, but post-doc astronomer Sarah Kendrew recently wrote a piece in The Guardian titled, “Brian Cox is wrong: blogging your research is not a recipe for disaster”.
More than 120 computer-generated “gibberish” research papers are being removed from the archives of scientific journal publishers Springer and the Institute of Electrical and Electronic Engineers (IEEE) after a French computer scientist determined the papers were fakes.
The bogus research papers, it turns out, were created by an automated word generation program that can string random, seemingly sophisticated words together in plausible English syntax.
Scientific papers, especially those dealing with computer science and mathematics, as these fake papers were, feature reams of sophisticated jargon. Even legitimate papers can seem like gibberish to an unfamiliar reader.
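The “automated word generation program” described above (SCIgen is the best-known example) works roughly like a context-free grammar expanded at random: the output is grammatically plausible but semantically empty. A minimal sketch of the idea, with a made-up vocabulary of jargon words:

```python
import random

random.seed(7)

# Toy context-free grammar in the spirit of SCIgen-style generators.
# Nonterminals (uppercase keys) expand to random productions; anything
# not in the grammar is treated as a terminal word.
GRAMMAR = {
    "SENTENCE": [["NP", "VP", "."]],
    "NP": [["the", "ADJ", "NOUN"], ["a", "ADJ", "NOUN"]],
    "VP": [["VERB", "NP"], ["VERB", "that", "NP", "VERB", "NP"]],
    "ADJ": [["stochastic"], ["heterogeneous"], ["amphibious"], ["scalable"]],
    "NOUN": [["methodology"], ["epistemology"], ["framework"], ["algorithm"]],
    "VERB": [["refutes"], ["synthesizes"], ["visualizes"], ["emulates"]],
}

def expand(symbol):
    if symbol not in GRAMMAR:           # terminal word: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

def sentence():
    words = expand("SENTENCE")
    text = " ".join(words[:-1]) + words[-1]   # attach the final period
    return text[0].upper() + text[1:]

for _ in range(3):
    print(sentence())
```

Every sentence parses cleanly and sounds vaguely technical, which is precisely why such output can slip past inattentive reviewers in jargon-heavy fields.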
Citations:
http://io9.com/5855733/psychologist-admits-to-faking-dozens-of-scientific-studies
http://blogs.scientificamerican.com/guest-blog/2011/11/02/what-is-peer-review-for/





