Sunday, January 26, 2014

The changing face of psychology

 Psychology is championing important changes to culture and practice, including a greater emphasis on transparency, reliability, and adherence to the scientific method. Photograph: Sebastian Kaulitzki/Alamy


http://www.theguardian.com/science/head-quarters/2014/jan/24/the-changing-face-of-psychology

After 50 years of stagnation in research practices, psychology is leading reforms that will benefit all life sciences.

In 1959, an American researcher named Ted Sterling reported something disturbing. Of 294 articles published across four major psychology journals, 286 had reported positive results – that is, a staggering 97% of published papers were underpinned by statistically significant effects. Where, he wondered, were all the negative results – the less exciting or less conclusive findings? Sterling labelled this publication bias a form of malpractice. After all, getting published in science should never depend on getting the “right results”.

You might think that Sterling’s discovery would have led the psychologists of 1959 to sit up and take notice. Groups would be assembled to combat the problem, ensuring that the scientific record reflected a balanced sum of the evidence. Journal policies would be changed, incentives realigned.

Sadly, that never happened. Thirty-six years later, in 1995, Sterling took another look at the literature and found exactly the same problem – negative results were still being censored. Fifteen years after that, Daniele Fanelli from the University of Edinburgh confirmed it yet again. Publication bias had turned out to be the ultimate bad car smell, a prime example of how irrational research practices can linger on and on.

Now, finally, the tide is turning. A growing number of psychologists – particularly the younger generation – are fed up with results that don’t replicate, journals that value story-telling over truth, and an academic culture in which researchers treat data as their personal property. Psychologists are realising that major scientific advances will require us to stamp out malpractice, face our own weaknesses, and overcome the ego-driven ideals that maintain the status quo.
Here are five key developments to watch in 2014.
  
1. Replication

The problem: The best evidence for a genuine discovery is showing that independent scientists can replicate it using the same method. If it replicates repeatedly then we can use it to build better theories. If it doesn't then it belongs in the trash bin of history. This simple logic underpins all science – without replication we’d still believe in phlogiston and faster-than-light neutrinos.

In psychology, close replications of previous methods are rarely attempted. Psychologists tend to see such work as boring, lacking in intellectual prowess, and a waste of limited resources. Some of the most prominent psychology journals even have explicit policies against publishing replications, instead offering readers a diet of fast food: results that are novel, eye catching, and even counter-intuitive. Exciting results are fine provided they replicate. The problem is that nobody bothers to try, which litters the field with results of unknown (likely low) value.

How it’s changing: The new generation of psychologists understands that independent replication is crucial for real advancement and to earn wider credibility in science. A beautiful example of this drive is the Many Labs project led by Brian Nosek from the University of Virginia. Nosek and a team of 50 colleagues located in 36 labs worldwide sought to replicate 13 key findings in psychology, across a sample of 6,344 participants. Ten of the effects replicated successfully.

Journals are also beginning to respect the importance of replication. The prominent outlet Perspectives on Psychological Science recently launched an initiative that specifically publishes direct replications of previous studies. Meanwhile, journals such as BMC Psychology and PLOS ONE officially disown the requirement for researchers to report novel, positive findings.

2. Open Access 

The problem: Strictly speaking, most psychology research isn’t really “published” – it is printed within journals that expressly deny access to the public (unless you are willing to pay for a personal subscription or spend £30+ on a single article). Some might say this is no different to traditional book publishing, so what's the problem? But remember that the public being denied access to science is the very same public that already funds most psychology research, including the subscription fees for universities. So why, you might ask, is taxpayer-funded research invisible to the taxpayers that funded it? The answer is complicated enough to fill a 140-page government report, but the short version is that the government places the business interests of corporate publishers ahead of the public interest in accessing science.
 
How it’s changing: The open access movement is growing in size and influence. Since April 2013, all research funded by UK research councils, including psychology, must now be fully open access – freely viewable to the public. Charities such as the Wellcome Trust have similar policies. These moves help alleviate the symptoms of closed access but don’t address the root cause, which is market dominance by traditional subscription publishers. Rather than requiring journals to make articles publicly available, the research councils and charities are merely subsidising those publishers, in some cases paying them extra for open access on top of their existing subscription fees. What other business in society is paid twice for a product that it didn’t produce in the first place? It remains a mystery who, other than the publishers themselves, would call this bizarre set of circumstances a “solution”.

3. Open Science

The problem: Data sharing is crucial for science but rare in psychology. Even though ethical guidelines require authors to share data when requested, such requests are usually ignored or denied, even when they come from other psychologists. Failing to share data publicly makes meta-analysis harder and makes it easier for unscrupulous researchers to get away with fraud. The most serious fraud cases, such as that of Diederik Stapel, would have been caught years earlier if journals had required the raw data to be published alongside research articles.

How it’s changing: Data sharing isn’t yet mandatory, but it is gradually becoming unacceptable for psychologists not to share. Evidence shows that studies which share data tend to be more accurate and less likely to make statistical errors. Public repositories such as Figshare and the Open Science Framework now make the act of sharing easy, and new journals including the Journal of Open Psychology Data have been launched specifically to provide authors with a way of publicising data sharing.

Some existing journals are also introducing rewards to encourage data sharing. Since 2014, authors who share data at the journal Psychological Science will earn an Open Data badge, printed at the top of the article. Coordinated data sharing carries all kinds of other benefits too – for instance, it allows future researchers to run meta-analysis on huge volumes of existing data, answering questions that simply can’t be tackled with smaller datasets.

4. Bigger Data

The problem: We’ve known for decades that psychology research is statistically underpowered. What this means is that even when genuine phenomena exist, most experiments don’t have sufficiently large samples to detect them. The curse of low power cuts both ways: not only is an underpowered experiment likely to miss finding water in the desert, it’s also more likely to lead us to a mirage.
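The arithmetic behind low power can be sketched in a few lines. Below is a minimal illustration using only the Python standard library and the standard normal-approximation power formula for a two-sample test; the effect size (d = 0.3) and the sample sizes are illustrative assumptions for this sketch, not figures taken from any study mentioned here.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample z-test for a standardized
    effect size d, with n_per_group participants in each condition."""
    z = NormalDist()
    crit = z.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96
    ncp = d * sqrt(n_per_group / 2)          # noncentrality of the test statistic
    return (1 - z.cdf(crit - ncp)) + z.cdf(-crit - ncp)

# A "small" true effect (d = 0.3) with 20 participants per group:
print(round(power_two_sample(0.3, 20), 2))   # ≈ 0.16, i.e. ~84% of such studies miss it
# The same effect with 250 participants per group:
print(round(power_two_sample(0.3, 250), 2))  # ≈ 0.92
```

In other words, with typical lab-sized samples a genuine small effect goes undetected more than eight times out of ten, which is exactly the desert-and-mirage problem described above.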

How it’s changing: Psychologists are beginning to develop innovative ways to acquire larger samples. An exciting approach is Internet testing, which enables easy data collection from thousands of participants. One recent study managed to replicate 10 major effects in psychology using Amazon’s Mechanical Turk. Psychologists are also starting to work alongside organisations that already collect large amounts of useful data (and no, I don’t mean GCHQ). A great example is collaborative research with online gaming companies. Tom Stafford from the University of Sheffield recently published an extraordinary study of learning patterns in over 850,000 people by working with a game developer.

5. Limiting Researcher "Degrees of Freedom"

The problem: In psychology, discoveries tend to be statistical. This means that to test a particular hypothesis, say, about motor actions, we might measure the difference in reaction times or response accuracy between two experimental conditions. Because the measurements contain noise (or “unexplained variability”), we rely on statistical tests to provide us with a level of certainty in the outcome. This is different to other sciences where discoveries are more black and white, like finding a new rock layer or observing a supernova.

Whenever experiments rely on inferences from statistics, researchers can exploit “degrees of freedom” in the analyses to produce desirable outcomes. This might involve trying different ways of removing statistical outliers or the effect of different statistical models, and then only reporting the approach that “worked” best in producing attractive results. Just as buying all the tickets in a raffle guarantees a win, exploiting researcher degrees of freedom can guarantee a false discovery.
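The raffle logic can be demonstrated with a toy simulation (not from the article): generate data with no true effect, then compare a single pre-planned test against a researcher who tries three defensible-looking analyses and reports only the best. The specific analysis choices (outlier trimming, an early-stopping subset) and sample sizes here are illustrative assumptions.

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(1)
norm = NormalDist()

def p_value(a, b):
    # Two-sample z-test p-value (normal approximation, fine for n >= 30).
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return 2 * (1 - norm.cdf(abs((mean(a) - mean(b)) / se)))

def trim(xs):
    # Drop "outliers" beyond 2 standard deviations, one common analytic choice.
    m, s = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - m) < 2 * s]

def flexible_p(a, b):
    # Try three defensible-looking analyses, keep only the smallest p-value.
    return min(p_value(a, b),                # full data
               p_value(trim(a), trim(b)),    # outliers removed
               p_value(a[:30], b[:30]))      # subset, as if stopping early

n_sims, n = 2000, 40
strict = flexible = 0
for _ in range(n_sims):
    a = [random.gauss(0, 1) for _ in range(n)]  # two conditions, no true effect
    b = [random.gauss(0, 1) for _ in range(n)]
    strict += p_value(a, b) < 0.05
    flexible += flexible_p(a, b) < 0.05

print(f"pre-planned analysis: {strict / n_sims:.1%} false positives")
print(f"best of three tries:  {flexible / n_sims:.1%} false positives")
```

The pre-planned analysis stays near the nominal 5% false-positive rate; picking the best of even three analyses pushes it well above that, despite every individual analysis looking reasonable on its own.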

The reason we fall into this trap is because of incentives and human nature. As Sterling showed in 1959, psychology journals select which studies to publish not based on the methods but on the results: getting published in the most prominent, career-making journals requires researchers to obtain novel, positive, statistically significant effects. And because statistical significance is an arbitrary threshold (p<.05), researchers have every incentive to tweak their analyses until the results cross the line. These behaviours are common in psychology – a recent survey led by Leslie John from Harvard University estimated that at least 60% of psychologists selectively report analyses that “work”. In many cases such behaviour may even be unconscious.

How it’s changing: The best cure for researcher degrees of freedom is to pre-register the predictions and planned analyses of experiments before looking at the data. This approach is standard practice in medicine because it helps prevent the desires of the researcher from influencing the outcome. Among the basic life sciences, psychology is now leading the way in advancing pre-registration. The journals Cortex, Attention Perception & Psychophysics, AIMS Neuroscience and Experimental Psychology offer pre-registered articles in which peer review happens before experiments are conducted. Not only does pre-registration put the reins on researcher degrees of freedom, it also prevents journals from selecting which papers to publish based on the results.

Journals aren’t the only organisations embracing pre-registration. The Open Science Framework invites psychologists to publish their protocols, and the 2013 Declaration of Helsinki now requires public pre-registration of all human research “before recruitment of the first subject”.

Friday, January 10, 2014

Feynman on Scientific Method



Now I'm going to discuss how we would look for a new law. In general, we look for a new law by the following process: first, we guess it, no, don’t laugh, that’s the truth. Then we compute the consequences of the guess, to see what, if this is right, if this law we guessed is right, to see what it would imply and then we compare the computation results to nature or we say compare to experiment or experience, compare it directly with observations to see if it works.

If it disagrees with experiment, it’s wrong! In that simple statement is the key to science. It doesn’t make any difference how beautiful your guess is, it doesn’t make a difference how smart you are, who made the guess, or what his name is… If it disagrees with experiment, it’s wrong. That’s all there is to it.

It is therefore not unscientific to make a guess, although many people who are not in science think it is. For instance, I had a conversation about flying saucers, some years ago, with a layman — because I am scientific I know all about flying saucers! I said “I don’t think there are flying saucers”. So the other...my antagonist said, “Is it impossible that there are flying saucers? Can you prove that it’s impossible?” “No, I can’t prove it’s impossible. It’s just very unlikely”. At that he said, “You are very unscientific. If you can’t prove it impossible then how can you say that it’s unlikely?” But that is the way that is scientific. It is scientific only to say what is more likely and what less likely, and not to be proving all the time the possible and impossible. To define what I mean, I might have said to him, "Listen, I mean that from my knowledge of the world that I see around me, I think that it is much more likely that the reports of flying saucers are the results of the known irrational characteristics of terrestrial intelligence than of the unknown rational efforts of extra-terrestrial intelligence." It is just more likely. That is all, and it is a very good guess. And we always try to guess the most likely explanation, keeping in the back of our minds the fact that if it does not work, then we must discuss the other possibilities.

There was, for instance, for a while, a phenomenon called super-conductivity, there still is the phenomenon, which is that metals conduct electricity without resistance at low temperatures and it was not at first obvious that this was a consequence of the known laws with these particles. Now that it has been thought through carefully enough, it is seen in fact to be fully explainable in terms of our present knowledge.

There are other phenomena, such as extra-sensory perception, which cannot be explained by our knowledge of physics here. However, that phenomenon has not been well established, and we cannot guarantee that it is there. If it could be demonstrated, of course, that would prove that physics is incomplete, and it is therefore extremely interesting to physicists whether it is right or wrong. Many, many experiments exist which show that it doesn't work. The same goes for astrological influences. If it were true that the stars could affect the day that it was good to go to the dentist - in America we have that kind of astrology - then the physics theory would be wrong, because there is no mechanism understandable in principle from these things that would make it go. That is the reason that there is some scepticism among scientists with regard to those ideas.

Now you see of course that with this method we can disprove any definite theory. We have a definite theory, a real guess, from which you can clearly compute consequences which could be compared to experiment and in principle we can get rid of any theory. You can always prove any definite theory wrong. Notice however that we never prove it right.

Suppose you invent a good guess, calculate the consequences, and discover every time that the consequences you have calculated agree with experiment. The theory is then right? No, it is simply not proved wrong.

Another thing I must point out is that you cannot prove a vague theory wrong. If the guess that you make is poorly expressed and rather vague, and the method that you use for figuring out the consequences is a little vague —you are not sure, and you say, “I think everything’s right because it’s all due to so and so, and such and such do this and that more or less, and I can sort of explain how this works...” then you see that this theory is good, because it cannot be proved wrong! Also if the process of computing the consequences is indefinite, then with a little skill any experimental results can be made to look like the expected consequences. You are probably familiar with that in other fields. ‘A’ hates his mother. The reason is, of course, because she did not caress him or love him enough when he was a child. But if you investigate you find out that as a matter of fact she did love him very much, and everything was all right. Well then, it was because she was overindulgent when he was a child! By having a vague theory it is possible to get either result. The cure for this one is the following: if it were possible to state exactly, ahead of time, how much love is not enough, and how much love is over-indulgent, then there would be a perfectly legitimate theory against which you could make tests. It is usually said when this is pointed out--when you are dealing with psychological matters things can’t be defined so precisely. Yes, but then you cannot claim to know anything about it.

Leading Questions - Yes Prime Minister




Bernard Woolley: He's going to say something new and radical in the broadcast.

Sir Humphrey: What, that silly Grand Design? Bernard, that was precisely what you had to avoid! How did this come about? I shall need a very good explanation.

Bernard Woolley: Well, he's very keen on it.

Sir Humphrey: What's that got to do with it? Things don't happen just because Prime Ministers are very keen on them! Neville Chamberlain was very keen on peace.

Bernard Woolley: He thinks ... he thinks it’s a vote winner.

Sir Humphrey: Ah, that’s more serious. Sit down. What makes him think that?

Bernard Woolley: Well the party have had an opinion poll done and it seems all the voters are in favour of bringing back National Service.

Sir Humphrey: Well have another opinion poll done to show that they’re against bringing back National Service.

Bernard Woolley: They can’t be for and against.

Sir Humphrey: Oh, of course they can Bernard! Have you ever been surveyed?

Bernard Woolley: Yes, well not me actually, my house … Oh, I see what you mean.

Sir Humphrey: You know what happens: nice young lady comes up to you. Obviously you want to create a good impression, you don’t want to look a fool, do you?

Bernard Woolley: No

Sir Humphrey: So she starts asking you some questions: Mr. Woolley, are you worried about the number of young people without jobs?

Bernard Woolley: Yes

Sir Humphrey Appleby: Are you worried about the rise in crime among teenagers?

Bernard Woolley: Yes.

Sir Humphrey Appleby: Do you think there is lack of discipline in our Comprehensive Schools?

Bernard Woolley: Yes.

Sir Humphrey Appleby: Do you think young people welcome some authority and leadership in their lives?

Bernard Woolley: Yes.

Sir Humphrey Appleby: Do you think they respond to a challenge?

Bernard Woolley: Yes.

Sir Humphrey Appleby: Would you be in favour of reintroducing National Service?

Bernard Woolley: Oh, well I suppose I might.

Sir Humphrey Appleby: Yes or no?

Bernard Woolley: Yes.

Sir Humphrey: Of course you would, Bernard. After all, you can hardly say no to that. So they don’t mention the first five questions and they publish the last one.

Bernard Woolley: Is that really what they do?

Sir Humphrey: Well, not the reputable ones, no, but there aren’t many of those. So alternatively the young lady can get the opposite result.

Bernard Woolley: How?

Sir Humphrey Appleby: Mr. Woolley, are you worried about the danger of war?

Bernard Woolley: Yes.

Sir Humphrey Appleby: Are you worried about the growth of armaments?

Bernard Woolley: Yes.

Sir Humphrey Appleby: Do you think there's a danger in giving young people guns and teaching them how to kill?

Bernard Woolley: Yes.

Sir Humphrey Appleby: Do you think it's wrong to force people to take arms against their will?

Bernard Woolley: Yes.

Sir Humphrey Appleby: Would you oppose the reintroduction of National Service?

Bernard Woolley: Yes.

Sir Humphrey Appleby: There you are, you see, Bernard. The perfect balanced sample.