According to social scientists, a humbler mindset could lead to more honest and reproducible research.
In the children’s chapter book series Zoey and Sassafras, which my own two kids adore, young Zoey has to figure out how to save magical creatures with mysterious injuries and ailments. Zoey learns the fundamentals of conducting an experiment from her scientist mother: observe, hypothesize, test, and conclude. Throughout the series, Zoey discovers that failed experiments, while disappointing, are an inevitable part of the scientific process.
Like Zoey, most aspiring scientists are encouraged by their teachers to be open to making mistakes and to refining their ideas. In theory, this humble thinking should remain foundational as students progress to become established scientists. However, psychologists Rink Hoekstra and Simine Vazire contend in a Nature Human Behaviour article published October 28 that the practice of science, particularly the process of publishing findings in scientific journals, is far removed from this “tell it like it is” ideal. Its tone is considerably haughtier.
“I believe we are taught implicitly to brag about our achievements,” says Hoekstra of the University of Groningen in the Netherlands.
Hoekstra and Vazire of the University of Melbourne in Australia argue that scientists should be willing to admit when they are incorrect, a concept called “intellectual humility” by psychologists. The authors write that this humble approach goes beyond transparency. “Owning our limitations… necessitates a commitment to bringing them to the forefront, taking them seriously, and accepting the consequences.”
Intellectual humility, according to psychologists, encourages people to learn for the sake of learning, reduces political polarization, and prompts people to scrutinize news articles for disinformation.
A humble attitude could also help restore trust in the social sciences. For more than a decade, the field has been in a state of crisis as researchers have repeatedly attempted and failed to replicate original findings. The ongoing problem has prompted soul-searching among many scientists. Personality psychologist Julia Rohrer started the Loss-of-Confidence Project in 2016, asking researchers to submit work they no longer believed in, along with a detailed explanation of why they had changed their minds. But while publicly flagging flaws in one’s own work is reactive, Rohrer of the University of Leipzig in Germany believes that intellectual humility in science would be proactive, allowing researchers to avoid common problems from the start.
Because their careers typically depend on publishing research papers in top-tier journals, scientists may feel pressured to overstate their findings, according to Hoekstra. They may exaggerate a study’s novelty, manipulate statistics to hide uncertainties in the data, gloss over unsuccessful trials, or imply that theoretical findings are closer to real-world application than they are. The publication process inadvertently encourages this practice, Hoekstra points out: journal editors and the paper reviewers who approve studies prioritize clean narratives over more nuanced ones.
Hoekstra and Vazire suggest that change must begin with the gatekeepers. Reviewers, in particular, can make a positive contribution to the solution without jeopardizing their careers. “One of the few jobs in academia where you may say whatever you want is reviewing,” Hoekstra says.
Hoekstra explains to Science News how each component of a scientific publication can be imbued with intellectual humility, from the abstract that sets up the work to the discussion that draws conclusions.
Title and abstract:
It’s critical to set up the nuances at the start of a study. For example, if a study was conducted with a small sample size, researchers should not imply that their findings apply to everyone. Furthermore, researchers should report on all of the study’s experiments, not simply the ones that produced the best results.
Introduction:
Researchers should not overstate how much their discoveries contribute to current understanding. Nor should they cherry-pick data from earlier studies to make it appear that the current conclusions are backed by a mountain of evidence. According to Hoekstra, researchers frequently treat this section of the paper as a persuasive argument. Instead, researchers should be open and honest about similar findings, as well as any disputes or debates surrounding the research topic.
Methods:
The goal of this section, according to Hoekstra, is for an outside researcher to be able to replicate the study by following the instructions. “The recipe should be so detailed… that you can’t go wrong.” Scientists, however, often leave out details about timing. That includes basic information such as what time of day the data were collected and when key judgments were made. When, for example, were specific individuals excluded from the study? And which decisions were made before or after analyzing the data?
Though it is still the exception rather than the rule, a growing number of journals now require researchers to preregister their research plans with an online service — outlining their hypotheses, research design, and analyses — before starting the work. This can help shield researchers from bias. “Even if you don’t want to do anything wrong and want to follow the rules,” Hoekstra says, “the instinct is to play around with your data to discover what works and what doesn’t.”
Results:
Rather than focusing only on what the data show, researchers should also examine where the data may fall short. One strategy is to run several analyses to see how seemingly insignificant decisions in the research design, such as who is excluded or how key variables are measured, affect the outcomes.
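To make the idea concrete, here is a minimal sketch of that strategy, sometimes called a sensitivity or “multiverse” analysis: the same made-up dataset is reanalyzed under every combination of two seemingly minor design choices, to see how much the estimated effect moves. The data, the exclusion rule, and the outlier cap are all invented for illustration, not taken from the article.

```python
# Sensitivity ("multiverse") analysis sketch: re-run the same analysis
# under every combination of minor design choices and compare results.
from itertools import product
from statistics import mean

# Hypothetical scores on a 1-5 scale for two groups; None marks a
# participant who failed an attention check.
group_a = [4.1, 3.8, None, 5.0, 4.4, 2.9]
group_b = [3.2, None, 3.9, 3.1, 2.8, 3.5]

def clean(scores, drop_failed, cap_outliers):
    # Design choice 1: drop participants who failed the check,
    # or keep them and impute the scale midpoint (3.0).
    vals = [s for s in scores if s is not None or not drop_failed]
    vals = [s if s is not None else 3.0 for s in vals]
    # Design choice 2: optionally cap high outliers at 4.5.
    if cap_outliers:
        vals = [min(s, 4.5) for s in vals]
    return vals

# Estimate the group difference under all four specifications.
results = {}
for drop_failed, cap_outliers in product([True, False], repeat=2):
    a = clean(group_a, drop_failed, cap_outliers)
    b = clean(group_b, drop_failed, cap_outliers)
    results[(drop_failed, cap_outliers)] = round(mean(a) - mean(b), 2)

for spec, effect in results.items():
    print(spec, effect)
```

In this toy case the estimated effect ranges from about 0.53 to 0.74 depending on the choices, which is exactly the kind of variation a humble results section would report rather than hide.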
In addition, researchers should consider the real-world meaning of their statistical findings. Outside the social sciences, that can be quite straightforward: epidemiologists, for example, can calculate how many dying patients a medicine could potentially save. In the social sciences, though, quantifying the impact of nostalgia on happiness, or how boredom influences whether people follow social distancing guidelines, can be difficult.
Researchers can also turn to Bayesian analysis, which incorporates prior knowledge to estimate the likelihood of a given outcome.
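As a toy illustration of what “incorporating prior knowledge” means, the sketch below uses a conjugate beta-binomial update: a prior belief about a success rate, encoded as a Beta distribution, is combined with new data simply by adding counts. The 40 percent prior and the 7-of-10 result are invented numbers, not figures from the article.

```python
# Beta-binomial Bayesian update sketch. A Beta(a, b) prior behaves
# like having already seen a successes and b failures; observing new
# data just adds to those counts.
prior_a, prior_b = 4, 6        # prior: roughly "4 successes in 10"
successes, failures = 7, 3     # new (hypothetical) observations

# Conjugate update: posterior is Beta(a + successes, b + failures).
post_a = prior_a + successes
post_b = prior_b + failures

prior_mean = prior_a / (prior_a + prior_b)
posterior_mean = post_a / (post_a + post_b)

print(f"prior mean: {prior_mean:.2f}")
print(f"posterior mean: {posterior_mean:.2f}")
```

The prior mean of 0.40 shifts to a posterior mean of 0.55: the new evidence pulls the estimate upward, but the prior keeps it from jumping all the way to the observed 0.70, which is how prior knowledge tempers a single study’s result.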
Discussion:
In the final words of a publication, researchers tend to present a more ironclad story than the evidence allows. Instead, they should emphasize any shortcomings in the study’s methodology and examine how broadly the findings can be applied. Many publications, for example, include a limitations section that briefly summarizes the study’s probable flaws. Rather than an afterthought, those limitations should serve as the foundation for the entire discussion.
“In order to be viewed as strong or informed, we often sweep doubt under the rug,” Hoekstra explains. “I believe it would be far more powerful to recognize that there will always be uncertainty.”
R. Hoekstra and S. Vazire. Aspiring to greater intellectual humility in science. Nature Human Behaviour. Published online October 28, 2021. doi: 10.1038/s41562-021-01203-8.