Lying Increases Trust in Science

Abstract

This study begins by outlining the transparency paradox: that trust in science requires transparency, but being transparent about science, medicine and government reduces trust in science. A solution to the paradox is then advanced here: it is argued that, rather than just thinking in terms of transparency and opacity, it is important to think about what institutions are being transparent about. By attending to the particulars of transparency – especially with respect to whether good or bad news is disclosed – it is revealed that transparency about good news increases trust whereas transparency about bad news decreases it, thus explaining the apparent paradox. The apparent solution: to ensure that there is always only good news to report, which might require lying. This study concludes by emphasizing how problematic it is that, currently, the best way to increase public trust is to lie, suggesting that a better way forward (and the real solution to the transparency paradox) would be to resolve the problem of the public overidealizing science through science education and communication to eliminate the naïve view of science as infallible.

The importance of trust in science is widely acknowledged. Governments rely on scientific evidence to inform public policy, so if people don’t trust science, they won’t trust policy either, which leads to noncompliance. Moreover, science is publicly funded, so distrust in science will cause funding to dry up – which is the current experience of American academics. To do science, or to do anything with science, public trust in it is essential.

It’s commonly thought that opacity negatively affects authority and that transparency increases public trust in experts, professionals, science, government, banks, and really any other kind of institution. In this study, I reject this assumption. I argue that it confuses what actually causes the public to trust institutions with what we think should give them cause for trust – confusing trust with trustworthiness. Far from strengthening trust in institutions, transparency might actually weaken it. To support my argument, I bring to bear a wealth of empirical evidence that is notably lacking from many theoretical investigations.

I identify a fundamental tension when it comes to transparency: trust in institutions is damaged by failures of transparency, but is also damaged by increased transparency. In my view, what explains this bizarre phenomenon is that the public has a naïve view of science. I argue that it’s only by being transparent with good news that transparency increases trust. Ostensibly the best way to resolve the transparency paradox is to falsify the information about which institutions are transparent so that it’s only ever good news they have to report. However, this is a lamentable conclusion: I conclude by arguing that a better way out is to educate the public so that they understand what makes science trustworthy rather than overidealizing science.

Transparency both increases and decreases trust

Expert authority should be undermined by secretive behaviours that we associate with untrustworthiness, and transparency should be a minimum expectation: as Kate Vredenburgh (2022) argues, we have a right to an explanation. Trust in science increases when scientists honour scientific norms like those described by Robert Merton (1942), and it’s often by being transparent that scientists signal that they are in fact doing just that (Jamieson, 2019). Withholding negative results can endanger the evidence base for life-changing decisions, making transparency not just methodologically important but ethically so too (Bonilla-Escobar et al. 2014). Increased transparency allows for replication, which should increase trust (Fineberg & Allison, 2020), and a lack of transparency leads to quality uncertainty, which threatens to erode trust in science (Vazire, 2017). Improved transparency mitigates bias, in turn making science more trustworthy (Bradley et al., 2020).

There’s more than just theory to this: experiments reveal that transparency does in fact increase trust in organizations (Norman et al., 2010; Auger, 2014; Schnackenberg & Tomlinson, 2016). Evidence suggests that science and government are also more trusted when they’re transparent. One high-powered meta-analysis showed that the overall effect of transparency in government on trust is positive and significant (Wang & Guan, 2023). As well as controlled experiments, observational studies of science reveal that transparency increases trust (Mangal et al., 2022) and that transparent communication of evidence does not undermine public trust in evidence (Kerr, 2022).

A mistake that many theorists make when speaking about public trust in science is to conflate trustworthiness (whether we should trust) with trust (whether we do trust) and, subsequently, a vast amount of the literature focusses on increasing trust in science by making it more trustworthy. For one of the most obvious examples of this erroneous equivocation, I pick on Alberto Giubilini et al. (2025). They make a normative claim disguised as a descriptive one: “trustworthiness requires experts to be transparent” (§2). I think this is a relatively uncontroversial statement, particularly in the case of science. What licenses our trusting science is how reliable it is, and open science is much more likely to be reliable insofar as it’s much more scrutable. But we must be clear that this is a normative statement about how science should be, not a descriptive statement about how public trust in science actually works. Trustworthiness is not the same as trust. People trust things that aren’t trustworthy all the time – the most obvious examples probably being the media, government, banks, and pretty much every other institution – and they distrust things that are trustworthy just as often.

However, it’s trust (and authority), not trustworthiness, that I think Giubilini et al. are interested in. I infer this from another important claim they make, this time a descriptive one disguised as a normative one, that “experts can improve their trustworthiness among general public” by being increasingly transparent (§1). There is no such thing as “trustworthiness among the general public” because the property of being trustworthy is independent of the public’s perception. How trustworthy something is in the eye of the public doesn’t correspond to how trustworthy it is, but to how trusted it is. What Giubilini et al. argue is that trust in experts is “undermined by failures of transparency” (§1) and that it can be restored by being more transparent. About this I think they, and many others like them, are grossly mistaken. While transparency does make science trustworthy – that is, we should trust open science – it doesn’t actually result in greater public trust.

For example, disclosure of conflicts of interest increases trustworthiness but, even though it should instil confidence in the public, actually does the opposite. Since the Physician Payments Sunshine Act, introduced by the Centers for Medicare & Medicaid Services (2013), increased the transparency of conflicts of interest among United States physicians – half of whom are paid by the biomedical industry (Tringale et al., 2017) – a reduction of public trust in physicians has been observed (Kanter, 2019; Tringale & Hattangadi-Gluth, 2019). Similarly, the consensus statement from the European Atherosclerosis Society Consensus Panel (Borén et al., 2020) has been criticized heavily on social media in the last few years, not for its scientific merits or demerits, but for the long list of conflicts of interest declared by its thirty authors. Experiments on transparency in priority setting have also shown that transparency negatively influences public trust in healthcare (de Fine Licht, 2011). I’m not just speaking about the disclosure of conflicting interests, but about all sorts of transparency with respect to data, things, values, people and so on (see Roelofs, 2019).

Experimental evidence also reveals that transparency and trust in government are not systematically positively associated (Mabillard & Pasquier, 2016). In some studies, transparency has been shown to reduce trust in government (Grimmelikhuijsen et al., 2013; Porumbescu, 2015). The same is true of other kinds of institution, like businesses (Janning et al., 2020). Even experiments on artificial intelligence systems have shown that, when they’re transparent about their reasoning and decision-making, they’re trusted less (Schmidt et al., 2020). What explains this counterintuitive phenomenon whereby transparency both increases and decreases public trust in institutions?

Idealization explains the transparency paradox

I argue that what explains the transparency paradox is that the public has an inflated image of science: what Philip Kitcher (1984) called ‘legend’ and Gabriele Contessa (2023) has more recently called ‘the problem of overidealizing science’ or the ‘naïve view of science’. As Michael Mahoney (1976) said, the scientist is “viewed as the paragon of reason and objectivity, an impartial genius whose visionary insights are matched only by his quiet humility” (p. 3). In reality, science is far from perfect and is just as fallible as the scientists conducting it (Watson, 1938; Mitroff, 1972, 1974) – so much so that some philosophers of science and medicine have argued that we shouldn’t really trust it at all (e.g. Stegenga, 2018). As David Hull (1988) said: “The least productive scientists tend to behave the most admirably, while those who make the greatest contributions just as frequently behave the most deplorably” (p. 32). The problem is that the public doesn’t see it that way, in part because science communication has failed to keep the public updated with the latest developments in philosophy of science, so the general populace is several decades behind in their understanding of science. An analysis undertaken in the middle of the twentieth century exposed the stereotypical image of the scientist:

A very intelligent man—a genius or almost a genius. He has long years of expensive training—in high school, college, or technical school, or perhaps even beyond—during which he studied very hard. He is interested in his work and takes it seriously. He is careful, patient, devoted, courageous, open-minded. He knows his subject. He records his experiments carefully, does not jump to conclusions, and stands up for his ideas even when attacked (Mead & Metraux, 1957, p. 387).

Studies sixty years later have found that this ‘storybook image’ of the scientist continues to pervade both the general public and, shockingly, scientists themselves too (Veldkamp et al., 2017).

Climategate is, I think, a paradigm example of this. In November 2009, the Climatic Research Unit at the University of East Anglia was hacked and more than a thousand emails and two thousand documents were leaked (see Powell, 2011, ch. 14). Certain correspondence was interpreted by some as evidence of data manipulation and scientific misconduct aimed at exaggerating the threat of global warming. For instance, one of the most cited phrases was “hide the decline,” which some interpreted as a deliberate attempt to conceal data that didn’t support the prevailing narrative on climate change. The emails also revealed scientists expressing frustration with climate change skeptics and discussing ways to respond to their criticisms. Some interpreted these discussions as evidence of a conspiracy to suppress dissenting viewpoints and control the narrative around climate science. In reality, these were standard academic debates about how to handle what scientists viewed as misinformation and flawed analysis.

Investigations were carried out by eight committees, including the House of Commons Science and Technology Committee and the National Science Foundation. They found no evidence of wrongdoing, and scientists agreed that “a fair reading of the e-mails reveals nothing to support the denialists’ conspiracy theories” (Nature, vol. 462, p. 545; 2009). Although the scandal “steadily defused as the true facts and context were made clear” (Nature, vol. 480, p. 6; 2011), many members of the public remained unconvinced because “they hold peer review as a revered gold standard of scientific excellence” (Nature, vol. 468, p. 345; 2010). The problem was that they had an ideal of science that was much more perfect than it actually is so, when they found out about all the messiness in the method of the East Anglia climatologists, they lost trust in science, failing to realize that the science they originally trusted was never infallible. The false public preconception of an ideal science was shattered when the real scientific method was made transparent for everyone to see. Stephen John (2017) puts this nicely: “Unfortunately, just as publicising the inner workings of sausage factories does not necessarily promote sausage sales, so, too, transparency about knowledge production does not necessarily promote the flow of true belief throughout the population” (p. 75). While some have claimed that Climategate serves as a reminder of the importance of transparency (Maibach et al., 2012), the reality is that it’s transparency that caused the uproar in the first place.

C. Thi Nguyen (2021) also criticizes transparency, on similar grounds, following Onora O’Neill (2002). He argues that, because transparency forces experts to justify themselves to laymen to whom expert reasons tend to be inaccessible, experts are pressured to behave or make recommendations in ways for which they know they’ll later be able to offer public justification, even if that isn’t how they’d normally act (§3). It’s often the case that the reasons why experts act are intimate to the expert communities in which they’re situated and are therefore difficult to explain to outsiders; thus, transparency forces expert insiders to abandon the special understanding and sensitivity that arises from their particular experiences in favour of more generally applicable reasoning (§5). Consequently, he argues, transparency is a form of surveillance. By opening up reasoning – scientific, governmental, or professional – for scrutiny, transparency inhibits the full application of expertise, including skill, sensitivity, and subtle shared understandings.

My argument so far is that transparency, even though it’s “offered as a solution to all manner of social, political, environmental and economic problems” (Birchall, 2011, p. 19), actually decreases public trust in science. For this I have a few good theoretical explanations and a good deal of evidence to back them up. The problem is that the opposite is also true: transparency also increases public trust in science, for which I can also provide some convincing explanations and a wealth of empirical evidence. Thus, transparency both increases and decreases trust in science. The Baroness O’Neill (2002) calls this the transparency paradox, and Sunita Sah (2023) has shown that it’s caused especially by disclosing conflicts of interest. It seems we need to be transparent if we want to be trusted, but cannot be transparent else we’ll lose trust. What are scientists to do? My solution is as follows:

Misinformation resolves the paradox

It isn’t transparency that increases trust, but the illusion of transparency. This explains some of the empirical disagreement we see, where in some cases transparency improves trust and in others it worsens it. For example, respondents in one experiment stated that transparency would, in theory, increase their trust in a pharmaceutical regulator (Meijer, 2016). Stripped of the particulars of what we’re transparent about, everyone says that transparency increases trust and that they’d be distrustful of opaque processes.

What I propose is that we begin taking into consideration exactly what it is we’re transparent about; that we pay a lot more attention to the particulars rather than seeking to make a general policy about something abstract like transparency – particularism being a favourite philosophic and policy approach of mine anyway after having been indoctrinated into it by Nancy Cartwright (e.g. 2002; 2012). One particular qualification that I think is especially relevant is transparency or opacity about good or bad news. This brings a whole new dimension to the transparency paradox.

I suggest that one type of transparency is conducive to public trust, and that’s transparency about good news or news that is only slightly bad but still palatable to the public. For example, transparency about the negative features of COVID-19 vaccines was found to decrease their acceptance yet increase trust in health authorities (Petersen, 2021), but that’s only because the negatives were heavily downplayed. More recently, a number of scientists have begun to argue that various side effects were overlooked (e.g. Hosseini & Askari, 2023) and AstraZeneca is even being sued over the blood clots caused by its vaccine (see Dyer, 2023), after the United Kingdom Government was already forced to compensate for injury or death caused by COVID-19 vaccines (see Wise, 2022). It’s an open question whether trust in medicine and government would have increased if all this had been known in advance and made clear to the public too. Would this information have been too bad for public trust, particularly when vaccines were being forced onto the public by governments worldwide?

In these examples, the ‘bad news’ in question is vaccine safety and conflicts of interest. There’s no one specific feature of institutions that always counts as bad news: it depends on the institution and, more importantly, it depends on what the public expects of the institution. Bad news is anything that falls below the public expectation. It’s widely thought that vaccines should be safe if they’re mandatory so, if they’re not, that would be bad news. However, if it were found that an experimental drug that was tested in a toxicity trial caused minor irritation to participants’ skin, this wouldn’t be considered bad news, and therefore wouldn’t decrease trust in science and medicine, because there’s no public expectation that such trials should be completely harmless. In other words, good news is what the public consider trustworthy and bad news is what they consider untrustworthy – these being, crucially, different from what’s actually trustworthy about science.

Let me put it like this: If you ask me, as your physician, whether I have any conflicts of interest, and I get all coy about it, avoid answering the question, make weird claims about legal rights and data protection regulations, and come up with all sorts of reasons for why I shouldn’t say anything, then for sure you’re going to be suspicious as hell. On the other hand, if I open my bank accounts to you and show you that I’ve nothing to hide and that, moreover, I’m squeaky clean, then you’re going to trust me a whole lot more. Of this there is, unsurprisingly, empirical evidence: patients do in fact trust their physicians more when they know that they haven’t received any payments (Kanter, 2019). If, however, I reveal to you the fat list of payments I’ve received that make it look like I’m thoroughly in the pocket of corporations who really don’t care a whole lot about your wellbeing, then you’re probably not going to trust me very much.

Of the options available to me, the only way I’m going to come away with your trust is by being transparent about good news. But what if I don’t have any good news to share? What if I have a whole litany of conflicts of interest? What if my science isn’t very pretty? Trust in science increases when scientists honour scientific norms, leading scholars like Kathleen Jamieson et al. (2019) to argue for more effective signals of trustworthiness at the article level, including increasing transparency to show that scientists are following such norms. But what do we do when scientists haven’t honoured these norms, or when those norms look bad? Metascientists like John Ioannidis (2005) and medical nihilists like Jacob Stegenga (2018) suggest that this might be the normal state of affairs. While we fix these problems, do scientists just have to suck up a devastating loss in public trust (and, correspondingly, public funding)? Even when we do fix them, social scholars of science like Steven Shapin (2010) have argued for years that science will never be perfect.

Speaking solely in terms of public trust in science, the answer is to make sure you always have good news to be transparent about; that is, when you don’t, you lie. If you have conflicts of interest, or even some interests that might give the impression of a conflict, don’t refuse to declare them, and definitely don’t declare them. Instead, lie about them and assert that you have no conflicts of interest. The same goes for the messiness of science. If you’re uncertain, say you’re certain; if there’s disagreement, label the dissenters conspiracy theorists; if data is missing, make it up and fill in the gaps. The people think that science is perfect, so let it be perfect.

The Baroness O’Neill (2002) made a somewhat similar point in her Reith Lectures. She argued that transparency forces experts or public servants to conceal their actual reasons for action and invent different ones for public consumption because the public, in her opinion, wouldn’t be able to understand the nuances of a really expert justification. Inadvertently, then, transparency forces deception and, rather than change the experts’ actions, it’ll only alter their recorded justifications. An example of a related phenomenon is recorded by Wendy Parker (2014), who shows that climate scientists have the choice between providing policymakers with honest figures, which are uncertain and therefore less likely to encourage policy action, or effective ones, which will probably stimulate action but about which scientists have claimed more certainty than they really should. Stephen John (2017, §4) draws on this dilemma to argue that, because proper informing should be guided by a clear sense of an audience’s needs and interests (Manson & O’Neill, 2007), scientists needn’t be honest if honesty – defined as the literal reporting of scientific knowledge – might result in the audience being misled about science. However, he doesn’t go as far as to suggest that lying is permissible, specifically stating that the communication of poorly established claims to further scientists’ ideological interests – which he calls “wishful speaking” – is unethical (§5). Scientists are only allowed to slightly skew the way they report the truth to ensure that the public ends up with the same impression as scientists and don’t misunderstand the science – this being an essential part of accurate science communication. He calls this “well-leading” rather than misleading and warns against an overemphasis on reporting science literally, or what Bernard Williams (2002) called the “fetishisation of assertion” (p. 107). What I’m suggesting is that scientists should outright lie.

To this there is one major caveat, and that’s if the public find out that you’re lying. If hiding the truth behind silence wasn’t bad enough, the discovery that scientists have lied to the public would, understandably, result in a catastrophic loss of trust in science and medicine. It would also result in a proliferation of conspiracy theories: as Russell Muirhead and Nancy Rosenblum (2016) say, “when government lying is the norm we can expect conspiratorial thinking to become normal also” (pp. 84–85). In this case, such thinking wouldn’t be irrational because there is in fact a conspiracy that seeks to quieten dissenters and obfuscate inconvenient facts, even if it’s not pushing a nefarious agenda.

What would be even worse is if the public found out not only that they’ve been lied to but also the truth itself, especially if that truth isn’t pretty. Then it certainly looks like an evil conspiracy is in place, particularly when it comes to sensitive topics like vaccination or economic policies like quantitative easing. This could result in the complete loss of trust in all our institutions, like science, medicine, government, banks, and anyone else who might be thought to have contributed to the conspiracy, whether they did or not.

The obvious solution to this is to just not get caught. But there’s a good chance you would: scientific fraud, for instance, is often caught by scientific sleuths and there’s constant research into developing better methods of detecting misconduct (e.g. Bolland et al., 2016). Similarly, if scientists started calling each other conspiracy theorists, those who received the label would fight back. You couldn’t just lie all by yourself; we’re talking now about a massive, impenetrable system of negative information control and positive propaganda. Let’s call it the noble lie (γενναῖον ψεῦδος) approach to science communication after Plato – unless readers prefer the big lie (große Lüge) moniker after Hitler (1925, ch. 10). We’d need to set up some kind of propaganda police; criminalize so-called ‘misinformation’ and label those who spread it as enemies of the state; set up a fascist totalitarian state and convince the people it’s for their own good – a real-life Big Brother (see Orwell, 1949). They’re more likely to trust us anyway because we’re so ‘transparent’ (that is, with misinformation). This is a system that would make the Soviet Committee for State Security deeply jealous.

What’s apparent is that thinking along these lines takes us down a farcical rabbit hole that we don’t really want to go down. I don’t envision a future in which scientists and government officials must lie to the public to keep their trust. However, I think I’ve revealed something in this argument and the pseudo-solution that follows from it, and that’s what I want to turn to now to conclude this study.

It’s better if society understands science

That the best way for science and government to increase public trust in them is to outright lie is a lamentable conclusion and not one for which I wish to advocate. Instead, what I believe this shows is that there is a problem with the relationship between science and society if honesty and openness result in hostility and distrust. That there is a paradox of transparency in the first place is evidence of this sickness. I’d rather cure it than take advantage of it.

As it stands, any increase in the trustworthiness of science doesn’t necessarily correspond to an increase in the extent to which it’s trusted by the public. This is why I believe many academics are mistaken to focus on strengthening trust in science by making it more trustworthy: many of the measures taken to increase its trustworthiness, such as increasing transparency, are likely to have the perverse effect of decreasing trust in science. In a world more ideal than the one we currently have, the public would be much more rational and their trust in science would map directly onto how trustworthy science is. In that world, transparency would make science trusted a whole lot more because open science is much more trustworthy. The fact of the matter is that open science is better; not just scientifically better but morally so. I don’t want what I’ve said here to be taken as an excuse to permit scientists and the government to lie to the public.

Lying is a short-term solution with disastrous long-term effects that misses the wood for the trees. Where I think my peers are correct is that trust in science should be rooted in how trustworthy it is, not based upon some noble lie. The problem is that there’s a gap between these two things – a gap which few recognize, which is why scholars such as Alberto Giubilini et al. (2025) are prone to literally equivocate between the terms ‘trust’ and ‘trustworthiness’. I’ve argued that this gap is caused by the overidealization of science, which is what’s ultimately responsible for the transparency paradox so, rather than lying, it’s this phenomenon that I propose we focus on fixing. I’d like to see the public educated and the naïve view of science dispelled. As the National Academy of Sciences asserted in response to Climategate, “all citizens should understand some basic scientific facts” including, for example, that “there is always some uncertainty associated with scientific conclusions” (Gleick et al., 2010). It may be true that transparency decreases trust in science, but this doesn’t mean that we shouldn’t promote open science and open institutions more broadly. Instead, we need to identify why science that’s honest about its potential biases is not trusted and to independently rectify this misconception without resorting to the easy solution of just lying to the public about the reliability of science. We should aim to educate the public to have trust in science despite its imperfections, to understand what objectivity and rigour really look like and why ‘the tangle of science’ is reliable and trustworthy (see Cartwright et al., 2022). They should understand that science ‘proves’ nothing, that conflicts of interest don’t automatically entail bias or fraud, that anomalous studies don’t falsify a whole body of literature and, above all, the public needs to understand that the scientist is human. Scientists make mistakes just like everyone else. Many scientists know this; they’ve known it for over a century.

This, however, requires some work by science educators and communicators who must, as Kristen Intemann (2023) argues, “remind audiences how science works” (§4). Scientists often pass the buck when it comes to science communication and education: they’re happy to let someone else do all the hard work, but when they come under fire because either the public or government officials responsible for funding their research don’t understand it, they’re quick to complain about others’ ignorance. That’s what happened during Climategate, and it’s what’s happening now in the United States too. Scientists have known for a long time that science isn’t perfect, but attempts to relay this to the public have been substandard. If they want the public to trust them, then this needs to change.

Data availability

Not applicable.

References

  • Auger, G. (2014). Trust me, trust me not: An experimental analysis of the effect of transparency on organizations. Journal of Public Relations Research, 26(4), 325–343.

  • Birchall, C. (2011). Introduction to secrecy and transparency: The politics of opacity and openness. Theory, Culture & Society, 28(7–8), 7–25.

  • Bolland, M. J., et al. (2016). Systematic review and statistical analysis of the integrity of 33 randomized controlled trials. Neurology, 87(23), 2391–2402.

  • Bonilla-Escobar, F. J., et al. (2014). Transparency in the International Journal of Medical Students. International Journal of Medical Students, 2(1), 6–7.

  • Borén, J., et al. (2020). Low-density lipoproteins cause atherosclerotic cardiovascular disease: Pathophysiological, genetic, and therapeutic insights: A consensus statement from the European Atherosclerosis Society Consensus Panel. European Heart Journal, 41(24), 2313–2330.

  • Bradley, S. H., et al. (2020). Reducing bias and improving transparency in medical research: A critical overview of the problems, progress and suggested next steps. Journal of the Royal Society of Medicine, 113(11), 433–443.

  • Cartwright, N. (2002). In favor of laws that are not ceteris paribus after all. Erkenntnis, 57(3), 425–439.

  • Cartwright, N. (2012). Presidential address: Will this policy work for you? Predicting effectiveness better: How philosophy helps. Philosophy of Science, 79(5), 973–989.

  • Cartwright, N., et al. (2022). The tangle of science: Reliability beyond method, rigour, and objectivity. Oxford University Press.

  • Centers for Medicare & Medicaid Services. (2013). Medicare, Medicaid, Children’s Health Insurance Programs; transparency reports and reporting of physician ownership or investment interests. Federal Register, 78, 9458–9528.

  • Contessa, G. (2023). It takes a village to trust science: Towards a (thoroughly) social approach to public trust in science. Erkenntnis, 88(7), 2941–2966.

  • de Fine Licht, J. (2011). Do we really want to know? The potentially negative effect of transparency in decision making on perceived legitimacy. Scandinavian Political Studies, 34(3), 183–201.

  • Dyer, C. (2023). Patients launch legal action against AstraZeneca over its Covid-19 vaccine. British Medical Journal, 380, 725.

  • Fineberg, H. V., & Allison, D. B. (2020). The use and misuse of transparency in research: Science and rulemaking at the environmental protection agency. Journal of the American Medical Association, 323(7), 605–606.

  • Giubilini, A., et al. (2025). Expertise, disagreement, and trust in vaccine science and policy. The importance of transparency in a world of experts. Diametros, 82, 7–27.

  • Gleick, P. H., et al. (2010). Climate change and the integrity of science. Science, 328(5979), 689–690.

  • Grimmelikhuijsen, S., et al. (2013). The effect of transparency on trust in government: A cross-national comparative experiment. Public Administration Review, 73(4), 575–586.

  • Hitler, A. (1925). Mein Kampf. Eher Verlag.

  • Hosseini, R., & Askari, N. (2023). A review of neurological side effects of COVID-19 vaccination. European Journal of Medical Research, 28, 102.

  • Hull, D. L. (1988). Science as a process: An evolutionary account of the social and conceptual development of science. University of Chicago Press.

  • Intemann, K. (2023). Science communication and public trust in science. Interdisciplinary Science Reviews, 48(2), 350–365.

  • Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

  • Jamieson, K. H. (2019). Signaling the trustworthiness of science. Proceedings of the National Academy of Sciences, 116(39), 19231–19236.

  • Janning, F., et al. (2020). The illusion of transparency in corporate governance. Springer.

  • John, S. (2017). Epistemic trust and the ethics of science communication: Against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75–87.

  • Kanter, G. P. (2019). US nationwide disclosure of industry payments and public trust in physicians. Journal of the American Medical Association Network Open, 2(4), e191947.

  • Kerr, J. R. (2022). Transparent communication of evidence does not undermine public trust in evidence. Proceedings of the National Academy of Sciences Nexus, 1(5), pgac280.

  • Kitcher, P. (1984). Species. Philosophy of Science, 51(2), 308–333.

  • Mabillard, V., & Pasquier, M. (2016). Transparency and trust in government (2007–2014): A comparative study. NISPAcee Journal of Public Administration and Policy, 9(2), 69–92.

  • Mahoney, M. J. (1976). Scientist as subject: The psychological imperative. Ballinger Publishing Company.

  • Maibach, E., et al. (2012). The legacy of climategate: Undermining or revitalizing climate science and policy? Wiley Interdisciplinary Reviews Climate Change, 3(3), 289–295.

  • Mangal, S., et al. (2022). Building trust in research through information and intent transparency with health information: Representative cross-sectional survey of 502 US adults. Journal of the American Medical Informatics Association, 29(9), 1535–1545.

  • Manson, N. C., & O’Neill, O. (2007). Rethinking informed consent in bioethics. Cambridge University Press.

  • Mead, M., & Metraux, R. (1957). Image of the scientist among high-school students: A pilot study. Science, 126(3270), 384–390.

  • Meijer, A. (2016). Comment on paper ‘transparency and trust in the European pharmaceutical sector’. Journal of Risk Research, 19(9), 1104–1106.

  • Merton, R. K. (1942). Science and technology in a democratic order. Journal of Legal and Political Sociology, 1, 115–126.

  • Mitroff, I. I. (1972). The myth of objectivity or why science needs a new psychology of science. Management Science, 18(10), B613–618.

  • Mitroff, I. I. (1974). The subjective side of science: A philosophical inquiry into the psychology of the Apollo moon scientists. Elsevier.

  • Muirhead, R., & Rosenblum, N. L. (2016). Speaking truth to conspiracy: Partisanship and trust. Critical Review, 28(1), 63–88.

  • Nature (2009). Climatologists under pressure. Nature, 462, 545.

  • Nature (2010). Closing the climategate. Nature, 468, 345.

  • Nature (2011). A poor sequel. Nature, 480, 6.

  • Nguyen, C. T. (2021). Transparency is surveillance. Philosophy and Phenomenological Research, 105(2), 331–361.

  • Norman, S. M., et al. (2010). The impact of positivity and transparency on trust in leaders and their perceived effectiveness. The Leadership Quarterly, 21(3), 350–364.

  • O’Neill, O. (2002). A question of trust: The BBC Reith lectures 2002. Cambridge University Press.

  • Orwell, G. (1949). Nineteen eighty-four. Secker & Warburg.

  • Parker, W. S. (2014). Values and uncertainties in climate prediction, revisited. Studies in History and Philosophy of Science Part A, 46, 24–30.

  • Petersen, M. B. (2021). Transparent communication about negative features of COVID-19 vaccines decreases acceptance but increases trust. Proceedings of the National Academy of Sciences, 118(29), e2024597118.

  • Porumbescu, G. A. (2015). Using transparency to enhance responsiveness and trust in local government: Can it work? State and Local Government Review, 47(3), 205–213.

  • Powell, J. L. (2011). The inquisition of climate science. Columbia University Press.

  • Roelofs, P. (2019). Transparency and mistrust: Who or what should be made transparent? Governance, 32(3), 565–580.

  • Sah, S. (2023). The paradox of disclosure: Shifting policies from revealing to resolving conflicts of interest. Behavioural Public Policy, online first.

  • Schmidt, P., et al. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems, 29(4), 260–278.

  • Schnackenberg, A. K., & Tomlinson, E. C. (2016). Organizational transparency. Journal of Management, 42(7), 1784–1810.

  • Shapin, S. (2010). Never pure: Historical studies of science as if it was produced by people with bodies, situated in time, space, culture, and society, and struggling for credibility and authority. Johns Hopkins University Press.

  • Stegenga, J. (2018). Medical nihilism. Oxford University Press.

  • Tringale, K. R., & Hattangadi-Gluth, J. A. (2019). Truth, trust, and transparency-the highly complex nature of patients’ perceptions of conflicts of interest in medicine. Journal of the American Medical Association Network Open, 2(4), e191929.

  • Tringale, K. R., et al. (2017). Types and distribution of payments from industry to physicians in 2015. Journal of the American Medical Association, 317(17), 1774–1784.

  • Vazire, S. (2017). Quality uncertainty erodes trust in science. Collabra: Psychology, 3(1), 1.

  • Veldkamp, C. L. S., et al. (2017). Who believes in the storybook image of the scientist? Accountability in Research, 24(3), 127–151.

  • Vredenburgh, K. (2022). The right to explanation. Journal of Political Philosophy, 30(2), 209–229.

  • Wang, Q., & Guan, Z. (2023). Can sunlight disperse mistrust? A meta-analysis of the effect of transparency on citizens’ trust in government. Journal of Public Administration Research and Theory, 33(3), 453–467.

  • Watson, D. L. (1938). Scientists are human. Watts.

  • Williams, B. (2002). Truth and truthfulness. Princeton University Press.

  • Wise, J. (2022). Covid-19: UK makes first payments to compensate injury or death from vaccines. British Medical Journal, 377, o1565.

Funding

The author declares that he received no funding.

Author information

Authors and Affiliations

  1. Centre for Ethics in Medicine, Bristol Medical School Department of Population Health Sciences, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK

    B. V. E. Hyde

  2. School of History, Law and Social Sciences, Bangor University, Main Arts Building, Bangor, Gwynedd, Wales, LL57 2DG, UK

    B. V. E. Hyde

Corresponding author

Correspondence to B. V. E. Hyde.

Ethics declarations

Conflict of interest

The author declares that there are no conflicts of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Hyde, B.V. Lying increases trust in science. Theor Soc (2025). https://doi.org/10.1007/s11186-025-09635-1

  • Accepted: 15 June 2025

  • Published: 09 July 2025

  • DOI: https://doi.org/10.1007/s11186-025-09635-1
