Discussing Disinformation #3: Professor Alan Jagolinzer
Earlier this year, the CCOD sat down with Alan Jagolinzer – Professor of Financial Accounting at the University of Cambridge and organiser of the Cambridge Disinformation Summit.
CCOD: Could you start by reflecting on your decision to become involved in the disinformation space? How has this changed your personal and professional life over the past few years?
Alan Jagolinzer: I’ve always held strong values around justice, accountability, and transparency. Every relationship is based on trust, and trust is broken without transparency and accountability, period.
I’ve always had a strong skepticism around trust systems, and my profession is fundamentally about trust and information clarity. Financial accounting specifically is designed to create as transparent and accurate an estimation of financial health as one could get. Then there’s an entire layer of accountability infrastructure around it with audits and regulatory enforcement of rules. This is the framework I come in with. We’ve been relying on that ever since the market crash of the late 1920s.
I’m deeply disturbed by the lack of information clarity and the amount of harm that’s possible in some of the information streams we have, including in the financial markets. I think we’re at a point where systems are degraded and some people are intentionally trying to further degrade them. There’s always been corruption, and anti-corruption is always an underfunded and underserved practice. But I think even the fundamental support for anti-corruption is degrading, and we’re getting to a point where we’re normalising corruption in some places, particularly in markets. I find that really disturbing.
The moment I decided to pursue a career in financial accounting was the moment I began engaging with information systems, and trusted information systems in particular. That’s just what we do. Two decades of work, not including my undergraduate work, have been in this space of how we communicate better.
I worked as a fellow at the International Accounting Standards Board, which is a policy organisation that defines the global financial reporting standards. All of that is about how can we do it better, how can we get better clarity, how can we get better information so that people can better trust companies that affect their lives every day. I’ve seen the downstream harms when we don’t do it properly. Even in good structures, when there are mistakes, and especially when there is intentional corruption, people are harmed. Society’s harmed beyond just financial stakeholders.
I talk about this a lot with reference to the Carillion scandal. Carillion was a company that had bad accounting and ended up failing. Not only did shareholders and lenders lose money, but more importantly, they were doing hospital contracts for the UK government, and these hospitals were not being built. They also built railways and cleared snow on Canadian highways, so large sections of society were harmed by management’s failures. There are always downstream systemic societal harms from this type of activity.
I began to move beyond the scope of financial information integrity when I studied how Putin operates disinformation campaigns to support his aggression, how COVID and vaccines were politically weaponised, and how corrupt actors use misleading narratives in sextortion scams, grooming for sexual assaults, and crypto rug pulls.
CCOD: You now run the Cambridge Disinformation Summit – how did that come about? What mobilises it? And specifically about Cambridge: it seems to be becoming one of the central hubs for disinformation studies. Why do you think that is?
Alan Jagolinzer: The original summit was started to feed my own curiosity. I did not understand what we were facing. Because my field is so structured and focused on the financial markets world, there were so many elements that just didn’t make sense to me once we moved away from that world.
Why do people listen to blatant liars who have a track record of bullshitting people? Why do people pay attention to, believe and defend what they say despite all the counter-evidence? To me that’s so foreign.
So I reached outside my field to learn more. I realised that this is such a multidisciplinary question. I was talking to cult psychiatrists, psychologists, sociologists, anthropologists, theologians, people in AI, journalism, law, fact-checking, online trust & safety—anyone I saw who was trying to answer these same questions. I just started reaching out on these fundamental questions, and the very first summit was built around these questions.
What I fundamentally learned is that we are all examining similar structures and questions but we were not communicating across disciplines. People were even surprised to learn that accounting was at the table. They’re like, “What are you doing? Why is this?” And I said, “Information integrity and mitigating information harms is what we do.” That led us into really vibrant discussions about audit, what an audit looks like, and how we built our governance structures.

To your second question, Cambridge has the ability to convene because people want to come. When I was at Stanford, it was the same: one of the few places where, if we invite people, they’ll say yes if they can. Cambridge has this historical presence and this energy of significance. I don’t mean that in an arrogant way. I feel there’s energy in buildings, energy in communities. When we assemble a community around Cambridge – it could be in a pub, it could be outside – there’s just an energy about really getting into very complex global thinking.
Cambridge is one of the few institutions that is inherently global by nature. Everywhere I walk around here, it’s a global discussion of some nature. Our communities and our business school organically just think globally, so we can have a global perspective, and this is a global systemic question.
It happened to be that I was at Cambridge, [Psychology Professor] Sander [van der Linden] was at Cambridge, and his cohort was at Cambridge. That was just serendipity. And as we started looking further, I started noticing that there are many people at Cambridge who are organically working in this space in some format. I learned that [Deepfake expert] Henry Ajder and [Executive director of the Minderoo Centre for Technology and Democracy] Gina Neff were working on themes of technology and democracy. We just started realising that Cambridge inherently had people who were really interested in this.
I had no intention of being a central hub, nor did I have an intention of building something completely new. This isn’t about empire building. This is simply about how we get people from around the world to talk to each other, and we could host an event. We’ll host an event, people will come, we’ll record it, we’ll push it out to YouTube, and we’ll all collectively learn. It was just a global learning exercise.
Right now, we are one of the few institutions where we still have the political oxygen to do this work. We are not yet getting political pressure to shut this kind of thing down. We’re still not yet afraid as an institution to host something like this. I want to make it clear: we’re paying for professional security for our events. We are on the radar of some actors who falsely claim we do censorship work, and many of our colleagues face real threats for doing their work.
We are definitely at risk. But as I said when I opened the last summit, the fact that we are being threatened means that our work matters. They would not threaten us if our work was not important. So, we convene to continue our work.
The conference agenda is still driven by what we sense are the most important questions right now, the hot-button issues. For the 2026 summit, one of the things I’m seeing is a proliferation of parasocial and computer-social relationships. Parasocial relationships are one-way emotional bonds with influencers we see online, with reality-television personalities, or with celebrities. There is an emotional connection that forms, and a sense of a relationship, even though the person on the other side has no idea who we are. This allows the influencer to have a lot of power that can be exploitative. We also see a lot of this influence from AI systems, where people form emotional connections with bots and avatars. We will be hosting a discussion about how and why these relationships are formed. In what ways are they healthy? In what ways are they systemically harmful?
We will also hear from former Australian Prime Minister Malcolm Turnbull, who will discuss the systemic risks to democracy when we allow too-concentrated ownership of media platforms, like newspapers, television, and other information systems. I thought about hosting this topic after hearing attorney Carrie Goldberg say, at the end of the documentary The YouTube Effect, that her greatest fear is that Big Tech has become far more powerful than courts or lawmakers, that these companies and their executives have more information than law enforcement, and that “we’re all just kind of at their mercy.”
The 2026 summit is also about systemic business and economic risks from disinformation. The downstream effects of what’s happening on social media platforms, on news media platforms, and in AI are slamming traditional businesses. I’m not sure business leaders fully understand the risks they face from amplified information algorithms or from other enterprises that are profiting off corrupted information.
So we’re hosting Joseph Stiglitz, a Nobel economist, to discuss systemic economic risks, and Stanford’s Anat Admati, a world-leading financial economist, to discuss systemic risks in banking and crypto. That might help us communicate more precisely with some of the people in finance and business, and maybe they can then become stronger advocates for cleaner information systems.
CCOD: Related to the theme of momentum, we’ve seen different waves of disinformation research at this point, and we seem to be going through a moment where people are pushing away from fact-checking and from legislative interventions altogether. Instead, a lot of people seem to be focusing more on individual media literacy-based solutions. Do you see this as a permanent change?
Alan Jagolinzer: Policy is difficult without enforcement, so let me focus on the enforcement and accountability piece first. We can do fact-checking all day. We can have rules and change policy around it, but if we don’t have viable, credible enforcement, then none of that matters.
I think one problem with fact-checking is that I can flag that you’ve been manipulating and lying, but what are we going to do about it? We know some politicians right now are blatantly lying. We don’t need to run a fact-checking AI program to see that. We know they’re lying. What do we do? There’s simply too little accountability. There’s no enforcement. There’s no courage for accountability. So to me, the core issue is around accountability – consequences if we believe somebody’s manipulating and lying in an intentionally deceitful way to harm others.
The other piece, I think, is recognising that this is a human problem. Fact-checking, whistleblowing, and evidence-based disclosures can only inform people who want to understand this information. We’re at the point now where belief system structures and cognitive issues create powerful barriers to engaging with sensible, vetted information.
For example, let’s assume that a company puts out a financial report and an independent professional team has audited it to the standards, so it is reasonably well vetted. As a financial markets trader, I’m supposed to say this is now more credible and it gives me more comfort. But the problem now is that some people don’t even want to hear it. They don’t care. They might say, “I don’t care what that person or company says. I don’t trust them. I believe the CEO instead because I follow him in my social media feeds.” I’ve had these conversations with audit firm partners: how do you preserve your reputation in a world where the CEO you’re auditing has an online cult following, or a bot attack can make people think you’re not trustworthy, even if you’re doing super careful and well-documented work?
I’m starting to see where having an elite brand or title can actually generate lower trust in some circles, because assessments from traditional “experts” or professionals might seem threatening to an entrenched belief system that supports self and social identity for a community. In these systems, evidence-based or well-vetted data and analysis can be outright rejected because they do not conform with established beliefs and social alignment norms.
So to me, this gets into social and emotional questions. This is why working with people like Sander [van der Linden] is so important, as is working with other scientists, like sociologists and anthropologists, who understand the human condition. We have the most senior scientist from the Vatican on our scientific committee now. We also work with theologians and cultural studies people, because this is really about why people reject evidence despite their own needs.
Fact-checking in this context is really challenging. I am very thankful for fact-checking. I rely on it. I’m glad to see it. But fact-checking requires somebody to want to engage with it. It’s like a psychologist’s dilemma: those who most need to engage with these tools are the ones who are probably going to most vehemently deny that they need to.
CCOD: Circling back to some of the themes of trust, skepticism, transparency, accountability, justice, and the faith aspect. To what extent do you think that mainstream political, media, and to a lesser extent academic institutions need to acknowledge their own past challenges before they provide solutions to something like a disinformation epidemic?
And a second question: Can we solve disinformation by addressing external societal factors rather than reactively seeking to fight disinformation itself online?
Alan Jagolinzer: Let’s start with trust. Everything is a human-level individual assessment. Every human being is assessing trust individually, and trust, in Charles Feltman’s framework, which I actually really like, has four components. The first is competency. The second is reliability. The third is sincerity or authenticity. And the fourth is care.
Those latter two really resonate, and I sense people don’t pay enough attention to them in our fields. We typically say, “Hey, I’m competent and I’m reliable. Trust me because I’m a scientist.” But a lot of this is an emotional assessment.
Trust is based on assessments of relatability and how much people sense that you care, which gets into how opportunists exploit victims’ grievance.
When people are feeling anxious, it’s typically about two of the biggest things: uncertainty about what’s going to happen in the future, and uncertainty about our social identity – our sense of purpose, where we fit in, and how we gain status.
A lot of the grievances are in those two buckets. We’re seeing a lot of grievance around lack of social mobility, lack of financial mobility. This is where we – academics, politicians – need to meet people where they’re at and say, “Hey, I hear you. I understand. Let me hear your stories. Let me understand more about what you’re feeling.”
Then there has to be some sincerity around how we engage with them. A lot of the political movements are targeting this underlying grievance. I think some people engage in good faith. I think many people engage in bad faith, and the bad faith actors are those who are typically exploitative. The get-rich-quick schemes around crypto might be an example. Building community and identity around that I think is highly exploitative.
I don’t like the term “solving disinformation.” I don’t know that one can solve disinformation. Disinformation has been baked into history, and I think disinformation will persist forever. I don’t think one can solve it. I do think, however, that one can engage some of its systemic harms.
First, the necessary condition is you have to get people to trust the evidence that you then bring. It’s not enough to bring evidence. One has to build rapport first. You have to lower the walls of resistance, lower the skepticism, lower the perceived threat of the evidence. That’s a very difficult thing to do in this political environment.
How does one do that? I think in some cases we have to walk away from the titles. It doesn’t mean that we abandon them, but we don’t necessarily have to show up and say, “Hey, I’m at Cambridge, therefore I know everything.” That doesn’t work. That gets into the relatability piece. It comes across as insincere. Some of the discussions we’re going to have at the Summit include how to communicate more authentically with people, to help break down the barriers and open the potential to share more information, so they can assess for themselves whether they are being harmed by disinformation.
Communicating with care and sincerity really resonates, which is why I think some parasocial influencers seem effective with messaging and building loyalty. I think our academic community and politicians can do better here. I don’t see our prime minister engaging parasocially with relatability, and I would like to see him doing it. I’d be happy to discuss this with him.

CCOD: Moving towards the political. You’ve spoken in the European Parliament and you engage with lawmakers. I wondered if you could comment a bit more about where you think the UK sits at the moment in terms of dealing with information threats.
Alan Jagolinzer: Way behind. I don’t think they fully understand the threat.
CCOD: Why do you think that is?
Alan Jagolinzer: If they did, they would be a lot more aggressive about engaging it. They would be actively influencing and addressing the grievance directly, because right now I sense they’ve abdicated the online parasocial space to those who I think are exploitative.
CCOD: Do you think it’s the fault of politicians not knowing enough in the first place, or do you think there’s also a responsibility on the academic or counter-disinformation actors in the space who haven’t done enough to get into the political sphere?
Alan Jagolinzer: I think there’s a lot of normalcy bias. I also think that a lot of the people who have graduated to those levels have done so by leveraging more traditional communications platforms. They’re comfortable with wearing a suit and talking to BBC cameras.
I also think they’ve come up in a society that has been very different. My learning curve on this was rapid because I used to think that if you were quoted in the New York Times, it resonated everywhere. Now it’s “whose podcast are you on?”
I think they haven’t really seen the evolution. For whatever reason, they’re underestimating the power of some of these influencers who are engaging rapid-fire directly into people’s phones.
I think some of this is coming around. There’s been some progress in awareness – the Netflix show Adolescence, for example. There’s been a bit of an awakening. But I don’t sense there’s an urgency around engaging grievance. I just don’t sense urgency around it.
CCOD: Finally, what are some of the things that give you hope about tomorrow’s information landscape? A lot of the discourse is quite pessimistic at the moment.
Alan Jagolinzer: I struggle for hope at the moment. I do. I admit that.
I think one of the things that gives me hope is the growing global field of people who are actively engaged in trying to understand this incredibly complex problem. The fact that people are still engaging in work on how disinformation campaigns are structured, why people deploy them, why people fall for them, and how we can mitigate the harms from them gives me hope for the future, despite the threat environment.
I think there’s a tremendous amount of passion to build calmer, cleaner information systems, and for evidence and logic to inform policy again. I think people are tired of the chaos that flows from the deluge of corrupted information and this is creating movements against these monetised, amplified, and weaponised systems. It’s just a matter of time before the momentum shifts. But it’s going to take courage to speed up that timeline and minimise carnage.
The Cambridge Disinformation Summit takes place April 23–25, 2026.

