
How AI Is Reshaping Lobbying and Threatening Democracy

Artificial intelligence is revolutionising lobbying, from corporate advocacy to parliamentary select committees.

Large language models can analyse legislation, draft persuasive communications, and flood pre-AI democratic processes with AI-generated content at unprecedented scale.

These efficiency gains may seem promising for legitimate advocacy, but the same technologies threaten democratic representation, transparency, and accountability.

The Use of AI in Lobbying

Large language models (LLMs), such as those behind ChatGPT, can be deployed to monitor proposed legislation, assess its relevance to specific entities, and automatically draft persuasive submissions or letters to lawmakers [1].

Research has shown that "state-of-the-art" LLMs from 2023 achieved 70.9% accuracy in determining bill relevance, with performance increasing significantly as models improved [1].
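The bill-screening workflow can be sketched in a few lines. This is a hypothetical illustration, not the cited study's actual code: it assumes some LLM client exposing a text-completion call, and the function and field names are invented for the example.

```python
# Hypothetical sketch of LLM-based bill-relevance screening.
# Assumes an external LLM client with a `complete(prompt) -> str` method
# (not shown); only the prompt assembly and verdict parsing are real here.

def build_relevance_prompt(company: str, business: str,
                           bill_title: str, bill_summary: str) -> str:
    """Assemble a prompt asking an LLM whether a bill is relevant to a company."""
    return (
        f"Company: {company}\n"
        f"Business: {business}\n"
        f"Bill title: {bill_title}\n"
        f"Bill summary: {bill_summary}\n"
        "Is this bill at least somewhat relevant to the company? "
        "Answer YES or NO, then give a one-sentence explanation."
    )

def parse_relevance(llm_reply: str) -> bool:
    """Interpret the model's YES/NO verdict from its free-text reply."""
    return llm_reply.strip().upper().startswith("YES")
```

In a pipeline like the one described above, a positive verdict would then trigger a second prompt asking the model to draft a persuasive letter to the relevant lawmaker.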

‘PolicyMate’ and ‘Prismos’ are two commercially available products already used by European lobbyists. They can transcribe committee hearings, analyse lengthy documents, and monitor legislative developments in real time [2].

AI can enable psychographic profiling by analysing vast datasets drawn from social media, browsing history, purchase history, and demographic information. This allows the crafting of hyper-personalised messages that exploit individuals' psychological traits, values, and fears.

Political campaigns have already used AI to identify swing voters and deploy precisely framed “nudges” through targeted advertising, chatbots, and social media. These techniques are increasingly being adopted by lobbyists to influence lawmakers and electorates, and machine automation makes them fast, cheap, and scalable [3].

AI can also accelerate astroturfing: the manufacture of artificial public participation and opinion. Generative text models can produce unique, convincing public submissions and communications at scale, creating an illusion of widespread grassroots participation [1].

During the 2017 net neutrality debate, over 8 million bot-generated comments flooded the US Federal Communications Commission; modern AI can go further, generating submissions unique enough to evade detection [3]. New Zealand's Parliament recently confronted this threat when select committee submissions were dismissed because they were allegedly AI-generated [4].
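The detection gap is easy to demonstrate. The 2017 template comments were largely caught because they repeated the same text; a naive similarity check of that kind (this is an illustrative sketch, not any agency's actual method) flags verbatim copies but is blind to AI paraphrases, which share almost no wording with the template.

```python
# Illustrative sketch: shingle-overlap duplicate detection, of the kind
# that catches copy-pasted template comments, and why paraphrased
# AI-generated text slips past it. Example texts are invented.

def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping n-word sequences (word shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two texts' shingle sets (0.0 = disjoint, 1.0 = identical)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

template = "I oppose this rule because it harms consumers and small businesses."
copy = "I oppose this rule because it harms consumers and small businesses."
paraphrase = "This proposal should be rejected; it damages households and local firms."
```

A verbatim copy scores 1.0 against the template and is easily flagged, while the paraphrase shares no three-word shingles at all, so any similarity threshold that catches the copy misses it entirely.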

In 2023, the National Party (NZ) used AI to create imagery of fake healthcare workers and crime victims for attack advertisements, while the Act Party used AI imagery of Māori and Polynesian people without disclosure [5]. Deepfakes can be produced rapidly and at scale; in these instances they potentially reinforced cultural stereotypes while undermining voters' ability to distinguish AI-generated from authentic content [5].

Ethical Implications

AI in lobbying threatens the principle that elected officials should respond to genuine constituents. When legislators receive AI-generated communications indistinguishable from authentic citizen appeals, they lose the ability to gauge true public sentiment, complicating an environment where most citizens already do not engage fully in democracy [6]. Law is itself information, derived from the information fed into the lawmaking process; when that input is AI-generated, AI inevitably influences the law [1].

In most jurisdictions, lobbyists using AI face no requirement to disclose that use; only Canada and the European Union mandate disclosure of AI usage in lobbying activities [7]. Even when content is labelled as AI-generated, it still exerts influence. LLMs operate as black boxes: they cannot explain their hidden reasoning to committees or commissions [3]. Because users of LLMs cannot provide explanations that do not exist and cannot be created, responsibility can be deflected away from users and towards AI companies. Such opacity makes accountability impossible and enables covert manipulation of democratic processes [3].

AI systems trained on biased, or simply human, data are aligned to that data; stereotypes can be embedded in data by humans without any classification or identification [5]. The New Zealand examples demonstrate how AI-generated imagery can reproduce offensive representations of people or groups [5]. Research has shown that AI tools can systematically generate content reinforcing discrimination against marginalised communities, a serious concern when such tools are deployed to influence the policies and laws affecting those communities [7].

If AI begins influencing lawmaking in ways that do not directly extend human intentions, it corrupts law's fundamental role as information created by the expression of democratic values [1]. The most concerning scenario is AI eventually using law as training data to understand human preferences: if AI influenced the creation of those laws, the entire alignment process becomes a circular, corrupted loop [1]. This is a threat to democratic legitimacy.

Societal Implications

AI-powered micro-targeting enables different voters to receive completely contradictory messages from the same political actor, each crafted to exploit that individual's demographic fears and values [3]. This prevents a shared reality; people lose the common factual ground needed for debate. AI-powered lobbying traps people in "epistemic bubbles" where they encounter only targeted information reinforcing existing beliefs [3]. Basic social media algorithms already do this to some extent, but AI-generated hyper-personalised messaging compounds the effect, intensifying political polarisation and making democratic compromise increasingly difficult [6].

Soon after the public release of ChatGPT, detecting AI-generated content became a major goal; so far no reliable detection methods exist, and as AI has developed, detection has only become harder. The social impact may be generalised nihilism [6]: when AI and non-AI content are indistinguishable, when politicians may be deepfaked, and when public consultations are flooded with bot submissions, the cognitive response becomes "believe nothing", as seen in David Seymour's response to select committee submissions [4]. This collapse of trust corrodes the social cohesion essential for democratic societies to function. Declining trust in media, government, and other institutions creates space for authoritarianism and conspiracy theories to grow [3].

AI-powered lobbying exacerbates existing power imbalances [1]. Paid lobby firms, corporations, and well-funded organisations possess resources to implement AI in advocacy that ordinary citizens and community organisations often cannot match. Once the process is set up, a single entity can generate thousands of personalised lobbying communications in minutes; individual citizens tend to write only one submission [1]. This threatens to transform democracy into a kind of algorithmic oligarchy, where policy and law become responsive primarily to those who can afford AI-powered lobbying and advocacy [7].

As AI becomes more agentic in lobbying functions, we risk creating democratic processes in which humans are increasingly peripheral [1]. Building on the "dead internet theory", I propose a "dead select committee theory": bots are deployed to write select committee submissions, which are then read by other bots, with summaries of advice considered by MPs who themselves act like bots [4]. This dehumanisation of political engagement, reducing citizens to statistics for algorithmic processing, fundamentally contradicts the democratic values of deliberation and persuasion [3].

References

  1. J. Nay, “Large language models as corporate lobbyists,” 2023. [Online]. Available: https://arxiv.org/abs/2301.01181
  2. P. Haeck, “How AI could reshape the lobbying game,” 2025. [Online]. Available: https://www.politico.eu/newsletter/politico-eu-influence/how-ai-could-reshape-the-lobbying-game/
  3. P. Henderson, D. J. Cole, and N. Baulis, “How AI is changing democracy: Nudging,” 2023.
  4. M. Daalder, “The dead select committee theory,” 2025. [Online]. Available: https://newsroom.co.nz/2025/06/05/the-dead-select-committee-theory/
  5. B. Isaacs, “Playing politics with AI: why NZ needs rules on the use of ‘fake’ images in election campaigns,” 2025. doi: https://doi.org/10.64628/AA.5smcpn9rc.
  6. S. Kreps and D. Kriner, “How AI threatens democracy,” 2023. [Online]. Available: https://muse.jhu.edu/pub/1/article/907693
  7. M. Rychert, A. Sultan, and M. Mialon, “Editorial: AI and new digital technologies have transformed alcohol and other drug industries lobbying,” 2024. doi: 10.1108/dhs-05-2024-072.