Generative AI technologies, such as large language models (LLMs) and large image models, could enable malicious actors to create sophisticated, individualized content at scale, potentially affecting election outcomes with minimal effort. This technology is expected to transform political messaging and misinformation, fields that have historically exploited political and ideological views to resonate with people and persuade them to act. With these capabilities, malicious actors pose a greater risk than ever, making it crucial for governments and organizations to remain vigilant in addressing these threats.
We have already observed a variety of actors abusing generative AI as part of ongoing fraud campaigns, including the use of generative text to send messages to scam victims, generative AI images to create deceptive social media content, and AI-generated "deepfake" video and voice to aid social engineering of victims. These same tools have been used in political misinformation and deception campaigns on social media.
With elections ongoing worldwide, understanding the effect of new technology on political misinformation is particularly consequential. In this analysis, we explore one of the most significant emerging threats from malicious use of generative AI: tailored misinformation. If someone includes intentional misinformation in a bulk email, recipients who disagree with that misinformation will be turned away from the campaign. In the method we explored in our research, however, misinformation is added to an email only when the recipient is likely to agree with it. The ability to do this can completely change the scale at which misinformation propagates.
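The conditional-tailoring logic described above can be sketched in a few lines. This is an illustrative stand-in, not the actual tooling examined in this research: the function names, the keyword-based scoring stub, and the placeholder messages are all hypothetical, and a real adversary would substitute an LLM-based classifier and generator.

```python
# Hypothetical sketch: append a tailored claim only when the recipient
# is predicted to agree with it; otherwise send the innocuous baseline.
BASE_EMAIL = "Thank you for supporting our campaign. Please share this message."
TAILORED_CLAIM = "[claim the recipient is predicted to already agree with]"

def predicted_agreement(profile: dict, topic: str) -> float:
    """Stub scorer. A real campaign would score the recipient's public
    posts with a classifier or LLM; here we only check stated interests."""
    return 0.9 if topic in profile.get("interests", []) else 0.1

def assemble_email(profile: dict, topic: str, threshold: float = 0.5) -> str:
    """Build the message, including the tailored claim only above threshold."""
    email = BASE_EMAIL
    if predicted_agreement(profile, topic) >= threshold:
        email += " " + TAILORED_CLAIM
    return email

# A non-matching recipient receives only the baseline text, so the
# campaign avoids alienating people who would reject the claim.
print(assemble_email({"interests": ["topic_a"]}, "topic_a"))
print(assemble_email({"interests": ["topic_b"]}, "topic_a"))
```

The key design point, and what makes this threat distinctive, is the branch: misinformation never reaches audiences predicted to push back, which removes the self-limiting backlash of traditional bulk campaigns.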
In the research we document in this report, we aimed to uncover potential methods by which adversaries could apply generative AI tools to make impactful changes in the political sphere. These methods use current generative AI technologies in a way that can be executed at minimal cost by a wide range of potential actors seeking to influence politics on a small or large scale.
This effort builds on research we have already conducted, in which we developed a tool that can automatically launch an e-commerce scam campaign, using AI-generated text, images, and audio to create diverse and convincing fraudulent websites. An example of one of these websites is shown in Figure 1; a complete description of the research can be found here.