“Beta, did you hear the news? As of February 2025, working from home is going to be banned!” exclaimed my dadima, my paternal grandmother, visibly worried as she clutched the stair railing. “How are people going to keep their jobs?”
I looked at my sister, she looked back at me, and we both nearly keeled over laughing. Clutching our stomachs and fighting to keep straight faces, we responded, “Dadima, don’t worry. I’m sure this is some form of misinformation.”
This highly skewed announcement, gleaned from my dadima’s morning dose of news, was rooted in President Donald Trump’s barrage of executive orders. “Return to In-Person Work,” released on Jan. 20, 2025, stated, “Heads of all departments and agencies in the executive branch of Government shall, as soon as practicable, take all necessary steps to terminate remote work arrangements and require employees to return to work in-person at their respective duty stations on a full-time basis…” The order applies to federal workers in the United States, not to all workers everywhere.
Though a drastic change for federal workers across the nation, the order was never meant to touch remote work in private industries and organizations at large. It was quickly rebranded by a news outlet and implanted in the mind of my smart yet vulnerable grandmother, along with hundreds of thousands of other listeners. Her interpretation of this repackaged narrative was a textbook case of misinformation.
Misinformation runs rampant in today’s world, where news is easily accessed, interpreted, and disseminated. In some cases it stems from mere naïveté: my dadima was unquestionably not ill-intentioned, just innocently confused. It becomes worrisome, however, in two situations: when it is spread with malicious intent, and when it is crafted to change a specific opinion.
According to a Johns Hopkins library guide, the difference between misinformation and propaganda may not be as large as you think. “Because of its historical use, many people associate propaganda with inflammatory speech or writing that has no basis in fact,” says the guide. “In reality, propaganda may easily be based in fact, but facts represented in such a way as to provoke a desired response.” We can thus understand propaganda as an opinion-altering strain of misinformation.
Think back to high school history class, and “propaganda” likely conjures a particular image. Perhaps it’s the poster of Uncle Sam pointing at you, “I Want You for U.S. Army” in bold lettering below his fierce face, or Rosie the Riveter flexing her arm and declaring “We Can Do It!” Over the years, the word has endured, but its meaning has evolved.
As the American Historical Association (AHA) states, “propaganda is not new and modern… the battle for men’s minds is as old as human history.” The AHA even emphasizes the importance of some propaganda in modern-day democracy, particularly “propaganda as promotion”: a political candidate seeking the favor of voters and constituencies “must engage in promotion as a legitimate and necessary part of a political contest.”
This battle for opinion has progressed beyond appealing to voters, however, and into a modern-day war of misinformation. Part of the culprit is the rise of artificial intelligence, with its ability to generate false stories, images, and even websites at the click of a button. What began as an appeal to voter populations has become a war on truth.
NewsGuard, an organization dedicated to rating the transparency of news and information websites, has identified 1,150 “AI-generated news and information sites operating with little to no human oversight” as of Jan. 13, 2025, and is tracking false narratives produced by artificial intelligence tools. Interestingly, NewsGuard reports that such AI-generated websites tend to have relatively “normal” names, which easily masks their automated operation. NewsGuard even found a network of over 150 Russian AI-generated websites hiding under guises such as “DC Weekly” and publishing “egregiously misleading claims,” largely about the war in Ukraine.
Deepfakes are a particularly harmful vehicle for propaganda. A deepfake, as defined by the Merriam-Webster dictionary, is “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” By putting words in people’s mouths, deepfakes can fabricate political support from prominent public figures for or against a candidate, blurring the line of truth.
There were significant concerns about deepfakes and other AI-generated content drastically affecting the 2024 election. A deepfake of former President Joe Biden’s voice urged New Hampshire voters not to vote in the state’s primary and to instead “save [their] vote for the November election.” This deepfake, distributed in Jan. 2024, stoked fears about the election to come. Yet, as NPR correspondent Shannon Bond explained, the expected damage did not ensue. Instead, AI was largely used to create politicized memes and videos intended to act as propaganda. In an interview with NPR, Zeve Sanderson, Executive Director of NYU’s Center for Social Media and Politics, noted that the AI-created propaganda of this past election was “designed to push a narrative, and propaganda works.”
The concern is that AI-generated content is convincing. Researchers at Stanford University’s Institute for Human-Centered AI have found that AI-generated propaganda can be even more persuasive than human-created propaganda, indicating strong human susceptibility to AI-generated content. As large language models and image generators grow more accurate and AI-generated propaganda proliferates, it will become increasingly difficult to discern the truth from the noise.
The suggestion from Bond and other researchers that elections may not be at stake, given the relatively small effect of deepfake-driven propaganda and false information in 2024, is misleadingly comforting: deepfakes are not going anywhere. A recent opinion piece from the World Economic Forum concedes that, indeed, “deepfakes failed to turn the tide in any candidate’s favour,” but warns that “their ineffectiveness does not mean that they are harmless.” Threats to personal safety and harassment are two significant harms that can affect anyone.
Efforts to protect the public are underway. Deepfake-recognition technologies are in development, with several organizations releasing preliminary versions. OpenAI is one such group, having released a detection tool that can identify 98.8% of images created by its most recent image generator, DALL-E 3. Though the government has mounted no large-scale crackdown and passed no comprehensive AI regulation, largely due to free speech and innovation concerns, roughly 120 bills are circulating in Congress. Organizations such as NewsGuard are working tirelessly to catch AI-generated misinformation at its earliest iteration.
The most powerful tool, however, is the one thing we can all keep doing: building awareness and fostering education. This war against truth is not ending anytime soon; instead, it continues to evolve, from political propaganda to personal attacks. We must question the validity of the information we receive and try to fully understand it, just as my dadima did by voicing her confusion. In an era of ever-changing technologies that can exploit limitless naïve assumptions, these are our strongest weapons.
Gauri Sood ’26 (gaurisood@college.harvard.edu) loves looking through the AI-generated content in her grandparents’ WhatsApp group chats.