As The Intercept reports, a wishlist put forward by the Joint Special Operations Command (JSOC) — a furtive counterterrorism group within the US Department of Defense (DoD) — reveals the agency’s interest in using generative AI to create fake internet users.
That’s despite the US government’s persistent warnings that deepfakes and other AI-generated content will deepen the misinformation crisis and lead to a muddier information ecosystem for everyone.
In the document, JSOC explains that it’s seeking “technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content” for use by Special Operations Forces. This “solution,” JSOC adds, “should include facial and background imagery, facial and background video, and audio layers.”

According to the wishlist, JSOC wants SOF agents to “use this capability to gather information from public online forums.”
As The Intercept reported last year, the Pentagon expressed interest in deepfakes as a means of improving and expanding influence efforts run by the DoD’s Special Operations Command (SOCOM), writing in a 2023 procurement that it sought “more encompassing, disruptive” technologies “larger in scope” than then-current tools.

And with election day swiftly approaching, it’s worth noting that Project 2025 — a blueprint for a projected Donald Trump presidency penned by dozens of close allies — salivates over the prospect of using AI to expand surveillance and spying efforts, as a Futurism deep-dive into the policy playbook showed.

“This will only embolden other militaries or adversaries to do the same,” Heidy Khlaaf, chief AI scientist at the AI Now Institute, told The Intercept, “leading to a society where it is increasingly difficult to ascertain truth from fiction and muddling the geopolitical sphere.”