December 23, 2024

Pentagon to Use AI to Create Deepfake Internet Users

The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.

The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”

The Pentagon has already been caught using phony social media users to further its interests in recent years. In 2022, Meta and Twitter removed a propaganda network using faked accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to those outlined by JSOC. A 2024 Reuters investigation revealed a Special Operations Command campaign using fake social media users aimed at undermining foreign confidence in China’s Covid vaccine.

Last year, Special Operations Command, or SOCOM, expressed interest in using video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording, for “influence operations, digital deception, communication disruption, and disinformation campaigns.” Such imagery is generated using a variety of machine learning techniques, generally using software that has been “trained” to recognize and recreate human features by analyzing a massive database of faces and bodies.

This year’s SOCOM wish list specifies an interest in software similar to StyleGAN, a tool released by Nvidia in 2019 that powered the globally popular website “This Person Does Not Exist.” Within a year of StyleGAN’s launch, Facebook said it had taken down a network of accounts that used the technology to create false profile pictures. Since then, academic and private sector researchers have been engaged in a race between new ways to create undetectable deepfakes and new ways to detect them. Many government services now require so-called liveness detection to thwart deepfaked identity photos, asking human applicants to upload a selfie video to demonstrate they are a real person — an obstacle that SOCOM may be interested in circumventing.
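The generative approach described above can be sketched in miniature: a GAN-style generator maps a random latent vector to an image, so every freshly sampled vector yields a new face that "does not exist." The toy network below uses random, untrained weights purely to illustrate that latent-to-image mapping; real systems like StyleGAN are deep convolutional networks trained on enormous face datasets, and none of the names or sizes here come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": two random, untrained dense layers mapping a
# 64-dimensional latent vector to a 16x16 grayscale image. This only
# illustrates the latent -> image structure of a GAN generator.
W1 = rng.normal(scale=0.1, size=(64, 256))
W2 = rng.normal(scale=0.1, size=(256, 16 * 16))

def generate_face(z: np.ndarray) -> np.ndarray:
    """Map a latent vector z to a fake 16x16 image with pixels in [0, 1]."""
    h = np.tanh(z @ W1)       # hidden representation
    img = np.tanh(h @ W2)     # raw pixel values in (-1, 1)
    return ((img + 1) / 2).reshape(16, 16)

z = rng.normal(size=64)       # each newly sampled z gives a new "person"
image = generate_face(z)
print(image.shape)            # (16, 16)
```

In a trained system the weights are learned adversarially, with a discriminator network pushing the generator toward images it cannot distinguish from real photographs; the detection research mentioned above is, in effect, an attempt to build better discriminators after the fact.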

The listing notes that special operations troops “will use this capability to gather information from public online forums,” with no further explanation of how these artificial internet users will be used.

This more detailed procurement listing shows that the United States is pursuing the exact same technologies and techniques it condemns in the hands of geopolitical foes. National security officials have long described the state-backed use of deepfakes as an urgent threat — at least when another country is doing it.