The Justice Department inspector general cautioned that the lack of guidance or strategy for sharing information with social media companies about foreign malign influence threats to U.S. elections could create free speech risks. SDI Productions / Getty Images

FBI lacks strategy for sharing information with social media platforms about foreign influence threats to U.S. elections

The Justice Department inspector general warned about the risk of sharing information with foreign-owned social media platforms, the FBI not maintaining relationships with companies outside of Silicon Valley and disinformation created using artificial intelligence.

The Justice Department does not have a comprehensive strategy or specific guidance for sharing information with social media companies about foreign malign influence threats to U.S. elections, according to a report from its inspector general released on Tuesday. 

The FBI is the lead agency for investigating foreign malign influence operations and regularly informs social media companies about associated threats on their platforms. Focus on this issue surged after Russia’s attempts to interfere in the 2016 presidential election. 

The inspector general reported, however, that the absence of a strategic approach has created shortcomings in DOJ's efforts to combat the threat.

For example, FBI officials told investigators that the process for sharing information with domestic social media companies would not work for foreign-owned platforms due to national security risks. 

Many national security experts have warned about the popular app TikTok, which is owned by a Chinese company. President Joe Biden earlier this year signed into law a bill to require that company to either sell the app to a U.S. entity within a year or cease operating in the U.S.

The report also found that the FBI does not maintain strong relationships with social media companies outside of San Francisco and could be unprepared for artificial intelligence-enabled disinformation like deepfakes. A deepfake is a manipulated video that seems realistic but uses fabricated images and sounds to make it appear that a person said or did something that they did not actually say or do. 

The inspector general cautioned that the lack of guidance or a strategy could create free speech risks too. 

“While there are no apparent First Amendment implications from the FBI simply sharing information about foreign malign influence threats with social media companies, concerns may arise if that information is communicated in such a way that those communications could reasonably be perceived as constituting coercion or significant encouragement aimed at convincing those companies to act on the shared information in a way that would limit or exclude the speech of those who participate on their platforms,” investigators wrote. 

The House of Representatives last year, in a party-line vote, passed legislation that would prohibit federal employees from using their official authority to “censor” any private entity, with exceptions for “legitimate law enforcement functions.” The bill’s definition of “censor” includes attempts to influence or coerce the removal or suppression of “lawful speech” from or on a social media platform.

The Supreme Court in June sided, 6-3, with the Biden administration in Murthy v. Missouri, deciding that agencies can communicate with social media companies about disinformation on their platforms. Specifically, the justices concluded that states did not have the authority to bring the lawsuit that argued such federal efforts violated the First Amendment. 

After the inspector general completed its investigation, DOJ and FBI in February implemented standard operating procedures for sharing foreign malign influence threat information with social media companies. The procedures, which are classified, include criteria for determining what constitutes such a threat and requirements for supervisor approval.

The inspector general recommended that DOJ develop a strategy to ensure its information sharing with social media companies about foreign malign influence directed at U.S. elections can adapt to evolving threats and inform the public about its safeguards to protect the First Amendment when sharing such information. DOJ concurred with both recommendations. 

While the inspector general criticized DOJ for not having specific guidance or a comprehensive strategy, the oversight office did note that the department has a model, or process, for sharing information about such threats with social media companies.

The FBI used that model to counter an attempt by two Iranian nationals to influence the 2020 U.S. presidential election. Specifically, the FBI obtained technical information about a video threatening American voters that it shared with a social media company, which was then able to identify it and take it down. 

Investigators also interviewed representatives from four social media companies that had received information from the FBI ahead of the 2018 and 2020 elections and found that each of them was “generally satisfied” with its interactions with the law enforcement agency.