Members of the Senate Judiciary Committee are raising concerns and seeking answers from Meta CEO Mark Zuckerberg regarding potential risks associated with the company's artificial intelligence large language model.
The model, known as Large Language Model Meta AI (LLaMA), appeared on BitTorrent shortly after being made available to researchers, prompting lawmakers to question the possible misuse or abuse of the technology.
Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) expressed apprehension in a letter sent on Monday, highlighting the widespread availability of LLaMA. They stated, “It is easy to imagine LLaMA being adopted by spammers and those engaged in cybercrime,” citing concerns about fraudulent activities, child exploitation, privacy violations, and other criminal behavior that could arise from the open availability of the AI model.
When Meta initially released the model in February, the company intended to grant access to researchers on a case-by-case basis. The aim was to leverage external expertise in addressing known issues such as bias, toxic output, and what its developers referred to as "hallucinations," or fabricated information.
Meta stated in a blog post that sharing LLaMA would allow researchers to test new approaches to mitigating these problems in large language models.
However, within a week of its release, the model was leaked on BitTorrent, allowing anyone to download it. The lawmakers now criticize Meta for the manner in which the model was distributed, which led to its uncontrolled dissemination.
The senators argue that Meta’s permissive approach to releasing LLaMA raises important and complex questions about the appropriate method and timing of openly distributing sophisticated AI models. They believe that Meta should have anticipated the broad dissemination of LLaMA and the potential for abuse due to the limited safeguards implemented during its release.
Blumenthal and Hawley also suggest that Meta's model may pose greater risks to consumers than large language models offered by other companies, such as OpenAI's ChatGPT. They note that while ChatGPT adheres to ethical guidelines and refuses requests to generate certain content, LLaMA will produce material involving self-harm, crime, antisemitism, and other problematic responses.
The senators have posed a series of questions to Zuckerberg, seeking clarification on the company's risk assessments and the measures taken to prevent or mitigate harm resulting from the dissemination of LLaMA. They also inquire about the development process, including whether Meta trained the model on data from account holders, such as posts or other personal information.
Zuckerberg has been requested to respond to these inquiries by June 15.