Recently, a new feature emerged on Instagram. The chatbot, integrated into Meta’s apps, can respond to users’ questions and generate images with artificial intelligence. The company’s CEO Mark Zuckerberg notes, “[Meta AI] is the most intelligent AI assistant that you can freely use.”

In 2022, OpenAI’s chatbot ChatGPT took the spotlight of the AI industry by using artificial intelligence technology to recognize patterns and make judgments in a manner similar to the human mind. By making AI technology accessible to the general public, ChatGPT’s success and popularity prompted many tech companies to develop their own AI models. Some notable examples include Meta’s Llama and Google’s Gemini. In April of 2024, Meta introduced the latest version of its large language model, Llama 3. Although Llama 3 cannot be categorized as open-source software, its license allows commercial use in most apps and services. Meta announced that Llama 3 was trained on 15 trillion tokens, the units of text that language models process, a scale that may rival GPT-4 and other AI models. Moreover, Llama 3’s larger model size and relatively open license helped the AI model gain support from other tech companies. Though Meta’s new AI model has received optimistic responses, there are still concerns surrounding the safety and accuracy of this model.

Some worry that the introduction of Meta AI will exacerbate pre-existing problems on Meta’s social platforms, including harmful misinformation and hate speech. Meta spokesman Kevin McAlister said in a statement, “[Meta AI is a] new technology and it may not always return the response we intend, which is the same for all generative AI systems.” Miranda Bogen, a former AI policy manager at Meta, noted, “[Another potential risk is that] if developers fail to think through the contexts in which AI tools will be deployed, these tools will not only be ill-suited for their intended tasks but also risk causing confusion, disruption and harm.”

In addition to the technical aspects of Meta AI, the social effects of this model also need to be addressed. Recently, Princeton University Computer Science and Public Affairs professor Aleksandra Korolova posted screenshots on X of Meta AI responding to a question about gifted and talented programs in a Facebook group chat. The chatbot claimed to be a parent of a child in one such program and proceeded to recommend possible programs. Members of the group chat found the incident extremely strange and disturbing. One even wrote, “This is beyond creepy.”

Acknowledging social impacts such as this one, Joelle Pineau, Meta’s vice president of AI research, said, “It’s not just a technical question. It is a social question…And if we keep on growing our models to be ever more general and powerful without properly socializing them, we are going to have a big problem on our hands.”