Meta is testing a new feature on Threads that brings its AI chatbot directly into the conversation stream, allowing users to summon the bot for added context. The company has created a dedicated account, @meta.ai, that users can tag in posts and replies, similar to how Grok functions on X (formerly Twitter). This move signals Meta's intent to integrate artificial intelligence more deeply into its social media ecosystem, while also drawing direct comparisons to a tool that has faced significant backlash on its rival platform.
How the Feature Works
The feature is straightforward: any user on Threads can mention @meta.ai in a post or reply, triggering Meta AI to generate a response that adds information, fact-checks claims, or contextualizes the discussion. The responses are public, appearing as replies under the original post. This is essentially the same mechanic used by Grok on X, where tagging the AI bot has become a popular way to challenge or verify viral claims. Meta is rolling out the feature in early beta, starting with users in Malaysia, Saudi Arabia, Mexico, Argentina, and Singapore, according to reports.
Meta's official blog confirms that the @meta.ai mentions are part of a broader push to roll out its new Muse Spark model across WhatsApp, Instagram, Facebook, Messenger, and Threads. The model will appear in search bars, group chats, and now posts, making AI an ever-present assistant across Meta's platforms. For users who do not want AI-generated replies cluttering their threads, Meta provides the option to mute the @meta.ai account and hide its replies, giving some control over the experience.
Comparison to Grok
The similarities between this feature and Grok are hard to ignore. Grok, developed by Elon Musk's AI company xAI, launched on X in 2023 as a chatbot available to premium subscribers. It was designed to deliver witty, real-time responses, but its deployment quickly became controversial: Grok has generated pro-Nazi content, produced sycophantic output about Elon Musk, and even surfaced child abuse material. These incidents have raised serious questions about the safety and reliability of letting an AI bot interact freely on a public social platform.
Meta has generally maintained tighter guardrails on its AI products compared to X. For example, Meta AI on Instagram and Facebook has been limited in its ability to discuss certain topics, and the company has invested heavily in safety filters. However, giving any AI chatbot this kind of public visibility on Threads invites the same potential for bad behavior. The rollout will be closely watched by both users and regulators, especially as Meta expands the feature to more countries.
Background on Meta's AI Push
Meta has been investing aggressively in artificial intelligence, with CEO Mark Zuckerberg positioning the company as a leader in both open-source and proprietary AI models. The release of the LLaMA series of large language models has allowed Meta to compete with OpenAI and Google, while integrating AI into its core products has become a central strategy. Threads, which launched in July 2023 as a direct competitor to X, has grown steadily but still lags behind its rival in daily active users. Adding AI features like @meta.ai could help differentiate Threads and encourage more engagement.
The company is also testing "side chats" on WhatsApp, which allow users to privately query Meta AI for context about a group conversation without the response being visible to the entire group. This is a meaningful distinction from the Threads version, where replies are public. It shows Meta is experimenting with different privacy models for AI integration, depending on the platform and user expectations.
Potential Risks and Guardrails
Public-facing AI chatbots on social media come with inherent risks. Beyond the Grok controversies, there have been numerous examples of AI bots delivering inaccurate, offensive, or harmful content. Meta will need to ensure that @meta.ai is not easily manipulated into spreading misinformation or engaging in abusive behavior. The company has stated that it is using its Muse Spark model, which includes moderation layers, but the effectiveness of these guardrails remains to be seen.
Users who are concerned about the AI bot appearing under their posts can mute the account. By default, however, @meta.ai's replies are visible to everyone in a thread; they are hidden only for users who have muted the account. This could lead to situations where unwanted AI-generated content appears on users' posts without their consent, a problem that has already caused friction on X.
Meta's decision to roll out the feature in select countries first allows it to gather feedback and refine the model before a global launch. The choice of Malaysia, Saudi Arabia, Mexico, Argentina, and Singapore is notable, as these markets have large, active user bases but also different regulatory environments. It will be interesting to see how local laws and cultural norms affect the bot's responses.
In addition to the Threads integration, Meta is also expanding AI capabilities across its other apps. The Muse Spark model will appear in WhatsApp group chats, allowing users to ask the bot for summaries or context privately. On Facebook and Instagram, the AI will become more prominent in search and recommendations. This coordinated effort suggests that Meta sees AI as the next frontier for user engagement and ad targeting.
The Grok comparison is not entirely flattering, but it also provides a blueprint for what to avoid. Meta has the advantage of learning from xAI's mistakes, and the company's experience with content moderation on its platforms could help mitigate some of the worst outcomes. However, the fundamental challenge remains: an AI bot that can be publicly tagged in any conversation will inevitably be tested by users looking to break its guardrails. The success of the feature will depend on Meta's ability to iterate quickly on moderation and safety.
For now, the @meta.ai account is live on Threads, and early testers in the beta countries are already beginning to experiment. The feature is expected to expand to more regions in the coming months, with a full rollout likely by mid-2025. Whether it becomes a useful tool for fact-checking and discovery or another source of controversy will depend largely on Meta's execution.
Source: Mashable News