Add Meta to the list of companies that have had AI wreak havoc on their internal systems. According to a report from The Information, an AI agent, working on behalf of an engineer, provided guidance that ultimately led to sensitive user data being exposed to people who weren’t authorized to see it.

As is often the case with these situations, like the one that led to an AI agent deleting critical code and knocking a server offline at Amazon, the autopsy reads like a comedy of errors. Per The Information, it started with a Meta employee asking a technical question on an internal forum designed for employees to help each other when issues arise. An engineer saw the question and asked an AI agent to analyze it, and the agent ended up posting a response under the engineer's name. The original poster saw the guidance and, thinking it was coming from a fellow Meta employee, decided to act on it.

Turns out the AI agent didn't quite know what it was talking about. When the employee acted on its advice, the change reportedly made a massive amount of data, including sensitive company and user information, available to Meta employees who did not have clearance to view or access it. The exposure lasted for about two hours before it was fixed.

It’s not the first time someone at Meta has trusted an AI agent a bit too much. Earlier this year, Summer Yue, the director of safety and alignment at Meta’s superintelligence lab, handed the open-source AI agent OpenClaw access to her inbox. It ended up deleting all of her emails, even as she pleaded with it to stop.

Maybe that’s why Meta is looking outside of its own walls to find someone to help out with security. Wired reports that Moxie Marlinspike, the person behind Signal and its open-source encryption protocol, is working with Meta to bring end-to-end encryption to its AI chatbots.

Marlinspike has been working on an encrypted chatbot called Confer, and will reportedly be helping Meta integrate the technology into its own AI offerings—though his platform will continue to operate independently, so it doesn’t seem he’ll be joining the company.

“We are using LLMs for the kind of unfiltered thinking that we might do in a private journal – except this journal is an API endpoint to a data pipeline specifically designed for extracting meaning and context,” he wrote in a blog post. “As Meta builds more AI products beyond the basic chat paradigm, the privacy technology from Confer will be a part of the foundation of everything that is to come.”
