MIT’s new PhotoGuard protects your photos from unwanted AI edits

MIT's new "PhotoGuard" technique provides a unique defense against those who want to manipulate your photos with artificial intelligence.

Dall-E and Stable Diffusion were just the beginning. As AI systems proliferate and companies work to differentiate their offerings from their competitors', online chatbots, with the likes of Shutterstock and Adobe leading the way, are gaining the power to edit images as well as create new ones. But these new AI-powered capabilities also bring familiar pitfalls, such as the unauthorized alteration or outright theft of existing artwork and images online. While watermarking techniques can help mitigate the risks of theft, the new “PhotoGuard” technique developed by MIT CSAIL can help prevent unauthorized modifications.

How does PhotoGuard work?

PhotoGuard works by altering selected pixels in an image in a way that disrupts the AI’s ability to understand what the image is. These “irregularities,” as the research team calls them, are invisible to the human eye but easily read by machines. The “encoder” attack method introduces them by targeting the model’s latent representation of the image, the complex mathematics that describes the position and color of every pixel, essentially preventing the AI from understanding what it is looking at.
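In practice, an encoder-style attack of this kind boils down to a small projected-gradient optimization over the pixels. The PyTorch sketch below illustrates the idea only; it is not PhotoGuard’s actual code, and the `encoder` model, the decoy `target_image` (for example, a gray image), and the hyperparameters are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def encoder_attack(image, target_image, encoder, steps=200, eps=8/255, step_size=1/255):
    """Nudge `image` so the encoder's latent for it matches `target_image`'s latent.

    image, target_image: tensors in [0, 1], shape (1, 3, H, W)
    encoder: a frozen model mapping images to latent representations
    eps: L-infinity budget that keeps the perturbation invisible to the eye
    """
    with torch.no_grad():
        target_latent = encoder(target_image)  # latent the attack steers toward

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        latent = encoder(image + delta)
        loss = F.mse_loss(latent, target_latent)
        loss.backward()
        with torch.no_grad():
            # projected gradient step: move the latent toward the decoy,
            # then clip back into the eps ball and the valid pixel range
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).detach()
```

Because the perturbation is capped by a tiny per-pixel budget, the protected photo looks unchanged to a person while the model’s internal representation of it is scrambled.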

The more advanced and computationally intensive “diffusion” attack method, on the other hand, camouflages an image as a different image in the eyes of the AI. This method selects a target image and optimizes the perturbations in the protected image so that it resembles that target. Any edits an AI tries to make to these protected images are applied to the fake “target” image instead, resulting in an unrealistic-looking output.
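Conceptually, the diffusion attack runs the same kind of pixel optimization, but the loss is computed on the output of the entire editing pipeline rather than on a latent vector. The sketch below is an assumption-laden outline rather than the authors’ implementation; the `edit_pipeline` helper is hypothetical and is assumed to be differentiable end to end, which is also why this attack is far more computationally intensive.

```python
import torch
import torch.nn.functional as F

def diffusion_attack(image, target_image, edit_pipeline, prompt,
                     steps=50, eps=8/255, step_size=1/255):
    """Perturb `image` so edits produced by `edit_pipeline` collapse
    toward `target_image` instead of a realistic result.

    `edit_pipeline(image, prompt)` is assumed to run the whole
    text-guided edit and return an image tensor while remaining
    differentiable, which is what makes this attack expensive.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        edited = edit_pipeline(image + delta, prompt)   # full edit, end to end
        loss = F.mse_loss(edited, target_image)         # pull the edit toward the decoy
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()      # signed-gradient descent step
            delta.clamp_(-eps, eps)                     # stay within the invisibility budget
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).detach()
```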

“The encoder attack causes the model to think that the input image (to be edited) is another image (for example, a gray image),” Hadi Salman, an MIT PhD student and lead author of the study, told Engadget. “The diffusion attack, on the other hand, forces the model to make edits toward some target image (which can also be a gray or random image).”

“A collaborative approach involving model developers, social media platforms and policymakers offers a strong defense against unauthorized image manipulation. Working on this urgent issue is of great importance today,” Salman said in a statement. “While I am pleased to contribute to this solution, more work is needed to make this protection practical. Companies developing these models need to invest in designing robust countermeasures against the potential threats posed by these AI tools.”
