
Google introduces digital watermarking for AI-generated images

(Qnnflash) — Google has unveiled a new measure aimed at curbing the spread of misinformation: an invisible, permanent watermark on images that identifies them as computer-generated.

The technique, known as SynthID, embeds the watermark directly into images created by Imagen, Google's text-to-image generator. Modifications such as adding filters or changing colors do not affect the AI-generated label.

The SynthID tool can also scan incoming images and determine the likelihood that they were created by Imagen, checking for the watermark at three levels of certainty: detected, not detected, and possibly detected.
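Google has not published how the detector's internal scoring maps onto those three verdicts, but the general idea of a three-way confidence label can be sketched as follows. The function name, score scale, and thresholds here are illustrative assumptions, not SynthID's actual API.

```python
# Hypothetical sketch of a three-way watermark verdict. The thresholds,
# 0-to-1 score scale, and function name are assumptions for illustration;
# they are not Google's actual SynthID implementation.

def classify_watermark(score: float, high: float = 0.9, low: float = 0.1) -> str:
    """Map a detector confidence score in [0, 1] to one of three verdicts."""
    if score >= high:
        return "detected"
    if score <= low:
        return "not detected"
    return "possibly detected"

print(classify_watermark(0.95))  # detected
print(classify_watermark(0.02))  # not detected
print(classify_watermark(0.50))  # possibly detected
```

The middle band is the interesting design choice: rather than forcing a binary answer, an ambiguous score is surfaced to the user as uncertainty.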

In a recent blog post, Google said that while the technology is not perfect, its internal testing shows it is effective against many common image manipulations.

An early version of SynthID is currently available to a limited number of Vertex AI customers, developers who use Google's generative AI platform. According to the company, SynthID, which was built by Google's DeepMind team in partnership with Google Cloud, will continue to evolve and may be integrated into more Google products or made available to third parties.

Deepfakes and Photoshopped Images

With deepfake technology advancing and altered images and videos growing ever more realistic, technology companies are scrambling for reliable ways to identify and flag manipulated content. In recent months, AI-generated content has repeatedly gone viral: an AI-generated image of Pope Francis wearing a puffer jacket spread widely, and AI-generated images depicting the arrest of former President Donald Trump circulated broadly before his indictment, drawing substantial public attention.

In June, Vera Jourova, Vice President of the European Commission, called on signatories of the EU Code of Practice on Disinformation, a group that includes Google, Meta, Microsoft and TikTok, to deploy technology that can identify such content and clearly label it for users.

With SynthID, Google joins a growing number of startups and big tech companies trying to tackle the problem. Some, such as Truepic and Reality Defender, have names that reflect the stakes of the effort: protecting our very notion of reality and distinguishing the authentic from the fake.

Monitoring the source of content

The main driving force behind digital watermarking initiatives so far has been the Coalition for Content Provenance and Authenticity (C2PA), a group backed by Adobe. Google, by contrast, has largely pursued its own approach.

In May, Google unveiled a tool called "About this image" that lets users see when an image found on its platform was first indexed, where it first appeared, and where else it has shown up online.

The company has also announced that every image generated by Google's AI will carry a markup in the original file, intended to provide context if the image is encountered on another website or platform.
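Google has not detailed the markup format, but the general mechanism of carrying a label inside the image file itself can be illustrated with a PNG text chunk, written here using the Pillow library. The "ai_generated" key is a made-up placeholder, not Google's actual scheme.

```python
# Illustration only: embedding a contextual label inside an image file
# using a PNG text chunk (via the Pillow library). This is not Google's
# actual markup format; the "ai_generated" key is a hypothetical example.
import io
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), "white")   # stand-in for a generated image

meta = PngInfo()
meta.add_text("ai_generated", "true")       # hypothetical key/value label

buf = io.BytesIO()
img.save(buf, format="PNG", pnginfo=meta)   # label travels inside the file

buf.seek(0)
reopened = Image.open(buf)
print(reopened.text["ai_generated"])        # the label survives a round trip
```

Unlike a watermark woven into the pixels, metadata like this can be stripped when a file is re-encoded, which is why the two approaches are complementary.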

However, with AI technology developing faster than humans can keep pace, it remains unclear whether these technical solutions will fully solve the problem. Earlier this year, OpenAI, the company behind DALL-E and ChatGPT, acknowledged that its own tool for detecting AI-generated text, as opposed to images, was imperfect, and cautioned that its results should be treated with skepticism.
