As artificial intelligence (AI) continues to revolutionize the way we create and consume content, educators and employers face a growing challenge: distinguishing between human-written and AI-generated text. Now, a University of Florida (UF) professor is pioneering a solution that could change the game—digital watermarks designed to detect AI-generated writing, even when it’s been altered or paraphrased.

Yuheng Bu, Ph.D., an assistant professor in UF’s Department of Electrical and Computer Engineering, is leading the charge to develop an invisible watermarking method for Large Language Models (LLMs). Using UF’s supercomputer, HiPerGator, Bu and his team are creating a system that embeds imperceptible signals into AI-generated text, allowing for reliable detection while maintaining the quality of the writing.

“If I’m a student and I’m writing my homework with ChatGPT, I don’t want my professor to detect that,” Bu said. “But with this technology, we can ensure that AI-generated content is identifiable, even if it’s been modified.”

The Challenge of AI Detection

LLMs, such as Google’s Gemini and OpenAI’s ChatGPT, are capable of producing human-like text by drawing on vast datasets. While these tools offer immense potential, they also pose significant risks in academic and professional settings, where authenticity and originality are paramount.

A 2024 study by researchers at the University of Reading in the United Kingdom highlighted the difficulty of detecting AI-generated content. The study found that 94% of AI-written assignments submitted under fake student profiles went undetected by educators. As LLMs continue to evolve, distinguishing between human and AI writing is becoming increasingly challenging—and may soon be impossible without proactive measures.

How Watermarking Works

Bu’s watermarking technology addresses this issue by embedding invisible signals into AI-generated text. These signals act as verifiable evidence of AI authorship, even if the text is paraphrased or rewritten. The key innovation lies in the method’s adaptability: it ensures that the watermark remains robust against common modifications, such as synonym replacement, while preserving the natural flow and quality of the text.

Unlike existing watermarking methods, which can degrade text quality or stop working once the text is edited, Bu’s approach applies watermarks to only a subset of the text during generation. The result is better writing quality and greater resistance to removal attempts.
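To make the idea concrete, here is a minimal sketch of one widely studied family of LLM watermarks, in which a secret key pseudo-randomly splits the vocabulary into a “green list” and a “red list” at each step and the model is nudged toward green tokens. This illustrates the general technique, not Bu’s specific algorithm; the vocabulary size, bias strength, and function names are assumptions made for the example.

```python
# Minimal sketch of a "green list" text watermark, shown for illustration
# only; this is the generic technique, NOT Bu's published algorithm.
import hashlib
import numpy as np

VOCAB_SIZE = 50_000   # assumed vocabulary size
GREEN_FRACTION = 0.5  # fraction of tokens marked "green" at each step
BIAS = 2.0            # assumed logit boost given to green tokens

def green_list(prev_token: int, key: bytes) -> np.ndarray:
    """Use the secret key and the previous token to pseudo-randomly
    partition the vocabulary into green (favored) and red tokens."""
    digest = hashlib.sha256(key + prev_token.to_bytes(4, "big")).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.random(VOCAB_SIZE) < GREEN_FRACTION  # boolean green mask

def watermarked_logits(logits: np.ndarray, prev_token: int, key: bytes) -> np.ndarray:
    """Boost green-token logits before sampling, so the generated text
    statistically over-uses green tokens without visible artifacts."""
    return logits + BIAS * green_list(prev_token, key)
```

Applying the bias to only some generation steps, as Bu’s method does with a subset of the text, trades a little detection signal for noticeably more natural prose.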

“Even if a user completely rewrites the watermarked text, as long as the semantics remain unchanged, the watermark remains detectable with high probability,” Bu explained.
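The article does not detail how the watermark survives a full rewrite. One common approach in the research literature, sketched below as an assumption rather than a description of Bu’s system, is to derive the green list from a coarse semantic fingerprint of the context instead of its exact tokens, so that a paraphrase with the same meaning lands in the same hash bucket and regenerates the same green list.

```python
# Hypothetical sketch: seed the green list from a locality-sensitive hash of
# a context *embedding*, so semantically similar wordings share a seed.
# The embedding source and the number of hyperplanes are assumptions.
def semantic_seed(context_embedding: np.ndarray, key: bytes, n_planes: int = 16) -> int:
    """Sign pattern of the embedding under key-derived random hyperplanes;
    nearby embeddings usually produce the same bit pattern (bucket)."""
    key_rng = np.random.default_rng(
        int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    )
    planes = key_rng.standard_normal((n_planes, context_embedding.shape[0]))
    bits = (planes @ context_embedding > 0).astype(np.uint8)
    return int.from_bytes(np.packbits(bits).tobytes(), "big")  # bucket id as seed
```

Fewer hyperplanes make the buckets coarser and therefore more tolerant of paraphrasing, at the cost of making the green-list partition less context-specific.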

The Key to Detection

One of the critical components of Bu’s system is the use of a private key mechanism. The entity that applies the watermark—such as OpenAI for ChatGPT—holds the key required for detection. End users, such as professors or employers, must obtain this key from the watermarking entity to verify the presence of a watermark.
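Under the same illustrative scheme sketched above, detection is a simple statistical test that only a key holder can run: re-derive each step’s green list and check whether green tokens appear far more often than chance would allow. The detection threshold here is an assumption.

```python
from math import sqrt

def detect(tokens: list[int], key: bytes, threshold: float = 4.0) -> bool:
    """Keyed detection: without `key`, the green lists cannot be
    reconstructed, so only the watermarking entity (or whoever it shares
    the key with) can run this test. Reuses green_list() from above."""
    hits = sum(green_list(prev, key)[tok] for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    # z-score: human text hits the green list ~GREEN_FRACTION of the time;
    # watermarked text lands far above that baseline.
    z = (hits - GREEN_FRACTION * n) / sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return z > threshold
```

A professor or employer could not run this check independently; as the article notes, they would first need the key, or an official detection service, from the model provider.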

However, this raises important questions about intellectual property and accessibility. “A crucial next step is to establish a comprehensive ecosystem that enforces watermarking usage and key distribution,” Bu said. “Alternatively, we need to develop more advanced techniques that do not rely on a secret key.”

A Future of Trust and Authenticity

Bu’s work has already gained recognition in the academic community. He has published multiple papers on AI watermarking, including “Adaptive Text Watermark for Large Language Models” and “Theoretically Grounded Framework for LLM Watermarking: A Distribution-Adaptive Approach,” presented at the International Conference on Machine Learning.

Looking ahead, Bu envisions a future where watermarking is seamlessly integrated into educational institutions and digital platforms. “Watermarks have the potential to become a crucial tool for trust and authenticity in the era of generative AI,” he said. “I see them being used in schools to verify academic materials and across digital platforms to distinguish genuine content from misinformation.”

As AI continues to shape the way we communicate and create, technologies like Bu’s watermarking system could play a vital role in ensuring transparency and accountability. For educators, employers, and consumers of information, it’s a step toward greater confidence in the content we rely on every day.
