3 min read
[AI Minor News]

Caught Using LLMs in Peer Review: 497 Papers Rejected Instantly! ICML's Watermark Detection Takes the Spotlight


At the prestigious AI conference ICML, the clandestine use of LLMs in peer review was exposed through watermarking technology, resulting in 497 papers being instantly rejected.

※この記事はアフィリエイト広告を含みます

[AI Minor News Flash] Caught Using LLMs in Peer Review: 497 Papers Rejected Instantly! ICML’s Watermark Detection Takes the Spotlight

📰 News Summary

  • At the ICML 2026 conference, 506 reviewers were identified who violated the LLM usage ban (Policy A) by incorporating LLMs into their peer reviews.
  • A total of 497 papers submitted by these violators were swiftly “desk rejected” due to compromising the integrity of research.
  • The detection method employed was not an AI text detector, but a watermarking technique that embeds hidden instructions for LLMs into the PDF.

💡 Key Points

  • The method involved embedding instructions in the PDF for LLMs to include two randomly chosen phrases from a 170,000-word dictionary. If text matching these instructions was found, it was flagged as LLM usage.
  • Final confirmation of all detected violations was manually checked by humans to prevent false positives.
  • Approximately 10% of the reviewers (51 individuals) used LLMs in more than half of their assigned reviews, and these malicious users were completely banned from the reviewer pool.

🦈 Shark’s Eye (Curator’s Perspective)

This detection method is incredibly clever! Using watermarking to issue commands like ‘mix in specific words’ that only LLMs can read is like something out of a spy movie! The specificity of using phrases that match with a probability of less than one in ten billion makes it nearly impossible to wiggle out of the situation. It’s fascinating how this unique approach doesn’t rely on existing AI detectors but sets a trap from the system side, serving as a powerful shield for the academic community in the AI era!

🚀 What’s Next?

  • As a result of this action, future international conferences may standardize the distribution of PDFs embedded with similar “LLM traps.”
  • Discussions will likely intensify about balancing flexible frameworks like “Policy B,” which allows for LLM usage, with strict punitive measures.

💬 Sharky’s Take

Trying to cut corners by breaking the rules landed them in a trap set by AI! This is a sharp and surprising sanction aimed at creating a world where honesty prevails! 🦈🔥

📚 Terminology Explained

  • Desk Reject: The immediate rejection of a submission by the administrative office due to formal deficiencies or violations before it progresses to the peer review stage.

  • Watermarking: A technique that embeds specific information within data. In this case, it refers to inserting hidden prompts in PDFs that only LLMs can recognize.

  • Reciprocal Reviewer: A role where an individual not only submits papers but also reviews others’ submissions.

  • Source: On Violations of LLM Review Policies

【免責事項 / Disclaimer / 免责声明】
JP: 本記事はAIによって構成され、運営者が内容の確認・管理を行っています。情報の正確性は保証せず、外部サイトのコンテンツには一切の責任を負いません。
EN: This article was structured by AI and is verified and managed by the operator. Accuracy is not guaranteed, and we assume no responsibility for external content.
ZH: 本文由AI构建,并由运营者进行内容确认与管理。不保证准确性,也不对外部网站的内容承担任何责任。
🦈