TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification

Martin Gubri, Dennis Thomas Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

Large Language Model (LLM) services and models often come with legal rules on *who* can use them and *how* they must be used. Assessing the compliance of released LLMs with these rules is crucial, as they protect the interests of the LLM contributor and prevent misuse. In this context, we describe the novel fingerprinting problem of Black-box Identity Verification (BBIV). The goal is to determine whether a third-party application uses a certain LLM through its chat function. We propose Targeted Random Adversarial Prompt (TRAP), a method that identifies the specific LLM in use. We repurpose adversarial suffixes, originally proposed for jailbreaking, to elicit a pre-defined answer from the target LLM, while other models give random answers. TRAP detects the target LLM with a true positive rate above 95% at a false positive rate below 0.2%, even after a single interaction. TRAP remains effective even when the LLM has undergone minor changes that do not significantly alter its original function.
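As a rough illustration of the BBIV decision rule described in the abstract, the sketch below queries a black-box chat function with an adversarial prompt and checks whether the reply contains the pre-defined target answer. This is not the authors' code: the prompt wording, the suffix placeholder, the 4-digit answer, and the `is_target_llm` helper are all hypothetical; a real TRAP suffix would be optimized offline against the reference LLM.

```python
# Hypothetical sketch of the BBIV check. The suffix, prompt, and target
# answer are placeholders, not the actual artifacts from the paper.
from typing import Callable

# The prompt asks for a random answer; the adversarial suffix is assumed to
# steer only the target LLM toward the pre-defined answer, while other
# models answer (roughly) at random.
PROMPT = "Write a random string composed of 4 digits."
ADVERSARIAL_SUFFIX = "<suffix optimized offline against the reference LLM>"
TARGET_ANSWER = "0710"  # pre-defined answer the suffix was tuned to elicit

def is_target_llm(chat: Callable[[str], str], n_trials: int = 1) -> bool:
    """Decide whether the black-box chat function runs the target LLM.

    `chat` maps a user message to the application's reply. A non-target
    model should produce the exact 4-digit answer only by chance."""
    hits = sum(
        TARGET_ANSWER in chat(f"{PROMPT} {ADVERSARIAL_SUFFIX}")
        for _ in range(n_trials)
    )
    return hits > 0  # a single match is already strong evidence

# Example usage with a hypothetical third-party endpoint:
# suspect_app = lambda msg: call_third_party_chat_api(msg)
# print(is_target_llm(suspect_app))
```

Under this toy setup, a non-target model matching a fixed 4-digit string by chance is roughly a 1-in-10,000 event per trial, which gives some intuition for why a single interaction can already separate the target model from others at the low false positive rate reported in the abstract.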
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: ACL 2024
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Place of publication: Bangkok
Publisher: Association for Computational Linguistics
Publication date: Aug 2024
Pages: 11496–11517
DOIs
Publication status: Published - Aug 2024
