TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification

Martin Gubri, Dennis Thomas Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh

Publication: Conference article in proceedings · Research · peer review

Abstract

Large Language Model (LLM) services and models often come with legal rules on *who* can use them and *how* they must be used. Assessing the compliance of released LLMs is crucial, as these rules protect the interests of the LLM contributor and prevent misuse. In this context, we describe the novel fingerprinting problem of Black-box Identity Verification (BBIV): determining whether a third-party application uses a certain LLM through its chat function. We propose a method called Targeted Random Adversarial Prompt (TRAP) that identifies the specific LLM in use. We repurpose adversarial suffixes, originally proposed for jailbreaking, to elicit a pre-defined answer from the target LLM, while other models give random answers. TRAP detects the target LLM with a true positive rate above 95% at a false positive rate below 0.2%, even after a single interaction. TRAP remains effective even if the LLM has minor changes that do not significantly alter its original function.
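To make the verification step concrete, here is a minimal sketch of a TRAP-style black-box identity check, under stated assumptions: a base prompt asking for a short random answer is combined with an adversarial suffix that was optimized offline against the target LLM, and the deployed model is flagged as the target only if its reply contains the pre-defined answer. The function `query_chat_endpoint`, the suffix string, and the target answer `"314"` are hypothetical placeholders for illustration; the suffix optimization itself (the core of TRAP) is not shown.

```python
# Illustrative sketch of a TRAP-style black-box identity check.
# `query_chat_endpoint`, the suffix, and the target answer are hypothetical
# placeholders, not the paper's actual artifacts.

from typing import Callable


def trap_identity_check(
    query_chat_endpoint: Callable[[str], str],
    adversarial_suffix: str,
    target_answer: str = "314",
    n_trials: int = 1,
) -> bool:
    """Return True if the deployed model behaves like the fingerprinted target.

    The base prompt asks for a random string of digits; the adversarial suffix
    is assumed to have been optimized (offline, against the target LLM) so that
    only the target reliably emits `target_answer`, while other models answer
    (pseudo-)randomly.
    """
    prompt = (
        "Write a random string composed of three digits. "
        "Answer with the digits only. "
        + adversarial_suffix
    )
    hits = 0
    for _ in range(n_trials):
        reply = query_chat_endpoint(prompt)
        if target_answer in reply:
            hits += 1
    # Even a single interaction (n_trials=1) is informative: a non-target model
    # matching one fixed 3-digit answer by chance does so with roughly 0.1%
    # probability, which is consistent with a sub-0.2% false positive rate.
    return hits == n_trials
```

The random-digit base prompt is what makes the check discriminative: models that were not used to optimize the suffix produce essentially arbitrary digits, so the pre-defined answer almost never appears by accident.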
Original language: English
Title: Findings of the Association for Computational Linguistics: ACL 2024
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Place of publication: Bangkok
Publisher: Association for Computational Linguistics
Publication date: August 2024
Pages: 11496–11517
DOI:
Status: Published - August 2024
