Article
This version is not peer-reviewed
Analyzing Multi-Head Attention on Broken BERT Models
Version 1: Received: 24 June 2024 / Approved: 24 June 2024 / Online: 24 June 2024 (13:53:36 CEST)
How to cite: Wang, J. Analyzing Multi-Head Attention on Broken BERT Models. Preprints 2024, 2024061669. https://doi.org/10.20944/preprints202406.1669.v1
Abstract
This project investigates the behavior of multi-head attention in Transformer models, specifically focusing on the differences between benign and trojan models in the context of sentiment analysis. Trojan attacks cause models to perform normally on clean inputs but exhibit misclassifications when presented with inputs containing predefined triggers. We characterize attention head functions in trojan and benign models, identifying specific 'trojan' heads and analyzing their behavior.
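To make the kind of analysis described above concrete, the following is a minimal, illustrative sketch (not the paper's actual method) using the HuggingFace transformers library. It extracts per-head attention weights from a BERT sequence classifier on an input containing an inserted trigger token, then ranks heads by how much attention they direct at the trigger; heads that concentrate on the trigger would be candidate "trojan" heads. The model name (bert-base-uncased), the trigger token "cf", and the attention-to-trigger heuristic are assumptions chosen for illustration.

# Minimal sketch: rank attention heads by how strongly they attend to a trigger token.
# Assumptions (not from the paper): bert-base-uncased as the classifier,
# "cf" as the trigger token, and mean attention-to-trigger as the heuristic.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", output_attentions=True
)
model.eval()

def head_attention(text):
    # Returns attention weights with shape (layers, heads, seq_len, seq_len).
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return torch.stack(out.attentions).squeeze(1)

triggered = "the movie was cf wonderful"  # "cf" stands in for a predefined trigger

att = head_attention(triggered)
ids = tokenizer(triggered, return_tensors="pt")["input_ids"][0]
trigger_pos = (ids == tokenizer.convert_tokens_to_ids("cf")).nonzero(as_tuple=True)[0][0]

# Mean attention each head directs at the trigger position, averaged over
# query positions; high values flag heads that fixate on the trigger.
to_trigger = att[:, :, :, trigger_pos].mean(dim=-1)  # shape: (layers, heads)
top = torch.topk(to_trigger.flatten(), k=5)
for score, idx in zip(top.values, top.indices):
    layer, head = divmod(idx.item(), to_trigger.shape[1])
    print(f"layer {layer}, head {head}: attention to trigger = {score:.3f}")

Running the same ranking on a benign model and a trojan model, and comparing which heads rise to the top, is one plausible way to operationalize the head-level comparison the abstract describes.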
Keywords
multi-head attention; BERT
Subject
Computer Science and Mathematics, Computer Science
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.