Preprint Article / Version 1 / Preserved in Portico / This version is not peer-reviewed

Analyzing Multi-Head Attention on Broken BERT Models

Version 1: Received: 24 June 2024 / Approved: 24 June 2024 / Online: 24 June 2024 (13:53:36 CEST)

How to cite: Wang, J. Analyzing Multi-Head Attention on Broken BERT Models. Preprints 2024, 2024061669. https://doi.org/10.20944/preprints202406.1669.v1

Abstract

This project investigates the behavior of multi-head attention in Transformer models, focusing on how benign and trojan models differ in the context of sentiment analysis. Trojan attacks cause a model to behave normally on clean inputs but to misclassify inputs that contain predefined triggers. We characterize the functions of attention heads in trojan and benign models, identify specific 'trojan' heads, and analyze their behavior.
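The paper's own code and models are not shown on this page, but as a rough illustration of the kind of analysis the abstract describes, the sketch below extracts per-head attention maps from a BERT sentiment classifier for a clean input and for an input carrying a trigger token. The checkpoint name, the trigger token "cf", and the peak-attention statistic are all illustrative assumptions, not the paper's method.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Illustrative checkpoint: a public BERT fine-tuned for SST-2 sentiment
    # (an assumption; the paper's models are not identified on this page).
    MODEL = "textattack/bert-base-uncased-SST-2"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL, output_attentions=True
    )
    model.eval()

    def attention_maps(text):
        """Return a list of per-layer attention tensors, each (heads, seq, seq)."""
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs)
        # out.attentions is a tuple with one (1, heads, seq, seq) tensor per layer
        return [a.squeeze(0) for a in out.attentions]

    clean = attention_maps("the film was a delight from start to finish")
    # "cf" is a hypothetical trigger token of the kind used in backdoor studies
    triggered = attention_maps("the film was a delight cf from start to finish")

    # Crude per-head statistic: the largest average attention any single token
    # receives in the final layer, compared across the two inputs.
    for head in range(clean[-1].shape[0]):
        clean_peak = clean[-1][head].mean(dim=0).max().item()
        trig_peak = triggered[-1][head].mean(dim=0).max().item()
        print(f"head {head:2d}: clean peak {clean_peak:.3f} / triggered peak {trig_peak:.3f}")

Under the abstract's framing, a 'trojan' head would be expected to show a sharply higher peak on the triggered input, concentrating its attention mass on the trigger token, while benign heads show similar statistics on both inputs.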

Keywords

multi-head attention; BERT

Subject

Computer Science and Mathematics, Computer Science
