Preprint Review, Version 1 (Preserved in Portico). This version is not peer-reviewed.

How to Use Transformers for Transfer Learning?

Version 1 : Received: 28 May 2023 / Approved: 31 May 2023 / Online: 31 May 2023 (07:39:24 CEST)

How to cite: Ebrahimzadeh, M.; Asadi, H. How to Use Transformers for Transfer Learning?. Preprints 2023, 2023052185. https://doi.org/10.20944/preprints202305.2185.v1

Abstract

Transformers are increasingly replacing older generations of deep neural networks due to their success in a wide range of applications. The dominant approach to using transformers is to pre-train them on a large dataset and then fine-tune them on a downstream task. However, as transformers grow larger, full fine-tuning is becoming an infeasible approach to transfer learning. In this short survey, we list a few recent methods that make transfer learning with transformers more efficient.
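The bottleneck described above can be made concrete with a short sketch. The plain-PyTorch example below (all shapes, layer sizes, and hyperparameters are illustrative assumptions, not taken from the survey) shows one common parameter-efficient alternative to full fine-tuning: freezing the pre-trained backbone and training only a small task-specific head.

```python
# A minimal sketch of parameter-efficient transfer learning with a frozen
# transformer backbone. Shapes and hyperparameters are hypothetical.
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone; in practice this would be loaded
# from a checkpoint rather than randomly initialized.
d_model, n_classes = 256, 2
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=6,
)
head = nn.Linear(d_model, n_classes)  # new, task-specific parameters

# Parameter-efficient transfer: freeze the backbone so only the head
# receives gradient updates.
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

# One toy training step on random data (batch of 4, sequence length 16).
x = torch.randn(4, 16, d_model)
y = torch.randint(0, n_classes, (4,))
logits = head(backbone(x).mean(dim=1))  # mean-pool tokens, then classify
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
optimizer.step()
```

With the backbone frozen, only the head's few thousand parameters are updated instead of the backbone's millions; reducing the number of trainable parameters in this spirit is the goal of the parameter-efficient methods the survey covers.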

Keywords

transformers; transfer learning; fine-tuning; deep neural networks

Subject

Computer Science and Mathematics, Computer Science


