Article
Preserved in Portico. This version is not peer-reviewed.
WDANet: Exploring Style Feature via Dual Cross-Attention for Woodcut-Style Design
Version 1: Received: 15 December 2023 / Approved: 18 December 2023 / Online: 19 December 2023 (09:22:12 CET)
How to cite: Ou, Y.; Xu, J. WDANet: Exploring Style Feature via Dual Cross-Attention for Woodcut-Style Design. Preprints 2023, 2023121380. https://doi.org/10.20944/preprints202312.1380.v1
Abstract
People are drawn to woodcut-style designs for their striking visual impact and strong contrast. However, traditional woodcut printmaking and previous computer-aided methods do not address the problems of dwindling design inspiration, lengthy production times, and complex adjustment procedures. We propose a novel network framework, the Woodcut-style Design Assistant Network (WDANet), to tackle these challenges. To our knowledge, this is the first work to use diffusion models to streamline the woodcut-style design process. We curated the Woodcut-62 dataset, comprising works by 62 renowned historical artists, to train WDANet to learn the aesthetic nuances of woodcut prints and to offer users a wealth of design references. Built on a denoising network, our dual cross-attention mechanism effectively integrates text and woodcut-style image features, allowing users to input or slightly modify a text description to quickly generate accurate, high-quality woodcut-style designs, saving time and offering flexibility. Quantitative and qualitative analyses, together with user studies, show that WDANet outperforms the current state of the art in generating woodcut-style images and demonstrate its value as a design aid.
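The abstract describes a dual cross-attention mechanism that conditions the denoising network on both text and woodcut-style image features. The full paper is not reproduced on this page, so the following is only a minimal, hypothetical PyTorch sketch of what such a dual cross-attention block could look like; the module name, the learnable fusion weight, and all dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualCrossAttention(nn.Module):
    """Hypothetical sketch: U-Net hidden tokens attend separately to text-prompt
    embeddings and woodcut-style image embeddings, and the two results are fused
    with a residual connection and a learnable weight."""

    def __init__(self, dim, text_dim, style_dim, heads=8):
        super().__init__()
        # One cross-attention branch per conditioning source.
        self.text_attn = nn.MultiheadAttention(dim, heads, kdim=text_dim,
                                               vdim=text_dim, batch_first=True)
        self.style_attn = nn.MultiheadAttention(dim, heads, kdim=style_dim,
                                                vdim=style_dim, batch_first=True)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # assumed fusion weight for the style branch
        self.norm = nn.LayerNorm(dim)

    def forward(self, hidden, text_emb, style_emb):
        # hidden:   (B, N, dim)      tokens from the denoising U-Net
        # text_emb: (B, T, text_dim) text-prompt embeddings
        # style_emb:(B, S, style_dim) woodcut-style image embeddings
        h = self.norm(hidden)
        text_out, _ = self.text_attn(h, text_emb, text_emb)
        style_out, _ = self.style_attn(h, style_emb, style_emb)
        # Residual fusion of both conditioning signals.
        return hidden + text_out + self.alpha * style_out
```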
Keywords
woodcut-style design; diffusion model; computer-aided design; text-to-image model
Subject
Computer Science and Mathematics, Computer Vision and Graphics
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.