Self-supervised Vision Transformer are Scalable Generative Models for Domain Generalization
Doerrich, Sebastian; Di Salvo, Francesco; Ledig, Christian (2024): Self-supervised Vision Transformer are Scalable Generative Models for Domain Generalization, in: Bamberg: Otto-Friedrich-Universität, pp. 1–10.
Faculty/Chair:
By:
Doerrich, Sebastian; ...
Publisher Information:
Year of publication:
2024
Pages:
Source/Other editions:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 / Marius George Linguraru, Qi Dou, Aasa Feragen, Stamatia Giannarou, Ben Glocker, Karim Lekadir, Julia A. Schnabel (Eds.). - Cham : Springer Nature Switzerland, 2024, pp. 644–654. - ISBN: 978-3-031-72117-5
Year of first publication:
2024
Language:
English
Abstract:
Despite notable advancements, the integration of deep learning (DL) techniques into impactful clinical applications, particularly in the realm of digital histopathology, has been hindered by challenges associated with achieving robust generalization across diverse imaging domains and characteristics. Traditional mitigation strategies in this field such as data augmentation and stain color normalization have proven insufficient in addressing this limitation, necessitating the exploration of alternative methodologies. To this end, we propose a novel generative method for domain generalization in histopathology images. Our method employs a generative, self-supervised Vision Transformer to dynamically extract characteristics of image patches and seamlessly infuse them into the original images, thereby creating novel, synthetic images with diverse attributes. By enriching the dataset with such synthesized images, we aim to enhance its holistic nature, facilitating improved generalization of DL models to unseen domains. Extensive experiments conducted on two distinct histopathology datasets demonstrate the effectiveness of our proposed approach, outperforming the state of the art substantially, on the Camelyon17-wilds challenge dataset (+2%) and on a second epithelium-stroma dataset (+26%). Furthermore, we emphasize our method's ability to readily scale with increasingly available unlabeled data samples and more complex, higher parametric architectures. Source code is available at https://github.com/sdoerrich97/vits-are-generative-models.
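The abstract describes extracting characteristics from image patches and infusing them into other images to synthesize new training samples. As a rough, hypothetical illustration of this patch-level "characteristic infusion" idea (not the paper's actual ViT-based method, which operates on learned token features), the sketch below transfers per-patch intensity statistics from a "style" image onto a "content" image using NumPy; the function name and patch size are placeholders:

```python
import numpy as np

def synthesize(content, style, patch=4, eps=1e-6):
    """Illustrative per-patch statistic transfer (AdaIN-like), NOT the
    paper's method: each patch of `content` is renormalized to match
    the mean and standard deviation of the corresponding `style` patch."""
    h, w = content.shape[:2]
    out = content.astype(float).copy()
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            c = out[i:i + patch, j:j + patch]
            s = style[i:i + patch, j:j + patch].astype(float)
            # Normalize the content patch, then re-scale to style statistics.
            c_norm = (c - c.mean()) / (c.std() + eps)
            out[i:i + patch, j:j + patch] = c_norm * s.std() + s.mean()
    return out
```

In the paper's actual pipeline, the analogous mixing happens in the latent space of a self-supervised Vision Transformer rather than on raw pixel statistics, which is what allows the synthesized images to carry realistic domain attributes.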
GND Keywords:
Generalisierung
Anwendungsbereich
Maschinelles Lernen
Feature-Technologie
Orthogonalisierung
Image
Keywords:
domain generalization
self-supervised learning
feature orthogonalization
generative image synthesis
DDC Classification:
RVK Classification:
Peer Reviewed:
Yes
International Distribution:
Yes
Type:
Conference object
Activation date:
December 9, 2024
Permalink:
https://fis.uni-bamberg.de/handle/uniba/104846