MAL-E : Understanding Text-to-Image Generation
Joachim, Silvia; Hennecke, Martin (2025): MAL-E: Understanding Text-to-Image Generation, in: Ute Schmid, Jochen L. Leidner, Michael Kohlhase, et al. (eds.), Proceedings of the Second Workshop on Artificial Intelligence for Artificial Intelligence Education (AI4AI Learning 2024), Bamberg: University of Bamberg Press, pp. 23–32, doi: 10.20378/irb-108885.
Author:
Joachim, Silvia; Hennecke, Martin
Title of the compilation:
Proceedings of the Second Workshop on Artificial Intelligence for Artificial Intelligence Education (AI4AI Learning 2024)
Conference:
Second Workshop on Artificial Intelligence for Artificial Intelligence Education (AI4AI Learning 2024) ; Würzburg
Publisher Information:
Bamberg: University of Bamberg Press
Year of publication:
2025
Pages:
23–32
ISBN:
978-3-98989-054-1
Language:
English
DOI:
10.20378/irb-108885
Abstract:
Generative AI, particularly in image generation, has attracted a lot of attention in recent years. These technologies are here to stay. Understanding how they work demystifies them, showing that they are driven by algorithms, not magic. We present a learning and experimentation module for Unplugged Text-To-Image Generation, which we have called MAL-E. It explains several key steps in the process, starting with Tokenization, Embedding, and Positional Encoding. Students learn Masked Self-Attention, an important step of the Decoder-only Transformer. They also learn about Image Components Selection and Image Assembly to produce the final image. MAL-E provides an inclusive hands-on way to understand how pre-trained neural networks and transformers create images from text prompts.
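Not part of the record itself, but as a rough illustration of the masked self-attention step the abstract names, here is a minimal causal (masked) self-attention sketch in plain Python. All names and the tiny example matrices are illustrative assumptions, not taken from the paper, which teaches the concept unplugged rather than in code:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def masked_self_attention(q, k, v):
    """Causal self-attention: token i may only attend to tokens 0..i.

    q, k, v are lists of equal-length vectors (one per token).
    """
    n, d = len(q), len(q[0])
    out = []
    for i in range(n):
        # scaled dot-product scores against itself and all *previous* tokens;
        # future tokens are simply excluded, which is the "mask"
        scores = [sum(q[i][t] * k[j][t] for t in range(d)) / math.sqrt(d)
                  for j in range(i + 1)]
        w = softmax(scores)
        # weighted sum of the visible value vectors
        out.append([sum(w[j] * v[j][t] for j in range(i + 1)) for t in range(d)])
    return out
```

Because the first token can attend only to itself, its output row is exactly its own value vector; later rows are convex combinations of the value vectors seen so far, which is the behaviour the decoder-only Transformer relies on.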
GND Keywords:
Artificial Intelligence
Generative AI
Neural Network
Content Creator
Image
Keywords:
Artificial Intelligence
CS Unplugged
Inclusive Material
Generative AI
AI Education
Neural Networks
K-12 students
DDC Classification:
RVK Classification:
Type:
Conference object
Activation date:
July 11, 2025
Permalink:
https://fis.uni-bamberg.de/handle/uniba/108885