Title: Emotion-Conditioned Text Generation through Automatic Prompt Optimization
Authors: Yarik Menchaca Resendiz; Roman Klinger (ORCID: 0000-0002-2014-6619)
Date deposited: 2024-03-07
Year of publication: 2023
Type: Conference object
Language: English
Keywords: Emotion-Conditioned Text Generation
DDC classification: 004
Repository handle: https://fis.uni-bamberg.de/handle/uniba/93873
Full text: https://aclanthology.org/2023.tllm-1.3/

Abstract: Conditional natural language generation methods often require either expensive fine-tuning or training a large language model from scratch; both are unlikely to lead to good results without substantial data and computational resources. Prompt learning without changing the parameters of a large language model presents a promising alternative: it is cost-effective while still achieving competitive results. While this procedure is now established for zero- and few-shot text classification and structured prediction, it has received limited attention in conditional text generation. We present the first automatic prompt optimization approach for emotion-conditioned text generation with instruction-fine-tuned models. Our method uses an iterative optimization procedure that changes the prompt by adding, removing, or replacing tokens. As the objective function, we require only a text classifier that measures how well the conditional variable is realized in the generated text. We evaluate the method on emotion-conditioned text generation with a focus on event reports and compare it to manually designed prompts, which also act as seeds for the optimization procedure. The optimized prompts achieve 0.75 macro-average F1 in fulfilling the emotion condition, in contrast to only 0.22 macro-average F1 for the manually designed seed prompts.
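The abstract describes the optimization loop only at a high level: start from a seed prompt, mutate it by adding, removing, or replacing tokens, and keep changes that improve a classifier-based objective. The following Python sketch illustrates one plausible reading of that loop. It is not the paper's implementation: the function names (generate, emotion_score, mutate), the placeholder vocabulary, and the greedy hill-climbing acceptance rule are all illustrative assumptions.

import random

# Hypothetical stand-ins for the two models the abstract assumes:
# an instruction-fine-tuned generator and an emotion classifier used
# as the objective function. Replace both with real model calls.
def generate(prompt: str) -> str:
    """Stand-in: generate an event report conditioned on the prompt."""
    return f"[generated text for: {prompt}]"

def emotion_score(text: str, target_emotion: str) -> float:
    """Stand-in: classifier estimate that `text` realizes the emotion."""
    return random.random()  # placeholder score

# Illustrative candidate tokens for the add/replace operations.
VOCAB = ["event", "describe", "felt", "because", "short", "report"]

def mutate(tokens: list[str]) -> list[str]:
    """Apply one of the three edit operations named in the abstract:
    add, remove, or replace a single token."""
    tokens = tokens[:]
    op = random.choice(["add", "remove", "replace"])
    if op == "add":
        tokens.insert(random.randrange(len(tokens) + 1), random.choice(VOCAB))
    elif op == "remove" and len(tokens) > 1:
        tokens.pop(random.randrange(len(tokens)))
    else:
        tokens[random.randrange(len(tokens))] = random.choice(VOCAB)
    return tokens

def optimize_prompt(seed_prompt: str, emotion: str, iterations: int = 100) -> str:
    """Greedy hill climbing from a manually designed seed prompt."""
    best = seed_prompt.split()
    best_score = emotion_score(generate(" ".join(best)), emotion)
    for _ in range(iterations):
        candidate = mutate(best)
        score = emotion_score(generate(" ".join(candidate)), emotion)
        if score > best_score:  # keep a mutation only if the objective improves
            best, best_score = candidate, score
    return " ".join(best)

print(optimize_prompt("Describe an event that made you feel joy.", "joy"))

The sketch shows why the approach is cheap relative to fine-tuning: the language model's parameters are never updated, and the only training signal is the score of an off-the-shelf emotion classifier on the generated text.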