Existing situation
Ongoing
Title
User’s Choice of Images and Text to Express Emotions in Twitter and Reddit
Project leader
Start date
May 1, 2024
Category
Basic research
Acronym
ITEM
Description
Emotions are, next to propositional information, a main ingredient of human interaction. In contrast to information extraction methods, which focus on facts and relations, emotion analysis has received comparatively little attention and is not yet well understood computationally. Two popular subtasks of emotion analysis in natural language processing are emotion categorization and emotion stimulus detection. In emotion categorization, text is classified into predefined categories, for instance joy, sadness, fear, anger, disgust, and surprise. In stimulus detection, the textual segments that describe the event causing an associated emotion must be identified. For instance, the text “I am so happy that my mother will visit me” is associated with joy, and the phrase “my mother will visit me” describes the stimulus event.
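To make the two subtasks concrete, the following minimal Python sketch uses the Hugging Face transformers library. The emotion classifier shown is a publicly available demo model chosen for illustration, not a model produced by this project, and the stimulus tagger is left as a hypothetical placeholder.

from transformers import pipeline

text = "I am so happy that my mother will visit me"

# Emotion categorization: map the whole text to one of a fixed set of
# categories (e.g., joy, sadness, fear, anger, disgust, surprise).
# The model name is an illustrative public demo model, not this project's.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)
print(classifier(text))  # e.g., [{'label': 'joy', 'score': ...}]

# Stimulus detection: identify the span describing the causing event,
# typically framed as token classification with B/I/O tags. No standard
# off-the-shelf model is assumed here, so the call stays a placeholder:
# tagger = pipeline("token-classification", model="<stimulus-tagger>")
# Expected output: "my mother will visit me" tagged as the stimulus span.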
Next to natural language processing, visual computing has also been applied to emotion categorization, for instance to interpret facial emotion expressions, to estimate the impact of artistic pieces on a person, or to evaluate depicted events or objects. Further, stimulus detection in NLP has a counterpart in visual computing, in which emotionally relevant regions in images are detected. However, no previous work in visual computing considers whole scenes (with relations between depicted objects and places) for emotion stimulus detection, and in particular none informed by emotion theories (as has been done in NLP).
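As a rough illustration of the image-side counterpart to emotion categorization, the sketch below performs zero-shot emotion labeling of an image with CLIP via the transformers library. The model choice, label set, and file name are assumptions made for illustration; they do not describe prior work or the project's method.

from PIL import Image
from transformers import pipeline

# Zero-shot image classification with CLIP as a stand-in for image
# emotion categorization; purely illustrative.
clf = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)
labels = ["joy", "sadness", "fear", "anger", "disgust", "surprise"]
image = Image.open("post_image.jpg")  # hypothetical social media image
print(clf(image, candidate_labels=labels))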
In this project, we advance the state of the art in several directions: (1) we will develop appraisal-theory-based interpretations of images from social media regarding their emotional connotation and stimulus depiction; (2) we will combine this research with our previous work on emotion categorization and stimulus detection in text to develop multimodal approaches; (3) we will do so both from the perspective of the author of a social media post (which emotion is she expressing?) and from that of the intended or probable emotion of a reader (which emotion does the author want to cause, and which emotion might a reader feel?).
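As a naive illustration of what a multimodal approach can mean here, the following sketch combines per-category scores from a text and an image classifier by weighted late fusion. This is entirely an assumption for illustration; the project's actual architectures are not specified in this record.

# Naive late fusion of text and image emotion scores; purely illustrative.
CATEGORIES = ["joy", "sadness", "fear", "anger", "disgust", "surprise"]

def late_fusion(text_scores: dict, image_scores: dict, w_text: float = 0.5) -> dict:
    """Weighted average of per-category scores from both modalities."""
    return {
        c: w_text * text_scores.get(c, 0.0) + (1 - w_text) * image_scores.get(c, 0.0)
        for c in CATEGORIES
    }

# Example: the text strongly suggests joy, the image is ambiguous.
text_scores = {"joy": 0.8, "surprise": 0.2}
image_scores = {"joy": 0.4, "sadness": 0.3, "surprise": 0.3}
print(max(late_fusion(text_scores, image_scores).items(), key=lambda kv: kv[1]))
# -> ('joy', 0.6...)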
We will thereby contribute to multimodal emotion analysis and ensure that emotion-related information is not missed or misinterpreted in social media communication because computational models have so far lacked access to the complete picture. Further, we will answer research questions about how users of social media communicate their emotions, what influences their choice of modality, and what the relation between the modalities is.
Keywords
multimodality
emotions
social media
natural language processing
Permalink
https://fis.uni-bamberg.de/handle/uniba/94074