Leveraging AI in Data-Sensitive Domains: A Framework for Developing Hybrid AI Systems with a Focus on Ensuring Information Integrity
Sigloch, Paul (2025): Leveraging AI in Data-Sensitive Domains: A Framework for Developing Hybrid AI Systems with a Focus on Ensuring Information Integrity, Bamberg: Otto-Friedrich-Universität, doi: 10.20378/irb-112146.
Author:
Sigloch, Paul
Publisher Information:
Bamberg: Otto-Friedrich-Universität
Year of publication:
2025
Pages:
Supervisor:
Language:
English
Remark:
Master's thesis, Otto-Friedrich-Universität Bamberg, 2025
DOI:
10.20378/irb-112146
Abstract:
This thesis investigates the development of hybrid Artificial Intelligence (AI) systems for data-sensitive domains, where information integrity and confidentiality are essential. While modern AI offers impressive performance, it often lacks the reliability required in critical applications, particularly when handling sensitive data where hallucinations, inconsistencies, or privacy breaches can have severe consequences. To address this challenge, a project management-inspired framework is proposed to guide the development of hybrid systems that strategically combine symbolic and sub-symbolic methods, leveraging the complementary strengths of rule-based reasoning and neural network capabilities.
The framework provides structured guidance across five interconnected phases: assessment and planning, design and implementation preparation, implementation and integration, evaluation and refinement, and deployment and continuous improvement. It emphasizes data sensitivity as a core principle throughout the development lifecycle and includes a supporting toolkit of reusable components. The framework was validated through Action Design Research methodology via the iterative development of a medical device damage assessment application in collaboration with a domain expert. This paradigm implementation demonstrates how symbolic reasoning can be effectively integrated with large language models to create systems that balance generative power with verifiable accuracy.
The resulting application proved suitable for real-world deployment, achieving a 30% reduction in report creation time while maintaining the information integrity required for professional use. The system successfully detected over 83% of hallucinated content and 90% of missing critical information in controlled tests, with user-centered evaluation confirming its practical utility and usability in actual workflows. By facilitating knowledge transfer between theoretical advances and real-world implementation, this work enables the creation of AI systems that combine innovation with robust information guarantees. The resulting methodology offers a sustainable, privacy-aware approach for deploying AI in high-stakes domains while addressing critical concerns regarding automation bias and meaningful human oversight in AI-assisted professional workflows.
GND Keywords: Erklärbare künstliche Intelligenz; Logik; Datenschutz; Privatsphäre; Medizin
Keywords: Artificial Intelligence; Symbolic AI; Subsymbolic AI; Data Sensitivity; Privacy; Medical Device Damage Assessment
DDC Classification:
RVK Classification:
Type:
Master's thesis
Activation date:
January 12, 2026
Permalink
https://fis.uni-bamberg.de/handle/uniba/112146