On November 20 at 4:00 p.m., Yannik Blank will defend his master's thesis on “Large Language Models as Tools for Administrative Modernisation – Automated Process Analysis from Legal Texts and Textual Descriptions” in Room 320 (Konrad Zuse House). Yannik is a master's student in the Business Informatics program and was supervised by Benjamin Nast and Kurt Sandkuhl (both Chair of Business Informatics).
Abstract
This thesis examines the potential of large language models (LLMs) for administrative modernization and develops a methodological approach for automatically generating Business Process Model and Notation (BPMN) process models from legal texts and textual descriptions, aligned with the Federal Information Management (FIM) framework. The basis was a systematic literature analysis (SLA) that identified key potentials, challenges, and research gaps. Building on this, three approaches to LLM-based process modeling were implemented as prototypes: prompting with ChatGPT, a specialized BPMN chatbot, and a Python-based implementation using the OpenAI API. These approaches were tested on two exemplary processes of differing complexity from the FIM process documentation of Datenverarbeitungszentrum Mecklenburg-Vorpommern GmbH (DVZ) and prepared methodically for evaluation by subject matter experts.
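The third approach can be illustrated with a minimal sketch. This is a hypothetical illustration, not the thesis's actual code: the prompt wording, function names, and model choice are assumptions. It shows the basic pattern of asking an LLM, via the OpenAI API, to derive an activity list and a BPMN draft from a legal text, which an expert would then validate and revise.

```python
def build_prompt(legal_text: str) -> str:
    """Assemble an instruction prompt asking the model to extract
    process activities and a BPMN 2.0 draft from a legal text.
    (Hypothetical prompt wording, not taken from the thesis.)"""
    return (
        "You are a business process analyst. From the following legal text, "
        "first list the process activities in execution order, then output a "
        "BPMN 2.0 XML process model covering those activities.\n\n"
        f"Legal text:\n{legal_text}"
    )

def generate_bpmn_draft(legal_text: str, model: str = "gpt-4o") -> str:
    """Send the prompt to the OpenAI chat completions endpoint and return
    the raw response (activity list plus BPMN XML) for expert review."""
    from openai import OpenAI  # requires `pip install openai`

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(legal_text)}],
        temperature=0,  # deterministic output makes drafts easier to compare
    )
    return response.choices[0].message.content
```

The output is deliberately treated as a draft only: as the results below note, the model's suggestions must still be checked by subject matter experts before use.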
The results show that LLMs are fundamentally capable of analyzing the legal bases of administrative action and deriving activity lists and BPMN process models from them. They are best suited to producing initial drafts, which must then be validated and revised by experts. The quality of the results depends primarily on the complexity of the underlying processes and on the prompt engineering. The thesis thus confirms the potential of LLMs described in the literature, but also highlights their current limitations, particularly regarding completeness, clarity, and the representation of complex decision structures.
The outlook emphasizes that future research should deepen the evaluation, include additional processes, and in particular pursue combining LLMs with retrieval-augmented generation (RAG) or domain-specific training. Empirical studies in the public sector, a systematic investigation of prompt-engineering strategies, and closer involvement of subject matter experts are also needed. The thesis thus contributes to positioning LLMs as tools for administrative modernization and stresses that their usefulness depends significantly on combining technical automation with human expertise.

