Enhancing User Interactions with Language Models: A Platform for Personalized Prompt Engineering and Iterative Feedback

Problem Description

Prompt engineering is crucial for customizing responses from language models such as GPT-4 to individual user contexts and needs. However, many existing approaches are inefficient, requiring significant trial and error before the model's outputs align with specific user expectations. This bachelor thesis proposes the prototypical instantiation of a platform that personalizes interactions with an LLM-based assistant. The platform should allow users to label real conversations with a language model and use these labels to generate prompts that produce more personalized interactions in subsequent iterations.
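
For illustration only, the following minimal Python sketch shows one possible way such labeled conversations could be represented. The class and field names (LabeledTurn, LabeledConversation, rating, labels) are hypothetical and not part of the existing platform.

    # Hypothetical data model for a labeled conversation; names are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LabeledTurn:
        user_message: str          # what the user asked
        assistant_reply: str       # what the language model answered
        rating: int                # e.g. 1 (unsatisfying) to 5 (satisfying)
        labels: List[str] = field(default_factory=list)  # e.g. ["too formal", "missing context"]

    @dataclass
    class LabeledConversation:
        user_id: str
        turns: List[LabeledTurn] = field(default_factory=list)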

Goal of the Thesis

The thesis focuses on the prototypical instantiation and extension of an existing platform that enhances the personalization of conversations with a language model-based assistant through a user-centric approach to prompt engineering. The platform should support iterative prompt optimization and improve conversation personalization through user feedback. Its success will be evaluated by how much it reduces the effort users need to achieve effective and satisfying interactions with the language model.
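
As a rough illustration of what iterative prompt optimization through user feedback could look like, the sketch below builds on the hypothetical data model above and appends personalization instructions for recurring complaints to a base system prompt. The refinement heuristic is an assumption made for illustration, not the platform's actual method.

    # Hypothetical sketch of one feedback-driven refinement step for a system prompt.
    from collections import Counter

    def refine_system_prompt(base_prompt: str, conversation: "LabeledConversation",
                             min_count: int = 2) -> str:
        # Count how often each label appears on poorly rated turns.
        complaints = Counter(
            label
            for turn in conversation.turns
            if turn.rating <= 2
            for label in turn.labels
        )
        # Turn recurring complaints into explicit instructions for the next iteration.
        instructions = [
            f"- Avoid responses the user marked as '{label}'."
            for label, count in complaints.items()
            if count >= min_count
        ]
        if not instructions:
            return base_prompt
        return base_prompt + "\n\nPersonalization notes:\n" + "\n".join(instructions)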

Work Packages

  • Design a platform for personalizing conversations with a language model-based assistant, using a human-centered design approach (preferably with Vue.js/React.js).

  • Analyze user feedback and interaction data to derive insights for continuous improvement of the platform's design and functionality.

Requirements

  • Programming skills in Python or Vue.js/React.js
  • Interest in generative AI, large language models, or human-computer interaction
  • Strong time management and organizational abilities
  • Proficiency in English

Contact

If this topic interests you and you're considering applying, please contact Leon Hanschmann (leon.hanschmann@kit.edu) with a concise statement of your motivation, your CV, and your most recent transcript of records. Feel free to get in touch beforehand if you have any questions.

Literature

Chen, Banghao, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. “Unleashing the Potential of Prompt Engineering in Large Language Models: A Comprehensive Review,” 2023. https://doi.org/10.48550/arXiv.2310.14735.

Cheng, Yu, Jieshan Chen, Qing Huang, Zhenchang Xing, Xiwei Xu, and Qinghua Lu. “Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains,” 2023. https://doi.org/10.1145/3638247.

Li, Yinheng. “A Practical Survey on Zero-Shot Prompt Design for In-Context Learning,” 2023. https://doi.org/10.26615/978-954-452-092-2_069.