Multi-Expert Prompting: A New Approach to Boosting Large Language Model Reliability and Safety
đź“„ Full Paper
đź’¬ Ask
Large language models (LLMs) have emerged as powerful tools for tasks ranging from writing creative text to answering complex questions. However, their reliability, safety, and usefulness can be undermined by biases and inconsistencies in their responses. One strategy for addressing these limitations is expert prompting, which guides an LLM to answer questions from the perspective of a specific expert.
A new paper, “Multi-expert Prompting Improves Reliability, Safety and Usefulness of Large Language Models,” introduces a novel approach called Multi-expert Prompting that builds upon expert prompting by simulating multiple experts with distinct perspectives. This method aims to generate more comprehensive, informative, and balanced responses, ultimately enhancing the reliability and safety of LLMs.
How Multi-expert Prompting Works
Imagine you ask an LLM, “Is it ethical to eat meat?” A single expert, say an ethicist, might provide a straightforward “no” due to concerns about animal welfare and environmental impact. However, this misses the nuances of the debate.
Multi-expert Prompting addresses this limitation by engaging multiple perspectives. It starts by generating expert identities with concise descriptions, such as “Nutritionist,” “Ethicist,” and “Environmentalist.” Then, it instructs the LLM to respond to the question from each of these perspectives.
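To make this flow concrete, here is a minimal sketch of the identity-generation and per-expert answering steps. The `generate` callable, the prompt wording, and the helper name `multi_expert_responses` are illustrative assumptions rather than the paper's exact prompts; `generate` stands in for whatever LLM API you are using.

```python
from typing import Callable, List

def multi_expert_responses(
    question: str,
    n_experts: int,
    generate: Callable[[str], str],  # caller-supplied wrapper around any LLM API
) -> List[str]:
    """Sketch of the first stage: create expert identities, then answer
    the question once from each expert's perspective."""
    # Ask the model to propose n distinct expert identities with short descriptions.
    identity_prompt = (
        f"List {n_experts} distinct experts best suited to answer the question below, "
        f"one per line, in the form 'Role: one-sentence description'.\n\n"
        f"Question: {question}"
    )
    experts = [
        line.strip()
        for line in generate(identity_prompt).splitlines()
        if line.strip()
    ][:n_experts]

    # Answer the question once per expert persona.
    answers = []
    for expert in experts:
        answer_prompt = (
            f"You are {expert}. Answer the following question from this expert's "
            f"perspective, staying within their domain of knowledge.\n\n"
            f"Question: {question}"
        )
        answers.append(generate(answer_prompt))
    return answers
```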
The core innovation lies in how the responses are aggregated. Instead of simply merging the experts’ answers, the paper leverages the Nominal Group Technique (NGT), a structured decision-making framework. This technique systematically combines the individual responses by identifying shared viewpoints, resolving conflicts between them, and surfacing perspectives raised by only one expert.
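The sketch below illustrates one way such an NGT-style aggregation prompt could look. The paper defines its own subtasks and instructions, so the exact prompt wording here, along with the `generate` callable and the function name `aggregate_with_ngt`, should be read as assumptions for illustration only.

```python
from typing import Callable, List

def aggregate_with_ngt(
    question: str,
    expert_answers: List[str],
    generate: Callable[[str], str],  # caller-supplied wrapper around any LLM API
) -> str:
    """Sketch of the aggregation stage: merge agreements, resolve conflicts,
    keep unique viewpoints, then compose one combined answer."""
    numbered = "\n\n".join(
        f"Expert {i + 1}: {answer}" for i, answer in enumerate(expert_answers)
    )
    aggregation_prompt = (
        "You are a moderator applying the Nominal Group Technique.\n"
        f"Question: {question}\n\n"
        f"{numbered}\n\n"
        "1. List viewpoints shared by two or more experts.\n"
        "2. List viewpoints where experts conflict and state the best-supported resolution.\n"
        "3. List relevant viewpoints raised by only one expert.\n"
        "4. Using steps 1-3, write a single balanced answer to the question."
    )
    return generate(aggregation_prompt)
```

Chaining the two sketches, first collecting per-expert answers and then aggregating them, reproduces the end-to-end flow described above.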
Concrete Examples
The paper showcases several concrete examples of Multi-expert Prompting in action, demonstrating its benefits across diverse topics.
- “What happens if you cross paths with a black cat?” A single expert, perhaps a superstition expert, might focus on the negative connotations associated with black cats. However, a Multi-expert Prompting approach, engaging an animal behaviorist and a folklore historian, generates a more comprehensive response, acknowledging both the superstition and the animal’s actual behavior.
- “The Handmaid’s Tale was nominated for only one award.” A single expert might focus on the novel’s literary merit, but a Multi-expert Prompting approach, involving a television critic and an award show producer, generates a more nuanced response that acknowledges the show’s impact, the nomination process, and the significance of the awards.
- “Why do people look down on healthcare workers?” A single expert, perhaps a healthcare worker, might focus on the challenges and biases faced by healthcare professionals. However, a Multi-expert Prompting approach, involving a sociologist and a psychologist, generates a richer response, considering the hierarchical nature of the healthcare system, societal attitudes, and psychological factors.
Key Findings and Impact
The paper’s evaluation on various benchmarks, including TruthfulQA, FactualityPrompt, BOLD, and HONEST, reveals that Multi-expert Prompting significantly improves LLM performance across multiple metrics.
- Truthfulness: The model demonstrates a marked increase in accuracy, significantly outperforming single-expert approaches.
- Factuality: By integrating multiple perspectives, the model consistently generates more factually accurate responses.
- Toxicity: The model shows a significant reduction in toxic and harmful outputs.
- Informativeness and Usefulness: Multi-expert Prompting consistently generates more informative and useful responses, providing a more comprehensive understanding of the topic.
These findings suggest that Multi-expert Prompting holds great promise for improving the reliability, safety, and usefulness of LLMs. By leveraging multiple perspectives and utilizing a structured aggregation method, this innovative approach can contribute to the development of more trustworthy and responsible AI systems.
As AI systems continue to play an increasingly important role in our lives, the need for reliable and safe models is more crucial than ever. Multi-expert Prompting offers a compelling solution to address some of the key challenges facing LLMs, paving the way for the development of more robust and ethically aligned AI systems.