LLM Agents Master Asynchronous Group Communication in Mafia Games
Jerusalem, Israel - A new study from researchers at the Hebrew University of Jerusalem and the University of Melbourne introduces an approach for enabling Large Language Model (LLM) agents to participate effectively in asynchronous group communication, a crucial aspect of real-world interaction that current LLM research largely overlooks. The findings, presented in a paper titled "Time to Talk: LLM Agents for Asynchronous Group Communication in Mafia Games," show that such agents can decide not only what to say but, critically, when to say it, mirroring human communication patterns.
Most existing LLM applications focus on synchronous settings, in which users and models take turns interacting. However, group chats, online team meetings, and social games such as the popular social deduction game Mafia are inherently asynchronous: participants communicate at will, making the timing of contributions a key factor in successful interaction.
To address this gap, the researchers developed an adaptive asynchronous LLM agent. Their agent operates with a two-stage prompting framework. First, a “scheduler” uses an LLM to decide whether to send a message at all, considering the game’s context and the agent’s previous communication frequency. For example, if the agent has been relatively quiet compared to other participants, the scheduler is prompted to encourage more activity. Conversely, if the agent has been highly active, the prompt advises it to speak only when necessary. Second, if the scheduler decides to speak, a “generator” uses a separate LLM to compose the message content, again considering the current game state and recent messages. A delay simulating human typing time is added to improve the agent’s believability.
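The paper's exact prompts are not reproduced here, but the two-stage loop can be illustrated with a minimal Python sketch. It assumes the LLM is exposed as a plain prompt-to-text callable; the names `should_speak`, `compose_message`, and `maybe_send`, the prompt wording, and the delay model are all illustrative stand-ins, not the authors' actual implementation.

```python
import random
import time
from typing import Callable, Optional

# A prompt -> completion callable; plug in any chat-completion client.
LLM = Callable[[str], str]


def should_speak(llm: LLM, game_state: str,
                 my_msgs: int, avg_msgs: float) -> bool:
    """Stage 1, the "scheduler": decide WHETHER to send a message now.

    The prompt is adapted to the agent's activity level relative to the
    other participants, as described in the article.
    """
    if my_msgs < avg_msgs:
        hint = "You have been quieter than the other players; consider contributing."
    else:
        hint = "You have been very active; speak only if it is truly necessary."
    prompt = (
        f"Game state and recent messages:\n{game_state}\n\n{hint}\n"
        "Should you send a message right now? Answer YES or NO."
    )
    return llm(prompt).strip().upper().startswith("YES")


def compose_message(llm: LLM, game_state: str) -> str:
    """Stage 2, the "generator": decide WHAT to say, given the same context."""
    prompt = (
        f"Game state and recent messages:\n{game_state}\n\n"
        "Write your next chat message as a player in this Mafia game."
    )
    return llm(prompt).strip()


def maybe_send(llm: LLM, game_state: str,
               my_msgs: int, avg_msgs: float) -> Optional[str]:
    """Run the scheduler, invoking the generator only if it decides to speak."""
    if not should_speak(llm, game_state, my_msgs, avg_msgs):
        return None
    message = compose_message(llm, game_state)
    # Crude typing-delay model: pause roughly in proportion to message
    # length before posting, so replies are not instantaneous.
    time.sleep(min(10.0, 0.05 * len(message) + random.uniform(0.5, 2.0)))
    return message
```

Separating the "whether" decision from the "what" decision keeps each prompt focused, and means the (often cheaper) scheduling call can run frequently without generating message text that may never be sent.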
The team evaluated their agent in online Mafia games, where players try to identify the “mafia” members through discussion and voting. The resulting dataset, named LLMAFIA, includes games with both human participants and the asynchronous LLM agent.
The results were promising. The agent performed on par with human players in both win rate and its ability to blend in: human players failed to identify it as a bot in over 40% of cases. Analysis showed that the timing of the agent's messages closely resembled human patterns. However, the agent tended to write longer messages than humans, and a machine learning classifier could still distinguish its messages from human ones.
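The article does not specify which classifier was used to tell agent messages apart. Purely as an illustration of how such a distinction could be made, the sketch below fits a bag-of-words logistic regression on placeholder messages (not data from the LLMAFIA dataset).

```python
# Illustrative only: a simple text classifier separating agent-style
# from human-style messages. The example messages are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_msgs = ["i think its dave", "no way, he defended me yesterday"]
agent_msgs = [
    "Based on the discussion so far, I believe Dave's voting pattern is suspicious.",
    "I would like to point out that Dave defended the accused player earlier.",
]
texts = human_msgs + agent_msgs
labels = [0] * len(human_msgs) + [1] * len(agent_msgs)  # 0 = human, 1 = agent

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Considering the evidence, Dave is the most likely mafia member."]))
```

Even shallow features such as message length and formality can be informative here, which is consistent with the finding that the agent's longer messages were its main giveaway.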
“Our work paves the way for integration of LLMs into realistic human group settings,” said Niv Eckhaus, lead author of the study. “This could range from assistance in team discussions to applications in educational and professional environments where complex social dynamics must be navigated.”
The researchers have released their code and dataset to encourage further research into more realistic asynchronous communication between LLM agents. This work represents a significant step towards creating more natural and effective AI assistants for collaborative environments.