LEMSS: LLM-Based Platform for Multi-Agent Competitive Search Simulation

Tommy Mordo - Technion
Tomer Kordonsky - Technion
Haya Nachimovsky - Technion
Moshe Tennenholtz - Technion
Oren Kurland - Technion

DOI: https://doi.org/10.1145/3726302.3730312

In competitive search settings, document publishers (authors) respond to rankings induced for queries of interest: they modify their documents to improve their future ranking. Hence, for some queries there is an ongoing ranking competition. Prior empirical studies of competitive search were based on controlled ranking competitions between humans. Large Language Models (LLMs), capable of generating high-quality content, provide new opportunities for studying ranking competitions. Furthermore, a significant amount of content on the Web, which is a canonical example of a competitive search setting, is generated by LLMs. In this paper, we introduce LEMSS: a multi-agent platform that leverages LLMs as publishers in competitive search settings. In addition to enabling the execution of large-scale and highly configurable ranking competitions, LEMSS includes tools to analyze and compare the competitions using a wide range of measures. We use these tools to analyze examples of datasets that result from ranking competitions executed using LEMSS. The analysis reveals, for example, that using LLMs as publishers reduced content diversity in the corpus to a larger extent than did human publishers.
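To make the competition dynamic concrete, below is a minimal, self-contained sketch of a ranking competition loop. It is an illustrative toy, not the LEMSS API: `toy_rank` is a stand-in term-overlap ranker, and `stub_publisher` is a placeholder for an LLM-based publisher agent; all names and the revision heuristic (losing publishers add a query term) are assumptions for illustration.

```python
import random

def toy_rank(docs, query):
    """Rank publisher ids by a toy term-overlap score (stand-in for a real ranker)."""
    terms = query.lower().split()
    def score(pid):
        text = docs[pid].lower()
        return sum(text.count(t) for t in terms)
    return sorted(docs, key=score, reverse=True)

def stub_publisher(doc, query, rank_pos):
    """Placeholder for an LLM publisher: every agent except the current
    winner revises its document (here, by appending a query term)."""
    if rank_pos > 0:
        return doc + " " + random.choice(query.split())
    return doc

def run_competition(docs, query, rounds=3, seed=0):
    """Alternate ranking and publisher revisions; return final docs and
    the per-round ranking history for later analysis."""
    random.seed(seed)
    history = []
    for _ in range(rounds):
        ranking = toy_rank(docs, query)
        history.append(ranking)
        docs = {pid: stub_publisher(docs[pid], query, ranking.index(pid))
                for pid in docs}
    return docs, history

docs = {"a": "search engines rank documents",
        "b": "publishers respond to rankings",
        "c": "content on the web"}
final_docs, history = run_competition(docs, "ranking competition", rounds=3)
```

The per-round `history` is the kind of trace a platform like LEMSS can feed into analysis measures (e.g., rank-change counts or corpus diversity over rounds).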
