The Impact of LLMs on the Evolution of Web Search
Call for Proposals: Research Session
This is a call for proposals to participate in a research session at the 2024 Trust & Safety Research Conference.
September 26 – 27, 2024
Stanford, CA
Submission Link
Submission Deadline: April 30, 2024
Event Description
The introduction of large language models (LLMs) is revolutionizing the way we search for information on the web.
Their use in information seeking is rapidly becoming more complex and layered, with applications such as
chain-of-thought prompting and retrieval-augmented generation suggesting the potential to enhance their utility,
improve the state of web search (which many have argued is
deteriorating), and create opportunities for new market entrants.
However, researchers have also raised numerous concerns about the use of LLMs in web search.
For instance, long-standing issues around diversity, bias, and
discrimination persist; it remains unclear how content producers will continue to benefit from their work;
evaluating search quality remains tenuous or laborious; and
people continue to identify new challenges (often echoing older problems) across a wide range of trust and
safety-related topics.
In this research session, we will explore the impact of LLMs on the evolution of web search and the opportunities
and challenges that this new technology presents. We will build on prior critical work
situating and unsealing search and working to envision search futures.
We will also discuss methods for evaluating LLMs from a wide
variety of research perspectives, including public interest (e.g., algorithm audits), commercial (e.g., SEO), and
platform (e.g., trust and safety teams), and how these fields might share and learn from one another. We are also considering a write-up in which panelists would be invited to co-author a report.
Objectives
- Explore the impact of LLMs on the evolution of web search.
- Discuss the opportunities and challenges that LLMs present.
- Share and learn from the different approaches used by independent, public interest, commercial, and platform
researchers.
Potential Topics
- How can the use of LLMs in search be made useful and trustworthy? What mechanisms can enable user agency and
choice, including the ability to opt out, repair, or seek alternatives?
- How do the ways people formulate queries and interact with LLM-powered search results differ from
traditional search engines, and what are the wider implications?
- How might users' interactions with LLMs in web search (including both searchers and content creators) evolve
over time?
- What new practices, expectations, and challenges might emerge?
- What changes in policies, technologies, and social norms may be needed to support the responsible
development and deployment of LLMs in search? And to address unreliable or unsafe systems?
- What can different stakeholders (including public interest researchers, platforms, search engines,
publishers, and search engine optimizers) learn from each other's approaches to studying and optimizing for
LLMs in search? Where are the key opportunities for collaboration and knowledge-sharing?
Agenda
- Welcome/Introduction (5 minutes)
- Main session (60 minutes)
- How we organize the session will depend on the proposals submitted, but it may include panels and/or lightning talks.
- Q&A (25 minutes)
- Continued Q&A and co-moderated discussion with all panelists
Submissions
Please submit to the conference-wide pool via this form.
Presentation Type: Research Session
Proposal Title: Provide a title for your position on the topic.
Proposal Description: Please briefly discuss how your expertise might address the questions above and suggest additional questions that should be included in the discussion.
Feel free to email us with any questions.
Session Organizers
Ronald E. Robertson
Research Scientist, Stanford Internet Observatory
ronaldedwardrobertson.com
[email protected]
Daniel Griffin
Search Researcher, Archignes
danielsgriffin.com
[email protected]
Resources
Past Trust and Safety Conferences
Here are some resources (including prior workshops, academic papers, and news articles) that might be useful.
- Workshop: The Search Futures Workshop (March 24, 2024)
- Shah & Bender's "Envisioning Information Access Systems: What Makes for Good Tools and a Healthy Web?" (April 15, 2024)
- Broderick's "Does anyone even want an AI search engine?" (February 21, 2024)
- Knight's "Chatbot Hallucinations Are Poisoning Web Search" (October 5, 2023)
- Workshop: Task Focused IR in the Era of Generative AI (September 28-29, 2023)
- Lindemann's "Sealed Knowledges: A Critical Approach to the Usage of LLMs as Search Engines" (August 29, 2023)
- Hayhurst's "Generative AI Threatens Diversity and Hyperlinks" (May 30, 2023)
- Shah & Bender's "Situating Search" (March 14, 2022)
- Khattab, Potts, & Zaharia's "A Moderate Proposal for Radically Better AI-powered Web Search" (July 6, 2021)