When you prompt a large language model (LLM) to answer questions about recent events or about esoteric information, the model often cannot answer accurately. A popular technique for overcoming this problem is retrieval-augmented generation (RAG). When a query is submitted to a RAG solution, the solution searches a knowledge base for relevant articles, crafts a prompt that includes the relevant information as context, and then sends that prompt to an LLM, which generates a response to the query. A similar approach can be used to build so-called "agentic" solutions: applications that use LLMs to perform tasks on behalf of users, usually by calling APIs. These solutions work by pulling API reference information and samples into prompts.
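The RAG flow described above can be sketched in a few lines. Everything here is a simplified stand-in: the tiny in-memory knowledge base, the keyword-overlap scoring, and the article contents are all hypothetical, and a real system would use an embedding-based search and an actual LLM call.

```python
# Minimal RAG sketch: retrieve relevant articles, then build a prompt
# that supplies them to an LLM as context. All data here is illustrative.

KNOWLEDGE_BASE = [
    {"title": "Release notes 4.2",
     "text": "Version 4.2 adds OAuth support to the export API."},
    {"title": "Billing FAQ",
     "text": "Invoices are issued on the first business day of each month."},
]

def retrieve(query, kb, top_k=1):
    """Rank articles by naive keyword overlap with the query (a stand-in
    for real semantic search over embeddings)."""
    words = set(query.lower().split())
    scored = sorted(
        kb,
        key=lambda a: len(words & set(a["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, articles):
    """Assemble the retrieved articles into the context section of a prompt."""
    context = "\n\n".join(f"# {a['title']}\n{a['text']}" for a in articles)
    return (
        "Answer the question using only the context below.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

query = "When was OAuth support added to the export API?"
prompt = build_prompt(query, retrieve(query, KNOWLEDGE_BASE))
# The prompt now contains the release-notes article as grounding context
# and would be sent to an LLM for the final answer.
```

The quality of the final answer depends directly on what the retrieval step pulls in, which is why the accuracy and structure of the underlying content matters so much.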
These RAG and agentic solutions can be very impressive. But the stakes are higher than ever. In the past, when a user didn't get good search results, they could see that the results were poor, rephrase their query, and search again. Now, when an LLM-powered chatbot returns an authoritative-sounding answer, people take that answer at face value. And when those answers are wrong, those people are suing!
We’ve all heard that writers will be replaced by LLMs. But the key to getting these killer RAG and agentic solutions to work is pulling up-to-date, accurate, domain-specific content into LLM prompts – content created by writers. So, the truth is that content professionals are needed more than ever!
This presentation dives into content strategy for RAG and agentic solutions, and the crucial role that content professionals play in making those solutions successful.