Agents
Agents are sets of instructions that interact with LLMs and pipelines. Namely, they consist of
- A prompt with placeholders
- Tools the LLM may call during execution (in development)
To execute queries against Agents, it is necessary to specify
- An LLM endpoint
- A Pipeline
Note that Agents do not maintain a history of answers or any other state besides the documents that were loaded into the pipeline store.
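As a rough illustration, the two required execution parameters can be thought of as a small request configuration. This is a hypothetical sketch; the field names are illustrative and not the actual Foundation4.ai API.

```python
# Hypothetical shape of a query against an Agent.
# "llm_endpoint" and "pipeline" are the two required settings described above;
# the names here are invented for illustration.
agent_query = {
    "llm_endpoint": "https://api.example.com/v1/chat",  # which LLM serves the request
    "pipeline": "docs-pipeline",                        # which pipeline's store is searched
    "query": "What is RAG?",                            # the user input
}
```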
Prompts
Prompts are the instructions and context you provide to an LLM to guide its responses. In Foundation4.ai, prompts are structured conversations that define how your agent interacts with users and processes information.
Message Roles
Prompts are organized as a series of messages, each with a specific role:
- System: Defines the AI's behavior, personality, and constraints. Sets the overall context and rules for the conversation.
  Example: System: You are a technical documentation assistant. Provide clear, concise answers based on the provided context.
- Assistant: Represents previous responses from the AI. Used to establish conversation patterns or provide examples.
  Example: Assistant: I'll help you understand the documentation. What would you like to know?
- User: Contains the user's questions or input. This is typically where you place the main query.
  Example: User: {query}
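The three roles above map naturally onto a chat-style message list. This is a minimal sketch: the dict shape mirrors common chat APIs and is not necessarily the exact structure Foundation4.ai uses internally.

```python
# The message roles from the example above as a chat-style message list.
messages = [
    {"role": "system",
     "content": "You are a technical documentation assistant. "
                "Provide clear, concise answers based on the provided context."},
    {"role": "assistant",
     "content": "I'll help you understand the documentation. What would you like to know?"},
    {"role": "user",
     "content": "{query}"},  # placeholder, filled in at execution time
]
```

Note that the user message still contains the literal `{query}` placeholder; it is only substituted when the agent executes.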
Connecting to Pipelines
Prompts use placeholders to connect with your configured pipeline. When an agent executes:
- The pipeline's vector store retrieves relevant documents
- Placeholders in your prompt (like {context}) are replaced with these documents
- The complete prompt is sent to the LLM for processing
This integration allows your agent to provide responses grounded in your specific document collection rather than relying solely on the LLM's training data.
Placeholders
Placeholders are dynamic variables within your prompt that get replaced with actual content during agent execution. They are denoted by wrapping variable names in curly braces, such as {query} or {context}.
How Placeholders Work
When an agent executes a query, placeholders are filled in during the vector search stage. The agent performs the following steps:
- Vector Search: The agent uses the pipeline's vector store to retrieve relevant documents based on the user's query
- Placeholder Replacement: Retrieved documents and other relevant data are inserted into the prompt where placeholders appear
- LLM Invocation: The complete prompt with replaced placeholders is sent to the LLM for generation
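The three steps above can be sketched with plain string substitution. `retrieve()` is a stand-in for the pipeline's vector store search, not a Foundation4.ai function, and the retrieved document text is invented for illustration.

```python
def retrieve(query: str) -> list[str]:
    # Stand-in for step 1, the pipeline's vector search.
    # A real store would return the documents most similar to the query.
    return ["RAG combines retrieval with generation."]

def fill_placeholders(template: str, query: str, docs: list[str]) -> str:
    # Step 2: insert retrieved documents and the user input into the prompt.
    context = "\n".join(docs)
    return template.replace("{context}", context).replace("{query}", query)

template = "Context: {context}\nUser: {query}"
query = "What is RAG?"

docs = retrieve(query)                              # 1. vector search
prompt = fill_placeholders(template, query, docs)   # 2. placeholder replacement
# 3. `prompt` would now be sent to the LLM for generation
```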
Common Placeholders
- {query}: The user's input question or request
- {context}: Retrieved documents from the vector store that are relevant to the query
- {documents}: Similar to {context}; contains the semantically similar documents found via vector search
Prompt Example with Placeholders
System: You are a helpful assistant. Use the following context to answer questions.
Context: {context}
User: {query}
In this example, when a user asks "What is RAG?", the agent will:
- Search the vector store for documents related to "RAG"
- Replace {context} with the retrieved documents
- Replace {query} with "What is RAG?"
- Send the complete prompt to the LLM for response generation
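Tracing the example concretely: after both replacements, the final prompt contains the retrieved text in place of `{context}` and the user's question in place of `{query}`. The retrieved document text below is invented for illustration.

```python
# The example prompt with both placeholders substituted.
template = (
    "System: You are a helpful assistant. Use the following context to answer questions.\n"
    "Context: {context}\n"
    "User: {query}"
)

# Invented stand-in for a document retrieved from the vector store.
retrieved = "RAG (Retrieval-Augmented Generation) grounds LLM answers in retrieved documents."

prompt = template.replace("{context}", retrieved).replace("{query}", "What is RAG?")
# `prompt` is now the complete text sent to the LLM.
```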