Concepts
Core Objects
The Foundation4.ai API server is built around RAG pipelines that leverage OpenAI-compatible LLMs. Here, an LLM is any server exposing endpoints that follow the Chat Completion API described in the OpenAI Chat Completion documentation.
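To make "OpenAI-compatible" concrete, here is a minimal sketch of a Chat Completion request body. The base URL, model name, and message contents are assumptions for illustration, not values defined by the server:

```python
import json

# Hypothetical base URL of an OpenAI-compatible server (assumption).
BASE_URL = "http://localhost:8000/v1"

# A request body in the shape defined by the OpenAI Chat Completion API.
payload = {
    "model": "example-model",  # placeholder model name (assumption)
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is a RAG pipeline?"},
    ],
    "temperature": 0.2,
}

# In practice this body would be POSTed to f"{BASE_URL}/chat/completions";
# here we only serialize it to show the wire format.
body = json.dumps(payload)
```

Any server that accepts this request shape at a `/chat/completions` endpoint qualifies as an LLM in the sense used above.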
Embedding Providers are factories that offer groups of models for computing vector embeddings of textual queries. An Embedding Model is a particular instance of an Embedding Provider that specifies the exact model and any parameters used.
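The provider/model relationship can be sketched as a factory producing configured instances. All class, model, and parameter names below are illustrative assumptions, not the server's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class EmbeddingModel:
    # A concrete instance: provider name + exact model + fixed parameters.
    provider: str
    model: str
    params: dict = field(default_factory=dict)


class EmbeddingProvider:
    """Factory offering a group of embedding models (names are made up)."""

    def __init__(self, name, models):
        self.name = name
        self.models = set(models)

    def get_model(self, model, **params):
        # Instantiating a model pins down the exact choice and parameters.
        if model not in self.models:
            raise ValueError(f"unknown model: {model}")
        return EmbeddingModel(self.name, model, params)


provider = EmbeddingProvider("example-provider", ["small-embed", "large-embed"])
model = provider.get_model("small-embed", dimensions=384)
```

The point of the split is that a provider describes what is available, while a model fixes one choice plus its parameters so that every embedding in a pipeline is computed the same way.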
Akin to Embedding Providers, a Text Splitter is a particular instantiation of a Text Splitter Provider with fixed parameters.
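As a sketch of what "an instantiation with fixed parameters" means, here is a toy character-based splitter; the function name and parameters are assumptions standing in for a real Text Splitter Provider:

```python
def make_text_splitter(chunk_size, chunk_overlap=0):
    """Instantiate a naive character splitter with fixed parameters
    (a stand-in for a Text Splitter Provider; names are illustrative)."""

    def split(text):
        # Advance by chunk_size minus overlap so adjacent chunks share
        # chunk_overlap characters of context.
        step = chunk_size - chunk_overlap
        return [text[i : i + chunk_size] for i in range(0, len(text), step)]

    return split


# The returned splitter is the "Text Splitter": provider + set parameters.
splitter = make_text_splitter(chunk_size=5, chunk_overlap=1)
chunks = splitter("abcdefghij")
```

Once instantiated, the splitter carries its parameters with it, so every Document processed by a Pipeline's default splitter is fragmented consistently.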
A Pipeline is a combination of:
- A Vector Store (the Pipeline functions as the vector store's orchestrator)
- An Embedding Model
- A default Text Splitter
- Allowed Document Classifications
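The combination above can be sketched as a single configuration object. The field names and example values are assumptions for illustration, not the server's actual Pipeline schema:

```python
from dataclasses import dataclass, field


@dataclass
class Pipeline:
    # Illustrative fields mirroring the components listed above.
    name: str
    embedding_model: str            # which Embedding Model to use
    default_text_splitter: str      # splitter applied unless overridden
    allowed_classifications: list   # classifications Documents may carry
    vector_store: dict = field(default_factory=dict)  # store the Pipeline orchestrates


pipeline = Pipeline(
    name="docs-pipeline",
    embedding_model="small-embed",
    default_text_splitter="chars-512",
    allowed_classifications=["public", "internal"],
)
```

Grouping these choices in one object means every Document added to the Pipeline is embedded, split, and classified under the same settings.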
A Document is a piece of text belonging to a Pipeline; it is split into Document Fragments. Documents can also carry metadata, which may be used to filter and narrow queries.
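The Document-to-Fragment relationship and metadata filtering can be sketched as follows; the field names and the sentence-level fragmentation are illustrative assumptions:

```python
# A Document with metadata (field names are made up for illustration).
document = {
    "pipeline": "docs-pipeline",
    "text": "First sentence. Second sentence.",
    "metadata": {"classification": "public", "source": "handbook"},
}

# Naive fragmentation by sentence; each Fragment inherits the Document's
# metadata so queries can still be narrowed after splitting.
fragments = [
    {"text": part.strip() + ".", "metadata": document["metadata"]}
    for part in document["text"].split(".")
    if part.strip()
]

# Metadata filtering narrows a query to matching Fragments only.
public_fragments = [
    f for f in fragments if f["metadata"]["classification"] == "public"
]
```

A query against the Pipeline would first apply such metadata filters and then search only the surviving Fragments' embeddings.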