LLMs

Large Language Models (LLMs) are advanced AI models trained on vast amounts of text data to understand and generate human-like text. They power applications ranging from chatbots and content generation to code completion and question answering.

LLM Management in Foundation4.ai

In our system, LLMs are treated as reusable, pre-configured resources. Once you create an LLM record, it stores all the necessary connection details and configuration parameters, so you can invoke the model repeatedly without respecifying those technical details on each call.

Stored Configuration

When you register an LLM, the system stores:

  • Model identifier: The specific model name or version (e.g., gpt-4, llama-2-7b)
  • Endpoint: The API endpoint or local server address
  • Authentication: API keys or credentials needed to access the model
  • Default parameters: Temperature, max tokens, and other generation settings

Simplified Usage

After creation, you can call your LLM using just its identifier, instead of repeating the configuration values supplied at creation. This approach works for both remote LLMs (cloud-hosted models such as OpenAI or Anthropic) and local LLMs (self-hosted models running on your own infrastructure), providing a consistent interface regardless of where the model is deployed.
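The identifier-based calling pattern can be sketched as follows. This is a minimal, hypothetical illustration: the `LLMRegistry` class, its `register` and `invoke` methods, and the record fields are assumptions, not the real Foundation4.ai API, and the invoke step only formats a string where a real implementation would send a request to the stored endpoint.

```python
from dataclasses import dataclass, field

# Hypothetical record holding a model's stored configuration.
@dataclass
class LLMRecord:
    model: str
    endpoint: str
    defaults: dict = field(default_factory=dict)

# Hypothetical registry: callers invoke a model by identifier alone.
class LLMRegistry:
    def __init__(self):
        self._records: dict[str, LLMRecord] = {}

    def register(self, identifier: str, record: LLMRecord) -> None:
        self._records[identifier] = record

    def invoke(self, identifier: str, prompt: str, **overrides) -> str:
        # Look up the stored config; the caller supplies only the
        # identifier, a prompt, and optional per-call overrides.
        record = self._records[identifier]
        params = {**record.defaults, **overrides}
        # A real implementation would POST to record.endpoint here.
        return f"[{record.model}@{record.endpoint}] {prompt} ({params})"

registry = LLMRegistry()
registry.register("local-llama", LLMRecord("llama-2-7b", "http://localhost:8080"))
registry.register("remote-gpt", LLMRecord("gpt-4", "https://api.openai.com/v1",
                                          {"temperature": 0.2}))

# The call shape is identical whether the model is local or remote.
local_reply = registry.invoke("local-llama", "Hello")
remote_reply = registry.invoke("remote-gpt", "Hello", max_tokens=256)
```

Because the endpoint and credentials are resolved from the stored record, swapping a local model for a remote one (or vice versa) does not change any calling code.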