Getting Started with Foundation4.ai

Prerequisites

Before you begin, ensure you have one of the following:

  • Kubernetes cluster with Helm (for Kubernetes deployments) - Install Helm
  • Docker and Docker Compose (for Docker deployments) - Install Docker

You will also need access to an OpenAI-compatible LLM.

Deployment Options

Option 1: Kubernetes with Helm Chart

Kubernetes 1.35+ is recommended for full support of image volumes. If you're using Kubernetes 1.31-1.34, you can enable this feature through feature gates, as shown below.
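
A minimal sketch of turning the gate on, using minikube purely as an illustration (ImageVolume is the upstream Kubernetes feature gate name; how you pass component flags depends on your distribution):

# Illustrative only: start a local cluster with the ImageVolume gate enabled
minikube start --feature-gates=ImageVolume=true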

Deploy Foundation4.ai to a Kubernetes cluster using Helm for production workloads. This option provides the scalability, high availability, and operational flexibility required for production environments. With built-in health checks, rolling updates, and easy configuration management, Kubernetes ensures your Foundation4.ai deployment can handle enterprise workloads reliably.

Step 1: Add the Helm Repository

# Add Foundation4.ai Helm repository
helm repo add foundation4ai https://charts.foundation4ai.com
helm repo update
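
To confirm the repository was added and the chart is visible:

# Search the newly added repo for available charts
helm search repo foundation4ai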

Step 2: Create Values File

Create a foundation4ai-values.yaml file with your configuration:

image:
  repository: foundation4ai/server
  tag: latest

ingress:
  enabled: true
  hosts:
    - host: foundation4ai.example.com
      paths:
        - path: /
          pathType: Prefix

config:
  database:
    url: postgresql://user:password@postgres:5432/foundation4ai
  redis:
    url: redis://redis:6379
  nats:
    url: nats://nats:4222

secrets:
  appSecret: your-secure-secret
  masterKey: your-master-key-uuid
  masterSecret: your-master-secret
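
One way to generate the secret values, using standard tools (any sufficiently random values work; masterKey is a UUID, per the placeholder above):

# Random values for appSecret and masterSecret
openssl rand -hex 32

# A UUID for masterKey
uuidgen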

Step 3: Install the Helm Chart

# Create namespace
kubectl create namespace foundation4ai

# Install the chart
helm install foundation4ai foundation4ai/foundation4ai \
  --namespace foundation4ai \
  --values foundation4ai-values.yaml

# Verify the installation
kubectl get pods -n foundation4ai

Step 4: Verify the Deployment

# Check pod status
kubectl get pods -n foundation4ai

# View logs
kubectl logs -n foundation4ai deployment/foundation4ai-server

# Port forward to test locally
kubectl port-forward -n foundation4ai svc/foundation4ai 8000:8000
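
With the port-forward active, a plain request from your workstation is a quick reachability check (the chart's exact health or status route isn't covered here, so this only confirms the service answers):

# Confirm the service responds through the port-forward
curl -i http://localhost:8000/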

Option 2: Docker Compose

Docker Compose is the easiest way to get Foundation4.ai running locally with all dependencies.

Step 1: Get the Docker Compose Configuration

Foundation4.ai provides a complete Docker Compose setup with all required services:

# Clone or navigate to the Foundation4.ai repository
cd foundation4ai

# Review the Docker Compose configuration
cat docker-compose.yml

Step 2: Configure Environment Variables

Create a docker-compose.env file with your configuration:

# Example docker-compose.env
FOUNDATION4AI_DATABASE_URL=postgresql://user:password@postgres:5432/foundation4ai
FOUNDATION4AI_REDIS_URL=redis://redis:6379
FOUNDATION4AI_NATS_URL=nats://nats:4222
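
To confirm Compose picks these up, render the fully resolved configuration before starting anything:

# Print the effective configuration with variables substituted
docker compose --env-file docker-compose.env config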

Step 3: Create Secrets

Create a docker-compose.secrets file with required secrets:

FOUNDATION4AI_APP_SECRET=your-secure-random-secret
FOUNDATION4AI_APP_MASTER_KEY=your-master-key-uuid
FOUNDATION4AI_APP_MASTER_SECRET=your-master-secret

Step 4: License Setup

The first time you run Foundation4.ai, you'll need to obtain a license:

# Start the services
docker compose --env-file docker-compose.env --env-file docker-compose.secrets up

# Note the system ID from the foundation4ai-license service
# Submit this ID to get a license string
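
If the system ID scrolls past in the combined output, you can pull up just the license service's logs (the exact log wording may vary by version):

# Show only the license service's output
docker compose logs foundation4ai-license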

Step 5: Add License and Restart

Once you have your license string:

# Add to docker-compose.secrets
echo "FOUNDATION4AI_APP_LICENSE=your-license-string" >> docker-compose.secrets

# Restart the services
docker compose --env-file docker-compose.env --env-file docker-compose.secrets up

Step 6: Verify the Setup

# Check if services are running
docker compose ps

# API should be available at http://localhost:8000
# Dashboard at http://localhost:3000

Connecting to Your LLM

Foundation4.ai supports any OpenAI-compatible LLM provider. Here's how to configure it:

Using OpenAI API

import requests  # For calling the Foundation4.ai API

api_url = "http://localhost:8000"  # Or your deployed instance

# Configure the LLM endpoint; this dict is passed as "llm_endpoint"
# when querying a pipeline (see "Query Your Documents" below)
llm_config = {
    "provider": "openai",
    "api_key": "sk-YOUR-API-KEY",
    "model": "gpt-4",
    "base_url": "https://api.openai.com/v1"
}

Using Ollama (Local LLM)

# Ollama provides OpenAI-compatible endpoints
llm_config = {
    "provider": "openai",
    "api_key": "ollama",  # Ollama doesn't require a key
    "model": "mistral",
    "base_url": "http://my-llm.example.com:11434/v1"
}
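
A quick way to confirm the endpoint really speaks the OpenAI API is to list its models via Ollama's OpenAI-compatible route:

# Should return a JSON list of the models Ollama is serving
curl http://my-llm.example.com:11434/v1/models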

Using Other OpenAI-Compatible Services

# Any service with an OpenAI-compatible API works
llm_config = {
    "provider": "openai",
    "api_key": "your-api-key",
    "model": "your-model",
    "base_url": "https://your-llm-provider.com/v1"
}

Making Your First API Request

1. Create a Pipeline

A pipeline is a collection of documents with a specific configuration:

curl -X POST http://localhost:8000/api/pipelines \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My First Pipeline",
    "embedding_model": "fastembed",
    "text_splitter": "recursive_character",
    "classifications": ["general", "confidential"]
  }'
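
The create call returns the new pipeline, whose ID the next steps reference as {pipeline_id}. A sketch for capturing it in a shell variable with jq, assuming the response includes an "id" field (adjust to the actual response schema):

# Capture the pipeline ID for the upload and query steps
PIPELINE_ID=$(curl -s -X POST http://localhost:8000/api/pipelines \
  -H "Content-Type: application/json" \
  -d '{"name": "My First Pipeline", "embedding_model": "fastembed", "text_splitter": "recursive_character", "classifications": ["general", "confidential"]}' \
  | jq -r '.id')
echo "$PIPELINE_ID"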

2. Upload Documents

curl -X POST http://localhost:8000/api/pipelines/{pipeline_id}/documents \
  -H "Content-Type: multipart/form-data" \
  -F "file=@my_document.pdf" \
  -F "classification=general"

3. Query Your Documents

curl -X POST http://localhost:8000/api/pipelines/{pipeline_id}/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What does the document say about X?",
    "classification": "general",
    "llm_endpoint": {
      "api_key": "sk-YOUR-API-KEY",
      "model": "gpt-4",
      "base_url": "https://api.openai.com/v1"
    }
  }'

What's Next?

Now that you have Foundation4.ai running, explore these resources:

  • Concepts for a detailed explanation of the core functionality of the Foundation4.ai platform.
  • Examples for Python code samples that show how to interact with the platform.