# Configuration

Complete reference for all configuration options in LearnPanta.
## Environment Variables

### Backend

| Variable | Required | Default | Description |
|---|---|---|---|
| DATABASE_URL | Yes | sqlite:///./local_dev.db | Database connection string (PostgreSQL in production; SQLite default for local dev) |
| GOOGLE_API_KEY | Yes | - | Gemini API key for AI features |
| GEMINI_FLASH_MODEL | No | gemini-3-flash-preview | Override fast Gemini model ID |
| GEMINI_PRO_MODEL | No | gemini-3-pro-preview | Override deep-reasoning Gemini model ID |
| GEMINI_LEGACY_FLASH_MODEL | No | gemini-2.0-flash | Legacy fallback model (only used in compatibility paths) |
| API_KEY | Yes | test-secret-key | API authentication key (replace the default outside local dev) |
| TEMPORAL_ADDRESS | No | localhost:7233 | Temporal server address |
| TIMESCALE_HOST | No | timescaledb-service | TimescaleDB host for analytics |
| TIMESCALE_PORT | No | 5432 | TimescaleDB port |
| TIMESCALE_DB | No | metrics | TimescaleDB database name |
| TIMESCALE_USER | No | postgres | TimescaleDB user |
| TIMESCALE_PASSWORD | No | temporal | TimescaleDB password |
| PINECONE_API_KEY | No | - | Pinecone API key for semantic search |
| PINECONE_INDEX | No | exam-questions | Pinecone index name |
| PINECONE_NAMESPACE | No | questions | Pinecone namespace |
| ALLOWED_ORIGINS | No | - | Comma-separated CORS origins |
| GOOGLE_CLOUD_PROJECT | No | Auto-detected | GCP project ID for Vertex AI |
| GOOGLE_CLOUD_LOCATION | No | us-central1 | GCP region |
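As a minimal sketch of how the defaults above can be resolved at startup (illustrative `load_backend_settings` helper; the actual backend may use a settings library such as pydantic instead):

```python
import os

def load_backend_settings() -> dict:
    """Read a few of the documented variables, applying the table's defaults."""
    return {
        "DATABASE_URL": os.getenv("DATABASE_URL", "sqlite:///./local_dev.db"),
        "GEMINI_FLASH_MODEL": os.getenv("GEMINI_FLASH_MODEL", "gemini-3-flash-preview"),
        "TEMPORAL_ADDRESS": os.getenv("TEMPORAL_ADDRESS", "localhost:7233"),
        # Numeric variables arrive as strings and need an explicit cast.
        "TIMESCALE_PORT": int(os.getenv("TIMESCALE_PORT", "5432")),
    }

settings = load_backend_settings()
```

Centralizing the reads this way keeps every default in one place rather than scattered across call sites.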
### Frontend

| Variable | Required | Default | Description |
|---|---|---|---|
| NEXT_PUBLIC_API_URL | Yes | - | Backend API URL |
| NEXT_PUBLIC_WS_URL | No | - | WebSocket URL (defaults to API URL) |
| NEXT_PUBLIC_FIREBASE_API_KEY | Yes (cloud) | - | Firebase web key for auth |
| NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN | Yes (cloud) | - | Firebase auth domain |
| NEXT_PUBLIC_FIREBASE_PROJECT_ID | Yes (cloud) | - | Firebase project id |
| NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET | No | - | For asset uploads |
| NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID | No | - | Optional messaging |
| NEXT_PUBLIC_FIREBASE_APP_ID | Yes (cloud) | - | Firebase app id |
| NEXT_PUBLIC_TLDRAW_LICENSE_KEY | Yes (prod) | - | TLDraw commercial key (canvas) |
| GEMINI_API_KEY | Yes (local) | - | Gemini API key for Next.js server actions |
### TLDraw Scaffold Worker (Cloudflare)

The scaffold worker uses Wrangler and expects secrets in `.dev.vars` (local) or Wrangler secrets (prod):

```
GOOGLE_API_KEY=your_google_api_key
ANTHROPIC_API_KEY=optional
OPENAI_API_KEY=optional
```

When deploying, run `wrangler secret put GOOGLE_API_KEY` in `tldraw-agent-scaffold/`.
## Database Configuration

### Connection String Format

```
postgresql://USER:PASSWORD@HOST:PORT/DATABASE
```

Examples:

```bash
# Local development
DATABASE_URL=postgresql://postgres:password@localhost:5432/learnpanta

# Cloud SQL (via proxy)
DATABASE_URL=postgresql://admin:[email protected]:5432/learnpanta

# Cloud SQL (private IP)
DATABASE_URL=postgresql://admin:[email protected]:5432/learnpanta
### Connection Pool Settings

SQLAlchemy defaults are used. For production, consider:

```python
# In database.py
engine = create_engine(
    DATABASE_URL,
    pool_size=10,
    max_overflow=20,
    pool_pre_ping=True,
)
```
## AI Model Configuration

### Gemini API Setup

- Get an API key from Google AI Studio
- Set the `GOOGLE_API_KEY` environment variable:

```bash
export GOOGLE_API_KEY=AIzaSy...
```
### Model Selection

Models are configured in `backend/app/services/llm.py` (override via `GEMINI_FLASH_MODEL` / `GEMINI_PRO_MODEL`):

| Model | Usage | Config |
|---|---|---|
| gemini-3-flash-preview | Fast tasks (analysis, scripts) | Default for speed |
| gemini-3-pro-preview | Deep reasoning (feedback synthesis) | Used for complex tasks |

Note: LearnPanta defaults to Gemini 3 across primary workflows. A legacy `gemini-2.0-flash` fallback exists for compatibility but is not used in the main review pipeline.
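The override mechanism reduces to a pair of `os.getenv` reads plus a routing decision. A minimal sketch (illustrative `pick_model` helper; the real logic lives in `llm.py`):

```python
import os

# Resolve model IDs with the documented defaults; env overrides win when set.
FLASH_MODEL = os.getenv("GEMINI_FLASH_MODEL", "gemini-3-flash-preview")
PRO_MODEL = os.getenv("GEMINI_PRO_MODEL", "gemini-3-pro-preview")

def pick_model(deep_reasoning: bool) -> str:
    """Fast tasks use the flash model; complex synthesis uses the pro model."""
    return PRO_MODEL if deep_reasoning else FLASH_MODEL
```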
### Thinking Mode

Thinking mode is available for complex reasoning tasks:

```python
# In llm.py
thinking_config = types.ThinkingConfig(
    thinking_level="high"
)
```

It is currently disabled for performance; enable it for higher-quality reasoning at the cost of added latency.
## Temporal Configuration

### Worker Settings

```python
# In worker.py
Worker(
    client,
    task_queue="marathon-session-queue",
    workflows=[MarathonSessionWorkflow],
    activities=[
        incremental_analyzer_agent,
        feedback_synthesis_agent,
        persist_feedback,
        persist_state,
    ],
)
```
### Activity Timeouts

| Activity | Timeout | Retries |
|---|---|---|
| incremental_analyzer_agent | 1 minute | 3 |
| feedback_synthesis_agent | 3 minutes | 3 |
| persist_feedback | 10 seconds | 2 |
| persist_state | 5 seconds | 2 |
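These values could be kept as a single table of `timedelta`s that the worker consults when scheduling activities (a sketch under that assumption; the actual wiring in `worker.py` may differ):

```python
from datetime import timedelta

# Per-activity start-to-close timeouts and retry caps from the table above.
ACTIVITY_OPTIONS = {
    "incremental_analyzer_agent": {"timeout": timedelta(minutes=1), "max_attempts": 3},
    "feedback_synthesis_agent": {"timeout": timedelta(minutes=3), "max_attempts": 3},
    "persist_feedback": {"timeout": timedelta(seconds=10), "max_attempts": 2},
    "persist_state": {"timeout": timedelta(seconds=5), "max_attempts": 2},
}
```

Keeping the numbers in one mapping makes them easy to audit against this table.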
### Workflow Settings

```python
# History threshold for continue-as-new
if loop_count > 50:
    workflow.continue_as_new(args=[session_id, user_id, self._state])
```
## CORS Configuration

### Development

```python
origins = [
    "http://localhost:3000",
    "http://127.0.0.1:3000",
]
```

### Production

Set the `ALLOWED_ORIGINS` environment variable:

```bash
ALLOWED_ORIGINS=https://app.yourdomain.com,https://admin.yourdomain.com
```
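Parsing the comma-separated value with a development fallback can be sketched as follows (illustrative `resolve_origins` helper, assuming the localhost defaults shown above):

```python
import os

def resolve_origins() -> list:
    """Split ALLOWED_ORIGINS on commas; fall back to localhost for dev."""
    raw = os.getenv("ALLOWED_ORIGINS", "")
    if raw:
        # Strip whitespace and drop empty entries from trailing commas.
        return [o.strip() for o in raw.split(",") if o.strip()]
    return ["http://localhost:3000", "http://127.0.0.1:3000"]
```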
## Kubernetes Configuration

### Secrets

Create Kubernetes secrets for sensitive values:

```bash
kubectl create secret generic backend-secrets \
  --from-literal=database-url="postgresql://..." \
  --from-literal=google-api-key="AIzaSy..." \
  --from-literal=api-key="production-secret"
```

### Environment Injection

In the deployment YAML:

```yaml
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: backend-secrets
        key: database-url
  - name: GOOGLE_API_KEY
    valueFrom:
      secretKeyRef:
        name: backend-secrets
        key: google-api-key
```
### Resource Limits

Recommended for production:

```yaml
resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
    cpu: "1000m"
```
## Environment Profiles

### Local Development

- Backend: `.env` from `.env.example`; SQLite default (`sqlite:///./local_dev.db`).
- Frontend: `.env.local` with `NEXT_PUBLIC_API_URL=http://localhost:8000/api/v1`, `NEXT_PUBLIC_WS_URL=ws://localhost:8000`.
- AI: set `GOOGLE_API_KEY` (backend) and `GEMINI_API_KEY` (frontend server actions) if testing AI locally.
- Optional: `PINECONE_API_KEY` for semantic search; TimescaleDB vars if running analytics storage.
### Staging

- API/WS URLs point to the staging host.
- Use the staging Firebase project and TLDraw key; lower curator/analytics concurrency to control spend.
### Production

- Secrets from Secret Manager → Kubernetes secrets; avoid `.env` files on disk.
- Set `ALLOWED_ORIGINS` to exact frontend domains (HTTPS + WSS).
- Rotate `API_KEY`, Firebase credentials, and Gemini keys quarterly; audit Cloud SQL users monthly.
## Logging Configuration

### Log Level

Set in `main.py`:

```python
logging.basicConfig(level=logging.INFO)
```

For debug mode:

```python
logging.basicConfig(level=logging.DEBUG)
```
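The level could also be driven by an environment variable rather than edited in source (a sketch assuming a hypothetical `LOG_LEVEL` variable, which the app does not currently read):

```python
import logging
import os

# LOG_LEVEL is a hypothetical variable; unknown names fall back to INFO.
level_name = os.getenv("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)
# force=True replaces any handlers configured earlier in the process.
logging.basicConfig(level=level, force=True)
```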
### Structured Logging

Temporal activities include context:

```python
logger.info(f"Processing session {session_id}", extra={
    "activity_type": "feedback_synthesis",
    "attempt": activity.info().attempt,
})
```
## Feature Flags

Currently managed via environment variables:
| Flag | Variable | Default | Description |
|---|---|---|---|
| AI Features | GOOGLE_API_KEY | - | Enables all AI features when set |
| Biometrics | Frontend toggle | Off | User opt-in for camera tracking |
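The AI flag reduces to a presence check on the key (illustrative `ai_features_enabled` helper; an empty string counts as unset):

```python
import os

def ai_features_enabled() -> bool:
    """AI features turn on when GOOGLE_API_KEY is set and non-empty."""
    return bool(os.getenv("GOOGLE_API_KEY"))
```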
## File Paths

### Backend

| Path | Purpose |
|---|---|
| /app/app/ | Application code |
| /app/app/routers/ | API route handlers |
| /app/app/agent/ | Temporal workflows/activities |
| /app/app/services/ | External service clients |

### Frontend

| Path | Purpose |
|---|---|
| /app/ | Next.js app router pages |
| /components/ | React components |
| /services/ | API client functions |
| /content/docs/ | Documentation (symlinked) |
## Next Steps

- Deployment - Deploy with these settings
- Troubleshooting - Common configuration issues
- Security - Security best practices