Development
1. Local Development Setup
Prerequisites
| Tool | Version | Purpose |
|---|---|---|
| Docker | Latest | Temporal, PostgreSQL |
| Node.js | 20+ | Next.js frontend |
| Python | 3.11+ | FastAPI backend |
| pnpm | Latest | Frontend package manager |
Quick Start
# Clone repository
git clone [repo-url] && cd learnpanta
# Start infrastructure
docker-compose up -d # Temporal + PostgreSQL
# Backend
cd backend
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
cp .env.example .env # Configure DATABASE_URL, GOOGLE_API_KEY
uvicorn app.main:app --reload
# Worker (separate terminal)
cd backend
python -m app.worker
# Frontend (separate terminal)
cd frontend
pnpm install
cp .env.example .env.local # Configure NEXT_PUBLIC_API_URL
pnpm dev
Environment Variables
Backend (.env):
DATABASE_URL=postgresql://postgres:password@localhost:5432/learnpanta
GOOGLE_API_KEY=your-gemini-api-key
TEMPORAL_ADDRESS=localhost:7233
API_KEY=dev-api-key
# Optional (grounded search / vector search / analytics)
# PINECONE_API_KEY=...
# TIMESCALE_HOST=localhost
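The required/optional split above can be sketched as a minimal stdlib loader (an illustration only; the backend's actual config handling may differ):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    database_url: str      # required
    google_api_key: str    # required
    temporal_address: str  # optional, defaults to local Temporal
    api_key: str           # optional, defaults to the dev key

def load_settings() -> Settings:
    # Raises KeyError early if a required variable is missing
    return Settings(
        database_url=os.environ["DATABASE_URL"],
        google_api_key=os.environ["GOOGLE_API_KEY"],
        temporal_address=os.getenv("TEMPORAL_ADDRESS", "localhost:7233"),
        api_key=os.getenv("API_KEY", "dev-api-key"),
    )
```

Failing fast on missing required variables beats a cryptic connection error later.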
Frontend (.env.local):
NEXT_PUBLIC_API_URL=http://localhost:8000/api/v1
NEXT_PUBLIC_WS_URL=ws://localhost:8000
GEMINI_API_KEY=your-gemini-api-key
2. Project Structure
learnpanta/
├── backend/
│ ├── app/
│ │ ├── main.py # FastAPI entry point
│ │ ├── routers/
│ │ │ ├── sessions.py # Exam session CRUD
│ │ │ ├── academic.py # Exams, papers, curator
│ │ │ ├── debrief.py # Debrief orchestrator + brief explainer
│ │ │ ├── search.py # Semantic search (Pinecone)
│ │ │ ├── analytics.py # TimescaleDB metrics
│ │ │ └── interview.py # Voice-based oral exam
│ │ ├── agent/
│ │ │ ├── workflows.py # MarathonSessionWorkflow
│ │ │ ├── activities.py # Agent implementations
│ │ │ ├── curator.py # Content generation
│ │ │ └── ingestion.py # WebSocket telemetry
│ │ ├── services/
│ │ │ ├── llm.py # Gemini API wrapper
│ │ │ ├── vectors.py # Pinecone integration
│ │ │ └── metrics.py # TimescaleDB client
│ │ ├── models.py # SQLAlchemy models
│ │ ├── schemas.py # Pydantic schemas
│ │ ├── crud.py # Database operations
│ │ └── database.py # DB connection
│ ├── worker.py # Temporal worker entry
│ ├── requirements.txt
│ └── Dockerfile
├── frontend/
│ ├── app/ # Next.js 14 app router
│ │ ├── dashboard/ # Main dashboard
│ │ ├── solo/ # Solo exam mode
│ │ ├── docs/ # Documentation portal
│ │ └── api/ # API routes (proxy)
│ ├── components/
│ │ ├── exam/
│ │ │ ├── ExamEngine.tsx # Core exam component
│ │ │ ├── AIDebrief.tsx # Post-exam AI review shell
│ │ │ ├── ScaffoldDebriefTab.tsx # Review/Explore via scaffold agent
│ │ │ └── QuestionCard.tsx # Question display
│ │ ├── canvas/
│ │ └── MarathonAgent.tsx # Telemetry + MediaPipe
│ ├── lib/
│ │ ├── elkLayout.ts # ELK graph layout
│ │ ├── audioSync.ts # TTS audio utilities
│ │ └── debrief-orchestrator/ # Beat-synced review player
│ ├── hooks/
│ │ ├── useMediaPipe.ts # Face detection
│ │ ├── useLiveGemini.ts # Real-time Gemini
│ │ └── useAuth.ts # Authentication
│ └── services/
│ └── examService.ts # API client
└── docs/ # Documentation
3. Key Concepts
Temporal Workflows
Every exam session is a Temporal workflow:
# Starting a workflow
from temporalio.client import Client
client = await Client.connect("localhost:7233")
session_id = 123
handle = await client.start_workflow(
    MarathonSessionWorkflow.run,
    MarathonSessionInput(session_id=session_id, paper_id=456),
    id=f"marathon-session-{session_id}",
    task_queue="marathon-session-queue",
)
# Sending signals (metrics)
await handle.signal("add_metric", {"type": "answer_change", ...})
# Querying state
state = await handle.query("get_state")
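A plain-Python sketch of the state contract behind the `add_metric` signal and `get_state` query (the real fields live in `app/agent/workflows.py` and may differ):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class SessionState:
    """Illustrative state a marathon workflow might accumulate."""
    phase: str = "in_progress"
    metrics: list[dict[str, Any]] = field(default_factory=list)

    def add_metric(self, event: dict[str, Any]) -> None:
        # Signal handler: append-only, keeping replay deterministic
        self.metrics.append(event)

    def get_state(self) -> dict[str, Any]:
        # Query handler: cheap, read-only snapshot
        return {"phase": self.phase, "metric_count": len(self.metrics)}
```

Signals mutate state; queries must never mutate, only read.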
LLM Service
All AI interactions go through LLMService:
from app.services.llm import LLMService
llm = LLMService()
# Fast operations (review scripts, analysis)
result = await llm.generate(prompt, model=llm.flash_model_id)
# Deep reasoning (feedback synthesis)
result = await llm.generate(prompt, model=llm.pro_model_id)
# With Google Search grounding
result = await llm.grounded_search(query)
WebSocket Telemetry
Frontend streams metrics via WebSocket:
// Frontend (ExamEngine.tsx)
const ws = new WebSocket(`${WS_URL}/ws/stream/${sessionId}`);
ws.onopen = () => {
  ws.send(JSON.stringify({
    type: "answer_change",
    question_id: "q1",
    from: "A",
    to: "B",
    timestamp: Date.now()
  }));
};
// Backend (ingestion.py)
@router.websocket("/ws/stream/{session_id}")
async def websocket_endpoint(websocket: WebSocket, session_id: int):
    await websocket.accept()
    # Look up the session's workflow by the id used at start_workflow time
    # (temporal_client here stands for the connected temporalio Client)
    handle = temporal_client.get_workflow_handle(f"marathon-session-{session_id}")
    while True:
        data = await websocket.receive_json()
        await handle.signal("add_metric", data)
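Incoming telemetry is untrusted browser input, so it is worth validating before it reaches the workflow. A hedged sketch (event-type names other than `answer_change` are hypothetical):

```python
from typing import Any

# Hypothetical whitelist; only "answer_change" is documented above
ALLOWED_EVENT_TYPES = {"answer_change", "focus_drop", "pause"}

def validate_metric(data: dict[str, Any]) -> dict[str, Any]:
    """Reject malformed telemetry before it is signaled to the workflow."""
    event_type = data.get("type")
    if event_type not in ALLOWED_EVENT_TYPES:
        raise ValueError(f"unknown event type: {event_type!r}")
    if "timestamp" not in data:
        raise ValueError("missing timestamp")
    return data
```

Rejecting bad events at the socket keeps garbage out of workflow history, which is immutable once recorded.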
Frontend Architecture
The frontend uses a layered component architecture:
Frontend patterns:
- Data fetching: use `apiClient` (fetch wrapper) + SWR-like manual cache; avoid direct `fetch` in components.
- State: prefer React state per page + lightweight stores; no global Redux.
- Error boundaries: wrap long-running/streaming views (Debrief, Canvas) with suspense + fallback UI; surface API errors via toasts.
- Accessibility: ensure form controls are labeled; provide keyboard shortcuts for review playback (space to play/pause planned).
TLDraw Canvas Integration
The review and explore tabs use the scaffold agent on top of TLDraw for visual explanations:
// components/exam/ScaffoldDebriefTab.tsx
// Uses TldrawAgent to prompt and apply canvas actions.
ELK Layout Engine
The elkLayout.ts module computes graph positions:
// lib/elkLayout.ts
import ELK from 'elkjs/lib/elk.bundled';
// Convert AI actions to layout graph
export function actionsToGraph(actions: CanvasAction[]): {
nodes: LayoutNode[];
edges: LayoutEdge[];
}
// Compute positions using ELK layered algorithm
export async function computeLayout(
nodes: LayoutNode[],
edges: LayoutEdge[]
): Promise<LayoutResult>
ELK handles automatic positioning with:
- Layered layout (top-to-bottom)
- Edge routing with bend points
- Overlap prevention
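For intuition: layered layout assigns each node to a layer one past its deepest predecessor (longest-path layering). A toy Python illustration of what ELK does far more thoroughly (assumes an acyclic graph):

```python
from collections import defaultdict

def assign_layers(nodes: list[str], edges: list[tuple[str, str]]) -> dict[str, int]:
    """Longest-path layering: layer(n) = 1 + max layer of predecessors, 0 for roots."""
    preds: dict[str, list[str]] = defaultdict(list)
    for src, dst in edges:
        preds[dst].append(src)
    layers: dict[str, int] = {}

    def layer_of(node: str) -> int:
        if node not in layers:
            layers[node] = 0 if not preds[node] else 1 + max(
                layer_of(p) for p in preds[node]
            )
        return layers[node]

    for node in nodes:
        layer_of(node)
    return layers
```

ELK additionally minimizes edge crossings within layers and routes edges around nodes, which this sketch ignores.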
Scaffold Review Loop
The review tab auto-iterates wrong questions by prompting the scaffold agent and clearing the canvas between questions.
Legacy Components (Not Used)
- `components/canvas/DebriefCanvas.tsx`
- `components/exam/BeatReviewTab.tsx`
useMediaPipe: Browser-side biometrics
const { focusScore, isLookingAtScreen, blinkRate } = useMediaPipe({
videoRef,
enabled: biometricsEnabled,
});
useLiveGemini: Real-time Gemini streaming
const { messages, isStreaming, sendMessage } = useLiveGemini({
sessionId,
questionId,
});
4. Adding Features
New API Endpoint
# backend/app/routers/your_feature.py
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from app.database import get_db

router = APIRouter(prefix="/your-feature", tags=["your-feature"])
@router.get("/")
async def get_something(db: Session = Depends(get_db)):
return {"data": "..."}
# Register in main.py
from app.routers import your_feature
app.include_router(your_feature.router, prefix="/api/v1")
New Temporal Activity
# backend/app/agent/activities.py
from temporalio import activity
from dataclasses import dataclass
@dataclass
class YourInput:
field: str
@activity.defn
async def your_activity(input: YourInput) -> dict:
# Do work
return {"result": "..."}
# Wire into workflow (timedelta imported from datetime)
result = await workflow.execute_activity(
your_activity,
YourInput(field="value"),
start_to_close_timeout=timedelta(minutes=5)
)
New Frontend Component
// frontend/components/YourComponent.tsx
"use client";
import { useState } from "react";
interface Props {
data: SomeType;
}
export function YourComponent({ data }: Props) {
const [state, setState] = useState(null);
return (
<div className="bg-white/5 backdrop-blur-lg rounded-xl p-6">
{/* Glassmorphic design */}
</div>
);
}
5. Testing
Backend (pytest)
cd backend
# Fast run
pytest -q
# Coverage (uses pytest.ini to scope to app/)
pytest --cov=app --cov-report=term-missing
Notes:
- Tests run against SQLite by default; no external services required. Override with `DATABASE_URL` if you need Postgres.
- Key suites live in `backend/tests/` (auth, CRUD, LLM service, activities, workflows, session router). Fixtures are defined in `backend/tests/conftest.py`.
- When modifying workflows, run targeted checks: `pytest tests/test_workflows.py -k marathon`.
Frontend checks
cd frontend
pnpm lint # ESLint
pnpm build # Next.js build smoke
There is no Playwright/Cypress E2E suite yet. For UI flows, rely on manual checks in the browser or add Playwright under `frontend/tests/e2e` (recommended next step).
Temporal/worker smoke
- Ensure the worker boots: `python -m app.worker` (backend) or `pnpm start-worker` (frontend TLDraw worker).
- If the Temporal server is unavailable, tests still pass because workflow tests use mocked clients.
CI expectations
Cloud Build pipelines (cloudbuild.yaml, cloudbuild-frontend.yaml) currently build and deploy images only. They do not run tests or lint. Run pytest and pnpm lint/build locally or wire them into Cloud Build before merging.
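If you do wire tests into Cloud Build, a step like the following could run before the image build (a sketch only; adjust the image and working directory to the actual pipeline):

```yaml
# Hypothetical test step for cloudbuild.yaml
steps:
  - name: "python:3.11"
    dir: "backend"
    entrypoint: "bash"
    args: ["-c", "pip install -r requirements.txt && pytest -q"]
```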
Mocking External Services
Mocking Gemini API:
# tests/conftest.py
@pytest.fixture
def mock_llm(mocker):
mock = mocker.patch("app.services.llm.llm_client.generate_content")
mock.return_value.text = '{"result": "test"}'
return mock
# In test (async, so it needs pytest-asyncio)
@pytest.mark.asyncio
async def test_feedback_agent(mock_llm):
    result = await feedback_synthesis_agent(input)
assert mock_llm.called
assert result["summary"] is not None
Mocking Pinecone:
@pytest.fixture
def mock_pinecone(mocker):
mocker.patch("app.services.vectors.vector_service.search_similar")
return mocker
Frontend Tests
cd frontend
pnpm test # Unit tests with Vitest
pnpm test:e2e # E2E with Playwright (no suite exists yet; see "Frontend checks" above)
Component Testing:
// components/exam/__tests__/QuestionCard.test.tsx
import { render, screen } from '@testing-library/react';
test('renders question text', () => {
render(<QuestionCard question={mockQuestion} />);
expect(screen.getByText('What is...')).toBeInTheDocument();
});
Temporal Workflow Testing
# tests/test_workflows.py
from temporalio.testing import WorkflowEnvironment
from temporalio.worker import Worker

async def test_marathon_workflow():
async with await WorkflowEnvironment.start_time_skipping() as env:
async with Worker(
env.client,
task_queue="test-queue",
workflows=[MarathonSessionWorkflow],
activities=[telemetry_analyzer_agent, feedback_synthesis_agent],
):
result = await env.client.execute_workflow(
MarathonSessionWorkflow.run,
MarathonSessionInput(...),
id="test-workflow",
task_queue="test-queue",
)
assert result["readiness_score"] > 0
5.5 Error Handling Patterns
Backend: Temporal Activity Retries
Activities are automatically retried on failure:
# In workflow (RetryPolicy imported from temporalio.common)
result = await workflow.execute_activity(
feedback_synthesis_agent,
input_data,
start_to_close_timeout=timedelta(minutes=3),
retry_policy=RetryPolicy(
maximum_attempts=3,
initial_interval=timedelta(seconds=1),
backoff_coefficient=2.0,
),
)
Backend: Graceful LLM Failures
# services/llm.py
async def generate(self, prompt: str) -> dict:
try:
response = await self._client.generate_content(prompt)
return json.loads(response.text)
except json.JSONDecodeError:
logger.error("LLM returned invalid JSON", exc_info=True)
return {"error": "Invalid response format"}
except google.api_core.exceptions.ResourceExhausted:
logger.warning("Rate limited, using fallback")
return self._get_fallback_response()
Frontend: Error Boundaries
// components/canvas/DebriefCanvas.tsx
class CanvasErrorBoundary extends React.Component {
state = { hasError: false };
static getDerivedStateFromError() {
return { hasError: true };
}
componentDidCatch(error: Error, info: React.ErrorInfo) {
console.error('Canvas error:', error, info);
}
render() {
if (this.state.hasError) {
return <div className="text-red-500">Canvas failed to load</div>;
}
return this.props.children;
}
}
Frontend: SSE Reconnection
let eventSource: EventSource;
const connectSSE = (retryCount = 0) => {
  eventSource = new EventSource(url);
  eventSource.onerror = () => {
    eventSource.close(); // avoid leaking the failed connection
    if (retryCount < 3) {
      // linear backoff: 1s, 2s, 3s
      setTimeout(() => connectSSE(retryCount + 1), 1000 * (retryCount + 1));
    } else {
      setError('Connection failed');
    }
  };
};
6. Coding Standards
Python (Backend)
- Async: All I/O operations must be async
- Type Hints: Required on all function signatures
- Pydantic: All API request/response bodies
- Black: Code formatting (`black .`)
- isort: Import sorting (`isort .`)
# Good
async def get_exam(exam_id: int, db: Session) -> ExamResponse:
exam = await crud.get_exam(db, exam_id)
return ExamResponse.model_validate(exam)
# Bad
def get_exam(exam_id, db):
return db.query(Exam).get(exam_id)
TypeScript (Frontend)
- Tailwind: All styling (no CSS files)
- Shadcn: Interactive components
- Server Components: Default, use "use client" sparingly
- ESLint: `pnpm lint`
// Good: Server component by default
export default async function Page() {
const data = await fetchData();
return <ClientComponent data={data} />;
}
// Good: Client component when needed
"use client";
export function InteractiveWidget() {
const [state, setState] = useState();
// ...
}
Prompts
All LLM prompts follow the structure in docs/agents.md:
prompt = f"""
ROLE: [Professional persona]
TASK: [Specific action]
CONTEXT:
{context_data}
OUTPUT SCHEMA:
{{
"field": "type"
}}
CONSTRAINTS:
- [Rules]
"""
7. Debugging
Backend Logs
# Local
uvicorn app.main:app --reload --log-level debug
# Production
kubectl logs -f deployment/backend
Temporal UI
Access at http://localhost:8080 (local) to:
- View workflow history
- Inspect workflow state
- Debug failed activities
Frontend DevTools
- React DevTools for component inspection
- Network tab for API calls
- WebSocket frames for telemetry
8. Common Tasks
Reset Local Database
docker-compose down -v
docker-compose up -d
cd backend && alembic upgrade head
Generate Migrations
cd backend
alembic revision --autogenerate -m "description"
alembic upgrade head
Migration workflow:
- Never edit `alembic/versions` by hand after creation; regenerate instead.
- For schema-breaking changes, add a data backfill in the migration with `op.execute` and keep it idempotent.
- Before PR: run `alembic upgrade head` locally; if using SQLite, also test with Postgres if the change touches types/indexes.
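An idempotent backfill passed to `op.execute` might look like this (table and column names are hypothetical):

```sql
-- Re-running is a no-op because the WHERE clause only matches unset rows
UPDATE exams SET difficulty = 'medium' WHERE difficulty IS NULL;
```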
Update Dependencies
# Backend
pip install --upgrade package-name
pip freeze > requirements.txt
# Frontend
pnpm update package-name
Next Steps
- Deployment - Deploy to GCP
- Architecture - System overview
- Agents - AI agent handbook