API Endpoint Testing

API testing in GOVERN operates at two levels: automated test suites that run in CI, and manual spot-checks against live staging and production endpoints. Both are required before a release gate opens.

Automated API Tests

The API gateway has integration tests covering every route. These run against a local Miniflare instance with seeded test data.

```shell
# Run the full API test suite
cd packages/api-gateway
pnpm test:integration

# Run a specific route's tests
pnpm test:integration -- --grep "monitoring"
```

Test file structure

```
packages/api-gateway/src/__tests__/
├── routes/
│   ├── monitoring.test.ts
│   ├── build-events.test.ts
│   ├── assessments.test.ts
│   ├── discovery.test.ts
│   └── govern-connections.test.ts
├── lib/
│   ├── monitoring-rollup.test.ts
│   ├── monitoring-accumulator.test.ts
│   └── agent-context-assembler.test.ts
└── setup.ts
```

Writing a route test

```ts
import { describe, it, expect, beforeAll } from 'vitest';
import { createTestApp } from '../setup.js';

describe('POST /api/monitoring/emit', () => {
  let app: ReturnType<typeof createTestApp>;

  beforeAll(() => {
    app = createTestApp();
  });

  it('accepts valid monitoring events', async () => {
    const res = await app.request('/api/monitoring/emit', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer test-secret',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        systemId: 'test-system-001',
        eventType: 'inference',
        payload: { model: 'claude-sonnet-4-20250514', tokens: 1024 },
      }),
    });

    expect(res.status).toBe(202);
    const body = await res.json();
    // Responses use the standard envelope, so the fields live under `data`.
    expect(body).toMatchObject({
      data: {
        accepted: true,
        eventId: expect.any(String),
      },
    });
  });

  it('rejects missing auth', async () => {
    const res = await app.request('/api/monitoring/emit', {
      method: 'POST',
      body: JSON.stringify({ systemId: 'test' }),
    });

    expect(res.status).toBe(401);
  });
});
```

Response Shape Validation

All GOVERN API responses follow a standard envelope:

```jsonc
// Success
{
  "data": { ... },
  "meta": {
    "requestId": "uuid",
    "timestamp": "ISO-8601",
    "version": "1"
  }
}

// Error
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Human-readable message",
    "details": { ... } // optional
  },
  "meta": { "requestId": "uuid", "timestamp": "ISO-8601" }
}
```

Validate shapes with zod in tests:

```ts
import { z } from 'zod';

const EmitResponseSchema = z.object({
  data: z.object({
    accepted: z.boolean(),
    eventId: z.string().uuid(),
  }),
  meta: z.object({
    requestId: z.string(),
    timestamp: z.string(),
    version: z.string(),
  }),
});

// In your test:
const body = await res.json();
const parsed = EmitResponseSchema.safeParse(body);
expect(parsed.success).toBe(true);
```
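The same shape check is useful outside the test suite, e.g. in a spot-check script. A jq sketch (field names come from the envelope above; the helper name is hypothetical):

```shell
# Exit 0 only when the payload matches the documented emit success envelope.
check_emit_envelope() {
  echo "$1" | jq -e '
    (.data.accepted  | type == "boolean") and
    (.data.eventId   | type == "string")  and
    (.meta.requestId | type == "string")
  ' > /dev/null
}

check_emit_envelope '{"data":{"accepted":true,"eventId":"e-1"},"meta":{"requestId":"r-1"}}' \
  && echo "shape ok"
```

With `jq -e`, a `false` or `null` result maps to a non-zero exit status, so the helper composes with `&&`/`||` in scripts.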

Manual Spot-Checks

Before releasing any API change, run these curl spot-checks against staging. Replace $STAGING_URL and $AUTH_TOKEN with your values.

Health check

```shell
curl -s "$STAGING_URL/health" | jq .
# Expected: { "status": "ok", "timestamp": "...", "version": "..." }
```

Monitoring emit

```shell
curl -s -X POST "$STAGING_URL/api/monitoring/emit" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "systemId": "spot-check-001",
    "eventType": "inference",
    "payload": { "model": "test", "tokens": 100 }
  }' | jq .
# Expected: { "data": { "accepted": true, "eventId": "..." }, "meta": { ... } }
```
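The failure path deserves a spot-check too. Assuming the gateway returns the documented error envelope when systemId is omitted (an assumption; verify against the route's actual validation), the jq filter below is exercised here against a canned response so it can be sanity-checked offline:

```shell
# Canned error envelope matching the documented shape (illustrative values).
CANNED='{"error":{"code":"VALIDATION_ERROR","message":"systemId is required"},"meta":{"requestId":"r-1","timestamp":"2025-01-01T00:00:00Z"}}'
echo "$CANNED" | jq -r '.error.code'
# → VALIDATION_ERROR

# Against staging, the same filter applies to the live call:
#   curl -s -X POST "$STAGING_URL/api/monitoring/emit" \
#     -H "Authorization: Bearer $AUTH_TOKEN" \
#     -H "Content-Type: application/json" \
#     -d '{"eventType":"inference","payload":{}}' | jq -r '.error.code'
```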

Build events

```shell
curl -s -X POST "$STAGING_URL/api/build-events" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "commit",
    "archetypeIds": ["jarvis"],
    "skillsExercised": ["system-integration"],
    "description": "API spot-check test event"
  }' | jq .
```

Discovery

```shell
curl -s "$STAGING_URL/api/discovery" \
  -H "Authorization: Bearer $AUTH_TOKEN" | jq '.data.systems | length'
# Expected: a number >= 0
```

Assessments list

```shell
curl -s "$STAGING_URL/api/assessments" \
  -H "Authorization: Bearer $AUTH_TOKEN" | jq '{
    status: .data.assessments[0].status,
    count: (.data.assessments | length)
  }'
```
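Note that `.data.assessments[0].status` yields null when the list is empty. jq's `//` alternative operator substitutes a default, which keeps the spot-check output readable (shown here against a canned empty response):

```shell
# jq's // operator replaces a null result with a fallback value.
echo '{"data":{"assessments":[]}}' | jq -c '{
  status: (.data.assessments[0].status // "none"),
  count: (.data.assessments | length)
}'
# → {"status":"none","count":0}
```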

Rate Limit Testing

The GOVERN API enforces rate limits. Test these before release:

```shell
# Rapid-fire 20 requests to trigger rate limiting
for i in $(seq 1 20); do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer $AUTH_TOKEN" \
    "$STAGING_URL/api/monitoring/emit" \
    -X POST -H "Content-Type: application/json" \
    -d '{"systemId":"rate-test","eventType":"inference","payload":{}}')
  echo "Request $i: $STATUS"
done
```

Expected: requests 1–10 return 202, subsequent requests return 429 with a Retry-After header.
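When a 429 appears, the Retry-After value tells you how long to back off before retrying. A parsing sketch, run here against canned headers so the logic can be checked without hitting staging (`curl -sI` or `curl -D -` would supply real headers):

```shell
# Canned response headers (HTTP/2 lowercases header names on the wire).
HEADERS='HTTP/2 429
retry-after: 30
content-type: application/json'

# Extract the Retry-After value, case-insensitively.
RETRY_AFTER=$(echo "$HEADERS" | awk 'tolower($1) == "retry-after:" { print $2 }')
echo "retry after ${RETRY_AFTER}s"
# → retry after 30s
```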

Auth Boundary Testing

Every protected endpoint must reject unauthenticated requests:

```shell
# Array of protected endpoints
ENDPOINTS=(
  "GET /api/assessments"
  "POST /api/monitoring/emit"
  "GET /api/build-events"
  "GET /api/discovery"
)

for ENDPOINT in "${ENDPOINTS[@]}"; do
  METHOD=$(echo "$ENDPOINT" | cut -d' ' -f1)
  # Note: don't name this variable PATH — that would clobber command lookup.
  ROUTE=$(echo "$ENDPOINT" | cut -d' ' -f2)
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
    -X "$METHOD" "$STAGING_URL$ROUTE")
  echo "$ENDPOINT: $STATUS (expected 401)"
done
```
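To make the loop above fail fast in a script, the per-endpoint expectation can be factored into a helper. A sketch (the statuses passed in here are hard-coded so the logic is verifiable offline):

```shell
# Return non-zero when a protected endpoint did not reject with 401.
check_status() {
  local endpoint="$1" status="$2"
  if [ "$status" != "401" ]; then
    echo "FAIL: $endpoint returned $status (expected 401)"
    return 1
  fi
  echo "PASS: $endpoint"
}

check_status "GET /api/assessments" 401
# → PASS: GET /api/assessments
```

Inside the loop, `check_status "$ENDPOINT" "$STATUS" || exit 1` stops the run on the first endpoint that leaks through.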

API Test Checklist

Before opening a release gate, confirm:

  • All automated integration tests pass (pnpm test:integration)
  • Health check returns 200 on staging
  • Monitoring emit accepts valid payloads
  • Auth rejection returns 401 (not 403, not 500)
  • Rate limiting triggers at expected threshold
  • Response shapes match documented envelopes
  • Error responses include machine-readable code fields
  • No 500 errors in staging logs during spot-check window

Staging Environment

The staging environment for API testing:

| Variable   | Value |
|------------|-------|
| Base URL   | https://govern-api-staging.workers.dev |
| Auth token | Stored in 1Password: GOVERN Staging Auth Token |
| Supabase   | Separate staging project (not production) |
| Workers    | Staging version deployed via wrangler deploy --env staging |