Database Operations

The GOVERN database is Supabase (Postgres + pgvector). This page covers the database operations that engineers and operations teams perform: writing and applying migrations, verifying RLS policies, checking backups, and maintaining healthy database performance.

Migration Workflow

All database schema changes go through migration files. Migrations are the only way to change the production schema — no ad-hoc DDL.

Creating a migration

# From the repository root
cd Chairman-Infrastructure
# Create a new migration file
supabase migration new <migration_name>
# This creates:
# supabase/migrations/YYYYMMDDHHMMSS_migration_name.sql

Name migrations descriptively: create_monitoring_pipeline, add_cost_governor_columns, add_rls_policy_monitoring_events.

Writing the migration

-- supabase/migrations/20260412120000_example_migration.sql
-- Always wrap in a transaction for safety
BEGIN;

-- Your DDL changes
CREATE TABLE example (
  id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id    UUID NOT NULL REFERENCES auth.users(id),
  name       TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);

-- Enable RLS on new tables immediately
ALTER TABLE example ENABLE ROW LEVEL SECURITY;

-- Add the required policies (note: the policy below references user_id,
-- so the table must define that column)
CREATE POLICY "Users see their own records"
  ON example
  FOR ALL
  USING (auth.uid() = user_id);

-- Rollback comments (document how to reverse if needed)
-- ROLLBACK: DROP TABLE example;

COMMIT;

Testing a migration locally

# Apply to local Supabase
supabase db push --local
# Verify the migration applied
supabase db diff --use-migra
# Run integration tests against the new schema
pnpm test:integration

Applying to production

# Apply to production Supabase
supabase db push
# Verify no diff remains
supabase db diff --use-migra
# Expected: no output (no diff)

Never run migrations directly in the Supabase SQL editor in production — they won’t be tracked in the migration history.

Row-Level Security (RLS) Verification

Every table in GOVERN must have RLS enabled with appropriate policies. No exceptions.

Verify RLS is enabled on all tables

-- Tables without RLS enabled
SELECT
  schemaname,
  tablename,
  rowsecurity
FROM pg_tables
WHERE schemaname = 'public'
  AND rowsecurity = false
ORDER BY tablename;

This query must return zero rows. If any tables appear, enable RLS immediately:

ALTER TABLE <tablename> ENABLE ROW LEVEL SECURITY;
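If the audit query surfaces several tables at once, a DO block can enable RLS across the whole public schema in one pass. This is a sketch; review the list of affected tables from the query above before running it in production:

```sql
-- Enable RLS on every public table that doesn't have it yet.
-- Review the audit query's output before running this in production.
DO $$
DECLARE
  t RECORD;
BEGIN
  FOR t IN
    SELECT schemaname, tablename
    FROM pg_tables
    WHERE schemaname = 'public'
      AND rowsecurity = false
  LOOP
    EXECUTE format('ALTER TABLE %I.%I ENABLE ROW LEVEL SECURITY',
                   t.schemaname, t.tablename);
  END LOOP;
END $$;
```

Remember that enabling RLS without policies blocks all non-service-role access, so follow up with the policy check in the next section.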

Verify policies exist

-- Tables with RLS enabled but no policies (effectively blocks all access)
SELECT
  t.tablename
FROM pg_tables t
LEFT JOIN pg_policies p
  ON p.tablename = t.tablename
  AND p.schemaname = t.schemaname
WHERE t.schemaname = 'public'
  AND t.rowsecurity = true
  AND p.tablename IS NULL;

Standard RLS policy patterns

-- Read-own (user can only see their own rows)
CREATE POLICY "read_own" ON table_name
  FOR SELECT USING (auth.uid() = user_id);

-- Write-own (user can read and modify only their own rows; when no
-- WITH CHECK clause is given, the USING expression is also applied
-- to newly written rows)
CREATE POLICY "write_own" ON table_name
  FOR ALL USING (auth.uid() = user_id);

-- Org-scoped (user can see rows in their org)
CREATE POLICY "org_scoped" ON table_name
  FOR SELECT USING (
    org_id IN (
      SELECT org_id FROM org_members WHERE user_id = auth.uid()
    )
  );

-- Service role bypass (server-side API can see all)
-- Note: the service role key bypasses RLS automatically — no policy needed
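For INSERT, a policy takes only a WITH CHECK clause (USING is not allowed), which validates the rows a user is permitted to create. A sketch using the same placeholder table and columns as the patterns above:

```sql
-- Org-scoped inserts: WITH CHECK rejects rows whose org_id is not one
-- of the user's orgs, so users cannot write into another org.
CREATE POLICY "org_scoped_insert" ON table_name
  FOR INSERT
  WITH CHECK (
    org_id IN (
      SELECT org_id FROM org_members WHERE user_id = auth.uid()
    )
  );
```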

Test RLS with a specific user

-- Set context to a specific user and test access.
-- SET LOCAL only takes effect inside a transaction, so wrap the test
-- in BEGIN ... ROLLBACK and nothing is left behind.
BEGIN;
SET LOCAL role TO authenticated;
SET LOCAL request.jwt.claims TO '{"sub": "user-uuid-here"}';
-- Now run the query as if you were that user
SELECT * FROM monitoring_events LIMIT 5;
-- If RLS is working, you should only see events belonging to that user's org
ROLLBACK;

Backup and Recovery

Supabase automatic backups

Supabase automatically takes daily backups on Pro and Team plans. Access backups at:

supabase.com/dashboard/project/<project-ref>/database/backups

Backup retention: 7 days on Pro plan, 30 days on Team plan.

Manual backup (before major migrations)

Before any migration that drops tables, renames columns, or changes data:

# Export the full database schema + data
BACKUP_FILE="backups/pre-migration-$(date +%Y%m%d-%H%M%S).sql"
pg_dump "$SUPABASE_DB_URL" \
  --no-acl \
  --no-owner \
  -f "$BACKUP_FILE"

# Store in R2 for durability (upload the exact file — a quoted
# wildcard would not expand)
aws s3 cp "$BACKUP_FILE" \
  s3://jarvis-artifacts/backups/ \
  --endpoint-url "https://<account-id>.r2.cloudflarestorage.com"

Point-in-time recovery (PITR)

Supabase Pro supports PITR with a 7-day recovery window. To trigger a restore:

  1. Go to Supabase dashboard → Database → Backups
  2. Select “Point in Time Recovery”
  3. Choose the target time (before the incident)
  4. Confirm — this will restore the database to that point

Warning: PITR replaces all current data. Take a backup of the current state first if there is any data you want to preserve post-incident.

Database Maintenance

Vacuum and analyze

Postgres requires periodic vacuuming to reclaim dead tuple space and update query statistics. Supabase runs autovacuum, but for large tables you may need to run manually:

-- Analyze statistics on large tables (fast, no lock)
ANALYZE monitoring_events;
ANALYZE monitoring_rollup;
ANALYZE build_events;

-- Standard vacuum + analyze (does not block normal reads or writes)
VACUUM ANALYZE monitoring_events;

-- Note: VACUUM FULL is different — it rewrites the table under an
-- ACCESS EXCLUSIVE lock. Avoid it in production unless the table is
-- severely bloated and downtime is acceptable.

Index health check

-- Unused indexes (candidates for removal; idx_scan counts scans since
-- statistics were last reset)
SELECT
  schemaname,
  relname AS tablename,
  indexrelname AS indexname,
  idx_scan AS times_used
FROM pg_stat_user_indexes
WHERE idx_scan = 0
  AND schemaname = 'public'
ORDER BY relname;

-- Largest indexes (review for bloat; may need REINDEX)
SELECT
  schemaname,
  relname AS tablename,
  indexrelname AS indexname,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 10;
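When an index is genuinely bloated, it can be rebuilt without blocking writes on Postgres 12+. The index name below is a placeholder, not an actual GOVERN index:

```sql
-- Rebuild a bloated index without taking an exclusive lock (Postgres 12+).
-- Cannot run inside a transaction block.
REINDEX INDEX CONCURRENTLY idx_monitoring_events_created_at;

-- Unused indexes found by the query above can be dropped the same way:
-- DROP INDEX CONCURRENTLY idx_name;
```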

Data retention (archiving old events)

Monitoring events older than 90 days are archived to R2 and deleted from Supabase to keep the database lean:

-- Check how many events are candidates for archiving
SELECT
  COUNT(*) AS candidate_count,
  pg_size_pretty(SUM(pg_column_size(monitoring_events.*))) AS estimated_size
FROM monitoring_events
WHERE created_at < NOW() - INTERVAL '90 days';
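The deletion half of the job is best done in batches so each transaction stays short and row locks don't pile up. This is an illustrative sketch only — the actual Inngest job exports each batch to R2 before deleting, and the batch size is an assumption:

```sql
-- Delete archive candidates in chunks to keep transactions short.
-- The real job exports each batch to R2 before deleting it; do not
-- run a bare delete like this unless the data is already archived.
DELETE FROM monitoring_events
WHERE id IN (
  SELECT id
  FROM monitoring_events
  WHERE created_at < NOW() - INTERVAL '90 days'
  ORDER BY created_at
  LIMIT 10000
);
```

Repeat until the statement reports zero rows deleted.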

The archiving job runs weekly via Inngest. If it falls behind, run manually:

curl -X POST "$JARVIS_API_URL/api/admin/archive-old-events" \
  -H "Authorization: Bearer $AUTH_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"olderThanDays": 90, "dryRun": false}'

Schema Conventions

All GOVERN tables follow these conventions:

Convention | Example
--- | ---
UUID primary keys | id UUID PRIMARY KEY DEFAULT gen_random_uuid()
Audit timestamps | created_at TIMESTAMPTZ DEFAULT now(), updated_at TIMESTAMPTZ DEFAULT now()
Soft deletes (where applicable) | deleted_at TIMESTAMPTZ
Foreign keys with cascades | REFERENCES orgs(id) ON DELETE CASCADE
RLS enabled | ALTER TABLE t ENABLE ROW LEVEL SECURITY
Indexes on FK columns | CREATE INDEX idx_{table}_{col} ON {table}({col})
Check constraints for enums | CHECK (status IN ('active','inactive'))
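Put together, a new table that follows every convention looks like this (table and column names are illustrative):

```sql
-- Illustrative table applying all of the conventions above
CREATE TABLE example_items (
  id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  org_id      UUID NOT NULL REFERENCES orgs(id) ON DELETE CASCADE,
  status      TEXT NOT NULL CHECK (status IN ('active', 'inactive')),
  created_at  TIMESTAMPTZ DEFAULT now(),
  updated_at  TIMESTAMPTZ DEFAULT now(),
  deleted_at  TIMESTAMPTZ
);

-- RLS on from the start, plus an index on the FK column
ALTER TABLE example_items ENABLE ROW LEVEL SECURITY;
CREATE INDEX idx_example_items_org_id ON example_items(org_id);
```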