Deployment Guide

Related docs:

  • docs/development/python_environments.md
  • docs/operations/postgres_migration.md
  • docs/deployment/render_free_tier.md
  • docs/deployment/northflank.md

1) Prepare environment

Local

  1. Copy deployment/docker/.env.example to .env for local Docker-based full-stack work.
  2. Fill PostgreSQL credentials, Django secret key, and optional GitHub app keys.

Jenkins first-time setup (new users)

If this is your first time setting up Jenkins for this project, follow these steps.

1. Install Jenkins

Download and install Jenkins from https://www.jenkins.io/download/. On Windows, run the MSI installer and accept defaults. On Linux, use the official apt/yum repository.

2. Install required plugins

From the Jenkins dashboard: Manage Jenkins > Plugins > Available plugins. Search for and install each of the following:

Plugin                              Purpose
Pipeline                            Enables Jenkinsfile-based pipeline jobs
Pipeline: Stage View                Visual stage progress in the dashboard
Environment Injector (EnvInject)    Injects variables from Properties Content into builds
Docker Pipeline                     Lets pipeline steps interact with Docker
Git                                 SCM checkout support (usually pre-installed)
Credentials                         Manages secrets (usually pre-installed)

After installing, restart Jenkins when prompted.

3. Configure Docker access

Jenkins must be able to run docker and docker compose commands. Verify with:

docker --version
docker compose version

On Linux, add the jenkins user to the docker group:

sudo usermod -aG docker jenkins
sudo systemctl restart jenkins

On Windows, ensure Docker Desktop is running and the Jenkins service account has access.
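On Linux, a quick way to confirm the group change took effect is to parse the output of id -nG. This is only a sketch with a simulated group list; on the Jenkins host, replace the hard-coded string with the real command output.

```shell
# Simulated output of: id -nG jenkins
# (on the real host, use: groups_out="$(id -nG jenkins)")
groups_out="jenkins docker users"
# Pad with spaces so "docker" matches only as a whole word.
case " $groups_out " in
  *" docker "*) in_docker_group=yes ;;
  *)            in_docker_group=no ;;
esac
echo "in docker group: $in_docker_group"
```

If the answer is no after usermod, the Jenkins service usually needs a restart (or the host a re-login) before the new group membership is visible.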

4. Create the pipeline job

  1. From the dashboard, click New Item.
  2. Enter a name (e.g., notechondria), select Pipeline, and click OK.
  3. Under Pipeline, set:
    • Definition: Pipeline script from SCM
    • SCM: Git
    • Repository URL: your clone URL (HTTPS for public repos, SSH for private)
    • Branch Specifier: */codex (or your deployment branch)
    • Script Path: Jenkinsfile
  4. Click Save.

5. Inject environment variables

The pipeline reads deployment credentials from Jenkins-injected environment variables (not a committed .env file). Set them up via the Environment Injector plugin:

  1. Open the job configuration.
  2. Scroll to Build Environment and check Inject environment variables to the build process.
  3. In the Properties Content text area, paste the variables listed in the next section.
  4. Save the job.

6. First build

Click Build Now. The first run will pull Docker images and build containers, which may take several minutes. Check the console output for errors.

Common first-run issues:

  • Port conflicts: Change APP_HOST_PORT, DB_HOST_PORT, etc. if another service uses those ports.
  • Docker not found: Ensure Docker is installed and accessible to the Jenkins user.
  • Git long paths (Windows): Run git config --system core.longpaths true in an admin shell.
  • Missing backup: The first backup step may skip because no database exists yet. This is expected.

Jenkins-injected deployment env

Do not commit the real deployment .env file. Instead, inject deployment variables in Jenkins and let the pipeline materialize .env.deploy during the build.

Recommended setup with the Environment Injector plugin:

  1. Open the job configuration.
  2. Enable Prepare an environment for the run.
  3. Check Keep Jenkins Environment Variables.
  4. Check Keep Jenkins Build Variables.
  5. Leave Override Build Parameters enabled only if you intentionally want injected values to win over build parameters.
  6. Use Properties Content or Properties File Path to define the deployment variables using the keys shown in deployment/jenkins/.env.example.
  7. Save the job.
  8. Run one manual build to verify the injected variables reach the pipeline.

If your repository is public, remove SCM credentials from the Pipeline SCM job configuration. The Jenkinsfile does not require repository credentials by itself.

The pipeline writes those injected variables to ${WORKSPACE}/.env.deploy through deployment/jenkins/scripts/prepare_env.sh.
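The real prepare_env.sh handles the full key set plus validation; the following is only a minimal sketch of the materialization idea, using two of the variables from the example below.

```shell
# Sketch only: write a couple of injected variables into .env.deploy.
# The actual prepare_env.sh covers every key and adds validation.
: "${WORKSPACE:=$PWD}"                         # Jenkins sets WORKSPACE
: "${DJANGO_SECRET_KEY:=replace-with-real-secret}"
: "${POSTGRE_HOST:=db}"
ENV_FILE="$WORKSPACE/.env.deploy"
{
  printf 'DJANGO_SECRET_KEY=%s\n' "$DJANGO_SECRET_KEY"
  printf 'POSTGRE_HOST=%s\n' "$POSTGRE_HOST"
} > "$ENV_FILE"
echo "wrote $(grep -c '=' "$ENV_FILE") variables to $ENV_FILE"
```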

Example Properties Content:

DJANGO_SECRET_KEY=replace-with-real-secret
DJANGO_DEBUG=False
DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1
DJANGO_ALLOWED_HOSTS_COMPOSE=localhost 127.0.0.1
DJANGO_CSRF_TRUSTED_ORIGINS=http://localhost:9080,http://localhost:9060
DJANGO_LOG_LEVEL=INFO
DJANGO_LOG_FILE_NAME=notechondria
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=admin@example.com
DJANGO_SUPERUSER_PASSWORD=replace-with-real-password
BACKEND_CUSTOM_DOMAIN=
DJANGO_PRODUCTION_STATIC_ROOT=/home/staticfiles/
DJANGO_PRODUCTION_MEDIA_ROOT=/home/mediafiles/
POSTGRE_USERNAME=postgres
POSTGRE_PASSWORD=replace-with-real-password
POSTGRE_HOST=db
POSTGRE_PORT=5432
POSTGRE_DB=postgres
APP_HOST_PORT=9080
BACKEND_HOST_PORT=9090
FRONTEND_HOST_PORT=9060
DB_HOST_PORT=9032
ROOT_HTTP_PORT=8080
SMTP_HOST=smtp.gmail.com
SMTP_PORT=465
SMTP_USERNAME=replace-with-real-email
SMTP_PASSWORD=replace-with-real-app-password
SMTP_USE_TLS=True
SMTP_USE_SSL=False
SMTP_FROM_EMAIL=no-reply@example.com
SMTP_EMAIL_VERIFICATION_TTL_HOURS=24
FRONTEND_ORIGIN=
FRONTEND_VERIFY_URL=http://localhost:9060/#/verify
FRONTEND_API_BASE_URL=http://localhost:9060/api/v1
FRONTEND_BACKEND_ORIGIN=http://nginx
APP_BASE_HREF=/
OPENAI_API_KEY=
GITHUB_APP_ID=
GITHUB_APP_CLIENT_ID=
GITHUB_APP_CLIENT_SECRET=
GITHUB_APP_WEBHOOK_SECRET=
GITHUB_AUTHORIZED_REDIRECT_URI=
GOOGLE_OAUTH_CLIENT_ID=
GOOGLE_OAUTH_CLIENT_SECRET=
GOOGLE_AUTHORIZED_REDIRECT_URI=
# Per-app OAuth allow-lists (since 0.1.90). Comma-separated. The
# backend matches the request Origin/Referer against each entry's
# host and returns the matching URI. Each value MUST be
# pre-registered in the corresponding OAuth provider console.
# When unset, the single-value vars above are used as the sole
# allowed redirect URI.
GOOGLE_AUTHORIZED_REDIRECT_URIS=
GITHUB_AUTHORIZED_REDIRECT_URIS=
# Experimental data-sync GitHub App (since 0.1.90). Distinct from
# the OAuth App above — this one drives `/api/v1/integrations/github/`
# endpoints. The push pipeline still requires `pyjwt + cryptography`
# in backend/requirements.txt before it can sign installation tokens.
GITHUB_DATA_SYNC_APP_NAME=
GITHUB_DATA_SYNC_APP_CLIENT_ID=
GITHUB_DATA_SYNC_APP_CLIENT_SECRET=
GITHUB_DATA_SYNC_APP_PRIVATE_KEY=
GITHUB_DATA_SYNC_APP_INSTALL_URL=
NOTECHONDRIA_SHARED_NETWORK=notechondria-shared
DB_AUTO_REINIT_IF_MISMATCH=False

Note: APP_IMAGE, NGINX_IMAGE, and FRONTEND_IMAGE are not listed above because prepare_env.sh auto-generates them from the VERSION file and the Jenkins BUILD_NUMBER (e.g. v0.1.14.42). You only need to set them here if you want to override the auto-generated tags.

Important formatting notes:

  • DJANGO_ALLOWED_HOSTS should stay comma-separated for human editing.
  • DJANGO_ALLOWED_HOSTS_COMPOSE should stay space-separated because the Docker Compose app service passes it to Django as ALLOWED_HOSTS.
  • Do not wrap the values in quotes in Properties Content.
  • For Docker deployment, set POSTGRE_HOST=db. Do not switch database host to localhost just because DJANGO_DEBUG=True; inside the app container, PostgreSQL is reached through the Compose service network.
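Since the Compose variant is just the comma-separated value with commas replaced by spaces, the two keys can be kept in sync mechanically; a small sketch:

```shell
# Derive the space-separated Compose variant from the comma-separated
# value, so the two DJANGO_ALLOWED_HOSTS keys never drift apart.
DJANGO_ALLOWED_HOSTS="localhost,127.0.0.1"
DJANGO_ALLOWED_HOSTS_COMPOSE=$(printf '%s' "$DJANGO_ALLOWED_HOSTS" | tr ',' ' ')
echo "$DJANGO_ALLOWED_HOSTS_COMPOSE"
```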

Jenkins must provide at least:

  • DJANGO_SECRET_KEY
  • DJANGO_ALLOWED_HOSTS_COMPOSE
  • APP_HOST_PORT
  • BACKEND_HOST_PORT
  • FRONTEND_HOST_PORT
  • DB_HOST_PORT
  • POSTGRE_USERNAME
  • POSTGRE_PASSWORD
  • POSTGRE_HOST
  • POSTGRE_PORT
  • POSTGRE_DB
  • SMTP_HOST, SMTP_USERNAME, SMTP_PASSWORD (required for email verification during registration)

2) Local Docker deployment

Backend stack:

cd backend
docker compose --env-file ../.env up --build -d

Frontend apps are now separate containers. Start each one from its own directory:

cd frontend/editor_app
docker compose --env-file ../../.env up --build -d
cd ../planner_app
docker compose --env-file ../../.env up --build -d
cd ../portal_app
docker compose --env-file ../../.env up --build -d
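The three frontend stacks can also be started in one loop from the repo root. Shown as a dry run that only prints each command; drop the echo to execute for real.

```shell
# Dry-run loop over the three frontend Compose stacks.
# Remove the echo prefix to actually start them.
for app in editor_app planner_app portal_app; do
  echo "(cd frontend/$app && docker compose --env-file ../../.env up --build -d)"
done
```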

3) Initialize database

docker compose exec app python manage.py migrate
docker compose exec app python manage.py collectstatic --noinput

4) Run tests before release

cd /workspace/Notechondria
bash deployment/jenkins/scripts/test_backend.sh /workspace/Notechondria /workspace/Notechondria/.env.deploy

5) Jenkins pipeline flow

Jenkins now drives the full stack, with backend/frontend test and deploy branches running in parallel.

The pipeline runs in this order:

  1. Checkout source.
  2. Generate ${WORKSPACE}/.env.deploy from Jenkins-injected environment variables.
  3. Start the db service and back up PostgreSQL from the database container.
  4. Run backend tests.
  5. Run backend deploy.

Pipeline behavior:

  • Backend and frontend tracks run in parallel.
  • Each branch is wrapped in catchError(...) so one side can continue even if the other side fails.
  • The backend deploy script performs a post-start verification pass inside the running app container:
    • python manage.py migrate --noinput
    • python manage.py bootstrap_platform
    • python manage.py collectstatic --noinput --clear
    • followed by a second stack health wait
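The stack health wait is bounded; a sketch of the idea follows, with the readiness probe simulated (the real wait_for_stack.sh checks the actual containers and sleeps between attempts).

```shell
# Sketch of a bounded readiness wait. check_ready is simulated here;
# the real script probes the app and nginx containers instead.
attempts=0
max_attempts=5
check_ready() { [ "$attempts" -ge 3 ]; }   # simulated: ready on the 3rd check
until check_ready; do
  attempts=$((attempts + 1))
  [ "$attempts" -gt "$max_attempts" ] && { echo "timed out"; exit 1; }
  sleep 0   # the real script would sleep a few seconds here
done
echo "stack ready after $attempts checks"
```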

The relevant files are:

  • Jenkinsfile
  • deployment/jenkins/scripts/prepare_env.sh
  • deployment/jenkins/scripts/backup_postgres.sh
  • deployment/jenkins/scripts/ensure_db_ready.sh
  • deployment/jenkins/scripts/test_backend.sh
  • deployment/jenkins/scripts/test_frontends.sh
  • deployment/jenkins/scripts/wait_for_stack.sh
  • deployment/jenkins/scripts/deploy_backend.sh
  • deployment/jenkins/scripts/deploy_frontends.sh
  • deployment/jenkins/scripts/deploy_gateway.sh
  • deployment/docker/gateway/docker-compose.yml
  • deployment/render/scripts/render_backend_start.sh
  • docs/deployment/render_free_tier.md
  • northflank.json
  • deployment/northflank/scripts/northflank_start.sh
  • docs/deployment/northflank.md

Compose stack shape

The backend Compose stack is named notechondria and contains:

  • app: Django/gunicorn backend
  • db: PostgreSQL 15
  • nginx: reverse proxy/static serving

Each frontend app has its own standalone Compose stack:

  • frontend/editor_app
  • frontend/planner_app
  • frontend/portal_app

The gateway reverse proxy has its own Compose stack:

  • deployment/docker/gateway

All frontend stacks and the gateway connect to the shared Docker network:

  • NOTECHONDRIA_SHARED_NETWORK, default notechondria-shared
  • backend app and backend nginx join that network; nginx is aliased as backend_nginx
  • each frontend container joins that network with an alias (editor_frontend, planner_frontend, portal_frontend) and proxies backend traffic to http://nginx
  • the gateway resolves services by their network aliases

The frontend and gateway deploy scripts use the individual per-app Compose files, not the root full-stack docker-compose.yml. The root file is intended for local all-in-one development only.

Jenkins only needs Docker access. It does not need host python or host pg_dump. The Django container talks to PostgreSQL through the internal Compose service host db. Internal container ports stay fixed:

  • app listens on 8000
  • db listens on 5432
  • nginx listens on 80

Only the host-exposed ports are configurable:

  • APP_HOST_PORT maps host -> nginx:80
  • BACKEND_HOST_PORT maps host -> app:8000
  • FRONTEND_HOST_PORT maps host -> frontend:80
  • DB_HOST_PORT maps host -> db:5432

Deployment readiness waits at most 300 seconds before failing and stopping the web containers.

Additional pipeline behavior:

  • The backend entrypoint runs collectstatic --clear and verifies that Django admin and DRF assets exist under /home/staticfiles.
  • The stack wait step requires both app and nginx to report healthy before Jenkins treats the deployment as ready.
  • prepare_env.sh normalizes PRODUCTION_STATIC_ROOT and PRODUCTION_MEDIA_ROOT to Linux-container-safe absolute paths, so Windows-hosted Jenkins shells cannot accidentally leak host-style values such as C:/... into the container runtime.
  • The test stage does not use the postgres container; it runs Django tests with settings_test directly in an app container without the production entrypoint.
  • The app service must not mount a named volume over /home/notechondria, because that path contains the Django code copied into the image during build.

Image tagging: the Jenkins build tags images with version and build number through APP_IMAGE, NGINX_IMAGE, and FRONTEND_IMAGE. prepare_env.sh reads the version from the VERSION file at the repo root, producing tags like v0.1.8.42 (v<VERSION>.<BUILD_NUMBER>). To bump the version, edit VERSION and commit. Each local Jenkins instance uses its own BUILD_NUMBER, so different machines produce distinct tags without conflicts.
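The tag derivation (v<VERSION>.<BUILD_NUMBER>) can be sketched as follows. The VERSION file is simulated here; prepare_env.sh reads the real one at the repo root, and Jenkins supplies BUILD_NUMBER.

```shell
# Sketch of the image-tag derivation used by prepare_env.sh.
BUILD_NUMBER="${BUILD_NUMBER:-42}"            # Jenkins provides this
printf '0.1.8\n' > VERSION.example            # stand-in for the repo VERSION file
VERSION=$(tr -d '[:space:]' < VERSION.example)
IMAGE_TAG="v${VERSION}.${BUILD_NUMBER}"
echo "$IMAGE_TAG"                             # e.g. v0.1.8.42
```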

PostgreSQL volume behavior

The db container uses a persistent Docker volume. PostgreSQL reads POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB only when the data directory is initialized the first time.

If you later change POSTGRE_USERNAME or POSTGRE_DB in Jenkins but keep the same Docker volume, the container will start with the old cluster state and the new role may not exist. In that case you must do one of these:

  1. keep the Jenkins credential aligned with the already-initialized database role/database, or
  2. remove the existing notechondria postgres volume and let the cluster initialize again with the new env values.
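For option 2, the manual steps look roughly like this. They are destructive, so they are shown as a dry run that only prints each command; remove the echo to execute.

```shell
# Dry-run sketch of re-initializing the postgres cluster (option 2).
# Destructive when actually run: the data volume is deleted.
for step in \
  "docker compose down" \
  "docker volume rm notechondria_postgres-data" \
  "docker compose up -d db"
do
  echo "$step"
done
```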

The pipeline now validates database access over TCP with the configured username and password before deploying the app container. That check is meant to catch password mismatches before Django reaches manage.py migrate.

For disposable Jenkins environments, you can set:

DB_AUTO_REINIT_IF_MISMATCH=True

This allows the deploy step to remove and recreate the notechondria_postgres-data volume automatically if the configured credentials do not match the existing cluster.

For a first smoke deployment, sample.test.env now uses the default postgres role/database to reduce that mismatch risk. On a first deployment, the backup step may skip automatically because there is no usable database state yet. That is expected and does not block the rest of the pipeline.

Windows Jenkins checkout note

If Jenkins runs on Windows and checkout still fails before the pipeline starts, enable Git long-path support on the Jenkins host and keep the workspace path short.

Recommended host setting:

git config --system core.longpaths true

If needed, also move the Jenkins workspace root to a shorter directory such as C:\Jenkins.

This repository now keeps only the Monaco min/ runtime bundle under backend/static/monaco-editor/ to reduce checkout path depth.

6) Frontend GitHub Pages deployment

Frontend deployment is handled by GitHub Actions, not Jenkins.

Workflow:

  • .github/workflows/frontend-pages.yml

Deployment targets:

  • /editor/
  • /planner/
  • /portal/

Important Pages runtime notes:

  • one workflow builds/tests all three apps and deploys one combined gh-pages tree
  • Pages builds use --no-web-resources-cdn so runtime web assets are bundled locally instead of relying on Google CDN
  • the published bootstrap is rewritten to disable service-worker registration, reducing stale broken-cache behavior after bad deploys
  • the site root publishes a landing page linking to the three app paths

7) Render free-tier backend deployment

Use:

  • deployment/render/scripts/render_backend_start.sh
  • docs/deployment/render_free_tier.md

This backend-only path is intended for Render web services and keeps frontend deployment separate on GitHub Pages.

8) Northflank backend deployment

Use:

  • northflank.json (Northflank v1 template: project + postgres addon + combined service)
  • sample.northflank.env
  • deployment/northflank/scripts/northflank_start.sh
  • docs/deployment/northflank.md

Like the Render path, this is backend-only — the three Flutter apps still deploy to GitHub Pages. PostgreSQL is provisioned by Northflank's managed postgres addon; media and static files go to Cloudflare R2 because Northflank service filesystems are ephemeral across redeploys.

9) Test deployment template

Use sample.test.env as a safe starting point for a non-production Jenkins credential or local smoke deployment. Replace placeholders before any real deploy.

10) Rollback

  1. Restore database from latest SQL dump generated by CI backup step.
  2. Redeploy previous Docker image tag.