Deployment Guide
Related docs:
- `docs/development/python_environments.md`
- `docs/operations/postgres_migration.md`
- `docs/deployment/render_free_tier.md`
- `docs/deployment/northflank.md`
1) Prepare environment
Local
- Copy `deployment/docker/.env.example` to `.env` for local Docker-based full-stack work.
- Fill in PostgreSQL credentials, the Django secret key, and optional GitHub app keys.
Jenkins first-time setup (new users)
If this is your first time setting up Jenkins for this project, follow these steps.
1. Install Jenkins
Download and install Jenkins from https://www.jenkins.io/download/. On Windows, run the MSI installer and accept defaults. On Linux, use the official apt/yum repository.
2. Install required plugins
From the Jenkins dashboard: Manage Jenkins > Plugins > Available plugins. Search for and install each of the following:
| Plugin | Purpose |
|---|---|
| Pipeline | Enables Jenkinsfile-based pipeline jobs |
| Pipeline: Stage View | Visual stage progress in the dashboard |
| Environment Injector (EnvInject) | Injects variables from Properties Content into builds |
| Docker Pipeline | Lets pipeline steps interact with Docker |
| Git | SCM checkout support (usually pre-installed) |
| Credentials | Manages secrets (usually pre-installed) |
After installing, restart Jenkins when prompted.
3. Configure Docker access
Jenkins must be able to run `docker` and `docker compose` commands. Verify with:

```
docker --version
docker compose version
```
On Linux, add the jenkins user to the docker group:
```
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
```
On Windows, ensure Docker Desktop is running and the Jenkins service account has access.
4. Create the pipeline job
- From the dashboard, click `New Item`.
- Enter a name (e.g., `notechondria`), select `Pipeline`, and click OK.
- Under Pipeline, set:
  - Definition: `Pipeline script from SCM`
  - SCM: `Git`
  - Repository URL: your clone URL (HTTPS for public repos, SSH for private)
  - Branch Specifier: `*/codex` (or your deployment branch)
  - Script Path: `Jenkinsfile`
- Click Save.
5. Inject environment variables
The pipeline reads deployment credentials from Jenkins-injected environment variables (not a committed `.env` file). Set them up via the Environment Injector plugin:
- Open the job configuration.
- Scroll to Build Environment and check `Inject environment variables to the build process`.
- In the Properties Content text area, paste the variables listed in the next section.
- Save the job.
6. First build
Click Build Now. The first run will pull Docker images and build containers, which may take several minutes. Check the console output for errors.
Common first-run issues:
- Port conflicts: change `APP_HOST_PORT`, `DB_HOST_PORT`, etc. if another service uses those ports.
- Docker not found: ensure Docker is installed and accessible to the Jenkins user.
- Git long paths (Windows): run `git config --system core.longpaths true` in an admin shell.
- Missing backup: the first backup step may skip because no database exists yet. This is expected.
Jenkins-injected deployment env
Do not commit the real deployment `.env` file. Instead, inject deployment variables in Jenkins and let the pipeline materialize `.env.deploy` during the build.
Recommended setup with the Environment Injector plugin:
- Open the job configuration.
- Enable `Prepare an environment for the run`.
- Check `Keep Jenkins Environment Variables`.
- Check `Keep Jenkins Build Variables`.
- Leave `Override Build Parameters` enabled only if you intentionally want injected values to win over build parameters.
- Use `Properties Content` or `Properties File Path` to define the deployment variables using the keys shown in `deployment/jenkins/.env.example`.
- Save the job.
- Run one manual build to verify the injected variables reach the pipeline.
If your repository is public, remove SCM credentials from the Pipeline SCM job configuration. The Jenkinsfile does not require repository credentials by itself.
The pipeline writes those injected variables to `${WORKSPACE}/.env.deploy` through `deployment/jenkins/scripts/prepare_env.sh`.
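As an illustration of that materialization step, here is a simplified sketch (not the real `prepare_env.sh`; the key list and placeholder defaults are trimmed for brevity):

```shell
#!/bin/sh
# Simplified sketch of materializing .env.deploy from injected variables.
# Not the real prepare_env.sh; key list and defaults are illustrative.
set -eu

WORKSPACE="${WORKSPACE:-$(mktemp -d)}"
# Placeholder defaults stand in for Jenkins-injected values.
DJANGO_SECRET_KEY="${DJANGO_SECRET_KEY:-replace-with-real-secret}"
POSTGRE_HOST="${POSTGRE_HOST:-db}"
POSTGRE_PORT="${POSTGRE_PORT:-5432}"

ENV_FILE="$WORKSPACE/.env.deploy"
: > "$ENV_FILE"
for key in DJANGO_SECRET_KEY POSTGRE_HOST POSTGRE_PORT; do
  eval "value=\${$key:-}"          # read the variable named by $key
  printf '%s=%s\n' "$key" "$value" >> "$ENV_FILE"
done
echo "wrote $ENV_FILE"
```

The resulting file is then usable directly with `docker compose --env-file`.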
Example Properties Content:
```
DJANGO_SECRET_KEY=replace-with-real-secret
DJANGO_DEBUG=False
DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1
DJANGO_ALLOWED_HOSTS_COMPOSE=localhost 127.0.0.1
DJANGO_CSRF_TRUSTED_ORIGINS=http://localhost:9080,http://localhost:9060
DJANGO_LOG_LEVEL=INFO
DJANGO_LOG_FILE_NAME=notechondria
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=admin@example.com
DJANGO_SUPERUSER_PASSWORD=replace-with-real-password
BACKEND_CUSTOM_DOMAIN=
DJANGO_PRODUCTION_STATIC_ROOT=/home/staticfiles/
DJANGO_PRODUCTION_MEDIA_ROOT=/home/mediafiles/
POSTGRE_USERNAME=postgres
POSTGRE_PASSWORD=replace-with-real-password
POSTGRE_HOST=db
POSTGRE_PORT=5432
POSTGRE_DB=postgres
APP_HOST_PORT=9080
BACKEND_HOST_PORT=9090
FRONTEND_HOST_PORT=9060
DB_HOST_PORT=9032
ROOT_HTTP_PORT=8080
SMTP_HOST=smtp.gmail.com
SMTP_PORT=465
SMTP_USERNAME=replace-with-real-email
SMTP_PASSWORD=replace-with-real-app-password
SMTP_USE_TLS=True
SMTP_USE_SSL=False
SMTP_FROM_EMAIL=no-reply@example.com
SMTP_EMAIL_VERIFICATION_TTL_HOURS=24
FRONTEND_ORIGIN=
FRONTEND_VERIFY_URL=http://localhost:9060/#/verify
FRONTEND_API_BASE_URL=http://localhost:9060/api/v1
FRONTEND_BACKEND_ORIGIN=http://nginx
APP_BASE_HREF=/
OPENAI_API_KEY=
GITHUB_APP_ID=
GITHUB_APP_CLIENT_ID=
GITHUB_APP_CLIENT_SECRET=
GITHUB_APP_WEBHOOK_SECRET=
GITHUB_AUTHORIZED_REDIRECT_URI=
GOOGLE_OAUTH_CLIENT_ID=
GOOGLE_OAUTH_CLIENT_SECRET=
GOOGLE_AUTHORIZED_REDIRECT_URI=
# Per-app OAuth allow-lists (since 0.1.90). Comma-separated. The
# backend matches the request Origin/Referer against each entry's
# host and returns the matching URI. Each value MUST be
# pre-registered in the corresponding OAuth provider console.
# When unset, the single-value vars above are used as the sole
# allowed redirect URI.
GOOGLE_AUTHORIZED_REDIRECT_URIS=
GITHUB_AUTHORIZED_REDIRECT_URIS=
# Experimental data-sync GitHub App (since 0.1.90). Distinct from
# the OAuth App above; this one drives /api/v1/integrations/github/
# endpoints. The push pipeline still requires pyjwt + cryptography
# in backend/requirements.txt before it can sign installation tokens.
GITHUB_DATA_SYNC_APP_NAME=
GITHUB_DATA_SYNC_APP_CLIENT_ID=
GITHUB_DATA_SYNC_APP_CLIENT_SECRET=
GITHUB_DATA_SYNC_APP_PRIVATE_KEY=
GITHUB_DATA_SYNC_APP_INSTALL_URL=
NOTECHONDRIA_SHARED_NETWORK=notechondria-shared
DB_AUTO_REINIT_IF_MISMATCH=False
```
Note: `APP_IMAGE`, `NGINX_IMAGE`, and `FRONTEND_IMAGE` are not listed above because `prepare_env.sh` auto-generates them from the `VERSION` file and the Jenkins `BUILD_NUMBER` (e.g. `v0.1.14.42`). You only need to set them here if you want to override the auto-generated tags.
Important formatting notes:
- `DJANGO_ALLOWED_HOSTS` should stay comma-separated for human editing.
- `DJANGO_ALLOWED_HOSTS_COMPOSE` should stay space-separated because the Docker Compose app service passes it to Django as `ALLOWED_HOSTS`.
- Do not wrap the values in quotes in `Properties Content`.
- For Docker deployment, set `POSTGRE_HOST=db`. Do not switch the database host to `localhost` just because `DJANGO_DEBUG=True`; inside the app container, PostgreSQL is reached through the Compose service network.
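Since the two allowed-hosts variables carry the same list with different separators, the Compose variant can be derived mechanically from the comma-separated one; a quick illustration (not part of the pipeline):

```shell
#!/bin/sh
set -eu
DJANGO_ALLOWED_HOSTS="localhost,127.0.0.1"
# The Compose app service wants the same hosts space-separated, since it
# forwards the value to Django as ALLOWED_HOSTS.
DJANGO_ALLOWED_HOSTS_COMPOSE=$(printf '%s' "$DJANGO_ALLOWED_HOSTS" | tr ',' ' ')
echo "$DJANGO_ALLOWED_HOSTS_COMPOSE"   # -> localhost 127.0.0.1
```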
Jenkins must provide at least:
- `DJANGO_SECRET_KEY`
- `DJANGO_ALLOWED_HOSTS_COMPOSE`
- `APP_HOST_PORT`
- `BACKEND_HOST_PORT`
- `FRONTEND_HOST_PORT`
- `DB_HOST_PORT`
- `POSTGRE_USERNAME`
- `POSTGRE_PASSWORD`
- `POSTGRE_HOST`
- `POSTGRE_PORT`
- `POSTGRE_DB`
- `SMTP_HOST`, `SMTP_USERNAME`, `SMTP_PASSWORD` (required for email verification during registration)
2) Local Docker deployment
Backend stack:
```
cd backend
docker compose --env-file ../.env up --build -d
```
Frontend apps are now separate containers. Start each one from its own directory (each snippet assumes you begin at the repo root):

```
cd frontend/editor_app
docker compose --env-file ../../.env up --build -d
```

```
cd frontend/planner_app
docker compose --env-file ../../.env up --build -d
```

```
cd frontend/portal_app
docker compose --env-file ../../.env up --build -d
```
3) Initialize database
```
docker compose exec app python manage.py migrate
docker compose exec app python manage.py collectstatic --noinput
```
4) Run tests before release
```
cd /workspace/Notechondria
bash deployment/jenkins/scripts/test_backend.sh /workspace/Notechondria /workspace/Notechondria/.env.deploy
```
5) Jenkins pipeline flow
Jenkins now drives the full stack, with backend/frontend test and deploy branches running in parallel.
The pipeline runs in this order:
- Checkout source.
- Generate `${WORKSPACE}/.env.deploy` from Jenkins-injected environment variables.
- Start the `db` service and back up PostgreSQL from the database container.
- Run backend tests.
- Run backend deploy.
Pipeline behavior:
- Backend and frontend tracks run in parallel.
- Each branch is wrapped in `catchError(...)` so one side can continue even if the other side fails.
- The backend deploy script performs a post-start verification pass inside the running app container:
  - `python manage.py migrate --noinput`
  - `python manage.py bootstrap_platform`
  - `python manage.py collectstatic --noinput --clear`
  - followed by a second stack health wait
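That verification pass boils down to three management commands run in sequence inside the app container. A minimal dry-run sketch (the real logic lives in `deploy_backend.sh`; here `EXEC` defaults to `echo` so the commands are only printed):

```shell
#!/bin/sh
set -eu
# EXEC=echo gives a dry run; for a real pass use:
#   EXEC="docker compose exec -T app"
EXEC="${EXEC:-echo}"
LOG="$(mktemp)"

for args in "migrate --noinput" "bootstrap_platform" "collectstatic --noinput --clear"; do
  # Word-splitting of $args into separate arguments is intentional here.
  $EXEC python manage.py $args | tee -a "$LOG"
done
```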
The relevant files are:
- `Jenkinsfile`
- `deployment/jenkins/scripts/prepare_env.sh`
- `deployment/jenkins/scripts/backup_postgres.sh`
- `deployment/jenkins/scripts/ensure_db_ready.sh`
- `deployment/jenkins/scripts/test_backend.sh`
- `deployment/jenkins/scripts/test_frontends.sh`
- `deployment/jenkins/scripts/wait_for_stack.sh`
- `deployment/jenkins/scripts/deploy_backend.sh`
- `deployment/jenkins/scripts/deploy_frontends.sh`
- `deployment/jenkins/scripts/deploy_gateway.sh`
- `deployment/docker/gateway/docker-compose.yml`
- `deployment/render/scripts/render_backend_start.sh`
- `docs/deployment/render_free_tier.md`
- `northflank.json`
- `deployment/northflank/scripts/northflank_start.sh`
- `docs/deployment/northflank.md`
Compose stack shape
The backend Compose stack is named notechondria and contains:
- `app`: Django/gunicorn backend
- `db`: PostgreSQL 15
- `nginx`: reverse proxy/static serving
Each frontend app has its own standalone Compose stack:
- `frontend/editor_app`
- `frontend/planner_app`
- `frontend/portal_app`
The gateway reverse proxy has its own Compose stack:
- `deployment/docker/gateway`
All frontend stacks and the gateway connect to the shared Docker network:
- `NOTECHONDRIA_SHARED_NETWORK`, default `notechondria-shared`
- backend `app` and backend `nginx` join that network; `nginx` is aliased as `backend_nginx`
- each frontend container joins that network with an alias (`editor_frontend`, `planner_frontend`, `portal_frontend`) and proxies backend traffic to `http://nginx`
- the gateway resolves services by their network aliases
The frontend and gateway deploy scripts use the individual per-app Compose files, not the root full-stack `docker-compose.yml`. The root file is intended for local all-in-one development only.
Jenkins only needs Docker access. It does not need host `python` or host `pg_dump`.
The Django container talks to PostgreSQL through the internal Compose service host `db`.
Internal container ports stay fixed:
- `app` listens on `8000`
- `db` listens on `5432`
- `nginx` listens on `80`
Only the host-exposed ports are configurable:
- `APP_HOST_PORT` maps host -> `nginx:80`
- `BACKEND_HOST_PORT` maps host -> `app:8000`
- `FRONTEND_HOST_PORT` maps host -> `frontend:80`
- `DB_HOST_PORT` maps host -> `db:5432`
Deployment readiness waits at most 300 seconds before failing and stopping the web containers.
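Conceptually, the readiness gate is a bounded polling loop. A simplified sketch (the real `wait_for_stack.sh` checks container health; `check_healthy` below is a stand-in you would replace, e.g. with a `docker inspect` probe):

```shell
#!/bin/sh
set -eu
TIMEOUT=300    # seconds, matching the pipeline's readiness limit
INTERVAL=5

# Stand-in probe. A real one might be:
#   docker inspect -f '{{.State.Health.Status}}' <container> | grep -q healthy
check_healthy() { true; }

elapsed=0
until check_healthy; do
  if [ "$elapsed" -ge "$TIMEOUT" ]; then
    echo "stack failed to become healthy within ${TIMEOUT}s" >&2
    exit 1
  fi
  sleep "$INTERVAL"
  elapsed=$((elapsed + INTERVAL))
done
echo "stack healthy after ${elapsed}s"
```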
The backend entrypoint now runs `collectstatic --clear` and verifies that Django admin and DRF assets exist under `/home/staticfiles`; the stack wait step requires both `app` and `nginx` to report healthy before Jenkins treats the deployment as ready.
`prepare_env.sh` also normalizes `PRODUCTION_STATIC_ROOT` and `PRODUCTION_MEDIA_ROOT` to Linux-container-safe absolute paths so Windows-hosted Jenkins shells cannot accidentally leak host-style values such as `C:/...` into the container runtime.
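The effect of that normalization can be pictured with a small helper (an assumption about the general shape, not the actual `prepare_env.sh` rules):

```shell
#!/bin/sh
set -eu
# Rewrite Windows drive paths (C:/...) into container-safe absolute
# paths; leave Linux absolute paths alone. Illustrative only.
normalize_path() {
  case "$1" in
    [A-Za-z]:/*) printf '/%s\n' "${1#*:/}" ;;  # drop the drive letter
    /*)          printf '%s\n' "$1" ;;
    *)           printf '/%s\n' "$1" ;;        # force absolute
  esac
}

normalize_path "C:/Jenkins/staticfiles/"   # -> /Jenkins/staticfiles/
normalize_path "/home/staticfiles/"        # unchanged
```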
The test stage does not use the postgres container; it runs Django tests with `settings_test` directly in an `app` container without the production entrypoint.
The `app` service must not mount a named volume over `/home/notechondria`, because that path contains the Django code copied into the image during build.
The Jenkins build tags images with version and build number using `APP_IMAGE`, `NGINX_IMAGE`, and `FRONTEND_IMAGE`. The version is read from the `VERSION` file at the repo root by `prepare_env.sh`, producing tags like `v0.1.8.42` (`v<VERSION>.<BUILD_NUMBER>`). To bump the version, edit `VERSION` and commit. Each local Jenkins instance uses its own `BUILD_NUMBER`, so different machines produce distinct tags without conflicts.
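The tag derivation described above is just string concatenation; a self-contained sketch (the `0.1.8` file content, build number `42`, and image name are example values):

```shell
#!/bin/sh
set -eu
cd "$(mktemp -d)"

# Simulate the repo-root VERSION file and the Jenkins-provided build number.
printf '0.1.8\n' > VERSION
BUILD_NUMBER=42

VERSION_TAG="v$(cat VERSION).${BUILD_NUMBER}"   # v<VERSION>.<BUILD_NUMBER>
APP_IMAGE="notechondria-app:${VERSION_TAG}"     # image name is illustrative
echo "$APP_IMAGE"                               # -> notechondria-app:v0.1.8.42
```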
PostgreSQL volume behavior
The `db` container uses a persistent Docker volume. PostgreSQL reads `POSTGRES_USER`, `POSTGRES_PASSWORD`, and `POSTGRES_DB` only when the data directory is initialized for the first time.
If you later change `POSTGRE_USERNAME` or `POSTGRE_DB` in Jenkins but keep the same Docker volume, the container will start with the old cluster state and the new role may not exist. In that case you must do one of these:
- keep the Jenkins credential aligned with the already-initialized database role/database, or
- remove the existing `notechondria` postgres volume and let the cluster initialize again with the new env values.
The pipeline now validates database access over TCP with the configured username and password before deploying the app container. That check is meant to catch password mismatches before Django reaches `manage.py migrate`.
For disposable Jenkins environments, you can set:
```
DB_AUTO_REINIT_IF_MISMATCH=True
```
This allows the deploy step to remove and recreate the `notechondria_postgres-data` volume automatically if the configured credentials do not match the existing cluster.
For a first smoke deployment, `sample.test.env` now uses the default `postgres` role/database to reduce that mismatch risk.
On a first deployment, the backup step may skip automatically because there is no usable database state yet. That is expected and does not block the rest of the pipeline.
Windows Jenkins checkout note
If Jenkins runs on Windows and checkout still fails before the pipeline starts, enable Git long-path support on the Jenkins host and keep the workspace path short.
Recommended host setting:
```
git config --system core.longpaths true
```
If needed, also move the Jenkins workspace root to a shorter directory such as `C:\Jenkins`.
This repository now keeps only the Monaco `min/` runtime bundle under `backend/static/monaco-editor/` to reduce checkout path depth.
6) Frontend GitHub Pages deployment
Frontend deployment is handled by GitHub Actions, not Jenkins.
Workflow:
`.github/workflows/frontend-pages.yml`
Deployment targets:
- `/editor/`
- `/planner/`
- `/portal/`
Important Pages runtime notes:
- one workflow builds/tests all three apps and deploys one combined `gh-pages` tree
- Pages builds use `--no-web-resources-cdn` so runtime web assets are bundled locally instead of relying on Google CDN
- the published bootstrap is rewritten to disable service-worker registration, reducing stale broken-cache behavior after bad deploys
- the site root publishes a landing page linking to the three app paths
7) Render free-tier backend deployment
Use:
- `deployment/render/scripts/render_backend_start.sh`
- `docs/deployment/render_free_tier.md`
This backend-only path is intended for Render web services and keeps frontend deployment separate on GitHub Pages.
8) Northflank backend deployment
Use:
- `northflank.json` (Northflank v1 template: project + postgres addon + combined service)
- `sample.northflank.env`
- `deployment/northflank/scripts/northflank_start.sh`
- `docs/deployment/northflank.md`
Like the Render path, this is backend-only: the three Flutter apps still deploy to GitHub Pages. PostgreSQL is provisioned by Northflank's managed postgres addon; media and static files go to Cloudflare R2 because Northflank service filesystems are ephemeral across redeploys.
9) Test deployment template
Use `sample.test.env` as a safe starting point for a non-production Jenkins credential or local smoke deployment. Replace placeholders before any real deploy.
10) Rollback
- Restore database from latest SQL dump generated by CI backup step.
- Redeploy previous Docker image tag.
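A database restore under this scheme typically feeds the SQL dump back through `psql` in the `db` container. A hedged dry-run sketch (the dump path is hypothetical, and `EXEC` defaults to `echo` so nothing is executed; for a real restore the dump must be readable where `psql` runs):

```shell
#!/bin/sh
set -eu
# Dry run by default; for a real restore use:
#   EXEC="docker compose exec -T db"
EXEC="${EXEC:-echo}"
DUMP="backups/notechondria-latest.sql"   # hypothetical dump location

OUT=$($EXEC psql -U "${POSTGRE_USERNAME:-postgres}" -d "${POSTGRE_DB:-postgres}" -f "$DUMP")
echo "$OUT"
```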