Configuration
DARWIS Taka’s Docker deployment is configured entirely through environment variables passed to the container. The shipped `docker-compose.yml` reads from a `.env` file alongside it, so the recommended workflow is:

- Create a `.env` file next to `docker-compose.yml`.
- Set the variables you want to override (see table below).
- Run `docker compose up -d`.

Docker Compose picks up `.env` automatically.
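To confirm that Compose is reading your `.env`, you can render the resolved configuration before starting the container; the port value shown in the comment below is only illustrative:

```bash
# Render the effective compose file with .env substitutions applied
docker compose config
# The published port in the output should reflect your TAKA_PORT override,
# e.g. "8080:7331" if you set TAKA_PORT=8080
```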
Default docker-compose.yml
```yaml
services:
  taka:
    image: cysecurity/darwis-taka:latest
    container_name: taka
    restart: unless-stopped
    ports:
      - "${TAKA_PORT:-7331}:7331"
    environment:
      TZ: ${TZ:-UTC}
    volumes:
      - taka_data:/data

volumes:
  taka_data:
```
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `TAKA_PORT` | `7331` | Host port to bind the Web UI to. The container always listens on port 7331 internally. |
| `TZ` | `UTC` | Container timezone. Affects timestamps in logs and the Web UI. Use an IANA name such as `Europe/London` or `Asia/Kolkata`. |
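As a quick check that `TZ` took effect, you can print the container's local time; this assumes the image ships a standard `date` binary:

```bash
# Print the container's local time (assumes a standard `date` binary in the image)
docker compose exec taka date
```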
Note
AI provider API keys are not configured via environment variables. They are saved in the database through the Settings page so they can be rotated without restarting the container. They can also be overridden per scan.
Example .env
```
# Expose the Web UI on host port 8080 instead of 7331
TAKA_PORT=8080

# Use a local timezone so scan timestamps match your clock
TZ=Europe/Berlin
```
After editing `.env`, apply the changes:

```bash
docker compose up -d
```
Compose recreates the container with the new settings; the taka_data volume is preserved, so scans, rules, and API keys remain intact.
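If you want to confirm the new binding, you can inspect the published ports of the running container; the output shown in the comment is illustrative:

```bash
# Show the container's port mappings; with TAKA_PORT=8080 you should see
# something like: 7331/tcp -> 0.0.0.0:8080
docker port taka
```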
Binding to a private interface
By default, the port mapping `"${TAKA_PORT:-7331}:7331"` listens on all host interfaces. To restrict Taka to the loopback interface (so only processes on the host, or an SSH tunnel, can reach it), edit `docker-compose.yml`:
```yaml
    ports:
      - "127.0.0.1:${TAKA_PORT:-7331}:7331"
```
This is the recommended setup when you front Taka with a reverse proxy on the same host.
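With the loopback-only binding in place, a remote workstation can still reach the UI through an SSH tunnel; the user and host below are placeholders for your own SSH target:

```bash
# Forward local port 7331 to the loopback-bound Taka port on the server
# (user@scan-host is a placeholder, not a real hostname)
ssh -N -L 7331:127.0.0.1:7331 user@scan-host
# Then browse to http://localhost:7331 on your workstation
```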
Resource limits
For larger workloads you can add resource limits under the service:
```yaml
services:
  taka:
    # ...
    deploy:
      resources:
        limits:
          cpus: "4.0"
          memory: 4G
```
The headless Chromium crawler is the most memory-hungry component. If you run many concurrent deep scans, give the container at least 4 GB of memory.
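To judge whether these limits are adequate for your workload, you can watch the container's live resource usage while scans are running:

```bash
# One-shot snapshot of CPU and memory usage for the taka container
docker stats taka --no-stream
```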