AI Verification

AI verification is an optional post-processing step that asks a large language model to review Taka’s findings and either confirm them or flag them as false positives. The scanner itself is complete without it; AI verification trades LLM cost and latency for fewer false positives and richer reasoning in reports.

How it works

  1. During a scan, Taka’s rule engine produces findings with evidence (HTTP request, response, match context, timing).
  2. When AI verification is enabled for a scan, each finding that matches the Findings to Verify filter is sent to the configured LLM together with:
    • a system prompt that tells the model what to do;
    • the raw evidence collected by the scanner; and
    • a fixed JSON output schema (status, confidence, reasoning).
  3. The LLM returns a verdict; Taka stores it alongside the finding.
  4. If Active Verification is the selected mode, the model may request additional HTTP probes; Taka sends them through the scanner’s HTTP client and loops the responses back into the prompt for a final verdict.
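
For orientation, a verdict in that schema might look like the following (the field casing and enum values here are illustrative, not taken from Taka's wire format):

{
  "status": "confirmed",
  "confidence": 85,
  "reasoning": "The injected payload is reflected unencoded inside a <script> block, consistent with exploitable XSS rather than a benign echo."
}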

Rules that are deterministic (pattern-based) don’t need AI and are skipped; they’re shown with a Deterministic check chip in the UI.

Verification modes

| Mode | Extra traffic to target? | When to use |
|------|--------------------------|-------------|
| Evidence Analysis | No. Only the evidence the scanner captured is sent to the LLM. | Production systems, client-owned targets, or scans where additional requests would be unwelcome. |
| Active Verification | Yes. The model can request follow-up probes. | Dev, staging, or CTF targets where ambiguous findings are common. |

Both modes return the same verdict shape; they differ only in whether the model is allowed to initiate new requests.

Providers

The Web UI lets you pick from two providers when starting a scan or verifying a single finding:

| Provider | Environment variable (fallback) | Default model |
|----------|---------------------------------|---------------|
| Anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-6 |
| OpenAI | OPENAI_API_KEY | gpt-4o |

Additional provider keys (Gemini, Groq) can be saved in Settings but are not currently selectable from the Web UI. Anthropic is the primary provider Taka is tested against.

Configuring keys

An API key can come from three places, with this precedence:

  1. Per-scan: entered on the New Scan form. Overrides everything else. Not reused on other scans.
  2. Global: saved in Settings. Used for every scan where the per-scan field is blank.
  3. Environment variable: read from the container’s environment at scan time. Used when neither per-scan nor global is set.
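
As a mental model, the resolution order is equivalent to this sketch (the function and parameter names are hypothetical, not Taka's internals):

import os

def resolve_api_key(per_scan_key: str | None, global_key: str | None) -> str | None:
    """Pick the key for a scan: per-scan beats global beats environment."""
    # 1. A key entered on the New Scan form wins outright.
    if per_scan_key:
        return per_scan_key
    # 2. Otherwise fall back to the global key saved in Settings.
    if global_key:
        return global_key
    # 3. Finally read the provider's variable from the container
    #    environment; may be None if nothing is configured.
    return os.environ.get("ANTHROPIC_API_KEY")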

Setting a key via environment variable

Add it to docker-compose.yml:

services:
  taka:
    # ...
    environment:
      TZ: ${TZ:-UTC}
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}

And put the secret in your .env:

ANTHROPIC_API_KEY=sk-ant-...
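
After restarting the stack, you can confirm the variable reached the container with standard Docker Compose commands (assuming the image ships printenv):

docker compose up -d
docker compose exec taka printenv ANTHROPIC_API_KEY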

For most deployments the Settings UI is simpler; it lets you rotate keys without restarting the container.

Cost control

AI verification can call the LLM many times per scan. To keep costs bounded:

  • Set Findings to Verify on the New Scan form to a narrower scope (for example Low confidence only).
  • Prefer Evidence Analysis over Active Verification when you don’t need probing.
  • Pick a cheaper model (e.g. Claude Haiku, GPT-4o-mini) via the Model field for bulk scanning; use a larger model only for manual re-verification of disputed findings.
  • Filter the scan’s rule set with Tags so fewer findings are produced in the first place.
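
To get a feel for the numbers, here is a back-of-the-envelope estimate; every figure below is an assumption to replace with your provider's pricing and your own measured token counts:

# Rough cost estimate for Evidence Analysis on one scan.
# All numbers are illustrative assumptions, not measured values.
findings_to_verify = 120     # findings matching the Findings to Verify filter
input_tokens_each = 3_000    # prompt + evidence per finding (assumption)
output_tokens_each = 300     # verdict JSON + reasoning (assumption)
price_in_per_mtok = 3.00     # USD per million input tokens (assumption)
price_out_per_mtok = 15.00   # USD per million output tokens (assumption)

cost = findings_to_verify * (
    input_tokens_each * price_in_per_mtok
    + output_tokens_each * price_out_per_mtok
) / 1_000_000
print(f"~${cost:.2f} per scan")  # ~$1.62 with these assumptions

Halving the findings to verify, or switching to a cheaper model, scales the total linearly.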

Verdicts

Each verified finding carries a status, a self-reported confidence (0–100%), and a reasoning block. In Active mode, the results of any follow-up probes the model requested are stored as well.

| Status | Meaning |
|--------|---------|
| Confirmed | The AI believes the finding is a true positive. |
| Likely False Positive | The AI believes the finding is a false positive. |
| Verification Failed | The verification run failed before producing a verdict. |
| Partial Result | The LLM produced output that Taka could only partially parse. |
| AI Verifying… | The run is still in progress. |
| AI Unverified | AI verification was enabled for the scan but hasn’t run on this finding. |

Verdicts surface on the Finding Detail page (see Finding Details) and in both the JSON and HTML exports.
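
Because verdicts land in the JSON export, you can triage from the command line. A minimal sketch, assuming the export is a list of findings with a nested verification object (the filename and field names are hypothetical; check your actual export for the real keys):

import json

with open("scan-export.json") as f:  # hypothetical export filename
    findings = json.load(f)

# Keep only findings the AI confirmed; key names are assumptions.
confirmed = [
    fnd for fnd in findings
    if fnd.get("ai_verification", {}).get("status") == "Confirmed"
]
print(f"{len(confirmed)} of {len(findings)} findings confirmed")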

Note

A Likely False Positive verdict does not automatically delete the finding; Taka always keeps the original rule match. Treat the verdict as a triage signal, not a silencing mechanism.

Custom prompts

If the built-in prompts don’t fit your workflow, you can override them in two places:

  • Per run: tick Use custom prompts in the AI Verification drawer on a finding. Edit the system and/or user prompt inline.
  • Globally: use the AI Verification Prompts card in Settings to save overrides per mode. New drawer sessions pre-fill from these.

Leaving a custom field blank means Taka falls back to its built-in prompt for that slot.
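
As an illustration, a global Evidence Analysis override might read as follows (example text, not one of the built-in prompts):

You are reviewing findings from a web vulnerability scanner.
Judge only the evidence provided; do not speculate beyond it.
Return Confirmed only when the evidence alone demonstrates exploitability,
and keep the reasoning to three sentences or fewer.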