Summary Report: Run Packs, PAS, and Claude Code Integration in Verdict
Based on my thorough exploration of the Verdict codebase, here is what I found:
---
1. RUN PACK DEFINITIONS & SCHEMA
Location: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/cloud/services/config_registry/schemas/runpack.v1.json
What Run Packs Are:
Run Packs are configuration bundles that define profile selections, which are compiled into an effective_config. A Run Pack represents a complete configuration state for executing work in the Verdict system.
Run Pack Structure:
{
  "schema_version": "runpack.v1",
  "verdict_created_with": "1.0.0",
  "name": "string",
  "description": "string | null",
  "tags": ["array of strings"],
  "profile_selections": {
    "model_set": {...},
    "orchestration": {...},
    "routing_policy": {...},
    "agent_comms": {...},
    "tool_access": {...},
    "prompt": {...},
    "observability": {...},
    "task_execution_agents": {...}  // ← 8th profile type
  }
}
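As an illustration of the structure above, a minimal structural check on a run pack document might look like this (the helper and the required-key list are assumptions inferred from the fields shown; the real service validates against runpack.v1.json with a full JSON-schema validator):

```python
# Minimal structural check for a run pack document.
# NOTE: illustrative only -- the actual Config Registry validates
# against the runpack.v1.json schema.

REQUIRED_KEYS = {"schema_version", "name", "profile_selections"}

PROFILE_TYPES = {
    "model_set", "orchestration", "routing_policy", "agent_comms",
    "tool_access", "prompt", "observability", "task_execution_agents",
}

def check_run_pack(doc: dict) -> list:
    """Return a list of human-readable problems; empty list means OK."""
    problems = []
    for key in sorted(REQUIRED_KEYS - doc.keys()):
        problems.append(f"missing required key: {key}")
    if doc.get("schema_version") != "runpack.v1":
        problems.append("schema_version must be 'runpack.v1'")
    unknown = set(doc.get("profile_selections", {})) - PROFILE_TYPES
    for key in sorted(unknown):
        problems.append(f"unknown profile type: {key}")
    return problems
```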
Key Run Pack Service: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/cloud/services/config_registry/service/runpack_service.py
The RunPackService provides:
- CRUD operations: Create, read, update, delete run packs
- Revision management: Version control for run pack changes with ConfigRunPackRevision
tracking
- Compilation: Compiles run packs to effective_config using CompilerService
- Validation: Validates run pack content against JSON schema
- Scope management: User-scoped, org-scoped, and system-scoped run packs
- Lock/archive operations: Prevents concurrent modifications
- Copy/fork: Allows duplicating run packs
Database Models
(/Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/cloud/db/models.py):
- ConfigRunPack: Main run pack entity (pack_id, name, owner, scope, locked/archived
status)
- ConfigRunPackRevision: Revision history (content, compilation results, validation
status)
---
2. HOW RUN PACKS ARE EXECUTED
The Role Strategy Switchboard Factory: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/pas/root/strategy_switchboard.py
This is the factory that reads Run Packs and instantiates the appropriate execution
adapters for each PAS tier.
What it does:
1. Reads the effective_config.task_execution_agents profile from the compiled run pack
2. Validates strategy/backend combinations against PRD Section 5.4
3. Instantiates the correct adapter per tier:
- Architect tier: verdict or claude_planner strategy
- Director tier: verdict or claude_planner strategy
- Manager tier: verdict or claude_planner_only strategy
- Programmer tier: aider, claude_executor, or verdict_native strategy
Default Behavior (per PRD Section 9.1):
DEFAULT_TEA_CONFIG = {
    "architect": {"strategy": "verdict"},
    "director": {"strategy": "verdict", "config": {"lane_allocation_mode": "sequential"}},
    "manager": {"strategy": "verdict", "config": {...}},
    "programmer": {
        "strategy": "aider",
        "execution_backend": "aider_cli_one_shot",
        "config": {"timeout_s": 300, "kill_switch_enabled": True}
    }
}
Execution Flow:
Run Pack (with task_execution_agents profile)
↓
RoleStrategySwitchboard
↓
 ┌─────────┬─────────┬─────────┐
 ↓         ↓         ↓         ↓
Architect  Director  Manager   Programmer
 adapter    adapter   adapter   backend
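The dispatch above can be sketched as a table of allowed strategies per tier (the tier and strategy names come from this section; the registry layout itself is an assumption, not the actual strategy_switchboard.py implementation):

```python
# Allowed strategy names per tier, per the combinations listed above.
# Illustrative sketch -- not the actual switchboard code.
ALLOWED = {
    "architect":  {"verdict", "claude_planner"},
    "director":   {"verdict", "claude_planner"},
    "manager":    {"verdict", "claude_planner_only"},
    "programmer": {"aider", "claude_executor", "verdict_native"},
}

def resolve_strategy(tea_profile: dict, tier: str) -> str:
    """Validate and return the configured strategy for one tier."""
    strategy = tea_profile.get(tier, {}).get("strategy")
    if strategy not in ALLOWED[tier]:
        raise ValueError(f"invalid strategy {strategy!r} for tier {tier!r}")
    return strategy
```

A real switchboard would additionally map each validated strategy name to an adapter class before instantiating it.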
---
3. THE PAS (POLYGLOT AGENT SWARM) SYSTEM
PAS Architecture: 6-tier orchestrated system defined in
/Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/configs/pas/ports.json
PAS Tiers (from docs and code):
┌──────┬───────────┬─────────────────────┬──────────────────────────────────────────────┐
│ Tier │ Port │ Service │ Role │
├──────┼───────────┼─────────────────────┼──────────────────────────────────────────────┤
│ 0 │ 6120 │ Local Gateway │ Entry point for Prime Directives │
├──────┼───────────┼─────────────────────┼──────────────────────────────────────────────┤
│ 1 │ 6100 │ PAS-Root │ Orchestrator API, invokes switchboard │
├──────┼───────────┼─────────────────────┼──────────────────────────────────────────────┤
│ 2 │ 6110 │ Architect │ Task decomposition into JobCards │
├──────┼───────────┼─────────────────────┼──────────────────────────────────────────────┤
│ 3 │ 6111-6115 │ Directors (5 lanes) │ Lane-specific planning coordination │
├──────┼───────────┼─────────────────────┼──────────────────────────────────────────────┤
│ 4 │ 6141-6147 │ Managers (7) │ Task breakdown, orchestration messages │
├──────┼───────────┼─────────────────────┼──────────────────────────────────────────────┤
│ 5 │ 6151-6159 │ Programmers (9) │ CODE EXECUTION (ONLY tier that mutates repo) │
└──────┴───────────┴─────────────────────┴──────────────────────────────────────────────┘
Key Principle: Only the Programmer tier mutates the repository. All upstream tiers
(Architect → Director → Manager) plan and coordinate without making changes.
What PAS Does:
- Decomposes objectives into job cards with task dependencies
- Allocates work across specialized lanes (code, data, models, docs, devsecops, test)
- Provides coordination protocol via JobCards and messages
- Routes execution to appropriate programmer backend based on run pack config
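As a sketch of the coordination protocol, a JobCard with task dependencies might look like the following (the field names are hypothetical, chosen for illustration; the actual JobCard schema lives in the PAS PRDs):

```python
from dataclasses import dataclass, field

# Hypothetical JobCard shape -- field names are illustrative,
# not the actual schema from the PAS PRDs.
@dataclass
class JobCard:
    card_id: str
    lane: str          # e.g. code, data, models, docs, devsecops, test
    objective: str
    depends_on: list = field(default_factory=list)

def ready_cards(cards: list, done: set) -> list:
    """Cards whose dependencies are all complete and which are not yet done."""
    return [c for c in cards
            if c.card_id not in done and set(c.depends_on) <= done]
```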
---
4. CLAUDE CODE INTEGRATION: VCCA ARCHITECTURE
The Verdict ⇄ Claude Code Adapter (VCCA): /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/common/vcca/adapter.py
This is the abstraction layer that makes Claude Code Runtime (CCR) behave like a Verdict
component.
VCCA Responsibilities (from PRD Section 4):
1. JobCard translation: Convert JobCard format → Claude Code task format
2. BYOK model injection: Inject Verdict's model_set via environment variables:
   ANTHROPIC_BASE_URL = http://127.0.0.1:6120/v1
   ANTHROPIC_AUTH_TOKEN =
   ANTHROPIC_MODEL =
3. Budget enforcement: Configurable timeouts and token limits
4. Kill-switch: Terminate runaway processes (SIGTERM → 5s grace → SIGKILL)
5. Artifact capture: Diffs, logs, test results, file snapshots
6. Policy gates: Allowed lanes/roles/files restrictions
7. Receipt generation: For billing/telemetry
VCCA Access Methods (config-driven, swappable):
- cli_subprocess: Spawn claude as child process (fastest)
- embedded: Ship pinned CCR in Verdict installer (deterministic)
- remote_service: Connect to CCR daemon/container (fleet-ready)
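For the cli_subprocess method, the BYOK injection can be sketched as building a child-process environment (the helper function is an assumption; the variable names are the ones listed under Section 4 above):

```python
import os

def byok_env(model: str, auth_token: str,
             gateway_url: str = "http://127.0.0.1:6120/v1") -> dict:
    """Child-process environment pointing Claude Code at the Local Gateway."""
    env = dict(os.environ)
    env.update({
        "ANTHROPIC_BASE_URL": gateway_url,   # all LLM traffic routes via Verdict
        "ANTHROPIC_AUTH_TOKEN": auth_token,
        "ANTHROPIC_MODEL": model,
    })
    return env

# The adapter would then spawn the CLI with this environment, e.g.:
#   subprocess.Popen(["claude", ...], env=byok_env(model, token))
```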
---
5. ARCHITECTURE: ROLE ADAPTERS (ABC INTERFACES)
All four role adapters are abstract base classes defined in /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/common/role_adapters/:
ArchitectAdapter (architect_adapter.py)
class ArchitectAdapter(ABC):
    @abstractmethod
    async def plan(self, objective, context, constraints) -> ArchitectOutput:
        """Generate task decomposition and JobCards."""

    @abstractmethod
    async def refine(self, feedback, previous_output) -> ArchitectOutput:
        """Refine plan based on downstream feedback."""
Implementations:
- VerdictArchitectAdapter: Uses PAS-native task decomposition logic
- ClaudePlannerArchitectAdapter: Higher-level planning (Claude-based)
DirectorAdapter (director_adapter.py)
class DirectorAdapter(ABC):
    @abstractmethod
    async def plan_lane(self, job_cards, lane_id, context) -> DirectorOutput:
        """Generate lane-specific execution plan."""
Implementations:
- VerdictDirectorAdapter: Lane batching + cost/quality routing
- ClaudePlannerDirectorAdapter: Strong lane-specific planning
ManagerAdapter (manager_adapter.py)
class ManagerAdapter(ABC):
    @abstractmethod
    async def orchestrate(self, job_cards, context) -> OrchestrationMessages:
        """Send CMD/REQUEST/STATUS routing to programmers."""
Implementations:
- VerdictManagerAdapter: Deterministic orchestration, retries, budgets
- ClaudePlannerOnlyManagerAdapter: Better sequencing decisions
ProgrammerBackend (programmer_backend.py)
class ProgrammerBackend(ABC):
    @abstractmethod
    async def execute(self, job_card, context) -> ExecutionResult:
        """ONLY tier that mutates the repository."""

    @abstractmethod
    async def execute_streaming(self, job_card, context) -> AsyncIterator[ExecutionArtifact]:
        """Stream artifacts as they become available."""

    @abstractmethod
    async def kill(self) -> bool:
        """Terminate execution if running."""
Implementations:
- AiderCliBackend: Tight incremental edits via Aider CLI (default)
- ClaudeCodeVCCABackend: Big refactors via Claude Code + VCCA
- VerdictNativeBackend: Future native implementation
---
6. CLAUDE CODE BACKEND: DEEP DIVE
Class: ClaudeCodeVCCABackend in /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/common/role_adapters/programmer_backend.py
Execution Flow (PRD Section 18.2 Example 2):
1. Manager sends CMD: "Implement feature X in feature_x.py"
2. RoleStrategySwitchboard instantiates ClaudeCodeVCCABackend
3. VCCA translates JobCard to CCR task format
4. VCCA sets env vars (ANTHROPIC_BASE_URL, ANTHROPIC_MODEL, ANTHROPIC_AUTH_TOKEN)
5. VCCA spawns: claude --task "
6. Claude Code executes agentic loop, sends requests to Local Gateway (6120)
7. Local Gateway routes to configured provider per routing_policy
8. VCCA monitors timeout, captures artifacts
9. Returns ExecutionResult with status=COMPLETED|FAILED|TIMEOUT|KILLED
Features:
- Supports streaming artifacts (supports_streaming() → True)
- Grace period for shutdown: SIGTERM → 5s wait → SIGKILL
- Comprehensive error handling: timeouts, kills, execution failures
- Artifact conversion from Claude Code JSON format to ExecutionArtifact objects
- Receipt generation for billing/telemetry
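The SIGTERM → 5s grace → SIGKILL sequence can be sketched with asyncio subprocesses (a simplified stand-in under stated assumptions, not the backend's actual kill path):

```python
import asyncio

GRACE_S = 5.0  # grace period between SIGTERM and SIGKILL, per the PRD

async def kill_gracefully(proc) -> bool:
    """SIGTERM, wait up to GRACE_S, then SIGKILL if still running."""
    if proc.returncode is not None:
        return False                     # already exited; nothing to kill
    proc.terminate()                     # SIGTERM
    try:
        await asyncio.wait_for(proc.wait(), timeout=GRACE_S)
    except asyncio.TimeoutError:
        proc.kill()                      # SIGKILL after the grace period
        await proc.wait()
    return True
```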
Limitations by Design:
- Must be bounded: timeout_s, kill_switch, artifact capture limits
- Cannot exceed 1 hour execution time (3600s max)
- Minimum 30s timeout enforced
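The timeout bounds can be sketched as a simple clamp (the helper name is an assumption; the 30s floor and 3600s ceiling come from the limitations above):

```python
MIN_TIMEOUT_S = 30      # minimum enforced timeout
MAX_TIMEOUT_S = 3600    # hard 1-hour ceiling on execution time

def clamp_timeout(requested_s: float) -> float:
    """Clamp a requested execution timeout into the allowed window."""
    return max(MIN_TIMEOUT_S, min(MAX_TIMEOUT_S, requested_s))
```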
---
7. GATEWAY AND CLIENT INTEGRATION
Run Pack Client: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/cloud/services/gateway/run_pack_client.py
The RunPackClient fetches compiled run packs and extracts LLM settings:
class RunPackClient:
    async def get_settings(self, run_pack_id, role=None) -> RunPackSettings:
        # Fetch effective_config from Config Registry
        # Extract temperature, max_tokens, role-specific prompts
        # Return RunPackSettings with merged system prompts
RunPackSettings extracted from Run Pack:
- temperature: From model_set profile
- system_prompt: From prompt profile (global_base_prompt)
- role_prompts: Role-specific prompts (architect, programmer, etc.)
- model_set: Full model configuration
- effective_config_hash: For cache validation
Usage in Gateway:
settings = await get_run_pack_settings(run_pack_id, role="programmer")
temperature = settings.temperature
system_prompt = settings.get_merged_system_prompt(role="programmer")
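The prompt merge can be sketched as concatenating the global base prompt with the role-specific prompt (the merge order and helper name are assumptions; global_base_prompt and role_prompts come from the settings listed above):

```python
from typing import Optional

def merged_system_prompt(global_base: str, role_prompts: dict,
                         role: Optional[str] = None) -> str:
    """Global base prompt first, then the role-specific prompt if one exists."""
    parts = [global_base]
    if role and role in role_prompts:
        parts.append(role_prompts[role])
    return "\n\n".join(p for p in parts if p)
```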
---
8. KEY FILES & LOCATIONS
Run Pack Core:
- Schema: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/cloud/services/config_registry/schemas/runpack.v1.json
- Service: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/cloud/services/config_registry/service/runpack_service.py
- Client: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/cloud/services/gateway/run_pack_client.py
PAS & Orchestration:
- Switchboard: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/pas/root/strategy_switchboard.py
- Ports config: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/configs/pas/ports.json
Role Adapters:
- All in: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/common/role_adapters/
- architect_adapter.py
- director_adapter.py
- manager_adapter.py
- programmer_backend.py
- __init__.py
VCCA (Claude Code):
- Adapter: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/common/vcca/adapter.py
- Access methods: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/common/vcca/access_methods.py
- Init: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/services/common/vcca/__init__.py
Documentation:
- Task Execution Agents PRD: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/docs/PRDs/PRD_Task_Execution_Agents_v2.1.md
- PAS Architecture docs: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/docs/MULTITIER_PAS_ARCHITECTURE.md
- Various PAS PRDs in: /Users/trentcarter/Artificial_Intelligence/AI_Projects/Verdict/docs/PRDs/
---
9. KEY INSIGHTS
1. Run Packs are Profile Bundles: They're not code; they're configuration selections that get compiled into an effective_config via the Compiler Service.
2. The Strategy Switchboard is the Hub: It reads run packs and instantiates the right adapter per tier. It's the only place that knows which backend to use.
3. Programmers are the Sole Mutators: Only the Programmer tier mutates the repository; Architect → Director → Manager are all planning-only tiers.
4. Claude Code via VCCA: Claude Code is treated as just another backend option. The VCCA adapter isolates CCR from both PAS and LLM vendors through BYOK model injection via the Local Gateway.
5. No Hardcoded References: Model names, adapters, and backends are all resolved from the task_execution_agents profile at runtime, so execution backends can be hot-swapped with just a run pack config change.
6. BYOK (Bring Your Own Key): Claude Code uses Verdict's routing infrastructure (Local Gateway, port 6120) to access models. There are no direct Claude API calls; all LLM requests flow through Verdict's billing/telemetry.
7. Streaming Support: The Claude Code backend supports streaming artifacts; Aider doesn't. This is exposed via the supports_streaming() method so upstream tiers can handle each backend appropriately.