|
|
|
|
|
|
|
- can use any openai/anthropic models
|
|
|
|
|
|
|
|
- can use multiple sets of creds
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# in progress
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Scheduled
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# other scheduled
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- persona definitions
|
|
|
|
|
|
|
|
- product
|
|
|
|
|
|
|
|
- task
|
|
|
|
|
|
|
|
- coder
|
|
|
|
|
|
|
|
- tester
|
|
|
|
|
|
|
|
- git
|
|
|
|
|
|
|
|
- handle basic git validation/maintenance
|
|
|
|
|
|
|
|
- edit + merge when conflict is low
|
|
|
|
|
|
|
|
- pass to dev when conflict is big
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- task management flow outline
|
|
|
|
|
|
|
|
- what is hard coded?
|
|
|
|
|
|
|
|
- anything that isn't 100% reliant on an llm
|
|
|
|
|
|
|
|
- complete task, next task, etc
|
|
|
|
|
|
|
|
- task dependency graph aka the next task to be assigned is x
|
|
|
|
|
|
|
|
- giga do not ever let the agent call something like this, ban it if you can
|
|
|
|
|
|
|
|
- task assignment
|
|
|
|
|
|
|
|
- task init
|
|
|
|
|
|
|
|
-
|
|
|
|
|
|
|
|
- what is sent to llm?
|
|
|
|
|
|
|
|
- the minimum possible relevant data
|
|
|
|
|
|
|
|
- task prioritization
|
|
|
|
|
|
|
|
- subtask explosion
|
|
|
|
|
|
|
|
- "clarification needed" process
|
|
|
|
|
|
|
|
- init
|
|
|
|
|
|
|
|
- planning
|
|
|
|
|
|
|
|
- prioritization
|
|
|
|
|
|
|
|
- dependency graph
|
|
|
|
|
|
|
|
- subtasks
|
|
|
|
|
|
|
|
- task/subtask status updates (pending, in progress, done, failed)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- remove todoist mcp
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Considering
|
|
|
|
|
|
|
|
- add a testing methodology similar to the python script's, using Playwright and visual automation, and ban the agents from bypassing it with curl and the like
|
|
|
|
|
|
|
|
- add an instruction telling the agent under which circumstances it should consider using context7, and, if it decides to use it, how it should write code to interact with it rather than calling it directly
|
|
|
|
|
|
|
|
- pretty colors on terminal uwu
|
|
|
|
|
|
|
|
- agent names
|
|
|
|
|
|
|
|
- consider adding google gemini 3.1, even though it costs money it is the best prd drafter by far. might be good at tasks too
|
|
|
|
|
|
|
|
- list/select models
|
|
|
|
|
|
|
|
- selection per task/session/agent
|
|
|
|
|
|
|
|
- git orchestration
|
|
|
|
|
|
|
|
- merging
|
|
|
|
|
|
|
|
- symlinks
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# consider adding these libs
|
|
|
|
|
|
|
|
1. The Control Flow: Keep Yours, But Upgrade the Math
|
|
|
|
|
|
|
|
Right now, your PipelineExecutor (Feedback #5) is handling scheduling and topological fan-out manually. This is where homegrown DAGs usually start breaking down as relationships get complex.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
What the big frameworks use: They rely on established graph theory libraries to handle the execution order.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
What you should adopt: Do not write your own DAG traversal logic. Bring in a lightweight library like graphlib (or a modern TypeScript equivalent) to handle the topological sorting.
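Until a library is wired in, the core of what it provides is Kahn-style topological layering: each "wave" of zero-indegree nodes can be dispatched concurrently. A minimal sketch of that math (the `Dag` shape and all names are illustrative, not your actual types):

```typescript
// node -> list of nodes it depends on
type Dag = Record<string, string[]>;

// Returns execution "waves": every node in a wave has all its dependencies
// satisfied by earlier waves, so a wave can run concurrently.
export function executionWaves(dag: Dag): string[][] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const node of Object.keys(dag)) {
    indegree.set(node, dag[node].length);
    for (const dep of dag[node]) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), node]);
    }
  }
  const waves: string[][] = [];
  let ready = [...indegree].filter(([, d]) => d === 0).map(([n]) => n);
  let done = 0;
  while (ready.length > 0) {
    waves.push(ready);
    done += ready.length;
    const next: string[] = [];
    for (const n of ready) {
      for (const child of dependents.get(n) ?? []) {
        const d = indegree.get(child)! - 1;
        indegree.set(child, d);
        if (d === 0) next.push(child);
      }
    }
    ready = next;
  }
  // A leftover node means a cycle, which a strict DAG must reject
  if (done !== indegree.size) throw new Error('Cycle detected in pipeline DAG');
  return waves;
}
```

A dedicated graph library gives you this (plus cycle diagnostics) for free, which is the point of not hand-rolling it.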
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
2. Tooling and Transport: The MCP SDK
|
|
|
|
|
|
|
|
You already have a src/mcp directory, which puts you ahead of the curve. But managing the low-level JSON-RPC protocol over stdio or Server-Sent Events (SSE) is notoriously fragile.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
What the big frameworks use: The official @modelcontextprotocol/sdk packages provided by Anthropic.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
What you should adopt: If you aren't already, replace your custom src/mcp/converters.ts logic with the official SDK. Relying on the official standard ensures your orchestrator isn't permanently hard-coupled to your current Anthropic and OpenAI subscriptions. If you decide to point this engine at your local Ollama instance running behind Traefik, a standardized MCP transport layer guarantees your tools and context will work seamlessly across both your cloud models and your local open-weight ones.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
3. Process Execution: Safer Shells
|
|
|
|
|
|
|
|
When your agents execute shell commands in that AGENT_WORKTREE_ROOT, using Node's raw child_process.exec is messy. It buffers stdout/stderr poorly and makes escaping arguments dangerous.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
What the big frameworks use: zx (by Google) or execa.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
What you should adopt: execa is fantastic for this. It handles process timeouts, cleans up orphaned child processes automatically (crucial for your retry-unrolled DAGs), and streams stdout natively so you can pipe it directly into your domain-events bus without memory bloat.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Completed
|
|
|
|
|
|
|
|
1. boilerplate typescript project for claude
|
|
|
|
|
|
|
|
- mcp server support
|
|
|
|
|
|
|
|
- generic mcp handlers
|
|
|
|
|
|
|
|
- specific mcp handlers for
|
|
|
|
|
|
|
|
- context7
|
|
|
|
|
|
|
|
- claude task manager
|
|
|
|
|
|
|
|
- concurrency, configurable max agent and max depth
|
|
|
|
|
|
|
|
- Extensible Resource Provisioning
|
|
|
|
|
|
|
|
- hard constraints
|
|
|
|
|
|
|
|
- soft constraints
|
|
|
|
|
|
|
|
- basic hygiene run
|
|
|
|
|
|
|
|
# epic
|
|
|
|
|
|
|
|
- agent orchestration system improvements
|
|
|
|
|
|
|
|
# module 1
|
|
|
|
|
|
|
|
- schema driven execution engine
|
|
|
|
|
|
|
|
- specific definitions handled in AgentManifest schema
|
|
|
|
|
|
|
|
- persona registry
|
|
|
|
|
|
|
|
- templated system prompts injected with runtime context
|
|
|
|
|
|
|
|
- tool clearances (stub this for now, add TODO for security implementation)
|
|
|
|
|
|
|
|
- allowlist
|
|
|
|
|
|
|
|
- banlist
|
|
|
|
|
|
|
|
- behavioral event handlers
|
|
|
|
|
|
|
|
- define how personas react to specific events ie. onTaskComplete, onValidationFail
|
|
|
|
|
|
|
|
# module 2
|
|
|
|
|
|
|
|
- actor oriented pipeline constrained by a strict directed acyclic graph
|
|
|
|
|
|
|
|
- relationship + pipeline graphs
|
|
|
|
|
|
|
|
- multi level topology
|
|
|
|
|
|
|
|
- hierarchical ie parent spawns 3 coder children
|
|
|
|
|
|
|
|
- unrolled retry pipelines ie coder1 > QA1 > Coder2 > QA2
|
|
|
|
|
|
|
|
- sequential ie product > task > coder > QA > git
|
|
|
|
|
|
|
|
- support for constraint definition for each concept (relationship, pipeline, topology)
|
|
|
|
|
|
|
|
- ie max depth, max retries
|
|
|
|
|
|
|
|
- state dependent routings
|
|
|
|
|
|
|
|
- support branching logic based on project history or repository state ie. project init requires product agent to generate prd, then task agent needs to create roadmap, once those exist future sessions skip those agents and go straight to coder agents
|
|
|
|
|
|
|
|
# module 3
|
|
|
|
|
|
|
|
- state/context manager
|
|
|
|
|
|
|
|
- stateless handoffs
|
|
|
|
|
|
|
|
- state and context are passed forwards through payloads via worktree/storage, not conversational memory
|
|
|
|
|
|
|
|
- fresh context per node execution
|
|
|
|
|
|
|
|
# module 4
|
|
|
|
|
|
|
|
- resource provisioning
|
|
|
|
|
|
|
|
- hierarchical resource suballocation
|
|
|
|
|
|
|
|
- when a parent agent spawns children, handle local resource management
|
|
|
|
|
|
|
|
- branch/sub-worktree provisioning
|
|
|
|
|
|
|
|
- suballocating deterministic port range provisioning
|
|
|
|
|
|
|
|
- extensibility to support future resource types
|
|
|
|
# epic
|
|
|
|
|
|
|
|
implementation of AgentManager.runRecursiveAgent
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- The Abort Test: Start a parent with a 5-second sleep task, cancel the session at 1 second. Assert that the underlying LLM SDK handles were aborted and resources were released.
|
|
|
|
|
|
|
|
- The Isolation Test: Spawn two children concurrently. Assert they are assigned non-overlapping port ranges and isolated worktree paths.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Scheduled
|
|
|
|
# epic
|
|
|
|
- security implementation
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# connecting pipeline engine and recursive agent management

- need to untangle: what goes where in terms of DAG definition vs app logic vs agent behavior
- The Pipeline Engine must be the single source of truth. The Recursive Manager should not be a separate way to run agents; it should be a utility that the Pipeline Engine calls when it hits a node that requires a "fan-out" (hierarchical or unrolled-retry topology).
- deprecate the standalone CLI examples for runRecursiveAgent and instead wire the Manager directly inside SchemaDrivenExecutionEngine.runSession

# execution driven topologies

- need to untangle: what events we have, which personas care about which events, how a persona should respond to an event, where that is defined, and the success/failure/retry policy definitions
- currently, the manifest validates that a topology is, for example, "hierarchical," but at runtime the code ignores that label and runs everything sequentially
- the execution loop needs a true DAG runner
- when the Orchestrator evaluates the next nodes to run and sees a "hierarchical" or "parallel" topology block, it must dispatch those nodes to the AgentManager concurrently using Promise.all(), rather than waiting for one to finish before starting the next

# project scoped data store

- need to untangle: where does this live, and what are the stored domains?
- implement a ProjectContext store. Sessions should read from the global Project State on initialization, and write their metadata updates back to the Project State upon successful termination
|
|
|
|
- store should contain these domains
|
|
|
|
|
|
|
|
- global flags
|
|
|
|
|
|
|
|
- artifact pointers
|
|
|
|
|
|
|
|
- task queue
|
|
|
|
|
|
|
|
- dag orchestrator reads file at init and writes to it upon node completion
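A sketch of what that project-context file could look like, covering the three domains above (every field name here is an assumption, not a final schema):

```json
{
  "schemaVersion": 1,
  "globalFlags": {
    "prdGenerated": true,
    "roadmapGenerated": false
  },
  "artifactPointers": {
    "prd": "docs/prd.md"
  },
  "taskQueue": [
    { "id": "T-1", "status": "pending", "dependsOn": [] }
  ]
}
```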
|
|
|
|
|
|
|
|
# typed domain event bus
|
|
|
|
|
|
|
|
- Implement a strongly-typed Event Bus or a Domain Event schema.
|
|
|
|
|
|
|
|
- Create a standard payload shape for events
|
|
|
|
|
|
|
|
- The Pipeline should allow edges to trigger based on specific domain events, not just basic success/fail strings
|
|
|
|
|
|
|
|
- planning events - These events occur when the project state is empty or a new major feature is requested. They transition the system from "idea" to "actionable work."
|
|
|
|
|
|
|
|
- requirements defined
|
|
|
|
|
|
|
|
- product agent triggers upon prd completion > task agent consumes
|
|
|
|
|
|
|
|
- tasks planned
|
|
|
|
|
|
|
|
- task agent triggers upon completion of dedicated claude-task-manager process > coder agent consumes
|
|
|
|
|
|
|
|
- execution events - These are the most common events. They handle the messy reality of writing code and the cyclical (but unrolled) retry pipelines.
|
|
|
|
|
|
|
|
- code committed
|
|
|
|
|
|
|
|
- task blocked (needs clarification, impossible task, max retry etc)
|
|
|
|
|
|
|
|
- validation events - These events dictate whether the DAG moves forward to integration or branches sideways into a retry pipeline.
|
|
|
|
|
|
|
|
- validation passed
|
|
|
|
|
|
|
|
- validation failed
|
|
|
|
|
|
|
|
- integration events - This event closes the loop and updates the global state.
|
|
|
|
|
|
|
|
- branch merged (also tasks updated etc)
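The taxonomy above can be sketched as a strongly typed bus; the event names and payload fields are illustrative stand-ins mirroring the list, not a final schema:

```typescript
// Map each domain event name to its payload shape
type DomainEvents = {
  requirements_defined: { prdPath: string };
  tasks_planned: { taskIds: string[] };
  code_committed: { sha: string; taskId: string };
  task_blocked: { taskId: string; reason: string };
  validation_passed: { taskId: string };
  validation_failed: { taskId: string; errors: string[] };
  branch_merged: { branch: string };
};

export class DomainEventBus {
  private handlers = new Map<keyof DomainEvents, Array<(payload: never) => void>>();

  // Subscribing to an event gives the handler the correct payload type
  on<K extends keyof DomainEvents>(event: K, handler: (payload: DomainEvents[K]) => void): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler as (payload: never) => void);
    this.handlers.set(event, list);
  }

  // Emitting with a wrong payload shape is a compile-time error
  emit<K extends keyof DomainEvents>(event: K, payload: DomainEvents[K]): void {
    for (const h of this.handlers.get(event) ?? []) {
      (h as (p: DomainEvents[K]) => void)(payload);
    }
  }
}
```

Pipeline edges can then subscribe to specific domain events instead of matching bare success/fail strings.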
|
|
|
|
|
|
|
|
# retry matrix and cancellation
|
|
|
|
|
|
|
|
- Implement a Status Retry Matrix and enforce AbortSignal everywhere
|
|
|
|
|
|
|
|
- Validation_Fail: Trigger the unrolled retry pipeline (send the error back to a new agent instance)
|
|
|
|
|
|
|
|
- Hard_Failure (>=2 sequential API timeouts, network drops, 403, etc): Fail fast, do not burn tokens retrying. Bubble the error up to the user
|
|
|
|
|
|
|
|
- Pass standard AbortSignal objects down into the ActorExecutionInput so the pipeline can instantly kill rogue processes.
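The matrix above, reduced to a minimal classifier sketch (the error shape is an assumption; wire it to the real error classes):

```typescript
type Verdict = 'retry_unrolled' | 'hard_abort';

interface FailureInfo {
  kind: string;              // e.g. 'validation_failed', 'api_error', 'timeout'
  httpStatus?: number;
  sequentialTimeouts?: number;
}

// Validation failures feed the unrolled retry pipeline; infrastructure
// failures fail fast. Default is fail-closed: do not burn tokens retrying.
export function classifyFailure(err: FailureInfo): Verdict {
  if (err.kind === 'validation_failed') return 'retry_unrolled';
  if (err.httpStatus === 403) return 'hard_abort';
  if ((err.sequentialTimeouts ?? 0) >= 2) return 'hard_abort';
  return 'hard_abort';
}
```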
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# code review epic
|
|
|
|
|
|
|
|
- Header alias inconsistency can break Claude MCP auth/config
|
|
|
|
|
|
|
|
- Normalize the config object immediately upon parsing in src/mcp/converters.ts, mapping both headers and http_headers to a single internal representation before either the Codex or Claude handlers touch them
|
|
|
|
|
|
|
|
- Update src/agents/pipeline.ts to compute an aggregate status. You should traverse the execution records and ensure all terminal nodes (leaves) in your DAG have a status of "success". If any node in the critical path fails, the whole session should be marked as a failure
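A minimal sketch of that aggregate computation (the record shape is assumed, and "critical path" is simplified here to "any failure fails the session"):

```typescript
interface ExecRecord {
  id: string;
  children: string[]; // empty for terminal (leaf) nodes
  status: 'success' | 'failure' | 'skipped';
}

export function aggregateStatus(records: ExecRecord[]): 'success' | 'failure' {
  // Any failed node anywhere poisons the session
  if (records.some((r) => r.status === 'failure')) return 'failure';
  // Every terminal node must have actually succeeded
  const leaves = records.filter((r) => r.children.length === 0);
  return leaves.every((l) => l.status === 'success') ? 'success' : 'failure';
}
```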
|
|
|
|
|
|
|
|
- file persistence is not atomic, project-context serialization is process-local only
|
|
|
|
|
|
|
|
- Implement atomic writes
|
|
|
|
|
|
|
|
- Direct writes in state/context: src/agents/state-context.ts:203, src/agents/state-context.ts:251, src/agents/project-context.ts:171, src/agents/project-context.ts:205.
|
|
|
|
|
|
|
|
- Queue in FileSystemProjectContextStore (src/agents/project-context.ts:145) protects only within one process.
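The standard fix is write-to-temp-then-rename, which is atomic on POSIX filesystems when the temp file lives on the same filesystem; a minimal sketch:

```typescript
import { writeFileSync, renameSync } from 'node:fs';
import { randomBytes } from 'node:crypto';
import { dirname, join } from 'node:path';

// Write to a sibling temp file, then rename over the target. rename(2) is
// atomic within one filesystem, so concurrent readers never observe a
// half-written file (this does not solve cross-process write ordering).
export function atomicWriteFileSync(path: string, data: string): void {
  const tmp = join(dirname(path), `.${randomBytes(6).toString('hex')}.tmp`);
  writeFileSync(tmp, data);
  renameSync(tmp, path);
}
```

Cross-process coordination (the second half of the problem) still needs a lock file or a single writer process.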
|
|
|
|
|
|
|
|
- pipeline executor owns too many responsibilities
|
|
|
|
|
|
|
|
- Start extracting distinct policies.
|
|
|
|
|
|
|
|
- Move failure classification (hard vs soft fails) into a dedicated FailurePolicy class.
|
|
|
|
|
|
|
|
- Move persistence and event emissions into a LifecycleObserver or event bus listener rather than keeping them hardcoded in the execution loop
|
|
|
|
|
|
|
|
- Global mutable MCP handler registry limits extensibility/test isolation
|
|
|
|
|
|
|
|
- Refactor the registry into an instantiable class (e.g., McpRegistry)
|
|
|
|
|
|
|
|
- Pass this instance into your SchemaDrivenExecutionEngine and PipelineExecutor via dependency injection instead of relying on auto-installing imports
|
|
|
|
|
|
|
|
- Provider example entrypoints duplicate orchestration pattern
|
|
|
|
|
|
|
|
- Create a unified helper like createSessionContext(provider, config) that handles the provisioning, probing, and prompting loop, keeping the provider-specific code strictly limited to model initialization
|
|
|
|
|
|
|
|
- Config/env parsing is duplicated
|
|
|
|
|
|
|
|
- Create a single src/config.ts (or dedicated config service) that parses process.env, validates it, applies defaults, and freezes the object.
|
|
|
|
|
|
|
|
- Inject this single source of truth throughout the app
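A minimal sketch of such a config service (AGENT_WORKTREE_ROOT is the variable used elsewhere in these notes; AGENT_MAX_CONCURRENCY is an illustrative stand-in):

```typescript
export interface AppConfig {
  maxAgents: number;
  worktreeRoot: string;
}

// Parse once, validate, apply defaults, freeze; inject the result everywhere.
export function loadConfig(env: Record<string, string | undefined>): Readonly<AppConfig> {
  const maxAgents = Number(env.AGENT_MAX_CONCURRENCY ?? '4');
  if (!Number.isInteger(maxAgents) || maxAgents < 1) {
    throw new Error('AGENT_MAX_CONCURRENCY must be a positive integer');
  }
  const worktreeRoot = env.AGENT_WORKTREE_ROOT;
  if (!worktreeRoot) throw new Error('AGENT_WORKTREE_ROOT is required');
  // Freeze so downstream modules cannot mutate the shared config object
  return Object.freeze({ maxAgents, worktreeRoot });
}
```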
|
|
|
|
|
|
|
|
- Project context parsing is strict
|
|
|
|
|
|
|
|
- Update src/agents/project-context.ts:106 to merge parsed files with a set of default root keys
|
|
|
|
|
|
|
|
- Add a schemaVersion field to the JSON structure to allow for safe migrations later
|
|
|
|
|
|
|
|
# security middleware
|
|
|
|
|
|
|
|
- rely on an established AST (Abstract Syntax Tree) parser for shell scripts like bash-parser to handle tokenization
|
|
|
|
|
|
|
|
- Use an off-the-shelf parser to break commands down into executable binaries, flags, arguments, and environment variable assignments. We can scrub or inject specific environment variables securely at this layer
|
|
|
|
|
|
|
|
- focus specifically on extracting Command and Word nodes from the bash-parser output
|
|
|
|
|
|
|
|
- gives us a head start on exactly what part of the syntax tree matters for the allowlist
|
|
|
|
|
|
|
|
- AI agents frequently chain commands (&&, ||, |, >) to save turns. If your parser struggles with complex pipelines or subshells, it will artificially cripple the agents' ability to work efficiently
|
|
|
|
|
|
|
|
- rules engine
|
|
|
|
|
|
|
|
- For the simplest iteration, defining your allowlists and tool clearance schema via strictly typed Zod schemas is the most lightweight approach. You validate the AST output against the schema before passing it to the execution layer
|
|
|
|
|
|
|
|
- Implement strict binary allowlists (e.g., git, npm, node, cat) and enforce directory-bound execution (ensuring the cwd stays within AGENT_WORKTREE_ROOT)
|
|
|
|
|
|
|
|
- block path traversal attempts (e.g., ../). Even if the cwd starts in the worktree, an agent might try to read or write outside of it using relative paths in its arguments
|
|
|
|
|
|
|
|
- method for logging and profiling exactly what commands Codex and Claude are currently emitting to build a baseline allowlist for longer term best practices
|
|
|
|
|
|
|
|
- make clear todos around the need to replace/improve this
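The directory-bound execution and path-traversal checks above reduce to resolving every candidate path against the worktree root before use; a minimal sketch:

```typescript
import { resolve, sep } from 'node:path';

// A candidate path is allowed only if its fully resolved form stays inside
// the worktree root. This blocks ../ traversal even when cwd starts inside
// the worktree, because resolution happens before the path is ever used.
export function isInsideWorktree(root: string, candidate: string): boolean {
  const resolvedRoot = resolve(root);
  const resolvedTarget = resolve(resolvedRoot, candidate);
  return resolvedTarget === resolvedRoot || resolvedTarget.startsWith(resolvedRoot + sep);
}
```

Apply this to every path-like argument extracted by the AST layer, not just the cwd.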
|
|
|
|
|
|
|
|
- sandbox/execution layer
|
|
|
|
|
|
|
|
- Execute commands using Node's child_process with explicitly dropped privileges (running as a non-root user via uid/gid), enforce timeouts, and stream stdout/stderr to your existing event bus for auditing.
|
|
|
|
|
|
|
|
- By default, Node child processes inherit the parent's environment variables. ensure that our env management policy is consistent and secure given this behavior
|
|
|
|
|
|
|
|
- A very modern pattern is to use your Node orchestrator to spawn a deno run child process. You can pass explicit flags like --allow-read=/target/worktree and --allow-run=git,npm. If the LLM tries to read an env file outside that directory, the Deno runtime instantly kills the process at the OS level.
|
|
|
|
|
|
|
|
- agents need to modify files in AGENT_WORKTREE_ROOT, but they must absolutely not have write access to AGENT_STATE_ROOT or AGENT_PROJECT_CONTEXT_PATH. The security middleware must strictly enforce this boundary.
|
|
|
|
|
|
|
|
- Your PipelineExecutor currently routes validation_fail into a retry-unrolled execution. You will need to define a new error class (e.g., SecurityViolationError). Should a security violation trigger a retry (telling the LLM "You can't do that, try another way"), or should it instantly hard-abort the pipeline?
|
|
|
|
|
|
|
|
- Your MCP tools currently have auto-installed builtins. The rules engine needs to apply not just to shell commands, but also to MCP tool calls. The schema for tool clearance (currently a TODO at src/agents/persona-registry.ts:79) needs to be unified with this new rules engine
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- schema differences between claude and codex - no clue if we are doing anything for this
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
- MCP config boundary only verifies that the config is an "object" before casting it, risking late-stage crashes. Furthermore, the shared MCP type is missing the sdk (in-process) transport type supported by Claude
|
|
|
|
|
|
|
|
- Create a strict Zod schema for MCP configuration
|
|
|
|
|
|
|
|
- Define McpConfigSchema using Zod to strictly validate field shapes, ranges, and enums before handoff. Update src/mcp/types.ts to include sdk alongside stdio, http, and sse in your shared transport union
|
|
|
|
|
|
|
|
- Provider-specific MCP fields like enabled_tools and timeouts are used by Codex but silently dropped during conversion for Claude. This violates user expectations
|
|
|
|
|
|
|
|
- Fail fast or warn loudly: If the provider is set to Claude and these asymmetric fields are present in the parsed config, emit a clear warning log (e.g., [WARN] MCP field 'timeouts' is not supported by the Claude adapter and will be ignored)
|
|
|
|
|
|
|
|
- The SDK adapters are under-tested. The test suite covers converters and registries but misses the actual execution wiring, stream handling, and result parsing for Codex and Claude
|
|
|
|
|
|
|
|
- Implement integration/unit tests for the adapter boundaries
|
|
|
|
|
|
|
|
- add support for CLAUDE_CODE_OAUTH_TOKEN instead of an api key
|
|
|
|
|
|
|
|
- ensure your configuration schema can accept the new OAuth token. To keep it backward-compatible with standard API keys (in case you ever need to switch back), you can check for the OAuth token first, then fall back to the standard API key.
|
|
|
|
|
|
|
|
- runClaudePrompt drops the parsed config and lets the SDK auto-discover process.env.ANTHROPIC_API_KEY.
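A minimal sketch of that resolution order (the function name and return shape are illustrative; the env var names are the ones discussed above):

```typescript
export interface AnthropicCredential {
  authToken?: string; // OAuth token, passed as authToken to the client
  apiKey?: string;    // standard API key fallback
}

// Check for the OAuth token first, then fall back to the standard API key,
// so the config stays backward-compatible if you ever switch back.
export function resolveAnthropicCredential(
  env: Record<string, string | undefined>,
): AnthropicCredential {
  if (env.CLAUDE_CODE_OAUTH_TOKEN) return { authToken: env.CLAUDE_CODE_OAUTH_TOKEN };
  if (env.ANTHROPIC_API_KEY) return { apiKey: env.ANTHROPIC_API_KEY };
  throw new Error('No Anthropic credential configured');
}
```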
|
|
|
|
|
|
|
|
|
|
|
|
- You need to explicitly pass the anthropicToken from your configuration into the underlying Anthropic client constructor. Depending on how you are instantiating the Agent SDK, you will pass it via the authToken or apiKey property
- found evidence of this drift in src/agents/provisioning.ts#L94. You will need to apply the exact same explicit wiring pattern there. Any time you instantiate the Anthropic client or the Claude Agent in a worker node, pass { apiKey: config.provider.anthropicToken }.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Considering
|
|
|
|
- rip out legacy/deprecated interfaces ie legacy status triggers, deprecated subagent method, etc
|
|
|
|
- model selection per task/session/agent
|
|
|
|
- recursive agent deprecation thing
|
|
|
|
- agent "notebook"
|
|
|
|
|
|
|
|
- agent run log
|
|
|
|
|
|
|
|
- agent persona support
|
|
|
|
|
|
|
|
- ping pong support - ie. product agent > dev agent, dev agent needs clarification = ping pong back to product. same with tester > dev.
|
|
|
|
|
|
|
|
- resume session aspect of this
|
|
|
|
|
|
|
|
- max ping pong length ie. tester can only pass back once otherwise mark as failed
|
|
|
|
|
|
|
|
- max ping pong length per relationship ie dev:git can ping pong 4 times, dev:product only once, etc
|
|
|
|
|
|
|
|
- git orchestration
|
|
|
|
|
|
|
|
- merging
|
|
|
|
|
|
|
|
- symlinks
|
|
|
|
|
|
|
|
- security
|
|
|
|
|
|
|
|
- whatever existing thing has
|
|
|
|
|
|
|
|
- banned commands (look up a git repo for this)
|
|
|
|
|
|
|
|
- front end
|
|
|
|
|
|
|
|
- list available models
|
|
|
|
|
|
|
|
- specific workflows
|
|
|
|
|
|
|
|
- ui
|
|
|
|
|
|
|
|
- ci/cd
|
|
|
|
|
|
|
|
- review
|
|
|
|
|
|
|
|
- testing
|
|
|
|
|
|
|
|
# Defer
|
|
|
|
|
|
|
|
# Won't Do
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
legacy/deprecated interfaces

# Phase 1: Safe & Focused Cleanups (Low/Medium Risk)

These can be bundled into a single PR or tackled as quick, independent tasks. They have minimal blast radius and clear mitigation paths.

- Legacy status history duplication
- Action: Remove the singular historyEvent path in favor of the domain-event history. Migrate conditions from validation_fail to validation_failed.
- Impact: Requires updating orchestration tests (tests/orchestration-engine.test.ts) and the history-semantics documentation.
- Remove internal Claude token (anthropicToken)
- Action: Simplify the resolver in src/config.ts to oauth/api only and drop the property.
- Impact: Update config tests. Low risk, but constitutes an API shape change.
- Remove MCP legacy header alias (http_headers)
- Action: Drop it from the shared schema/type (src/mcp/types.ts) and the converter merge logic (src/mcp/converters.ts).
- Impact: Medium risk due to external config compatibility. Crucial: you must add a migration note for users utilizing external MCP configs.
- Remove legacy edge trigger aliases (alias-only)
- Action: Remove the onTaskComplete and onValidationFail aliases only.
- Impact: Safe to do now, as it shrinks the legacy surface area without breaking the core edge.on functionality currently heavily relied upon in tests.

# Phase 2: Staged Removals (Requires Care)

This item is deeply integrated and needs a multi-step replacement strategy rather than a direct deletion.

- Deprecate the runRecursiveAgent API
- Action: First, update the Pipeline (src/agents/pipeline.ts) to use the new private replacement call path for recursive execution. Only after the pipeline is successfully rerouted and the manager tests are updated should you remove the public deprecated wrapper.
- README Impact: Remove or update the note in the "Notes" section of the README that currently advertises AgentManager.runRecursiveAgent(...) for low-level testing.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Phase 3 legacy/deprecated interfaces BIG AND SCARY
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Switching to un-ts/sh-syntax is exactly the right move. It is a WebAssembly (WASM) wrapper around Go's highly respected mvdan/sh parser. It provides rigorous POSIX/Bash compliance and, crucially, ships with strict, native TypeScript definitions for its entire AST.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Here is the updated implementation guide tailored specifically to integrating un-ts/sh-syntax into your security middleware.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Phase 1: Dependency Migration
|
|
|
|
|
|
|
|
Remove the Legacy Code:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
npm uninstall bash-parser
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
rm src/types/bash-parser.d.ts
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Install the Replacement:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
npm install sh-syntax
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Phase 2: Rewrite the AST Adapter (src/security/shell-parser.ts)
|
|
|
|
|
|
|
|
The most significant architectural shift here is that sh-syntax is WASM-backed, making the parsing operation asynchronous. Your adapter and the calling security middleware must be updated to handle Promises.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
You will also map your traversal logic to the mvdan/sh AST structures (e.g., File, Stmt, CallExpr, Word).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```typescript
import { parse } from 'sh-syntax';

// Note: Depending on your runtime environment, you may need to configure the WASM loader
// via `import { getProcessor } from 'sh-syntax'` if standard Node resolution isn't sufficient.

// Defined here so the sketch is self-contained; in the real middleware this
// lives with the pipeline's error classes.
export class SecurityViolationError extends Error {}

// Assumed implemented elsewhere in the middleware: checks redirect targets
// against the protected roots (AGENT_STATE_ROOT, AGENT_PROJECT_CONTEXT_PATH).
declare function enforceRedirectPolicy(redirs: unknown[]): void;

export interface CommandTarget {
  binary: string;
  args: string[];
}

export async function extractExecutionTargets(shellInput: string): Promise<CommandTarget[]> {
  const targets: CommandTarget[] = [];

  // sh-syntax parsing is async due to WASM initialization
  const ast = await parse(shellInput);

  // Walk the AST. sh-syntax types closely mirror the mvdan/sh Go types.
  // The root is typically a 'File' containing a list of 'Stmt' (statements).
  if (!ast || !ast.StmtList || !ast.StmtList.Stmts) return targets;

  for (const stmt of ast.StmtList.Stmts) {
    const cmd = stmt.Cmd;

    // Check if the command is a standard function/binary call
    if (cmd && cmd.type === 'CallExpr') {
      const args = cmd.Args;
      if (args && args.length > 0) {
        // The first argument in a CallExpr is the binary name.
        // Strictly check that the binary is a literal (Word) and not computed.
        const binaryName = extractLiteralWord(args[0]);
        if (!binaryName) {
          throw new SecurityViolationError('Dynamic or computed binary names are blocked.');
        }

        // Fail closed on non-literal arguments too: silently dropping an
        // expansion or substitution would let `allowed_bin $(evil)` through.
        const literalArgs = args.slice(1).map((word) => {
          const lit = extractLiteralWord(word);
          if (lit === null) {
            throw new SecurityViolationError('Non-literal argument (expansion/substitution) is blocked.');
          }
          return lit;
        });

        targets.push({ binary: binaryName, args: literalArgs });
      }
    }

    // Important: explicitly reject subshells or commands your engine doesn't support
    if (cmd && cmd.type === 'Subshell') {
      throw new SecurityViolationError('Subshell execution is not permitted by security policy.');
    }

    // Analyze redirects to ensure they don't overwrite protected files (like state roots)
    if (stmt.Redirs) {
      enforceRedirectPolicy(stmt.Redirs);
    }
  }

  return targets;
}

// Helper to safely extract string literals from Word nodes.
// In sh-syntax, a Word contains Parts (Lit, SglQuoted, DblQuoted, ParamExp, etc.).
// Enforce that the parts consist only of safe literals, rejecting ParamExp ($VAR);
// return null for anything that isn't a plain literal.
function extractLiteralWord(wordNode: any): string | null {
  // ... (implementation elided in the original sketch)
}
```
|
|
|
|
|
|
|
|
## Phase 3: Synchronize Orchestration & Middleware

Because extractExecutionTargets is now async:

- **Update SecureCommandExecutor:** the constructor or initialization hook where you validate the command against your allowlists must await the parsing step.
- **Actor execution boundary:** ensure that wherever the LLM outputs a shell command during DAG execution, the pipeline waits for the AST security validation before proceeding.
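The awaited boundary can be sketched as follows; the function and type names here are illustrative stand-ins, not the project's actual API:

```typescript
// Illustrative only: extractExecutionTargets and CommandTarget are stand-ins.
interface CommandTarget { binary: string; args: string[] }

async function validateBeforeExecution(
  command: string,
  extractExecutionTargets: (cmd: string) => Promise<CommandTarget[]>,
  allowedBinaries: Set<string>,
): Promise<CommandTarget[]> {
  // The WASM-backed parse is async, so the security gate must be awaited;
  // nothing downstream may run until this resolves.
  const targets = await extractExecutionTargets(command);
  for (const target of targets) {
    if (!allowedBinaries.has(target.binary)) {
      throw new Error(`Security violation: '${target.binary}' is not allowlisted`);
    }
  }
  return targets;
}
```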
## Phase 4: Strict Schema Alignment (src/security/schemas.ts)

Your Zod schemas do not need to validate the sh-syntax AST directly (it is already strictly typed by the library). Instead, use Zod to validate the output array (CommandTarget[]) to ensure nothing slipped past the parser.

- **Update the execution schema:** enforce .strict() so that if extractExecutionTargets accidentally returns extraneous fields, the engine fails closed.
- **Unified allowlist validation:** strictly validate the extracted binary string against your AGENT_SECURITY_ALLOWED_BINARIES Zod array.
## Phase 5: Revalidate Security Parity (tests/security-middleware.test.ts)

This is the final gate. The new AST structure handles complex bash semantics differently than the old untyped parser.

- **WASM test environment:** ensure your test runner (Jest, Vitest) is configured to load WebAssembly properly, or the parse function will throw an initialization error in CI.
- **Regression threat matrix:**
  - `echo $(unauthorized_bin)` must be caught and throw SecurityViolationError (command substitution is rejected just like a subshell).
  - `allowed_bin && unauthorized_bin` requires the StmtList iteration to visit both commands and block execution because of the second binary.
  - `allowed_bin > /path/to/protected/file` must trigger the boundary-violation logic via the Redirs property on the Stmt.
- review/update of readme, docs, and conf files where needed
- mvp for analytics + user notification logging
# giga model specific behavior and strict task agent control stuff
i am too dumb to understand it, but gemini 3.1 makes it sound like a really good idea
## Architecture Brief: Deterministic Agent Execution & Policy Enforcement

### Context & Goal

We are refactoring the execution layer to ensure low-level control over task agents (e.g., task_sync, task_plan_llm). The goal is to move away from open-ended, non-deterministic agent behavior and enforce a strict 4-layer control model in which the LLM acts only as a bounded step within a hard-coded state machine.

### The Problem Statement

We currently have a critical gap in our enforcement boundary that allows policy bypasses, compounded by provider-specific SDK quirks:
- **The Context Drop (the MCP gap):** the mcpRegistry (which defines our tool policies) is resolved globally at the orchestration layer, but it is not passed down into pipeline.ts or the ActorExecutor. As a result, the low-level execution nodes operate without awareness of the active tool-clearance policies.
- **The Claude SDK leakage:** the Anthropic Claude SDK currently ignores the shared enabled_tools configuration in the MCP payload. If we rely solely on the shared MCP config, Claude can hallucinate and execute unauthorized tool calls.
- **The anti-pattern risk:** the initial proposal was to pass the entire mcpRegistry down into the ActorExecutor so it could self-regulate. This is a severe anti-pattern: it tightly couples our low-level execution sandbox to our high-level orchestration logic, forcing the "dumb" executor to parse topologies, phases, and complex registry configurations.
### The Solution: The ResolvedExecutionContext Pattern

To close the enforcement gap without violating the inversion-of-control principle, we will implement a strict separation of concerns: the Orchestrator handles the logic; the Executor handles the enforcement.

Instead of passing down the full registry, the orchestration layer will pre-compute a flat, immutable policy payload for that specific node attempt and inject it into the executor.
### Implementation Directives

**Introduce ResolvedExecutionContext.** Create an interface that represents the fully resolved, non-negotiable constraints for a single execution step:

```typescript
export interface ResolvedExecutionContext {
  phase: string;
  modelConstraint: string; // e.g., 'claude-3-haiku'
  allowedTools: string[];  // Flat array of resolved tool names
  security: {
    dropUid: boolean;
    worktreePath: string;
    // ... other hard constraints
  };
}
```
**Update orchestration (pipeline.ts).** Before invoking an actor, the pipeline must read the AgentManifest for the current node, cross-reference its toolClearance with the mcpRegistry, and generate the ResolvedExecutionContext.
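That pre-computation step might look like this sketch; the AgentManifest and registry shapes here are simplified assumptions, not the project's real types:

```typescript
// Illustrative shapes only; the real AgentManifest/mcpRegistry differ.
interface AgentManifest { allowedModel: string; toolClearance: string[] }
type McpRegistry = Record<string, { tools: string[] }>;

interface ResolvedContext {
  phase: string;
  modelConstraint: string;
  allowedTools: string[];
}

function resolveExecutionContext(
  manifest: AgentManifest,
  registry: McpRegistry,
  phase: string,
): Readonly<ResolvedContext> {
  // Intersect the manifest's clearance with what the registry actually exposes,
  // then freeze the payload so the executor cannot mutate its own constraints.
  const registered = new Set(Object.values(registry).flatMap((server) => server.tools));
  return Object.freeze({
    phase,
    modelConstraint: manifest.allowedModel,
    allowedTools: manifest.toolClearance.filter((tool) => registered.has(tool)),
  });
}
```

Freezing the payload is a cheap way to keep the "flat, immutable" guarantee honest at runtime.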
**Lock down the executor (executor.ts).** The ActorExecutor must accept this context and enforce it blindly:

- **Model enforcement:** force the SDK initialization to strictly use context.modelConstraint.
- **Tool enforcement (Claude fix):** explicitly filter the tools passed into the provider SDK using context.allowedTools, physically preventing the Claude SDK from seeing tools outside its clearance.
- **Security middleware:** pass context.allowedTools into the SecurityRulesEngine so any runtime attempt to bypass the SDK constraints results in an immediate AGENT_SECURITY_VIOLATION_MODE=hard_abort.
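The tool-enforcement step reduces to a filter applied before the provider SDK ever sees the tool list (the tool shape here is an assumption):

```typescript
// Illustrative only: the provider SDK's tool definition shape is an assumption.
interface ToolDef { name: string; description: string }

// Filtering here means an SDK that ignores enabled_tools (the Claude leak)
// has nothing out-of-clearance left to call.
function filterToolsForContext(allTools: ToolDef[], allowedTools: string[]): ToolDef[] {
  const allowed = new Set(allowedTools);
  return allTools.filter((tool) => allowed.has(tool.name));
}
```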
### Expected Outcome

The execution nodes remain entirely decoupled from the orchestration state. The LLM cannot escalate its model tier or access unauthorized tools, and the provider SDK quirks are mitigated at the execution boundary.
# epic
# front end ui requirements
## 1. Graph Visualizer

Your initial thoughts on coloring by stage/agent and showing metadata (subtasks, tool calls, security violations) are spot-on. Because your backend relies heavily on DAG execution and a retry matrix, the visualizer will be the most critical piece of the UI.
What else is worth visualizing? Based on your README, here are specific concepts you should expose on the graph:
- **Topology & control flow:** visually distinguish sequential, parallel, hierarchical, and retry-unrolled branches. For example, a retry-unrolled node should indicate that it spawned a new child manager session to remediate a validation_fail.
- **Domain event edges:** since your pipeline edges route via typed events (requirements_defined, validation_failed), labeling graph edges with the specific domain event that triggered the transition will make debugging orchestration loops much easier.
- **Economics & performance (from runtime events):** your NDJSON events log tokenInput, tokenOutput, durationMs, and costUsd. Surfacing the cost or time of a specific DAG node directly on the graph helps identify inefficient prompts or agents.
- **The "sandbox payload":** when a user clicks or hovers over a specific node (e.g., task_plan_llm), the UI must display the ResolvedExecutionContext payload that was injected into it.
- **Critical path & abort status:** if a session fails due to two consecutive hard failures, visually highlighting the exact critical path that led to the AbortSignal cascading through the system will save hours of log-diving.
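For the economics overlay, the per-node rollup over the runtime events could be as simple as the sketch below (the nodeId field is an assumption; costUsd and durationMs are the fields the NDJSON log already records):

```typescript
// nodeId is a hypothetical field; costUsd/durationMs come from the NDJSON events.
interface NodeEvent { nodeId: string; costUsd: number; durationMs: number }
interface NodeStats { costUsd: number; durationMs: number }

function aggregateNodeStats(events: NodeEvent[]): Map<string, NodeStats> {
  const stats = new Map<string, NodeStats>();
  for (const event of events) {
    // Accumulate totals per DAG node so the graph can render cost/time badges.
    const current = stats.get(event.nodeId) ?? { costUsd: 0, durationMs: 0 };
    current.costUsd += event.costUsd;
    current.durationMs += event.durationMs;
    stats.set(event.nodeId, current);
  }
  return stats;
}
```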
## 2. Notification / Webhook Interface

Your backend already has an elegant fan-out system (NDJSON analytics log + Discord webhook). The UI should act as a control panel and an in-app inbox for it:
- **Configuration:** a form to manage AGENT_RUNTIME_DISCORD_WEBHOOK_URL, AGENT_RUNTIME_DISCORD_MIN_SEVERITY, and the ALWAYS_NOTIFY_TYPES CSV.
- **Live event feed:** a real-time drawer or panel that tails the .ai_ops/events/runtime-events.ndjson file. You can parse the severity field to color-code the feed (e.g., flashing red for critical security mirror events like security.shell.command_blocked).
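Parsing a tailed chunk of that NDJSON file is a one-liner per line; the sketch below assumes nothing beyond the type and severity fields mentioned above:

```typescript
// type/severity follow the runtime-events log; other fields pass through untyped.
interface RuntimeEvent { type: string; severity: string; [key: string]: unknown }

function parseEventChunk(chunk: string): RuntimeEvent[] {
  return chunk
    .split('\n')
    .filter((line) => line.trim().length > 0) // skip the trailing blank line
    .map((line) => JSON.parse(line) as RuntimeEvent);
}
```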
## 3. Job Trigger Interface

This is your execution entrypoint (SchemaDrivenExecutionEngine.runSession):
- **Inputs:** a clean interface to provide the initial prompt/task, select the Manifest or Topology to run, and override global flags.
- **The "kill switch":** since every actor execution respects an AbortSignal, your UI needs a prominent, highly responsive "Cancel Run" button that immediately aborts recursive child work.
- **Run history:** a table view summarizing aggregate session status from AGENT_STATE_ROOT, allowing users to click into past runs to view their graph state.
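Because the whole run hangs off a single AbortSignal, the kill switch can be a thin wrapper around AbortController; the run-handle shape here is illustrative:

```typescript
// One controller per run; its signal is the same one threaded through every
// actor execution, so aborting here cascades to all recursive child work.
function createRunHandle() {
  const controller = new AbortController();
  return {
    signal: controller.signal,
    cancelRun: () => controller.abort(),
  };
}
```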
## 4. Definition Interface (Manifest, Config, Security)

You noted that anything secure stays on the backend. The frontend here should strictly be a client that reads/writes validated JSON or environment schemas:
- **Manifest builder:** a UI to visually build or edit the AgentManifest (schema "1"): defining personas, tool-clearance policies, modelConstraint (or allowedModel), and setting maxDepth/maxRetries.
- **Security policy management:** an interface mapped to src/security/schemas.ts that allows admins to define AGENT_SECURITY_ALLOWED_BINARIES, toggle AGENT_SECURITY_VIOLATION_MODE (hard_abort vs validation_fail), and manage MCP tool allowlists/banlists.
- **Environment & resource limits:** simple forms to configure agent manager limits (AGENT_MAX_CONCURRENT) and port block sizing without manually editing the .env file.