Error Attribution: Agent, Build Tool, or Network?
When an AI agent session fails, the first question is: what broke? Was it the agent producing bad code, the build tool rejecting valid code, or a network issue interrupting the session? Getting this wrong is expensive. Telling the agent to "fix the error" when the problem is a stale cache sends it on a wild-goose chase that costs tokens and time.
The Three Error Sources
Agent Errors (Red)
The agent produced output that is incorrect. Examples:
- Generated code that does not compile or type-check
- Hallucinated an API that does not exist
- Produced a solution that does not match the requirements
- Entered a retry loop, repeating the same failing approach
Agent errors are the most common category. They require prompt adjustment, additional context, or switching to a more capable model.
Build Tool Errors (Yellow)
The agent's code is logically correct, but the build environment rejects it. Examples:
- TypeScript strict mode catches a valid but loosely typed pattern
- ESLint rules reject a coding style the agent uses
- Dependency version conflicts after an install
- Build cache is stale and needs clearing
Build tool errors often look like agent errors but have different solutions. Clearing the cache manually and letting the agent continue takes 10 seconds. Asking the agent to debug a caching issue takes 10 minutes and several dollars in tokens.
Network Errors (Blue)
The connection between components failed. Examples:
- API rate limit from the AI provider
- WebSocket disconnection during a long session
- DNS resolution failure
- Styrby server maintenance (rare, but it happens)
Network errors are transient. The correct response is usually to wait and retry, not to change the prompt or the code.
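Because network errors are transient, the right handling is a bounded retry with backoff rather than a prompt change. A minimal sketch of such a policy, assuming errors carry an HTTP `status` field (the function name and parameters here are illustrative, not Styrby's API):

```typescript
// Status codes treated as transient, mirroring the network-error list above.
const RETRYABLE_STATUS = new Set([429, 502, 503, 504]);

async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const status = err?.status as number | undefined;
      // Retry on known-transient HTTP codes, or when there is no status at all
      // (e.g. a socket timeout). Anything else is not a network error.
      const retryable = status === undefined || RETRYABLE_STATUS.has(status);
      if (!retryable || attempt >= maxAttempts) throw err;
      // Exponential backoff with jitter: 1s, 2s, 4s, ... plus up to 250 ms noise.
      const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

The jitter matters with rate limits (429): if several sessions retry on the same schedule, they hit the limit again in lockstep.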
How Styrby Classifies Errors
Styrby uses pattern matching on the error output to classify errors into the three categories. The classifier runs on the CLI side before the error is sent to your mobile device.
Patterns for agent errors:
- Compiler errors in agent-generated files (tracked by which files the agent recently modified)
- Test failures in newly written tests
- Repeated identical error outputs (retry loop detection)
Patterns for build tool errors:
- Errors in files the agent did not modify
- Dependency resolution failures
- Cache-related error messages
- Linter/formatter errors on pre-existing code
Patterns for network errors:
- HTTP status codes (429, 502, 503, 504)
- Connection timeout messages
- DNS and TLS error strings
- WebSocket close codes
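The pattern lists above can be sketched as a small classifier. This is an illustrative reconstruction, not Styrby's actual implementation; the regexes and the `agentModifiedFile` flag are assumptions standing in for the real pattern set and file tracking:

```typescript
type ErrorSource = "agent" | "build" | "network" | "unclassified";

// Hypothetical patterns mirroring the lists above.
const NETWORK_PATTERNS = [
  /\b(429|502|503|504)\b/,              // transient HTTP status codes
  /ETIMEDOUT|ECONNRESET|ENOTFOUND/,     // timeout and DNS error strings
  /TLS handshake|certificate/i,
];

const BUILD_PATTERNS = [
  /ERESOLVE|peer dep|version conflict/i, // dependency resolution failures
  /cache (is )?(stale|corrupt)/i,        // cache-related messages
];

function classify(errorText: string, agentModifiedFile: boolean): ErrorSource {
  // Network patterns are checked first: they are the most distinctive.
  if (NETWORK_PATTERNS.some((p) => p.test(errorText))) return "network";
  if (BUILD_PATTERNS.some((p) => p.test(errorText))) return "build";
  // A compiler or test error is an agent error only in files the agent
  // touched; the same error in an untouched file points at the toolchain.
  if (/error TS\d+|FAIL/.test(errorText)) {
    return agentModifiedFile ? "agent" : "build";
  }
  return "unclassified";
}
```

Note how the same error string can classify differently depending on whether the agent modified the file — which is why the classifier needs the CLI's record of recently touched files, not just the error text.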
The Color-Coded Display
In the Styrby mobile app and web dashboard, errors appear with a colored badge next to the error message:
| Color | Source | Suggested Action |
|---|---|---|
| Red | Agent | Adjust prompt, add context, or switch model |
| Yellow | Build tool | Fix toolchain config, clear cache, or update deps |
| Blue | Network | Wait and retry, check provider status page |
The classification is a best guess, not a guarantee. Some errors are ambiguous. A TypeScript error could be the agent writing bad types (agent error) or a misconfigured tsconfig (build tool error). When the classifier is not confident, it shows a gray badge with "Unclassified" and lets you assign it manually.
Retry Loop Detection
Styrby specifically detects retry loops: when the agent encounters an error, attempts a fix, and hits the same error again. After three identical error cycles, Styrby notifies you with a summary:
```
⚠ Retry loop detected (3 cycles)
Agent: claude (sonnet-4)
Error: TypeError: Cannot read property 'map' of undefined
File: src/components/Dashboard.tsx:47
Tokens spent on retries: 45,000 (~$0.67)
Suggestion: Provide additional context about the data shape
```

This notification lets you intervene before the agent spends more tokens on a failing approach. You can provide clarifying context, switch to a different model, or take over the fix manually.
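Detecting "three identical error cycles" can be as simple as hashing each error output and counting consecutive repeats. A minimal sketch, with the class and method names assumed for illustration:

```typescript
import { createHash } from "node:crypto";

class RetryLoopDetector {
  private lastHash = "";
  private streak = 0;

  /** Returns true once the same error output has occurred three times in a row. */
  record(errorOutput: string): boolean {
    // Hash the trimmed output so trailing whitespace differences don't
    // break the "identical error" comparison.
    const hash = createHash("sha256").update(errorOutput.trim()).digest("hex");
    this.streak = hash === this.lastHash ? this.streak + 1 : 1;
    this.lastHash = hash;
    return this.streak >= 3;
  }
}
```

A real detector would likely normalize volatile details (timestamps, memory addresses) before hashing, so that "the same error" matches even when incidental output differs between cycles.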
Improving Classification Over Time
When you manually reclassify an error (changing gray/unclassified to the correct color), that feedback improves the classifier for your project. The patterns are stored locally and applied to future errors from the same project and agent combination.
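One way such per-project feedback could work is a list of locally stored rules, consulted before the generic pattern classifier. This sketch assumes a rule shape keyed on project, agent, and a distinctive error substring; the interfaces are hypothetical, not Styrby's storage format:

```typescript
interface Reclassification {
  project: string;
  agent: string;
  errorSnippet: string; // a distinctive substring of the error text
  source: "agent" | "build" | "network";
}

class FeedbackStore {
  private rules: Reclassification[] = [];

  /** Record a manual reclassification as a reusable rule. */
  learn(rule: Reclassification): void {
    this.rules.push(rule);
  }

  /** Check learned rules for this project/agent pair; undefined means no match. */
  lookup(project: string, agent: string, errorText: string) {
    return this.rules.find(
      (r) =>
        r.project === project &&
        r.agent === agent &&
        errorText.includes(r.errorSnippet),
    )?.source;
  }
}
```

Scoping rules to the project/agent pair keeps a correction from leaking into unrelated projects, where the same error text might legitimately have a different source.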
Ready to manage your AI agents from one place?
Styrby gives you cost tracking, remote permissions, and session replay across five agents.