Research
Security research that
proves the methodology
Tachyon runs against real open-source codebases to find and responsibly disclose vulnerabilities. This is how we validate our approach — and how we contribute back to the security community.
Why open-source research
Most security tools validate themselves with synthetic benchmarks — applications seeded with vulnerabilities that are designed to be found. These benchmarks test pattern matching, not real-world analysis capability.
We take a different approach. Tachyon runs against real, production-grade open-source codebases — the same projects your team depends on. When it finds something, we verify exploitability, work with maintainers on a fix, and publish the analysis.
This isn't a marketing exercise. It's how we stress-test our analysis engine against vulnerabilities that no one knows about yet — the only honest benchmark for a security tool.
How Tachyon analyzes code
Map attack surface
The agent identifies entry points — API routes, user inputs, authentication boundaries, trust transitions — and builds a map of where untrusted data enters the system.
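As a rough illustration only, the entry-point side of this mapping can be sketched as a scan for route declarations. The regex and Flask-style decorators below are a toy assumption, not Tachyon's actual mechanism; real surface mapping also covers CLI arguments, RPC endpoints, message consumers, and more:

```python
import re

# Toy pattern: match Flask-style @app.route("...") decorators in source text.
ROUTE_DECORATOR = re.compile(r'@app\.route\(\s*["\']([^"\']+)["\']')

def find_entry_points(source: str) -> list[str]:
    """Return URL paths declared via @app.route decorators."""
    return ROUTE_DECORATOR.findall(source)

sample = '''
@app.route("/login", methods=["POST"])
def login(): ...

@app.route("/admin/users")
def list_users(): ...
'''
print(find_entry_points(sample))  # ['/login', '/admin/users']
```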
Trace data flows
From each entry point, the agent traces how data flows through the codebase: across function calls, through middleware layers, into databases and external services. This is multi-file, cross-module analysis — not single-file pattern matching.
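A minimal sketch of the cross-module idea, using a hypothetical hand-written call graph in place of real parsed code: starting at an entry point, compute every function that untrusted input can reach, then intersect with known dangerous sinks.

```python
# Toy call graph: caller -> callees. In real analysis this is extracted
# from the codebase, across files and modules.
CALLS = {
    "handle_request": ["parse_body"],
    "parse_body": ["save_record"],
    "save_record": ["run_query"],  # run_query builds SQL from its argument
}

def tainted_reachable(entry: str) -> set[str]:
    """Every function the entry point's untrusted input can flow into."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(CALLS.get(fn, []))
    return seen

SINKS = {"run_query"}  # operations dangerous with untrusted input
print(SINKS & tainted_reachable("handle_request"))  # {'run_query'}
```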
Identify security invariants
The agent reasons about what must be true for the system to be secure: authorization checks before data access, input validation before use, proper scoping of credentials. Then it looks for where those invariants break.
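The invariant-checking step can be shown in miniature. The handler table and field names below are hypothetical; the point is that an invariant ("every handler that touches user data checks authorization") is stated once, then every handler is tested against it:

```python
# Hypothetical handler metadata; real analysis derives these facts from code.
HANDLERS = {
    "get_profile":    {"touches_user_data": True,  "checks_auth": True},
    "export_profile": {"touches_user_data": True,  "checks_auth": False},
    "health_check":   {"touches_user_data": False, "checks_auth": False},
}

def invariant_violations(handlers: dict) -> list[str]:
    """Handlers that access user data without an authorization check."""
    return [name for name, h in handlers.items()
            if h["touches_user_data"] and not h["checks_auth"]]

print(invariant_violations(HANDLERS))  # ['export_profile']
```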
Validate exploitability
A potential vulnerability isn't a finding until it's validated. The agent constructs proof-of-concept exploit paths and, where possible, executes code in the sandbox to confirm the vulnerability is reachable and exploitable in practice.
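As one concrete example of what validation means for a finding, here is a standard-library check that a suspected path traversal actually escapes a sandbox root. The function names are illustrative, not Tachyon's API:

```python
import os.path

def resolve(sandbox_root: str, user_path: str) -> str:
    # Vulnerable join: no containment check after normalization.
    return os.path.normpath(os.path.join(sandbox_root, user_path))

def escapes_sandbox(sandbox_root: str, user_path: str) -> bool:
    """PoC check: does the resolved path land outside the sandbox root?"""
    resolved = resolve(sandbox_root, user_path)
    return not resolved.startswith(sandbox_root.rstrip("/") + "/")

print(escapes_sandbox("/sandbox", "../etc/passwd"))  # True -> exploitable
print(escapes_sandbox("/sandbox", "notes/readme.txt"))  # False -> contained
```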
Recommend structural defenses
Beyond patching the specific bug, Tachyon recommends defense-in-depth improvements: fail-closed defaults, least-privilege scoping, input normalization at trust boundaries — fixes that prevent entire classes of vulnerabilities.
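A fail-closed default, in miniature (the roles and permission table are hypothetical): an unknown role gets an empty permission set rather than an accidental allow, so the whole class of "forgot to add the role" bugs degrades to denial instead of exposure.

```python
PERMISSIONS = {"admin": {"read", "write"}, "viewer": {"read"}}

def is_allowed(role: str, action: str) -> bool:
    # Fail closed: a role absent from the table maps to no permissions.
    return action in PERMISSIONS.get(role, set())

print(is_allowed("admin", "write"))   # True
print(is_allowed("viewer", "write"))  # False
print(is_allowed("intern", "read"))   # False (unknown role denied)
```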
What this catches that scanners miss
Traditional SAST tools match patterns within individual files. The vulnerabilities Tachyon finds require cross-file reasoning and understanding of application semantics.
Authorization logic flaws
Missing or inconsistent permission checks across different API surfaces — where one protocol enforces auth correctly but another skips it entirely.
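In miniature (both functions are invented for illustration), the flaw class looks like this: the same operation exposed over two surfaces, only one of which enforces the check.

```python
def delete_user_rest(caller_role: str, user_id: int) -> str:
    # REST surface: permission check enforced.
    if caller_role != "admin":
        return "403 Forbidden"
    return f"deleted {user_id}"

def delete_user_rpc(caller_role: str, user_id: int) -> str:
    # Bug: the RPC surface skips the admin check entirely.
    return f"deleted {user_id}"

print(delete_user_rest("viewer", 7))  # 403 Forbidden
print(delete_user_rpc("viewer", 7))   # deleted 7  <- authorization bypass
```

A file-scoped scanner sees nothing wrong in either function; the flaw only exists in the inconsistency between them.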
Multi-step exploit chains
Vulnerabilities where no single line of code is wrong, but a sequence of operations — each individually correct — combines to break a security boundary.
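A toy two-step chain, with all names hypothetical: step one validates input against the threat it was written for, step two trusts the stored value, and the composition lets user input choose a filesystem write path.

```python
import os.path

def set_display_name(name: str) -> str:
    # Step 1: validated against HTML injection only, not path characters.
    if "<" in name or ">" in name:
        raise ValueError("markup not allowed")
    return name

def export_report_path(export_dir: str, display_name: str) -> str:
    # Step 2: trusts the stored display name when building a filename.
    return os.path.normpath(os.path.join(export_dir, display_name + ".txt"))

name = set_display_name("../../etc/cron.d/job")  # passes step 1's check
print(export_report_path("/srv/exports", name))
# /etc/cron.d/job.txt  -- the write lands outside /srv/exports
```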
Indirect data flows
User input that reaches a dangerous operation through multiple layers of indirection — function calls, middleware, serialization boundaries — invisible to file-scoped analysis.
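One such indirection layer, sketched with an invented job format: user input crosses a serialization boundary into a queue, so a scan of the consumer alone never sees where the value originated.

```python
import json

def enqueue(user_supplied_url: str) -> str:
    # Producer: user input serialized into a queue message.
    return json.dumps({"job": "fetch", "url": user_supplied_url})

def worker(message: str) -> str:
    job = json.loads(message)
    # Sink: by the time the URL arrives here it looks like trusted config.
    return f"GET {job['url']}"

print(worker(enqueue("http://169.254.169.254/latest/meta-data/")))
# GET http://169.254.169.254/latest/meta-data/
```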
Incomplete mitigations
Patches that fix the obvious attack vector but leave redirect-based, DNS-rebinding, or timing-based bypasses open. Finding these requires reasoning about attacker capabilities beyond the immediate code change.
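For instance, a hypothetical SSRF patch that blocklists the literal metadata IP can be bypassed with an equivalent numeric encoding of the same address, because the string check and the network stack interpret the host differently:

```python
import ipaddress
from urllib.parse import urlparse

# The "fix": reject URLs whose hostname string is on a blocklist.
BLOCKED = {"169.254.169.254"}

def patched_filter(url: str) -> bool:
    """Return True if the URL is allowed by the (incomplete) patch."""
    return urlparse(url).hostname not in BLOCKED

# Bypass: the decimal form of 169.254.169.254 passes the string check,
# yet many HTTP clients resolve it to the same address.
bypass = "http://2852039166/latest/meta-data/"
print(patched_filter(bypass))            # True (allowed through)
print(ipaddress.ip_address(2852039166))  # 169.254.169.254
```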
Responsible disclosure
Every vulnerability we find is responsibly disclosed to the project maintainers before any public discussion. We work with maintainers on the fix, verify the patch, and only publish our analysis after the vulnerability has been addressed.
Our published research includes assigned CVEs, detailed technical write-ups, and the full analysis methodology — so the community can learn from what we find, not just the fix.
Published research
Authorization bypass in MLflow via protocol-level inconsistency
MLflow (Databricks)
SSRF with metadata exfiltration in OpenWebUI
OpenWebUI
Sandbox escape via filesystem path traversal
AI Development Platform
Why SSRF is the trickiest class of web vulnerability
Technical deep-dive
Run this analysis on your codebase
The same methodology that finds zero-days in open-source projects, applied to your repositories on every pull request.