Local-first. Bounded by default.
Most AI coding tools either ship your code to a cloud sandbox or run with full ambient authority on your machine. Bolt Foundry does neither. We built a local-first app with explicit workspace boundaries, isolated runtime execution, and host-owned secrets.
Below are the threats we track and how we handle them. If something your team cares about is missing, tell us.
Threats we're tracking
We watch for these risks every time we ship, and this list will keep growing.
Machine-wide overreach
Most AI coding tools treat your entire machine as ambient territory. The assistant can wander through any file, service, or OS state it wants. Even when actual permissions are narrower, unclear boundaries make the whole thing feel unsafe.
Our approach: You choose the workspace. Bolt Foundry operates inside the folders, sources, and permissions you approve. Nothing else. Runtime state lives outside your canonical workspace so your source tree stays yours.
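A workspace boundary like this comes down to a path check: every access is resolved and tested against the roots you approved. Here is a minimal sketch of that idea in TypeScript (illustrative only; the function and names are hypothetical, not Bolt Foundry's actual API):

```typescript
import * as path from "node:path";

// Deny-by-default: a request is allowed only if the resolved path
// sits inside one of the workspace roots the user approved.
function isWithinWorkspace(approvedRoots: string[], requested: string): boolean {
  const resolved = path.resolve(requested);
  return approvedRoots.some((root) => {
    const base = path.resolve(root);
    const rel = path.relative(base, resolved);
    // rel escapes the root iff it starts with ".." or is absolute.
    return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
  });
}
```

The key property is that the check happens on the resolved path, so `..` segments and symlink-style tricks in the request string cannot widen the boundary.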
Main-process compromise
When lower-trust AI work runs inside the same process as your main app, a single bad output inherits full application authority. Nothing controls the blast radius.
Our approach: We run lower-trust assistant work outside the main app process in isolated runtime containers. The frontend never talks directly to the container. The host app supervises startup, policy, and failure handling from a separate trust boundary.
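The trust boundary above can be stated as a channel rule: the frontend may talk to the host, and only the host may talk to the container. A toy sketch of that rule (hypothetical names, purely illustrative):

```typescript
// Only these directed channels exist; everything else is refused.
type Principal = "frontend" | "host" | "container";

const allowedChannels = new Set<string>(["frontend->host", "host->container"]);

function mayConnect(from: Principal, to: Principal): boolean {
  return allowedChannels.has(`${from}->${to}`);
}
```

Because the frontend has no direct channel to the container, a compromised container output has to pass through host policy before it can influence anything else.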
Secret leakage into runtime
If provider credentials or API keys leak into the assistant runtime as ambient environment variables, the trust model collapses. One compromised container exposes every secret.
Our approach: We keep credentials under host control. The host app owns secret storage and issues narrowly scoped, policy-bound grants instead of injecting raw long-lived secrets into containers. Containers never see your credentials by default.
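The shape of a host-owned secret broker is simple: raw credentials stay in the host, and the container only ever holds a short-lived, scoped grant handle that the host redeems on its behalf. A minimal sketch, with hypothetical names (`SecretBroker`, `issueGrant` are illustrations, not Bolt Foundry's real API):

```typescript
import { randomUUID } from "node:crypto";

interface Grant {
  id: string;        // opaque handle, never the raw credential
  scope: string;     // e.g. "llm:completions"
  expiresAt: number; // epoch millis
}

class SecretBroker {
  #secrets = new Map<string, string>(); // raw credentials stay host-side

  store(name: string, value: string) {
    this.#secrets.set(name, value);
  }

  issueGrant(scope: string, ttlMs: number): Grant {
    // The container receives this object, not the secret itself.
    return { id: randomUUID(), scope, expiresAt: Date.now() + ttlMs };
  }

  // Called by the host when acting on the container's behalf; the
  // container never sees the returned value.
  redeem(grant: Grant, secretName: string, requestedScope: string): string {
    if (Date.now() > grant.expiresAt) throw new Error("grant expired");
    if (grant.scope !== requestedScope) throw new Error("out of scope");
    const value = this.#secrets.get(secretName);
    if (value === undefined) throw new Error("unknown secret");
    return value;
  }
}
```

If a container is compromised, the attacker holds an expiring, single-purpose handle rather than a long-lived API key.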
Unrestricted network egress
When runtime containers have broad internet access, or when the app quietly phones home, you lose a meaningful trust boundary. You can't reason about what the assistant is actually doing.
Our approach: We route runtime outbound traffic through a host-mediated boundary with policy-controlled allow/deny. No ambient broad internet access by default. No arbitrary root-process network requests. No inbound container access. If something reaches out, it goes through an explicit, bounded path.
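An egress gate like this reduces to a deny-by-default lookup before any request leaves the machine. A minimal sketch under the assumption of a per-hostname allowlist (the policy shape is hypothetical):

```typescript
type EgressPolicy = { allowedHosts: Set<string> };

// Deny by default: only hosts the user explicitly approved may be contacted.
function checkEgress(policy: EgressPolicy, url: string): boolean {
  const { hostname } = new URL(url);
  return policy.allowedHosts.has(hostname);
}
```

Routing every outbound request through one chokepoint like this is what makes the boundary auditable: there is a single place to log, allow, or refuse traffic.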
Silent workspace mutation
An assistant that writes directly into your canonical source tree without clear consent can damage important files. This is the 'don't explode my computer' trust requirement.
Our approach: We keep your selected workspace as the read-only source of truth. Runtime writes land in a separate active workspace. Changes only reach your source tree through explicit promotion, not silent mutation. We treat reset and apply-back as first-class concepts.
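Explicit promotion can be sketched as a pure function: the canonical tree is never mutated, and only the paths you approve are copied over from the active workspace. Illustrative TypeScript with in-memory trees standing in for the filesystem (hypothetical, not the shipped implementation):

```typescript
type Tree = Map<string, string>; // path -> file contents

function promote(canonical: Tree, active: Tree, approvedPaths: string[]): Tree {
  const result = new Map(canonical); // the canonical tree itself is untouched
  for (const p of approvedPaths) {
    const contents = active.get(p);
    if (contents !== undefined) result.set(p, contents); // copy only approved changes
  }
  return result;
}
```

Reset falls out for free in this model: discarding the active tree costs nothing, because the canonical tree was never written to.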
Hidden state in your workspace
When runtime artifacts, staging copies, and internal state end up mixed into your real workspace, it creates confusion about what you own versus what the tool owns. Cleanup becomes a trust problem.
Our approach: We store durable runtime state in app data and disposable material in app cache. Your workspace stays clean. Where Bolt Foundry-owned files appear inside your project, we mark them clearly and keep them bounded, never disguised as your source material.
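The durable/disposable split maps onto standard per-user directories. A minimal sketch assuming Linux-style locations (the directory names are illustrative conventions, not necessarily the paths Bolt Foundry uses):

```typescript
import * as path from "node:path";
import * as os from "node:os";

// Durable runtime state goes to app data; disposable material goes to
// app cache. Neither lands inside the user's workspace.
function statePath(kind: "durable" | "disposable", file: string): string {
  const home = os.homedir();
  const base = kind === "durable"
    ? path.join(home, ".local", "share", "bolt-foundry") // app data
    : path.join(home, ".cache", "bolt-foundry");         // app cache
  return path.join(base, file);
}
```

Cleanup then stops being a trust problem: deleting the cache directory is always safe, and the workspace never accumulates tool-owned debris.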
Opaque operating state
When the real operating truth lives in a database or opaque app state you can't inspect, you can't verify what the system is doing. You're trusting a black box.
Our approach: We keep important operating information in plain Markdown files in the filesystem. People and agents read the same artifacts. The database handles ephemeral state, caching, and support, but we never treat it as the canonical source of truth for the things that matter.
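"Same artifacts for people and agents" means the canonical record is something both can read without tooling. A toy sketch of operating state rendered as plain Markdown (hypothetical structure, for illustration only):

```typescript
interface TaskStatus { name: string; done: boolean }

// State is written as Markdown a person can open in any editor;
// a database may cache it, but the file is the record.
function renderStatus(tasks: TaskStatus[]): string {
  const lines = tasks.map((t) => `- [${t.done ? "x" : " "}] ${t.name}`);
  return ["# Run status", "", ...lines, ""].join("\n");
}
```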
Supply-chain drift
When your AI tool depends on floating upstream inputs, loosely pinned packages, and remote asset loading, the shipped behavior can change underneath you without warning.
Our approach: We lock dependency graphs, pin flake inputs, and pin runtime versions. The shipped app loads no remote fonts, images, styles, or scripts. We keep the dependency surface intentionally minimal: fewer dependencies mean fewer places for vulnerabilities to hide.
Questions teams ask before they hand authority to an AI tool
See a gap? Tell us.
We'll either show you how we handle it or add it to our threat model. Reach out and we'll schedule a security walkthrough with your team.
Talk to us about security
If your team needs to understand the trust boundaries before adopting AI tooling, we want that conversation. Join the list and we'll set up time to walk through the security model together.