FAQ

Frequently Asked Questions

ClawFS is a persistent shared filesystem for agents. This page starts with practical adoption questions, then covers the storage-engine details.

Start Here

Practical Questions Operators Ask First

What OS do you support?

The intended support matrix is Windows via user-mode NFS, macOS via the preload shared-library path and user-mode NFS, and Linux via preload, user-mode NFS, and FUSE.

In other words: Linux gets every mode, macOS gets the agent-friendly launch paths without depending on FUSE, and Windows is aimed at the NFS route.
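The support matrix above can be summarized as a small lookup. This is an illustrative sketch only: the mode names are descriptive labels chosen here, not official CLI values.

```python
# Illustrative encoding of the support matrix described above.
# Mode names are descriptive labels, not official ClawFS identifiers.
SUPPORT_MATRIX = {
    "linux": {"preload", "nfs", "fuse"},   # every mode
    "macos": {"preload", "nfs"},           # agent-friendly paths, no FUSE dependency
    "windows": {"nfs"},                    # aimed at the user-mode NFS route
}

def available_modes(os_name: str) -> set[str]:
    """Return the access modes intended for a given platform."""
    return SUPPORT_MATRIX.get(os_name.lower(), set())

print(sorted(available_modes("macos")))
```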

Do I need to rewrite my tools around an SDK?

No. ClawFS exposes a filesystem interface, so existing tools can keep using files, directories, and normal path-based workflows.

That matters for agents because shells, compilers, package managers, editors, and test runners already work well with filesystems.

What launch modes exist?

There are three main paths: clawfs up for preload-based launches, clawfs mount for mounts over FUSE or NFS, and clawfs serve for launching an explicit NFS server.

Use up for the simplest agent launch path, mount when you want the CLI to mount the volume with --transport auto|fuse|nfs, and serve when you want a standalone NFS server process.

Is it safe for critical production data?

Not yet. The homepage beta note is the right framing: ClawFS passes substantial filesystem testing, but it should not be treated as the place for irreplaceable data today.

It is a strong fit for agent workspaces, build state, caches, checkpoints, and recoverable project data while the system continues to harden. It is not intended as the storage substrate for databases.

Why not just use object storage or a managed NFS share?

Object storage gives durability but not normal filesystem semantics. Managed NFS gives a filesystem but is usually heavier to provision and not built around quick attach-and-reuse workflows for agents.

ClawFS is meant to give short-lived workers a workspace they can come back to.

What kinds of workloads fit best?

Coding agents, research runs, background automation, checkpoint-heavy workflows, and multi-agent handoffs are the obvious fits. Anything that wastes time rebuilding repos, caches, embeddings, or intermediate files benefits.

If the workload is stateless, ClawFS probably does not help much. If state matters across runs, it is a better fit.

What consistency model should I expect?

The target is NFS-level consistency, which in practice means close-to-open semantics: a reader that opens a file after the writer has closed it sees the writer's data, while concurrently open handles on the same file are not tightly coordinated. ClawFS is built for shared workspaces, agent handoffs, and normal filesystem workflows.

If your application needs tightly coordinated transactional semantics, ClawFS is the wrong tool.
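Close-to-open consistency rewards a simple discipline: the writer finishes and closes before the reader opens. The sketch below shows that handoff pattern using only the standard library; it runs on any filesystem and does not depend on any ClawFS-specific API.

```python
import os
import tempfile

# Close-to-open discipline: the writer closes the file before the
# reader opens it. Under NFS-style consistency this is the pattern
# that guarantees the reader sees the writer's data; concurrently
# open handles get no such guarantee.
workspace = tempfile.mkdtemp()
path = os.path.join(workspace, "handoff.json")

# Agent A: write the state and close.
with open(path, "w") as f:
    f.write('{"step": 42}')
# File is closed here -- this is the safe handoff point.

# Agent B: open after the close, read the published state.
with open(path) as f:
    state = f.read()

print(state)
```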

Short version: ClawFS gives agents a persistent workspace with normal filesystem semantics.

For Filesystem Nerds

I'm a Filesystems Nerd, Give Me the Deets

What is the storage model?

ClawFS is log-structured. Writes stage locally, flush into immutable data segments, and publish metadata as immutable inode-map shards plus per-generation delta logs.

The design keeps the durable path append-friendly and recoverable while preserving a filesystem view.
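The shape of that design can be sketched in a few lines. These are simplified in-memory stand-ins for data segments, the inode map, and a per-generation delta log; the real on-disk formats will differ.

```python
# Minimal sketch of a log-structured layout: immutable data segments,
# an inode map pointing into them, and a replayable metadata delta log.
segments = []        # immutable data segments, append-only
inode_map = {}       # inode -> (segment index, offset, length)
delta_log = []       # metadata deltas for the current generation

def write_file(inode: int, data: bytes) -> None:
    """Stage a write as a new immutable segment and log the metadata delta."""
    segments.append(data)                  # never rewritten in place
    loc = (len(segments) - 1, 0, len(data))
    inode_map[inode] = loc
    delta_log.append(("set", inode, loc))  # replayable on recovery

def read_file(inode: int) -> bytes:
    seg, off, length = inode_map[inode]
    return segments[seg][off:off + length]

write_file(1, b"hello")
write_file(1, b"hello, world")  # an update appends a new segment; the old one stays
print(read_file(1))
```

The key property is that updates append rather than overwrite, which is what keeps the durable path recoverable: metadata can always be rebuilt by replaying deltas over intact segments.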

How does durability work around close and flush?

Close-time durability is backed by a local journal under $STORE/journal. Pending writes are buffered per inode, coalesced asynchronously, then flushed as a new segment plus the metadata updates needed to advance the filesystem generation.

In practice, writes are grouped and published through the segment path rather than rewriting a large mutable backing file.
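The coalescing step can be sketched as follows. This is a conceptual model only: journaling, offsets, and fsync ordering are omitted, and the structures are stand-ins rather than the real implementation.

```python
from collections import defaultdict

# Sketch of per-inode buffering and coalescing: pending writes are
# buffered per inode, then flushed together as one new segment.
pending = defaultdict(list)   # inode -> list of buffered chunks
flushed_segments = []         # each flush publishes one immutable segment

def buffer_write(inode: int, chunk: bytes) -> None:
    pending[inode].append(chunk)

def flush(inode: int) -> int:
    """Coalesce buffered chunks into a single segment; return its index."""
    segment = b"".join(pending.pop(inode, []))
    flushed_segments.append(segment)
    return len(flushed_segments) - 1

buffer_write(7, b"small ")
buffer_write(7, b"writes ")
buffer_write(7, b"coalesce")
idx = flush(7)
print(flushed_segments[idx])
```

Grouping writes this way is what turns many small application writes into one append-friendly publish, instead of many round trips to a mutable backing file.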

What happens to large writes?

Large writes stage under $STORE/segment_stage before becoming immutable segments. That keeps the write path aligned with the segment-oriented backend instead of turning big writes into pathological read-modify-write churn.
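The stage-then-publish pattern looks roughly like this. The directory names mirror the $STORE paths mentioned above but are illustrative; the point is that a segment becomes visible only through an atomic rename, so readers never observe a half-written segment.

```python
import os
import tempfile

# Sketch of stage-then-publish for large writes: data lands in a
# staging directory first, then moves into the segment directory
# with an atomic rename.
store = tempfile.mkdtemp()
stage_dir = os.path.join(store, "segment_stage")
seg_dir = os.path.join(store, "segments")
os.makedirs(stage_dir)
os.makedirs(seg_dir)

def publish_segment(name: str, data: bytes) -> str:
    staged = os.path.join(stage_dir, name)
    with open(staged, "wb") as f:
        f.write(data)             # the large write hits the stage file
        f.flush()
        os.fsync(f.fileno())      # durable before it becomes visible
    final = os.path.join(seg_dir, name)
    os.rename(staged, final)      # atomic publish within one filesystem
    return final

path = publish_segment("seg-0001", b"x" * 1024)
print(os.path.getsize(path))
```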

How is metadata handled?

Metadata lives in inode cache structures, shard snapshots, delta logs, and superblock generation updates. The implementation keeps metadata changes tied to generation advancement instead of depending on a single always-mutable metadata file.
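Generation-based metadata publication can be sketched like this, assuming simplified stand-ins: the superblock points at a generation, each generation pairs a shard snapshot with a delta log, and advancing the generation folds the deltas into a new snapshot.

```python
# Conceptual sketch of generation advancement; structures are
# simplified stand-ins, not the real on-disk format.
superblock = {"generation": 0}
snapshots = {0: {}}    # generation -> inode-map snapshot
delta_logs = {0: []}   # generation -> deltas since that snapshot

def apply_delta(inode: int, size: int) -> None:
    delta_logs[superblock["generation"]].append((inode, size))

def advance_generation() -> int:
    """Fold the delta log into a new snapshot and move the superblock."""
    gen = superblock["generation"]
    snapshot = dict(snapshots[gen])
    for inode, size in delta_logs[gen]:
        snapshot[inode] = size
    new_gen = gen + 1
    snapshots[new_gen] = snapshot
    delta_logs[new_gen] = []
    superblock["generation"] = new_gen   # the single switch point
    return new_gen

apply_delta(1, 4096)
apply_delta(2, 512)
advance_generation()
print(superblock["generation"])
```

Because the superblock update is the single switch point, a crash before it leaves the old generation intact and a crash after it leaves the new one fully published.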

What does the runtime surface look like?

The core behavior lives in the shared filesystem layer, then gets exposed through transport-specific paths: FUSE, NFS, and a preload-based shared-library route for process injection.

The storage model stays the same across those access paths.

What are the obvious tradeoffs?

You get durable, shareable workspace semantics without asking every workload to adopt a new API. In return, you accept the complexity of a real filesystem implementation and the usual tension between POSIX-ish behavior, caching, latency, recovery guarantees, and NFS-like consistency expectations.

It is designed for workspaces and artifacts, not for running databases.

More Answers

Questions Worth Clearing Up Early

Can multiple agents share one workspace?

Yes. Shared volumes are part of the model, especially for handoffs between agents.

Do you support checkpoints and restore?

Yes. Checkpoint and restore are there for resumable work and failure recovery.

Does this replace local disks?

No. It replaces the assumption that every short-lived worker should start from a blank local disk every time.

Is this only for hosted ClawFS?

No. The site already frames hosted evaluation, bring-your-own-cloud operation, and enterprise controls as distinct paths.

What should I read next?

Use Getting Started if you want commands. Use Demo if you want to see the workflow. Use Enterprise if you need controls and compliance.

What if I mostly care about semantics?

Start from this: ClawFS keeps the filesystem abstraction and makes it usable for short-lived agent runtimes.

Want to try it?

Install the CLI, run one volume, and see whether the next agent session picks up where the last one left off.