Reference

Complete CLI & MCP Reference

v5.3

Getting Started

gnosys setup

Interactive setup wizard. Configures your LLM provider, API key, model tier, and IDE integration in one guided flow. Also detects existing installations and offers to upgrade registered projects.

Usage
gnosys setup [--non-interactive]
Options
--non-interactive Skip prompts, use defaults (for CI/scripting)
What it does
Step 1: Select LLM provider (8 providers with pricing)
Step 2: Pick model tier (budget / balanced / premium)
Step 3: Enter API key (saved to ~/.config/gnosys/.env)
Step 4: Configure IDE (Claude Code, Cursor, Codex)

Web knowledge base is set up separately: gnosys web init

gnosys init

Initialize Gnosys in the current directory. Creates project identity, registers in central DB, and sets up store.

Usage
gnosys init [options]

Options:
  -d, --directory <dir>  Target directory (default: cwd)
MCP Tool: gnosys_init

gnosys config

View and manage LLM provider configuration

Usage
gnosys config [command]

Commands:
  show                          Show current LLM configuration
  set <key> <value> [extra...]  Set a config value
  init                          Generate default gnosys.json

Keys: provider, model, ollama-url, groq-model, openai-model, lmstudio-url, task <task> <provider> <model>

gnosys stores

Show all active stores, their layers, paths, and permissions

Usage
gnosys stores
MCP Tool: gnosys_stores

gnosys doctor

Check system health: stores, LLM connectivity, embeddings, archive

Usage
gnosys doctor

gnosys serve

Start the MCP server (stdio mode)

Usage
gnosys serve [options]

Options:
  --with-maintenance  Run maintenance every 6 hours in background

gnosys dashboard

Show system dashboard: memory count, health, graph stats, LLM status

Usage
gnosys dashboard [options]

Options:
  --json  Output as JSON instead of pretty table
MCP Tool: gnosys_dashboard

Installation & Setup Overview

A complete walkthrough of installing Gnosys and setting it up for AI-assisted development. If you just want IDE-specific instructions, skip to IDE & Agent Setup.

Installation

npm install -g gnosys

One-Time Global Setup (all projects)

This gives every AI coding session access to Gnosys tools. You only need to do this once per machine.

Claude Code example (see IDE section for Cursor, Codex, etc.)
# Add MCP server (Claude Code example)
claude mcp add --scope user gnosys gnosys serve

# Generate global agent rules
gnosys sync --global

gnosys sync --global writes instructions into ~/.claude/CLAUDE.md inside GNOSYS:START/END markers. Any existing content outside those markers is preserved.
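The marker contract can be illustrated with plain awk. This is not Gnosys's implementation, just a sketch of the behavior described above: content between the markers is replaced, everything outside them survives.

```shell
# Sketch only: rewrite the GNOSYS:START/END block, preserving surrounding content.
demo=/tmp/gnosys_marker_demo.md
printf '%s\n' 'my own notes' 'GNOSYS:START' 'old rules' 'GNOSYS:END' 'more notes' > "$demo"
awk '
  /GNOSYS:START/ { print; print "new rules"; skip = 1; next }  # emit fresh block body
  /GNOSYS:END/   { skip = 0 }                                  # resume copying
  !skip          { print }
' "$demo" > "$demo.new" && mv "$demo.new" "$demo"
cat "$demo"   # user content intact, block body replaced
```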

Per-Project Setup

Every project you want Gnosys to remember needs a one-time init:

cd your-project
gnosys init
gnosys sync --target claude   # or cursor, codex, all

gnosys init creates .gnosys/gnosys.json (project identity), registers the project in the central DB, and creates a local .gnosys/ directory for project configuration. All memories are stored in the central SQLite database; markdown files are only generated on-demand via gnosys export for Obsidian.

gnosys sync writes project-specific rules into your IDE's rules file (e.g. CLAUDE.md, .cursor/rules/gnosys.mdc).

Understanding Memory Scopes

All memories live in one central database at ~/.gnosys/gnosys.db. Each memory has a scope that controls where it appears in search results:

project (default)
  What it's for: Decisions, architecture, conventions for this specific project
  Visible when:  Searching from this project only
  How to create: gnosys add "chose PostgreSQL"

user
  What it's for: Your personal preferences, coding style, workflow conventions
  Visible when:  Searching from any of your projects
  How to create: gnosys add-structured --user ... or gnosys_preference_set

global
  What it's for: Shared knowledge, company conventions, team standards
  Visible when:  Searching from anywhere
  How to create: gnosys add-structured --global ...

When you use gnosys_federated_search, all three scopes are searched with tier boosting: project results get a 1.5x boost, user gets 1.0x, and global gets 0.7x. This means project-specific knowledge ranks highest, but relevant global knowledge still surfaces.
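As arithmetic, using the boost factors quoted above and invented raw scores, a mediocre project hit can outrank a strong global hit:

```shell
# Tier boosting illustrated (boost factors from the docs; raw scores invented).
awk 'BEGIN {
  boost["project"] = 1.5; boost["user"] = 1.0; boost["global"] = 0.7
  printf "%s %.2f\n", "project", 0.60 * boost["project"]   # weaker raw hit, wins anyway
  printf "%s %.2f\n", "user",    0.80 * boost["user"]
  printf "%s %.2f\n", "global",  0.90 * boost["global"]    # strongest raw hit, ranks last
}' | tee /tmp/gnosys_boost_demo.txt
```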

Multi-Machine Setup (iCloud, Dropbox, NAS)

By default the central DB lives at ~/.gnosys/gnosys.db, local to each machine. To share it across machines, point the GNOSYS_GLOBAL environment variable at a cloud-synced folder.

Step 1: Create a folder on your cloud drive (e.g., iCloud Drive/gnosys/)

Step 2: Copy your existing DB there:

cp ~/.gnosys/gnosys.db ~/Library/Mobile\ Documents/com~apple~CloudDocs/gnosys/gnosys.db

Step 3: Set the GNOSYS_GLOBAL environment variable in your shell profile (~/.zshrc or ~/.bashrc):

# iCloud Drive
export GNOSYS_GLOBAL="$HOME/Library/Mobile Documents/com~apple~CloudDocs/gnosys"

# Dropbox
export GNOSYS_GLOBAL="$HOME/Dropbox/gnosys"

# OneDrive
export GNOSYS_GLOBAL="$HOME/OneDrive/gnosys"

Step 4: On your second machine, install Gnosys, set the same GNOSYS_GLOBAL variable, and run gnosys sync --global. The cloud service handles syncing the database file.

Warning: Avoid running heavy writes on both machines simultaneously. SQLite handles concurrent reads well, but simultaneous writes through cloud sync can cause conflicts. In practice this is rarely an issue since you're typically on one machine at a time.

Upgrading Gnosys

After installing a new version, run gnosys upgrade to re-sync all your projects:

npm install -g gnosys
gnosys upgrade

This re-initializes all registered projects, regenerates agent rules for every detected IDE, updates global rules, and stamps the central DB with the new version. Projects on other machines are reported as "skipped" — run gnosys upgrade on those machines too.

If you share a gnosys.db across machines and one machine upgrades first, every gnosys command on the other machine will warn you to upgrade.

LLM Provider Setup

The easiest way to configure your LLM provider is to run gnosys setup, which walks you through provider selection, model tier, and API key entry.

Gnosys supports 8 LLM providers. LLM features (ingestion structuring, gnosys ask, Dream Mode) require a configured provider. Core memory operations (add, read, search, recall) work without one.

Provider   Type    Default Model              API Key Env Var
Anthropic  Cloud   claude-sonnet-4-20250514   GNOSYS_ANTHROPIC_KEY
Ollama     Local   llama3.2                   — (runs locally)
Groq       Cloud   llama-3.3-70b-versatile    GNOSYS_GROQ_KEY
OpenAI     Cloud   gpt-4o-mini                GNOSYS_OPENAI_KEY
LM Studio  Local   default                    — (runs locally)
xAI        Cloud   grok-2                     GNOSYS_XAI_KEY
Mistral    Cloud   mistral-large-latest       GNOSYS_MISTRAL_KEY
Custom     Any     (user-defined)             GNOSYS_LLM_API_KEY

Setting your API key (one-time, global — configured via gnosys setup). Three storage options:

1. macOS Keychain (recommended — encrypted, no plaintext on disk):
   security add-generic-password -a "$USER" -s "GNOSYS_ANTHROPIC_KEY" -w "your-key"

2. Environment variable (shell profile):
   echo 'export GNOSYS_ANTHROPIC_KEY=your-key' >> ~/.zshrc && source ~/.zshrc

3. ~/.config/gnosys/.env (least secure — plaintext on disk):
   echo 'GNOSYS_ANTHROPIC_KEY=your-key' >> ~/.config/gnosys/.env

Using a custom provider (any OpenAI-compatible API):

gnosys config set provider custom
gnosys config set custom-url https://api.together.xyz/v1
gnosys config set custom-model meta-llama/Llama-3-70b
echo 'GNOSYS_LLM_API_KEY=tok-...' >> ~/.config/gnosys/.env

API keys are resolved in order: macOS Keychain, environment variable, then ~/.config/gnosys/.env. Keys are NOT stored in IDE MCP configs. All storage methods are loaded automatically by both the CLI and MCP server.
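That precedence can be sketched as a fallback chain. This is a stand-in, not Gnosys source code: GNOSYS_DEMO_KEY and the file paths are invented for the demo, and the Keychain step simply never fires on systems without the macOS `security` tool.

```shell
# Sketch of the documented lookup order: Keychain, then environment, then .env file.
resolve_key() {  # usage: resolve_key VAR_NAME DOTENV_PATH
  name="$1"; envfile="$2"
  # 1. macOS Keychain (skipped where `security` is unavailable)
  if command -v security >/dev/null &&
     key=$(security find-generic-password -s "$name" -w 2>/dev/null); then
    printf '%s\n' "$key"; return
  fi
  # 2. Environment variable
  eval "val=\${$name:-}"
  if [ -n "$val" ]; then printf '%s\n' "$val"; return; fi
  # 3. Plaintext fallback file
  grep "^$name=" "$envfile" 2>/dev/null | cut -d= -f2-
}

echo 'GNOSYS_DEMO_KEY=from-dotenv' > /tmp/gnosys_env_demo
resolve_key GNOSYS_DEMO_KEY /tmp/gnosys_env_demo > /tmp/gnosys_key_a   # .env wins (nothing else set)
GNOSYS_DEMO_KEY=from-env; export GNOSYS_DEMO_KEY
resolve_key GNOSYS_DEMO_KEY /tmp/gnosys_env_demo > /tmp/gnosys_key_b   # env wins over .env
```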

Model lists and pricing are fetched live from OpenRouter during setup and cached for 24 hours.

IDE & Agent Setup

Setting up Gnosys requires two things: (1) connecting the MCP server so your agent has access to Gnosys tools, and (2) generating agent rules so your agent knows when to use them. Here's the complete setup for each IDE.

Fast path (v5.4.0+): run gnosys init <ide> from your project directory and Gnosys wires the MCP config for you. Supported IDE arguments: claude-desktop, claude, cursor, codex, gemini-cli, antigravity.

One-time vs. per-project: The IDE wiring writes to a user-level config (e.g. ~/Library/Application Support/Claude/...), so it only needs to happen once. The project registration that gnosys init also performs is per-directory — every codebase you want memory-enabled needs its own gnosys init.

Cowork users: Cowork sessions don't have a working directory of their own — once Gnosys is registered with Claude Desktop, the agent uses projectRoot on each tool call to scope memory to whichever codebase you're discussing. Run gnosys init claude-desktop once globally and gnosys init in each codebase.

Claude Code

Step 1: Install Gnosys and add the MCP server:

npm install -g gnosys
claude mcp add gnosys gnosys serve

Step 2: Generate global agent rules (teaches Claude when to use Gnosys tools in every project):

gnosys sync --global

This writes a GNOSYS:START/END block into ~/.claude/CLAUDE.md. Your own content outside the block is preserved.

Step 3: Initialize each project you want Gnosys to remember:

cd your-project
gnosys init
gnosys sync --target claude

This creates .gnosys/gnosys.json (project identity), registers the project in the central DB, and writes project-specific conventions into CLAUDE.md.

Cursor

Step 1: Install Gnosys and add the MCP config to .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "gnosys": { "command": "gnosys", "args": ["serve"] }
  }
}

Step 2: Initialize the project and generate agent rules:

npm install -g gnosys
cd your-project
gnosys init
gnosys sync --target cursor

This writes the Gnosys tool instructions into .cursor/rules/gnosys.mdc. Cursor loads it automatically.

Codex CLI

Step 1: Install Gnosys and add MCP config to .codex/config.toml:

[mcp.gnosys]
type = "local"
command = ["gnosys", "serve"]

Step 2: Initialize the project and generate agent rules:

npm install -g gnosys
cd your-project
gnosys init
gnosys sync --target codex

This writes agent rules into .codex/gnosys.md.

Claude Desktop — covers Chat, Cowork, and Code

Fast path: from any project directory, run gnosys init claude-desktop — Gnosys writes the right config file for your platform and reminds you to restart Claude Desktop.

Manual: add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS), %APPDATA%\Claude\claude_desktop_config.json (Windows), or ~/.config/Claude/claude_desktop_config.json (Linux):

{
  "mcpServers": {
    "gnosys": { "command": "gnosys", "args": ["serve"] }
  }
}

A single registration covers all three Claude Desktop surfaces: Chat, Cowork, and Code (the version embedded in the desktop app). The MCP tool descriptions guide the agent — no separate rules file needed.

Gemini CLI

Fast path: gnosys init gemini-cli.

Manual: add an mcpServers entry to ~/.gemini/settings.json (preserves your existing settings):

{
  "mcpServers": {
    "gnosys": { "command": "gnosys", "args": ["serve"] }
  }
}

Antigravity

Fast path: gnosys init antigravity.

Manual: add to ~/.gemini/antigravity/mcp_config.json (Antigravity reloads MCP servers when this file changes):

{
  "mcpServers": {
    "gnosys": { "command": "gnosys", "args": ["serve"] }
  }
}

Sync to all detected IDEs at once

If your project has multiple IDE configs, sync them all in one command:

gnosys sync --target all

This detects .cursor/, .claude/ or CLAUDE.md, and .codex/ in the project and writes agent rules to each.

Writing Memories

gnosys add

Add a new memory (uses LLM to structure raw input)

Usage
gnosys add [options] <input>

Options:
  -a, --author <author>    Author (human|ai|human+ai) (default: "human")
  --authority <authority>  Authority (declared|observed|imported|inferred) (default: "declared")
  -s, --store <store>      Target store (project|personal|global)
MCP Tool: gnosys_add

gnosys add-structured

Add a memory with structured input (no LLM needed)

Usage
gnosys add-structured [options]

Options:
  --title <title>          Memory title
  --category <category>    Category directory name
  --content <content>      Memory content as markdown
  --tags <json>            Tags as JSON object (default: "{}")
  --relevance <keywords>   Keyword cloud for discovery (default: "")
  -a, --author <author>    Author (default: "human")
  --authority <authority>  Authority level (default: "declared")
  --confidence <n>         Confidence 0-1 (default: "0.8")
  -s, --store <store>      Target store
MCP Tool: gnosys_add_structured

gnosys commit-context

Pre-compaction sweep: extract atomic memories from a context string, check novelty, commit novel ones

Usage
gnosys commit-context [options] <context>

Options:
  --dry-run            Show what would be committed without writing
  -s, --store <store>  Target store (project|personal|global)
MCP Tool: gnosys_commit_context

gnosys bootstrap

Batch-import existing documents into the memory store

Usage
gnosys bootstrap [options] <sourceDir>

Options:
  -p, --pattern <patterns...>  File patterns to match (default: **/*.md)
  --skip-existing              Skip files whose titles already exist
  -c, --category <category>    Default category (default: "imported")
  -a, --author <author>        Default author (default: "human")
  --authority <authority>      Default authority (default: "imported")
  --confidence <n>             Default confidence 0-1 (default: "0.7")
  --preserve-frontmatter       Preserve existing YAML frontmatter if present
  --dry-run                    Show what would be imported without writing
  -s, --store <store>          Target store (project|personal|global)
MCP Tool: gnosys_bootstrap

gnosys import

Bulk import structured data (CSV, JSON, JSONL) into Gnosys memories

Usage
gnosys import [options] <fileOrUrl>

Options:
  --format <format>  Data format: csv, json, jsonl
  --mapping <json>   Field mapping as JSON
  --mode <mode>      Processing mode: llm or structured (default: "structured")
  --limit <n>        Max records to import
  --offset <n>       Skip first N records
  --skip-existing    Skip records whose titles already exist
  --batch-commit     Single git commit for all imports (default: true)
  --no-batch-commit  Commit each record individually
  --concurrency <n>  Parallel LLM calls (default: 5)
  --dry-run          Preview without writing
  --store <store>    Target store (default: "project")
MCP Tool: gnosys_import

Reading & Search

gnosys read

Read a specific memory. Supports layer prefix (e.g., project:decisions/auth.md)

Usage
gnosys read <memoryPath>
MCP Tool: gnosys_read

gnosys discover

Discover relevant memories by keyword. Searches relevance clouds, titles, and tags — returns metadata only, no content.

Usage
gnosys discover [options] <query>

Options:
  -n, --limit <number>  Max results (default: "20")
MCP Tool: gnosys_discover

gnosys ask

Ask a natural-language question and get a synthesized answer with citations

Usage
gnosys ask [options] <question>

Options:
  -l, --limit <n>    Max memories to retrieve (default: "15")
  -m, --mode <mode>  Search mode: keyword | semantic | hybrid (default: "hybrid")
  --no-stream        Disable streaming output
MCP Tool: gnosys_ask

gnosys recall

Always-on memory recall — injects most relevant memories as context. Sub-50ms, no LLM needed. Used by agent orchestrators for automatic memory injection.

Usage
gnosys recall <query> [--limit <n>] [--aggressive] [--no-aggressive] [--trace-id <id>] [--json] [--host]
Options
--limit <n>      Max memories to return (default from config)
--aggressive     Force aggressive mode (inject even medium-relevance memories)
--no-aggressive  Force filtered mode (hard cutoff at minRelevance)
--trace-id <id>  Trace ID for audit correlation
--json           Output raw JSON instead of formatted text
--host           Output in host-friendly <gnosys-recall> format (default for MCP)
MCP Tool: gnosys_recall

gnosys fsearch

Federated search across all scopes with tier boosting. Current project memories rank highest (1.8x), then project (1.5x), user (1.0x), global (0.7x). Includes recency and reinforcement boosting.

Usage
gnosys fsearch <query> [--limit <n>] [-d <directory>] [--no-global] [--json]
MCP Tool: gnosys_federated_search

Organization

gnosys list

List all memories across all stores

Usage
gnosys list [options]

Options:
  -c, --category <category>  Filter by category
  -t, --tag <tag>            Filter by tag
  -s, --store <store>        Filter by store layer
MCP Tool: gnosys_list

gnosys lens

Filtered view of memories. Combine criteria to focus on what matters.

Usage
gnosys lens [options]

Options:
  -c, --category <category>     Filter by category
  -t, --tag <tags...>           Filter by tag(s)
  --match <mode>                Tag match mode: any or all (default: "any")
  --status <statuses...>        Filter by status (active, archived, superseded)
  --author <authors...>         Filter by author (human, ai, human+ai)
  --authority <authorities...>  Filter by authority (declared, observed, imported, inferred)
  --min-confidence <n>          Minimum confidence (0-1)
  --max-confidence <n>          Maximum confidence (0-1)
  --created-after <date>        Created after ISO date
  --created-before <date>       Created before ISO date
  --modified-after <date>       Modified after ISO date
  --modified-before <date>      Modified before ISO date
  --or                          Combine filters with OR instead of AND
MCP Tool: gnosys_lens

gnosys tags

List all tags in the registry

Usage
gnosys tags
MCP Tool: gnosys_tags

gnosys tags-add

Add a new tag to the registry

Usage
gnosys tags-add [options]

Options:
  --category <category>  Tag category (domain, type, concern, status_tag)
  --tag <tag>            The new tag to add
MCP Tool: gnosys_tags_add

gnosys graph

Show the full cross-reference graph across all memories

Usage
gnosys graph
MCP Tool: gnosys_graph

History & Updates

gnosys update

Update an existing memory's frontmatter and/or content

Usage
gnosys update [options] <memoryPath>

Options:
  --title <title>         New title
  --status <status>       New status (active|archived|superseded)
  --confidence <n>        New confidence (0-1)
  --relevance <keywords>  Updated relevance keyword cloud
  --supersedes <id>       ID of memory this supersedes
  --superseded-by <id>    ID of memory that supersedes this one
  --content <content>     New markdown content (replaces body)
MCP Tool: gnosys_update

gnosys reinforce

Signal whether a memory was useful, not relevant, or outdated

Usage
gnosys reinforce [options] <memoryId>

Options:
  --signal <signal>    Reinforcement signal (useful|not_relevant|outdated)
  --context <context>  Why this signal was given
MCP Tool: gnosys_reinforce

gnosys stale

Find memories not modified within a given number of days

Usage
gnosys stale [options]

Options:
  -d, --days <number>   Days threshold (default: "90")
  -n, --limit <number>  Max results (default: "20")
MCP Tool: gnosys_stale

gnosys history

Show version history for a memory (git-backed)

Usage
gnosys history [options] <memoryPath>

Options:
  -n, --limit <number>  Max entries (default: "20")
  --diff <hash>         Show diff from this commit to current
MCP Tool: gnosys_history

gnosys rollback

Rollback a memory to its state at a specific commit

Usage
gnosys rollback <memoryPath> <commitHash>
MCP Tool: gnosys_rollback

gnosys timeline

Show when memories were created and modified over time

Usage
gnosys timeline [options]

Options:
  -p, --period <period>  Group by: day, week, month, year (default: "month")
MCP Tool: gnosys_timeline

gnosys stats

Show summary statistics for the memory store

Usage
gnosys stats
MCP Tool: gnosys_stats

Web Knowledge Base

gnosys web init

Interactive setup for web knowledge base. Creates the /knowledge/ directory, configures gnosys.json, and guides you through sitemap URL, LLM enrichment, and CI/CD environment variable setup. No API keys are stored in your project.

Usage
gnosys web init [options]

Options:
  --source <type>    Source type: sitemap, directory, urls (default: "sitemap")
  --output <dir>     Output directory (default: "./knowledge")
  --non-interactive  Skip prompts, use defaults
  --no-config        Skip gnosys.json modification
  --json             Output as JSON
What it does
Step 1: Enter sitemap URL (deployed site or localhost)
Step 2: Enable LLM enrichment (or use free TF-IDF mode)
Step 3: Configure provider and set CI/CD env var name (e.g. GNOSYS_ANTHROPIC_KEY)

API keys are NEVER stored in your project. They are read from environment variables at build time (CI/CD secrets, hosting env).

gnosys web init is interactive and helps configure the LLM provider for web builds. Set the same GNOSYS_*_KEY environment variables as CI/CD secrets in your hosting platform (Vercel, Netlify, GitHub Actions, etc.).
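As a sketch, a GitHub Actions step might look like the fragment below. The step name and secret name are placeholders you would define yourself; the only behavior taken from the docs is that gnosys web build reads GNOSYS_*_KEY from the environment at build time, and that it assumes gnosys was installed in an earlier step.

```yaml
# Hypothetical CI step (names are yours to choose).
- name: Build web knowledge base
  run: gnosys web build
  env:
    GNOSYS_ANTHROPIC_KEY: ${{ secrets.GNOSYS_ANTHROPIC_KEY }}
```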

gnosys web ingest

Crawl the configured source and generate knowledge markdown files. Supports sitemap XML, local directories, and URL lists.

Usage
gnosys web ingest [options]

Options:
  --source <url>     Override sitemap URL or content directory
  --prune            Remove orphaned knowledge files
  --no-llm           Force structured mode (no LLM)
  --concurrency <n>  Parallel processing limit (default: "3")
  --dry-run          Show what would change without writing files
  --verbose          Print per-page details
  --json             Output results as JSON

gnosys web build-index

Generate search index JSON from the knowledge directory. Builds a TF-IDF weighted inverted index for zero-dependency runtime search.

Usage
gnosys web build-index [options]

Options:
  --output <path>  Output path for the index JSON
  --json           Output as JSON
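The real index layout is internal to Gnosys, but TF-IDF weighting itself is small enough to sketch. The two sample files, their contents, and the resulting weights below are invented for illustration:

```shell
# Toy TF-IDF weighting over two fake knowledge files (not Gnosys's real index format).
dir=$(mktemp -d)
printf 'gnosys memory search\n' > "$dir/a.md"
printf 'memory store sqlite\n'  > "$dir/b.md"
awk '
  { for (i = 1; i <= NF; i++) { tf[$i, FILENAME]++; seen[$i, FILENAME] = 1 } }
  END {
    for (k in seen) { split(k, p, SUBSEP); df[p[1]]++ }   # document frequency per term
    n = ARGC - 1                                          # number of documents
    for (k in tf) {
      split(k, p, SUBSEP); f = p[2]; sub(/.*\//, "", f)   # basename for readability
      printf "%-7s %s %.3f\n", p[1], f, tf[k] * log(n / df[p[1]])
    }
  }' "$dir/a.md" "$dir/b.md" | sort > /tmp/gnosys_tfidf_demo.txt
cat /tmp/gnosys_tfidf_demo.txt
```

Terms unique to one file get weight log(2) ≈ 0.693; "memory", which appears in both, scores 0 because its inverse document frequency is log(2/2) = 0.
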

gnosys web build

Run ingest + build-index in one shot. The primary command for CI/CD pipelines and build scripts.

Usage
gnosys web build [options]

Options:
  --source <url>     Override sitemap URL or content directory
  --prune            Remove orphaned knowledge files
  --no-llm           Force structured mode (no LLM)
  --concurrency <n>  Parallel processing limit (default: "3")
  --dry-run          Show what would change without writing files
  --verbose          Print per-page details
  --json             Output results as JSON

gnosys web add

Ingest a single URL into the knowledge base. Fetches the page, generates a knowledge markdown file, and rebuilds the search index.

Usage
gnosys web add <url> [options]

Options:
  --no-llm  Force structured mode (no LLM)
  --json    Output as JSON

gnosys web remove

Remove a knowledge file and rebuild the search index.

Usage
gnosys web remove <filepath> [options]

Options:
  --json  Output as JSON

gnosys web update

Re-ingest a URL or refresh a knowledge file, then rebuild the index. Uses content hashing to detect changes.

Usage
gnosys web update <urlOrPath> [options]

Options:
  --no-llm  Force structured mode (no LLM)
  --json    Output as JSON
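Hash-based change detection reduces to compare-then-store. A sketch with invented paths (Gnosys's actual hashing scheme is internal to the tool):

```shell
# Sketch: re-process a page only when its content hash changes.
page=/tmp/gnosys_page.html
hashfile=/tmp/gnosys_page.sha
log=/tmp/gnosys_refresh_demo.log

refresh() {
  new=$(sha256sum "$page" | cut -d' ' -f1)     # use `shasum -a 256` on macOS
  old=$(cat "$hashfile" 2>/dev/null || true)
  if [ "$new" = "$old" ]; then
    echo "unchanged: skip"
  else
    echo "changed: re-ingest"
    printf '%s\n' "$new" > "$hashfile"
  fi
}

rm -f "$hashfile" "$log"
printf '<h1>Docs v1</h1>\n' > "$page"
refresh >> "$log"                              # first sight: changed
refresh >> "$log"                              # same bytes: skipped
printf '<h1>Docs v2</h1>\n' > "$page"
refresh >> "$log"                              # new bytes: changed
cat "$log"
```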

gnosys web status

Show the current state of the web knowledge base — file counts, index size, last build time, and configuration.

Usage
gnosys web status [options]

Options:
  --json  Output as JSON

Process Tracing

gnosys trace

Trace a codebase and store procedural 'how' memories with call-chain relationships (leads_to, follows_from, requires)

Usage
gnosys trace <directory> [options]

Options:
  --max-files <n>    Maximum number of source files to scan (default: "500")
  --project-id <id>  Project ID to associate memories with
  --json             Output as JSON

gnosys reflect

Reflect on an outcome to update memory confidence and create relationships. Supports --memory-ids and --failure flags.

Usage
gnosys reflect <outcome> [options]

Options:
  --memory-ids <ids>      Comma-separated list of memory IDs to relate to
  --failure               Mark this as a failure (default: success)
  --notes <text>          Additional notes about the outcome
  --confidence-delta <n>  Custom confidence delta (e.g. 0.1 or -0.2)
  --json                  Output as JSON

gnosys traverse

Traverse relationship chains starting from a memory (BFS, depth-limited). Walk leads_to, follows_from, requires edges.

Usage
gnosys traverse <memoryId> [options]

Options:
  -d, --depth <n>      Maximum traversal depth (default: 3, max: 10)
  --rel-types <types>  Comma-separated relationship types to follow (e.g. leads_to,requires)
  --json               Output as JSON
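At its core this is a depth-limited breadth-first walk over typed edges. A toy version over an invented edge list (the memory IDs and edges are made up; Gnosys's real traversal also filters by relationship type):

```shell
# Toy depth-limited BFS over leads_to/requires edges.
cat > /tmp/gnosys_edges.txt <<'EOF'
auth-flow leads_to token-refresh
token-refresh leads_to session-store
session-store requires redis-config
EOF
awk -v start="auth-flow" -v maxd=2 '
  { adj[$1] = adj[$1] " " $3 }                  # source -> space-joined targets
  END {
    head = tail = 1; queue[1] = start; depth[start] = 0; seen[start] = 1
    while (head <= tail) {
      node = queue[head++]
      print depth[node], node
      if (depth[node] >= maxd) continue         # depth limit: do not expand further
      n = split(adj[node], nbrs, " ")
      for (i = 1; i <= n; i++)
        if (!(nbrs[i] in seen)) {
          seen[nbrs[i]] = 1; depth[nbrs[i]] = depth[node] + 1; queue[++tail] = nbrs[i]
        }
    }
  }' /tmp/gnosys_edges.txt | tee /tmp/gnosys_traverse_demo.txt
```

With maxd=2, redis-config (depth 3) is never reached even though an edge leads to it.
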

Agent Integration

gnosys sandbox start

Start the Gnosys sandbox background process (Unix socket server with Dream Mode scheduler)

Usage
gnosys sandbox start [options]

Options:
  --persistent      Keep running across reboots (future use)
  --db-path <path>  Custom database directory
  --json            Output as JSON

gnosys sandbox stop

Stop the Gnosys sandbox background process

Usage
gnosys sandbox stop [--json]

gnosys sandbox status

Check if the Gnosys sandbox is running. Shows PID and socket path when active.

Usage
gnosys sandbox status [--json]

gnosys helper generate

Generate a gnosys-helper.ts file in the current directory (or specified directory) for agent integration

Usage
gnosys helper generate [options]

Options:
  -d, --directory <dir>  Target directory (default: cwd)
  --json                 Output as JSON

System & Maintenance

gnosys upgrade

Re-initialize all registered projects after a version upgrade. Updates agent rules, project registry, and stamps the central DB with the current version. Also syncs the file-based project registry with the central DB so new projects are picked up automatically.

Usage
# After updating Gnosys
npm install -g gnosys
gnosys upgrade
What it does
1. Reads all registered projects (file registry + central DB)
2. Re-syncs project identity with central DB for each local project
3. Regenerates agent rules for all detected IDEs (Claude, Cursor, Codex)
4. Updates global agent rules (~/.claude/CLAUDE.md)
5. Stamps the central DB with the current version and machine hostname
6. Reports which projects were upgraded, skipped (not on this machine), or failed
Multi-machine support
When using a shared gnosys.db (iCloud, Dropbox, NAS), the upgrade command tracks which machines have upgraded and which are behind. If another machine upgrades first, you'll see a warning on every command:

⚠ Gnosys DB was upgraded to v4.2.1 by another-machine.local. You are running v4.1.4. Run: npm install -g gnosys && gnosys upgrade

Projects that only exist on other machines are reported as "skipped" so you know to run gnosys upgrade on those machines too.

gnosys maintain

Run vault maintenance: detect duplicates, apply confidence decay, consolidate similar memories

Usage
gnosys maintain [options]

Options:
  --dry-run     Show what would change without modifying
  --auto-apply  Automatically apply all changes (no prompts)
MCP Tool: gnosys_maintain

gnosys reindex

Rebuild all semantic embeddings from every memory file. Downloads the model (~80 MB) on first run.

Usage
gnosys reindex
MCP Tool: gnosys_reindex

gnosys reindex-graph

Build or rebuild the wikilink graph (.gnosys/graph.json)

Usage
gnosys reindex-graph
MCP Tool: gnosys_reindex_graph

gnosys dearchive

Force-dearchive memories matching a query from archive.db back to active.

Usage
gnosys dearchive <query> [--limit <n>]
Options
--limit <n>  Max memories to dearchive (default: 5)

gnosys setup dream (v5.4.2+)

Configure Dream Mode interactively: enable/disable, designate which machine hosts the scheduler, pick provider/model with live API validation, set idle threshold and max runtime, toggle sub-tasks (selfCritique / generateSummaries / discoverRelationships).

In a multi-machine setup, only the designated machine arms its scheduler — others stay quiet. Run this on the machine you want to host dream cycles.

Usage
gnosys setup dream

gnosys dream

Run a Dream Mode cycle now (manual trigger). Performs the same 4 phases as the scheduler: confidence decay, self-critique, summary generation, and relationship discovery. Never deletes — only suggests reviews.

Usage
gnosys dream [--max-runtime <minutes>] [--no-critique] [--no-summaries] [--no-relationships] [--json]
Options
--max-runtime <min>  Maximum runtime in minutes (default: 30)
--no-critique        Skip self-critique phase
--no-summaries       Skip summary generation phase
--no-relationships   Skip relationship discovery phase
--json               Output dream report as JSON
MCP Tool: gnosys_dream

gnosys dream log (v5.4.2+)

Show recent dream runs from the audit log. Each row pairs a dream_start with its dream_complete entry and lists per-phase counts. --failures-only filters to runs with errors or unreachable provider.

Usage
gnosys dream log [--last N] [--since YYYY-MM-DD] [--failures-only] [--json]

gnosys export

Export gnosys.db to an Obsidian-compatible vault. Creates category folders with YAML frontmatter .md files, [[wikilinks]], summaries, reviews, and graph data.

Usage
gnosys export --to <dir> [--all] [--overwrite] [--no-summaries] [--no-reviews] [--no-graph] [--json]
Options
--to <dir>      Target directory for export (required)
--all           Include summaries, reviews, and graph data
--overwrite     Overwrite existing files in target directory
--no-summaries  Exclude Dream Mode summaries
--no-reviews    Exclude review suggestions
--no-graph      Exclude relationship graph data
--json          Output export report as JSON
MCP Tool: gnosys_export
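For orientation, an exported note might look roughly like the file below. The frontmatter field names here are illustrative guesses based on the memory attributes described elsewhere in this reference, not a specification of the export format:

```shell
# Illustrative shape of an exported Obsidian note (field names are guesses).
mkdir -p /tmp/gnosys_vault/decisions
cat > /tmp/gnosys_vault/decisions/use-postgresql.md <<'EOF'
---
title: Use PostgreSQL
category: decisions
confidence: 0.9
tags: [database, architecture]
---
We chose PostgreSQL over MySQL for JSONB support.

Related: [[connection-pooling]]
EOF
head -6 /tmp/gnosys_vault/decisions/use-postgresql.md
```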

gnosys audit

View the structured audit trail of memory operations. Shows reads, writes, recalls, maintenance, and more.

Usage
gnosys audit [--days <n>] [--operation <op>] [--limit <n>] [--json]
Options
--days <n>        Show entries from the last N days (default: 7)
--operation <op>  Filter by operation type (read, write, recall, etc.)
--limit <n>       Max entries to show
--json            Output raw JSON instead of formatted timeline
MCP Tool: gnosys_audit

gnosys migrate

Migrate data. Use --to-central to move per-project stores into the central DB.

Usage
gnosys migrate [--dry-run] [--to-central] [--json]
Options
--dry-run     Preview what would be migrated without making changes
--to-central  Migrate per-project data into central DB
--json        Output migration report as JSON

gnosys backup

Create a backup of the central Gnosys database. Backups are stored in ~/.gnosys/backups/ with automatic daily snapshots.

Usage
gnosys backup [--json]
MCP Tool: gnosys_backup

gnosys restore

Restore the central Gnosys database from a backup file.

Usage
gnosys restore <backupFile> [--json]
MCP Tool: gnosys_restore

gnosys projects

List all registered projects in the central database. Shows project name, working directory, and memory count.

Usage
gnosys projects [--json]
MCP Tool: none dedicated — use gnosys_stores or gnosys_dashboard to list projects

gnosys register

Register a project directory in the central database. Auto-detects project identity from .git, package.json, Cargo.toml, etc.

Usage
gnosys register [-d <directory>] [--json]
MCP Tool: gnosys_init (registers project automatically)

gnosys pref

Manage user-level preferences that persist across all projects. Preferences are stored as scope='user' memories in the central DB.

Usage
gnosys pref set <key> <value>
gnosys pref get [key]
gnosys pref delete <key>
Subcommands
set <key> <value>  Set a user preference. Key should be kebab-case (e.g. 'commit-convention')
get [key]          Get a preference by key, or list all preferences if no key given
delete <key>       Delete a user preference
MCP Tools: gnosys_preference_set, gnosys_preference_get, gnosys_preference_delete

gnosys sync

Regenerate agent rules files from user preferences and project conventions. Injects a GNOSYS:START/GNOSYS:END block into your IDE's rules file. User content outside the markers is preserved.

Usage
gnosys sync [options]
Options
-d, --directory <dir>  Project directory (default: cwd)
-t, --target <target>  Target: claude, cursor, codex, all, or global
--global               Sync to global ~/.claude/CLAUDE.md (applies to all projects)
Target files
claude  → CLAUDE.md
cursor  → .cursor/rules/gnosys.mdc
codex   → .codex/gnosys.md
all     → all detected IDEs in the project
global  → ~/.claude/CLAUDE.md
MCP Tool: gnosys_sync

gnosys ambiguity

Check if a query matches memories in multiple projects. Useful for detecting cross-project knowledge conflicts.

Usage
gnosys ambiguity <query> [--json]
MCP Tool: gnosys_detect_ambiguity

gnosys briefing

Generate a project briefing — categories, recent activity, top tags, and a human-readable summary. Use --all for all registered projects.

Usage
gnosys briefing [-p <project-id>] [-a|--all] [-d <directory>] [--json]
MCP Tool: gnosys_briefing

gnosys working-set

Show the implicit working set — recently modified memories for the current project. Useful for understanding active context.

Usage
gnosys working-set [-d <directory>] [-w <hours>] [--json]
MCP Tool: gnosys_working_set

Portfolio & Status

gnosys status

Show project readiness, blockers, and action items. Reads status memories written by agents and renders a human-readable dashboard.

Usage
gnosys status [options]

Options:
  --global  Show status for all registered projects
  --web     Open the HTML dashboard in your browser
  --json    Output as JSON

gnosys update-status

Get the guided 8-section checklist prompt for AI agents to write dashboard-compatible status memories. Run this command and paste the output into your agent conversation to trigger a structured status update.

Usage
gnosys update-status
MCP Tool: gnosys_update_status

gnosys portfolio

Generate a cross-project portfolio dashboard showing all registered projects with readiness scores, blockers, and recent activity. Output format is auto-detected from the file extension.

Usage
gnosys portfolio [options]
Options
--output <file>  Output file (auto-detects format: .html, .md, .json)
--html  Force HTML output
--json  Force JSON output
MCP Tool: gnosys_portfolio
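The extension-based format detection can be sketched roughly as below. This is a hypothetical TypeScript illustration; the function name and the fallback format are assumptions (the default when no extension matches is not documented here).

```typescript
// Illustrative sketch of how `--output <file>` could map a filename
// extension to an output format. Not the real Gnosys implementation.
type PortfolioFormat = "html" | "md" | "json";

function detectFormat(outputPath: string): PortfolioFormat {
  if (outputPath.endsWith(".html")) return "html";
  if (outputPath.endsWith(".json")) return "json";
  if (outputPath.endsWith(".md")) return "md";
  return "md"; // assumed fallback; --html / --json override detection
}
```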

Multi-Machine Sync

gnosys setup remote

Configure or change your remote sync location. An interactive wizard validates the target path and checks SQLite compatibility. Use --path to set the path non-interactively.

Note (v5.4.2+): this command replaces the former gnosys remote configure. The wizard logic is unchanged; it was renamed to fit the gnosys setup <subsection> pattern.

Usage
gnosys setup remote [options]
Options
--path <path>  Path to the remote gnosys.db (NAS or shared drive mount)

gnosys remote status

Show pending local changes, unresolved conflicts, and the timestamp of the last successful sync.

Usage
gnosys remote status [options]
Options
--json  Output as JSON
MCP Tool: gnosys_remote_status

gnosys remote sync

Two-way sync: pushes local changes to the remote, then pulls remote changes back. Detects conflicts automatically. Use --auto for unattended runs (e.g. cron), or --newer-wins to resolve conflicts by modification time without prompting.

Usage
gnosys remote sync [options]
Options
--auto  Run non-interactively; skip conflicts for manual review later
--newer-wins  Auto-resolve conflicts by keeping the more recently modified version
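The --newer-wins policy is simple to state precisely. A minimal TypeScript sketch, assuming a per-memory modification timestamp (the type and function names are illustrative, not the Gnosys API):

```typescript
// Sketch of --newer-wins conflict resolution: when both sides changed
// a memory, keep the copy with the later modification timestamp.
interface MemoryVersion {
  id: string;
  content: string;
  modified: Date;
}

function resolveNewerWins(local: MemoryVersion, remote: MemoryVersion): MemoryVersion {
  // On a timestamp tie, keep the local copy to avoid an unnecessary remote write.
  return remote.modified.getTime() > local.modified.getTime() ? remote : local;
}
```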

gnosys remote push

Push local changes to the remote database only. Does not pull remote changes. Useful when you want a one-way checkpoint before a destructive operation.

Usage
gnosys remote push [options]
Options
--newer-wins  Auto-resolve conflicts by keeping the more recently modified version
MCP Tool: gnosys_remote_push

gnosys remote pull

Pull remote changes into the local database only. Does not push local changes. Useful for bootstrapping a new machine from an existing remote store.

Usage
gnosys remote pull [options]
Options
--newer-wins  Auto-resolve conflicts by keeping the more recently modified version
MCP Tool: gnosys_remote_pull

gnosys remote resolve <memoryId>

Resolve a specific sync conflict for a memory. Choose whether to keep the local or remote version. Without --keep, an AI-mediated diff is shown and you are prompted to decide.

Usage
gnosys remote resolve <memoryId> [options]
Options
--keep <local|remote>  Immediately keep the specified version without prompting
MCP Tool: gnosys_remote_resolve

Network Share Guide

Gnosys supports running your centralized knowledge store on network-mounted paths like Dropbox, iCloud Drive, OneDrive, and NAS volumes. This enables you to access your memory across multiple machines safely.

New to multi-machine setup? See the step-by-step walkthrough in Installation & Setup > Multi-Machine Setup.

Using GNOSYS_GLOBAL (Recommended)

The primary way to share your knowledge store across machines is the GNOSYS_GLOBAL environment variable. Set it in your shell profile (~/.zshrc or ~/.bashrc) to point at a cloud-synced folder containing gnosys.db:

Shell profile (~/.zshrc)
export GNOSYS_GLOBAL="$HOME/Library/Mobile Documents/com~apple~CloudDocs/gnosys"

Common network paths:

  • Dropbox: ~/Dropbox/gnosys/gnosys.db
  • iCloud Drive: ~/Library/Mobile Documents/com~apple~CloudDocs/gnosys/gnosys.db
  • OneDrive: ~/OneDrive/gnosys/gnosys.db
  • NAS (mounted): /mnt/nas/shared/gnosys/gnosys.db

All Gnosys commands and the MCP server will automatically use this path for the global store.

Alternative: --db-path Flag

For sandbox-specific usage or one-off commands, you can pass the path directly:

Usage
gnosys sandbox start --db-path ~/Dropbox/gnosys/gnosys.db

Network-Safe Defaults

Gnosys automatically applies network-safe configuration when detecting a network path:

  • Retries: 5 (handles transient network errors)
  • Retry Delay: 1s (waits between attempts)
  • Busy Timeout: 10s (allows other machines to finish writes)
  • WAL Mode: Auto-enabled for safe concurrent access
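The retry behavior described above amounts to a bounded retry loop with a fixed delay. A minimal TypeScript sketch under those defaults (5 attempts, 1 s apart); the helper name and error handling are illustrative, not the Gnosys internals:

```typescript
// Sketch of the network-safe retry policy: retry a failing operation
// (e.g. a write hitting a transient SQLITE_BUSY on a network mount)
// up to `attempts` times, waiting `delayMs` between tries.
async function withRetries<T>(
  op: () => Promise<T>,
  attempts = 5,
  delayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw lastError;
}
```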

Multi-Machine Concurrent Usage

Gnosys uses SQLite's Write-Ahead Logging (WAL) mode to safely handle concurrent access from multiple machines:

  • Readers on machine A don't block writers on machine B
  • Each machine sees committed changes from other machines
  • Locks are coordinated via filesystem metadata (no central server needed)

Best Practices

Before first use:

  • Ensure the network path is fully mounted on your machine
  • Test write access: touch /path/to/network/.gnosys/test.txt && rm /path/to/network/.gnosys/test.txt

After remounting or network events:

  • Stop the sandbox if path becomes unmounted
  • Remount the path
  • Restart the sandbox: gnosys sandbox stop && gnosys sandbox start

Monitoring Network Health

Use the dashboard to monitor network store status:

gnosys dashboard

Look for store health indicators and retry counts in the dashboard output.

Backup & Restore

Gnosys provides comprehensive backup and restore functionality to protect your knowledge store and enable disaster recovery. Backups are automatic and can be triggered manually or restored on demand.

Creating Backups

Create a manual backup of your centralized database:

Create a backup (saved to ~/.gnosys/backups/)
gnosys backup
Save backup to a custom location
gnosys backup --to ~/backups/my-backup.db

Restoring from Backups

Restore your database from a backup file:

Restore from the latest backup
gnosys restore latest
Restore from a specific backup file
gnosys restore --from ~/backups/my-backup.db

What Gets Backed Up

The following components are included in every backup:

  • Central Database: All memories, relationships, and metadata from ~/.gnosys/gnosys.db
  • Helper Library: Generated helper-*.ts files for agent integration
  • Rules Files: Agent rules (.cursor/rules/gnosys.mdc, CLAUDE.md)
  • Sandbox Log: Historical sandbox activity and performance metrics

Automatic Backups

Gnosys automatically creates daily backups in ~/.gnosys/backups/ using the timestamp format gnosys-YYYY-MM-DD-HH-mm-ss.db. Old backups are automatically pruned after 30 days.
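The documented timestamp format is easy to reproduce. A small TypeScript sketch (the helper name is illustrative; only the gnosys-YYYY-MM-DD-HH-mm-ss.db pattern comes from the documentation):

```typescript
// Build a backup filename matching the documented
// gnosys-YYYY-MM-DD-HH-mm-ss.db pattern from a local Date.
function backupFilename(d: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return (
    `gnosys-${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}` +
    `-${pad(d.getHours())}-${pad(d.getMinutes())}-${pad(d.getSeconds())}.db`
  );
}
```

Because the fields are zero-padded and ordered most-significant first, sorting backup filenames lexically also sorts them chronologically, which makes 30-day pruning a simple comparison.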

Helper Library

The Gnosys helper library provides a simple TypeScript/JavaScript API for agents and applications to interact with your centralized knowledge store. Generate the helper library for your project with gnosys helper generate.

Generating the Helper

Generate a new helper library file for your project:

Generate helper library
gnosys helper generate

This creates gnosys-helper.ts in your project root (or current directory). The file includes TypeScript types and integrates with your project's MCP server configuration.

Using the Helper in Agent Code

Import and use the helper in your agent code:

TypeScript Example
import { gnosys } from "./gnosys-helper";

// Add a new memory
await gnosys.add("New memory content");

// Search memories
const results = await gnosys.recall("search query");

// List all memories
const all = await gnosys.list();

// Update a memory
await gnosys.update("memory-id", "Updated content");

// Delete a memory
await gnosys.delete("memory-id");

Core API

The helper provides these main functions:

API Methods
add(content: string)  Add a new memory to the store
recall(query: string)  Search and recall relevant memories (sub-50ms)
list(limit?: number)  List all memories (optionally with limit)
read(id: string)  Read a specific memory by ID
update(id: string, content: string)  Update an existing memory
delete(id: string)  Delete a memory by ID
search(query: string, options?: SearchOptions)  Advanced search with filters

Integration with Cursor & Claude Code

Use the helper within agent rules to automatically save context and retrieve relevant memories:

Agent Rules Example (CLAUDE.md)
## Using Gnosys Helper

When starting work on a task:

1. Import the helper: `import { gnosys } from "./gnosys-helper";`
2. Recall relevant context: `const context = await gnosys.recall("task topic");`
3. Save decisions: `await gnosys.add("Decision: choose X because Y");`

The helper connects you to the centralized knowledge store across all projects.

TypeScript Types

The helper includes full TypeScript types for memory objects:

Memory Structure
interface Memory {
  id: string;
  title: string;
  content: string;
  category: string;
  tags: string[];
  created: Date;
  modified: Date;
  confidence: number;
  links: string[]; // wikilink references
}

Agent Rules

Agent rules allow you to configure custom behavior and preferences for your AI agent. Rules are context directives that shape how the agent operates within your IDE or application. They're particularly useful for enforcing coding standards, specifying project conventions, and providing the agent with important context about your workflow.

Rules are stored as markdown files in your IDE configuration and are automatically included in the agent's context. Here's how to set them up for your preferred IDE:

Save to .cursor/rules/gnosys.mdc — Uses Cursor's .mdc format with YAML frontmatter for alwaysApply: true.

Save to CLAUDE.md at your project root — Concise imperative format with grouped tool table.