Complete CLI & MCP Reference
Getting Started
gnosys setup
Interactive setup wizard. Configures your LLM provider, API key, model tier, and IDE integration in one guided flow. Also detects existing installations and offers to upgrade registered projects.
gnosys setup [--non-interactive]
--non-interactive Skip prompts, use defaults (for CI/scripting)
Step 1: Select LLM provider (8 providers with pricing)
Step 2: Pick model tier (budget / balanced / premium)
Step 3: Enter API key (saved to ~/.config/gnosys/.env)
Step 4: Configure IDE (Claude Code, Cursor, Codex)
Web knowledge base is set up separately: gnosys web init
gnosys init
Initialize Gnosys in the current directory. Creates project identity, registers in central DB, and sets up store.
gnosys init [options]
Options:
-d, --directory <dir> Target directory (default: cwd)
gnosys config
View and manage LLM provider configuration
gnosys config [command]
Commands:
show Show current LLM configuration
set <key> <value> [extra...] Set a config value
init Generate default gnosys.json
Keys: provider, model, ollama-url, groq-model,
openai-model, lmstudio-url, task <task> <provider> <model>
gnosys stores
Show all active stores, their layers, paths, and permissions
gnosys stores
gnosys doctor
Check system health: stores, LLM connectivity, embeddings, archive
gnosys doctor
gnosys serve
Start the MCP server (stdio mode)
gnosys serve [options]
Options:
--with-maintenance Run maintenance every 6 hours in background
gnosys dashboard
Show system dashboard: memory count, health, graph stats, LLM status
gnosys dashboard [options]
Options:
--json Output as JSON instead of pretty table
Installation & Setup Overview
A complete walkthrough of installing Gnosys and setting it up for AI-assisted development. If you just want IDE-specific instructions, skip to IDE & Agent Setup.
Installation
npm install -g gnosys
One-Time Global Setup (all projects)
This gives every AI coding session access to Gnosys tools. You only need to do this once per machine.
# Add MCP server (Claude Code example)
claude mcp add --scope user gnosys gnosys serve
# Generate global agent rules
gnosys sync --global
gnosys sync --global writes instructions into ~/.claude/CLAUDE.md inside GNOSYS:START/END markers. Any existing content outside those markers is preserved.
Per-Project Setup
Every project you want Gnosys to remember needs a one-time init:
cd your-project
gnosys init
gnosys sync --target claude # or cursor, codex, all
gnosys init creates .gnosys/gnosys.json (project identity), registers the project in the central DB, and creates a local .gnosys/ directory for project configuration. All memories are stored in the central SQLite database; markdown files are only generated on-demand via gnosys export for Obsidian.
gnosys sync writes project-specific rules into your IDE's rules file (e.g. CLAUDE.md, .cursor/rules/gnosys.mdc).
Understanding Memory Scopes
All memories live in one central database at ~/.gnosys/gnosys.db. Each memory has a scope that controls where it appears in search results:
| Scope | What it's for | Visible when | How to create |
|---|---|---|---|
| project (default) | Decisions, architecture, conventions for this specific project | Searching from this project only | gnosys add "chose PostgreSQL" |
| user | Your personal preferences, coding style, workflow conventions | Searching from any of your projects | gnosys add-structured --user ... or gnosys_preference_set |
| global | Shared knowledge, company conventions, team standards | Searching from anywhere | gnosys add-structured --global ... |
When you use gnosys_federated_search, all three scopes are searched with tier boosting: project results get a 1.5x boost, user gets 1.0x, and global gets 0.7x. This means project-specific knowledge ranks highest, but relevant global knowledge still surfaces.
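The tier boosting described above can be sketched in a few lines. The boost factors (1.5 / 1.0 / 0.7) come from this reference; the scoring function itself is an illustrative assumption, not the real gnosys implementation:

```python
# Scope-tier boost factors documented for gnosys_federated_search.
TIER_BOOST = {"project": 1.5, "user": 1.0, "global": 0.7}

def boosted_score(raw_score: float, scope: str) -> float:
    """Scale a raw relevance score by the memory's scope tier."""
    return raw_score * TIER_BOOST[scope]

# A moderately relevant project memory outranks an equally relevant
# global memory, but a strong global hit can still surface.
project_hit = boosted_score(0.5, "project")  # boosted up
global_hit = boosted_score(0.5, "global")    # boosted down
```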
Multi-Machine Setup (iCloud, Dropbox, NAS)
By default the central DB lives at ~/.gnosys/gnosys.db, local to each machine. To share it across machines, point the GNOSYS_GLOBAL environment variable at a cloud-synced folder.
Step 1: Create a folder on your cloud drive (e.g., iCloud Drive/gnosys/)
Step 2: Copy your existing DB there:
cp ~/.gnosys/gnosys.db ~/Library/Mobile\ Documents/com~apple~CloudDocs/gnosys/gnosys.db
Step 3: Set the GNOSYS_GLOBAL environment variable in your shell profile (~/.zshrc or ~/.bashrc):
# iCloud Drive
export GNOSYS_GLOBAL="$HOME/Library/Mobile Documents/com~apple~CloudDocs/gnosys"
# Dropbox
export GNOSYS_GLOBAL="$HOME/Dropbox/gnosys"
# OneDrive
export GNOSYS_GLOBAL="$HOME/OneDrive/gnosys"
Step 4: On your second machine, install Gnosys, set the same GNOSYS_GLOBAL variable, and run gnosys sync --global. The cloud service handles syncing the database file.
Warning: Avoid running heavy writes on both machines simultaneously. SQLite handles concurrent reads well, but simultaneous writes through cloud sync can cause conflicts. In practice this is rarely an issue since you're typically on one machine at a time.
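The path resolution above can be sketched in two shell lines. GNOSYS_GLOBAL is the real variable; the wrapper-script framing is illustrative of the documented fallback, not gnosys internals:

```shell
# Resolve the DB location: GNOSYS_GLOBAL overrides the ~/.gnosys default.
GNOSYS_HOME="${GNOSYS_GLOBAL:-$HOME/.gnosys}"
DB_PATH="$GNOSYS_HOME/gnosys.db"
echo "$DB_PATH"
```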
Upgrading Gnosys
After installing a new version, run gnosys upgrade to re-sync all your projects:
npm install -g gnosys
gnosys upgrade
This re-initializes all registered projects, regenerates agent rules for every detected IDE, updates global rules, and stamps the central DB with the new version. Projects on other machines are reported as "skipped" — run gnosys upgrade on those machines too.
If you share a gnosys.db across machines and one machine upgrades first, every gnosys command on the other machine will warn you to upgrade.
LLM Provider Setup
The easiest way to configure your LLM provider is to run gnosys setup, which walks you through provider selection, model tier, and API key entry.
Gnosys supports 8 LLM providers. LLM features (ingestion structuring, gnosys ask, Dream Mode) require a configured provider. Core memory operations (add, read, search, recall) work without one.
| Provider | Type | Default Model | API Key Env Var |
|---|---|---|---|
| Anthropic | Cloud | claude-sonnet-4-20250514 | GNOSYS_ANTHROPIC_KEY |
| Ollama | Local | llama3.2 | — (runs locally) |
| Groq | Cloud | llama-3.3-70b-versatile | GNOSYS_GROQ_KEY |
| OpenAI | Cloud | gpt-4o-mini | GNOSYS_OPENAI_KEY |
| LM Studio | Local | default | — (runs locally) |
| xAI | Cloud | grok-2 | GNOSYS_XAI_KEY |
| Mistral | Cloud | mistral-large-latest | GNOSYS_MISTRAL_KEY |
| Custom | Any | (user-defined) | GNOSYS_LLM_API_KEY |
API key storage options (one-time, global — configured via gnosys setup):
1. macOS Keychain (recommended — encrypted, no plaintext on disk):
security add-generic-password -a "$USER" -s "GNOSYS_ANTHROPIC_KEY" -w "your-key"
2. Environment variable (shell profile):
echo 'export GNOSYS_ANTHROPIC_KEY=your-key' >> ~/.zshrc && source ~/.zshrc
3. ~/.config/gnosys/.env (least secure — plaintext on disk):
echo 'GNOSYS_ANTHROPIC_KEY=your-key' >> ~/.config/gnosys/.env
Using a custom provider (any OpenAI-compatible API):
gnosys config set provider custom
gnosys config set custom-url https://api.together.xyz/v1
gnosys config set custom-model meta-llama/Llama-3-70b
echo 'GNOSYS_LLM_API_KEY=tok-...' >> ~/.config/gnosys/.env
API keys are resolved in order: macOS Keychain, environment variable, then ~/.config/gnosys/.env. Keys are NOT stored in IDE MCP configs. All storage methods are loaded automatically by both the CLI and MCP server.
Model lists and pricing are fetched live from OpenRouter during setup and cached for 24 hours.
IDE & Agent Setup
Setting up Gnosys requires two things: (1) connecting the MCP server so your agent has access to Gnosys tools, and (2) generating agent rules so your agent knows when to use them. Here's the complete setup for each IDE.
Fast path (v5.4.0+): run gnosys init <ide> from your project directory and Gnosys wires the MCP config for you. Supported IDE arguments: claude-desktop, claude, cursor, codex, gemini-cli, antigravity.
One-time vs. per-project: The IDE wiring writes to a user-level config (e.g. ~/Library/Application Support/Claude/...), so it only needs to happen once. The project registration that gnosys init also performs is per-directory — every codebase you want memory-enabled needs its own gnosys init.
Cowork users: Cowork sessions don't have a working directory of their own — once Gnosys is registered with Claude Desktop, the agent uses projectRoot on each tool call to scope memory to whichever codebase you're discussing. Run gnosys init claude-desktop once globally and gnosys init in each codebase.
Claude Code
Step 1: Install Gnosys and add the MCP server:
npm install -g gnosys
claude mcp add gnosys gnosys serve
Step 2: Generate global agent rules (teaches Claude when to use Gnosys tools in every project):
gnosys sync --global
This writes a GNOSYS:START/END block into ~/.claude/CLAUDE.md. Your own content outside the block is preserved.
Step 3: Initialize each project you want Gnosys to remember:
cd your-project
gnosys init
gnosys sync --target claude
This creates .gnosys/gnosys.json (project identity), registers the project in the central DB, and writes project-specific conventions into CLAUDE.md.
Cursor
Step 1: Install Gnosys and add the MCP config to .cursor/mcp.json in your project root:
{
"mcpServers": {
"gnosys": {
"command": "gnosys",
"args": ["serve"]
}
}
}
Step 2: Initialize the project and generate agent rules:
npm install -g gnosys
cd your-project
gnosys init
gnosys sync --target cursor
This writes the Gnosys tool instructions into .cursor/rules/gnosys.mdc. Cursor loads it automatically.
Codex CLI
Step 1: Install Gnosys and add MCP config to .codex/config.toml:
[mcp.gnosys]
type = "local"
command = ["gnosys", "serve"]
Step 2: Initialize the project and generate agent rules:
npm install -g gnosys
cd your-project
gnosys init
gnosys sync --target codex
This writes agent rules into .codex/gnosys.md.
Claude Desktop — covers Chat, Cowork, and Code
Fast path: from any project directory, run gnosys init claude-desktop — Gnosys writes the right config file for your platform and reminds you to restart Claude Desktop.
Manual: add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS), %APPDATA%\Claude\claude_desktop_config.json (Windows), or ~/.config/Claude/claude_desktop_config.json (Linux):
{
"mcpServers": {
"gnosys": {
"command": "gnosys",
"args": ["serve"]
}
}
}
A single registration covers all three Claude Desktop surfaces: Chat, Cowork, and Code (the version embedded in the desktop app). The MCP tool descriptions guide the agent — no separate rules file needed.
Gemini CLI
Fast path: gnosys init gemini-cli.
Manual: add an mcpServers entry to ~/.gemini/settings.json (preserves your existing settings):
{
"mcpServers": {
"gnosys": {
"command": "gnosys",
"args": ["serve"]
}
}
}
Antigravity
Fast path: gnosys init antigravity.
Manual: add to ~/.gemini/antigravity/mcp_config.json (Antigravity reloads MCP servers when this file changes):
{
"mcpServers": {
"gnosys": {
"command": "gnosys",
"args": ["serve"]
}
}
}
Sync to all detected IDEs at once
If your project has multiple IDE configs, sync them all in one command:
gnosys sync --target all
This detects .cursor/, .claude/ or CLAUDE.md, and .codex/ in the project and writes agent rules to each.
Writing Memories
gnosys add
Add a new memory (uses LLM to structure raw input)
gnosys add [options] <input>
Options:
-a, --author <author> Author (human|ai|human+ai) (default: "human")
--authority <authority> Authority (declared|observed|imported|inferred) (default: "declared")
-s, --store <store> Target store (project|personal|global)
gnosys add-structured
Add a memory with structured input (no LLM needed)
gnosys add-structured [options]
Options:
--title <title> Memory title
--category <category> Category directory name
--content <content> Memory content as markdown
--tags <json> Tags as JSON object (default: "{}")
--relevance <keywords> Keyword cloud for discovery (default: "")
-a, --author <author> Author (default: "human")
--authority <authority> Authority level (default: "declared")
--confidence <n> Confidence 0-1 (default: "0.8")
-s, --store <store> Target store
gnosys commit-context
Pre-compaction sweep: extract atomic memories from a context string, check novelty, commit novel ones
gnosys commit-context [options] <context>
Options:
--dry-run Show what would be committed without writing
-s, --store <store> Target store (project|personal|global)
gnosys bootstrap
Batch-import existing documents into the memory store
gnosys bootstrap [options] <sourceDir>
Options:
-p, --pattern <patterns...> File patterns to match (default: **/*.md)
--skip-existing Skip files whose titles already exist
-c, --category <category> Default category (default: "imported")
-a, --author <author> Default author (default: "human")
--authority <authority> Default authority (default: "imported")
--confidence <n> Default confidence 0-1 (default: "0.7")
--preserve-frontmatter Preserve existing YAML frontmatter if present
--dry-run Show what would be imported without writing
-s, --store <store> Target store (project|personal|global)
gnosys import
Bulk import structured data (CSV, JSON, JSONL) into Gnosys memories
gnosys import [options] <fileOrUrl>
Options:
--format <format> Data format: csv, json, jsonl
--mapping <json> Field mapping as JSON
--mode <mode> Processing mode: llm or structured (default: "structured")
--limit <n> Max records to import
--offset <n> Skip first N records
--skip-existing Skip records whose titles already exist
--batch-commit Single git commit for all imports (default: true)
--no-batch-commit Commit each record individually
--concurrency <n> Parallel LLM calls (default: 5)
--dry-run Preview without writing
--store <store> Target store (default: "project")
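A --mapping value maps source fields to memory fields. The field names below are illustrative assumptions — consult your data's actual column names; the memory-side keys mirror the add-structured options:

```json
{
  "title": "name",
  "content": "description",
  "category": "imported",
  "tags": { "source": "crm-export" }
}
```

Pass it inline, e.g. gnosys import contacts.csv --format csv --mapping '{"title": "name", "content": "description"}'.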
Reading & Search
gnosys read
Read a specific memory. Supports layer prefix (e.g., project:decisions/auth.md)
gnosys read <memoryPath>
gnosys discover
Discover relevant memories by keyword. Searches relevance clouds, titles, and tags — returns metadata only, no content.
gnosys discover [options] <query>
Options:
-n, --limit <number> Max results (default: "20")
gnosys search
Search memories by keyword. Use --federated for tier-boosted cross-scope search.
gnosys search [options] <query>
Options:
-n, --limit <number> Max results (default: "20")
gnosys hybrid-search
Search using hybrid keyword + semantic fusion (RRF). Use --federated for cross-scope.
gnosys hybrid-search [options] <query>
Options:
-l, --limit <n> Max results (default: "15")
-m, --mode <mode> Search mode: keyword | semantic | hybrid (default: "hybrid")
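Reciprocal rank fusion, the fusion method named above, merges the keyword and semantic rankings by summed reciprocal rank. k=60 is the conventional constant from the RRF literature; whether gnosys uses that exact value is an assumption:

```python
def rrf_fuse(keyword_ranking: list[str], semantic_ranking: list[str],
             k: int = 60) -> list[str]:
    """Merge two ranked ID lists; score(id) = sum over lists of 1/(k + rank)."""
    scores: dict[str, float] = {}
    for ranking in (keyword_ranking, semantic_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks high in both lists, so it wins the fused ranking.
fused = rrf_fuse(["a", "b", "c"], ["b", "c", "a"])
```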
gnosys semantic-search
Search using semantic similarity only (requires embeddings)
gnosys semantic-search [options] <query>
Options:
-l, --limit <n> Max results (default: "15")
gnosys ask
Ask a natural-language question and get a synthesized answer with citations
gnosys ask [options] <question>
Options:
-l, --limit <n> Max memories to retrieve (default: "15")
-m, --mode <mode> Search mode: keyword | semantic | hybrid (default: "hybrid")
--no-stream Disable streaming output
gnosys recall
Always-on memory recall — injects most relevant memories as context. Sub-50ms, no LLM needed. Used by agent orchestrators for automatic memory injection.
gnosys recall <query> [--limit <n>] [--aggressive] [--no-aggressive] [--trace-id <id>] [--json] [--host]
--limit <n> | Max memories to return (default from config) |
--aggressive | Force aggressive mode (inject even medium-relevance memories) |
--no-aggressive | Force filtered mode (hard cutoff at minRelevance) |
--trace-id <id> | Trace ID for audit correlation |
--json | Output raw JSON instead of formatted text |
--host | Output in host-friendly <gnosys-recall> format (default for MCP) |
gnosys fsearch
Federated search across all scopes with tier boosting. Current project memories rank highest (1.8x), then project (1.5x), user (1.0x), global (0.7x). Includes recency and reinforcement boosting.
gnosys fsearch <query> [--limit <n>] [-d <directory>] [--no-global] [--json]
Organization
gnosys list
List all memories across all stores
gnosys list [options]
Options:
-c, --category <category> Filter by category
-t, --tag <tag> Filter by tag
-s, --store <store> Filter by store layer
gnosys lens
Filtered view of memories. Combine criteria to focus on what matters.
gnosys lens [options]
Options:
-c, --category <category> Filter by category
-t, --tag <tags...> Filter by tag(s)
--match <mode> Tag match mode: any or all (default: "any")
--status <statuses...> Filter by status (active, archived, superseded)
--author <authors...> Filter by author (human, ai, human+ai)
--authority <authorities...> Filter by authority (declared, observed, imported, inferred)
--min-confidence <n> Minimum confidence (0-1)
--max-confidence <n> Maximum confidence (0-1)
--created-after <date> Created after ISO date
--created-before <date> Created before ISO date
--modified-after <date> Modified after ISO date
--modified-before <date> Modified before ISO date
--or Combine filters with OR instead of AND
gnosys tags
List all tags in the registry
gnosys tags
gnosys tags-add
Add a new tag to the registry
gnosys tags-add [options]
Options:
--category <category> Tag category (domain, type, concern, status_tag)
--tag <tag> The new tag to add
gnosys links
Show wikilinks for a memory — both outgoing [[links]] and backlinks from other memories
gnosys links <memoryPath>
gnosys graph
Show the full cross-reference graph across all memories
gnosys graph
History & Updates
gnosys update
Update an existing memory's frontmatter and/or content
gnosys update [options] <memoryPath>
Options:
--title <title> New title
--status <status> New status (active|archived|superseded)
--confidence <n> New confidence (0-1)
--relevance <keywords> Updated relevance keyword cloud
--supersedes <id> ID of memory this supersedes
--superseded-by <id> ID of memory that supersedes this one
--content <content> New markdown content (replaces body)
gnosys reinforce
Signal whether a memory was useful, not relevant, or outdated
gnosys reinforce [options] <memoryId>
Options:
--signal <signal> Reinforcement signal (useful|not_relevant|outdated)
--context <context> Why this signal was given
gnosys stale
Find memories not modified within a given number of days
gnosys stale [options]
Options:
-d, --days <number> Days threshold (default: "90")
-n, --limit <number> Max results (default: "20")
gnosys history
Show version history for a memory (git-backed)
gnosys history [options] <memoryPath>
Options:
-n, --limit <number> Max entries (default: "20")
--diff <hash> Show diff from this commit to current
gnosys rollback
Rollback a memory to its state at a specific commit
gnosys rollback <memoryPath> <commitHash>
gnosys timeline
Show when memories were created and modified over time
gnosys timeline [options]
Options:
-p, --period <period> Group by: day, week, month, year (default: "month")
gnosys stats
Show summary statistics for the memory store
gnosys stats
Web Knowledge Base
gnosys web init
Interactive setup for web knowledge base. Creates the /knowledge/ directory, configures gnosys.json, and guides you through sitemap URL, LLM enrichment, and CI/CD environment variable setup. No API keys are stored in your project.
gnosys web init [options]
Options:
--source <type> Source type: sitemap, directory, urls (default: "sitemap")
--output <dir> Output directory (default: "./knowledge")
--non-interactive Skip prompts, use defaults
--no-config Skip gnosys.json modification
--json Output as JSON
Step 1: Enter sitemap URL (deployed site or localhost)
Step 2: Enable LLM enrichment (or use free TF-IDF mode)
Step 3: Configure provider and set CI/CD env var name (e.g. GNOSYS_ANTHROPIC_KEY)
API keys are NEVER stored in your project. They are read from environment variables at build time (CI/CD secrets, hosting env).
gnosys web init is interactive and helps configure the LLM provider for web builds. Set the same GNOSYS_*_KEY environment variables as CI/CD secrets in your hosting platform (Vercel, Netlify, GitHub Actions, etc.).
gnosys web ingest
Crawl the configured source and generate knowledge markdown files. Supports sitemap XML, local directories, and URL lists.
gnosys web ingest [options]
Options:
--source <url> Override sitemap URL or content directory
--prune Remove orphaned knowledge files
--no-llm Force structured mode (no LLM)
--concurrency <n> Parallel processing limit (default: "3")
--dry-run Show what would change without writing files
--verbose Print per-page details
--json Output results as JSON
gnosys web build-index
Generate search index JSON from the knowledge directory. Builds a TF-IDF weighted inverted index for zero-dependency runtime search.
gnosys web build-index [options]
Options:
--output <path> Output path for the index JSON
--json Output as JSON
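A TF-IDF weighted inverted index of the kind build-index produces can be sketched as follows. The whitespace tokenizer and the posting-list structure are assumptions for illustration — the real index format is not documented here:

```python
import math
from collections import Counter

def build_index(docs: dict[str, str]) -> dict[str, list[tuple[str, float]]]:
    """Map each term to (doc_id, tf-idf weight) postings."""
    n = len(docs)
    tokenized = {doc_id: Counter(text.lower().split())
                 for doc_id, text in docs.items()}
    # Document frequency: how many docs contain each term.
    df: Counter = Counter()
    for counts in tokenized.values():
        df.update(counts.keys())
    index: dict[str, list[tuple[str, float]]] = {}
    for doc_id, counts in tokenized.items():
        total = sum(counts.values())
        for term, tf in counts.items():
            idf = math.log(n / df[term])  # 0 for terms in every doc
            index.setdefault(term, []).append((doc_id, (tf / total) * idf))
    return index

idx = build_index({"a.md": "sqlite memory store", "b.md": "memory scopes"})
```

Terms that appear in every document get zero weight, so common words contribute nothing at query time.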
gnosys web build
Run ingest + build-index in one shot. The primary command for CI/CD pipelines and build scripts.
gnosys web build [options]
Options:
--source <url> Override sitemap URL or content directory
--prune Remove orphaned knowledge files
--no-llm Force structured mode (no LLM)
--concurrency <n> Parallel processing limit (default: "3")
--dry-run Show what would change without writing files
--verbose Print per-page details
--json Output results as JSON
gnosys web add
Ingest a single URL into the knowledge base. Fetches the page, generates a knowledge markdown file, and rebuilds the search index.
gnosys web add <url> [options]
Options:
--no-llm Force structured mode (no LLM)
--json Output as JSON
gnosys web remove
Remove a knowledge file and rebuild the search index.
gnosys web remove <filepath> [options]
Options:
--json Output as JSON
gnosys web update
Re-ingest a URL or refresh a knowledge file, then rebuild the index. Uses content hashing to detect changes.
gnosys web update <urlOrPath> [options]
Options:
--no-llm Force structured mode (no LLM)
--json Output as JSON
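The change detection described above can be sketched with a content hash. SHA-256 is an assumption here — the docs say only that content hashing is used, not which algorithm:

```python
import hashlib

def content_changed(new_content: str, stored_hash: str) -> bool:
    """Re-ingest only when the fetched content's hash differs from the stored one."""
    new_hash = hashlib.sha256(new_content.encode("utf-8")).hexdigest()
    return new_hash != stored_hash

# Hash recorded at last ingest:
stored = hashlib.sha256(b"page body").hexdigest()
```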
gnosys web status
Show the current state of the web knowledge base — file counts, index size, last build time, and configuration.
gnosys web status [options]
Options:
--json Output as JSON
Process Tracing
gnosys trace
Trace a codebase and store procedural 'how' memories with call-chain relationships (leads_to, follows_from, requires)
gnosys trace <directory> [options]
Options:
--max-files <n> Maximum number of source files to scan (default: "500")
--project-id <id> Project ID to associate memories with
--json Output as JSON
gnosys reflect
Reflect on an outcome to update memory confidence and create relationships. Supports --memory-ids and --failure flags.
gnosys reflect <outcome> [options]
Options:
--memory-ids <ids> Comma-separated list of memory IDs to relate to
--failure Mark this as a failure (default: success)
--notes <text> Additional notes about the outcome
--confidence-delta <n> Custom confidence delta (e.g. 0.1 or -0.2)
--json Output as JSON
gnosys traverse
Traverse relationship chains starting from a memory (BFS, depth-limited). Walk leads_to, follows_from, requires edges.
gnosys traverse <memoryId> [options]
Options:
-d, --depth <n> Maximum traversal depth (default: 3, max: 10)
--rel-types <types> Comma-separated relationship types to follow (e.g. leads_to,requires)
--json Output as JSON
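The depth-limited BFS named above can be sketched as follows; the adjacency-dict edge representation is an assumption for illustration:

```python
from collections import deque

def traverse(edges: dict[str, list[str]], start: str,
             max_depth: int = 3) -> list[str]:
    """Return memory IDs reachable from start within max_depth hops, BFS order."""
    seen = {start}
    queue = deque([(start, 0)])
    order = []
    while queue:
        node, depth = queue.popleft()
        order.append(node)
        if depth == max_depth:
            continue  # don't expand past the depth limit
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return order

# A leads_to chain m1 -> m2 -> m3 -> m4, walked to depth 2:
chain = traverse({"m1": ["m2"], "m2": ["m3"], "m3": ["m4"]}, "m1", max_depth=2)
```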
Agent Integration
gnosys sandbox start
Start the Gnosys sandbox background process (Unix socket server with Dream Mode scheduler)
gnosys sandbox start [options]
Options:
--persistent Keep running across reboots (future use)
--db-path <path> Custom database directory
--json Output as JSON
gnosys sandbox stop
Stop the Gnosys sandbox background process
gnosys sandbox stop [--json]
gnosys sandbox status
Check if the Gnosys sandbox is running. Shows PID and socket path when active.
gnosys sandbox status [--json]
gnosys helper generate
Generate a gnosys-helper.ts file in the current directory (or specified directory) for agent integration
gnosys helper generate [options]
Options:
-d, --directory <dir> Target directory (default: cwd)
--json Output as JSON
System & Maintenance
gnosys upgrade
Re-initialize all registered projects after a version upgrade. Updates agent rules, project registry, and stamps the central DB with the current version. Also syncs the file-based project registry with the central DB so new projects are picked up automatically.
# After updating Gnosys
npm install -g gnosys
gnosys upgrade
1. Reads all registered projects (file registry + central DB)
2. Re-syncs project identity with central DB for each local project
3. Regenerates agent rules for all detected IDEs (Claude, Cursor, Codex)
4. Updates global agent rules (~/.claude/CLAUDE.md)
5. Stamps the central DB with the current version and machine hostname
6. Reports which projects were upgraded, skipped (not on this machine), or failed
When using a shared gnosys.db (iCloud, Dropbox, NAS), the upgrade
command tracks which machines have upgraded and which are behind.
If another machine upgrades first, you'll see a warning on every command:
⚠ Gnosys DB was upgraded to v4.2.1 by another-machine.local.
You are running v4.1.4. Run: npm install -g gnosys && gnosys upgrade
Projects that only exist on other machines are reported as "skipped"
so you know to run gnosys upgrade on those machines too.
gnosys maintain
Run vault maintenance: detect duplicates, apply confidence decay, consolidate similar memories
gnosys maintain [options]
Options:
--dry-run Show what would change without modifying
--auto-apply Automatically apply all changes (no prompts)
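Confidence decay can be sketched as an exponential half-life. Both the model and the 180-day parameter are assumptions for illustration — the docs state that decay happens, not its formula:

```python
def decayed_confidence(confidence: float, days_idle: int,
                       half_life_days: float = 180.0) -> float:
    """Halve confidence every half_life_days without modification or reinforcement."""
    return confidence * 0.5 ** (days_idle / half_life_days)

# A 0.8-confidence memory untouched for one half-life drops to 0.4:
c = decayed_confidence(0.8, days_idle=180)
```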
gnosys reindex
Rebuild all semantic embeddings from every memory file. Downloads the model (~80 MB) on first run.
gnosys reindex
gnosys reindex-graph
Build or rebuild the wikilink graph (.gnosys/graph.json)
gnosys reindex-graph
gnosys dearchive
Force-dearchive memories matching a query from archive.db back to active.
gnosys dearchive <query> [--limit <n>]
--limit <n> | Max memories to dearchive (default: 5) |
gnosys setup dream v5.4.2+
Configure Dream Mode interactively: enable/disable, designate which machine hosts the scheduler, pick provider/model with live API validation, set idle threshold and max runtime, toggle sub-tasks (selfCritique / generateSummaries / discoverRelationships).
In a multi-machine setup, only the designated machine arms its scheduler — others stay quiet. Run this on the machine you want to host dream cycles.
gnosys setup dream
gnosys dream
Run a Dream Mode cycle now (manual trigger). Performs the same 4 phases as the scheduler: confidence decay, self-critique, summary generation, and relationship discovery. Never deletes — only suggests reviews.
gnosys dream [--max-runtime <minutes>] [--no-critique] [--no-summaries] [--no-relationships] [--json]
--max-runtime <min> | Maximum runtime in minutes (default: 30) |
--no-critique | Skip self-critique phase |
--no-summaries | Skip summary generation phase |
--no-relationships | Skip relationship discovery phase |
--json | Output dream report as JSON |
gnosys dream log v5.4.2+
Show recent dream runs from the audit log. Each row pairs a dream_start with its dream_complete entry and lists per-phase counts. --failures-only filters to runs with errors or unreachable provider.
gnosys dream log [--last N] [--since YYYY-MM-DD] [--failures-only] [--json]
gnosys export
Export gnosys.db to an Obsidian-compatible vault. Creates category folders with YAML frontmatter .md files, [[wikilinks]], summaries, reviews, and graph data.
gnosys export --to <dir> [--all] [--overwrite] [--no-summaries] [--no-reviews] [--no-graph] [--json]
--to <dir> | Target directory for export (required) |
--all | Include summaries, reviews, and graph data |
--overwrite | Overwrite existing files in target directory |
--no-summaries | Exclude Dream Mode summaries |
--no-reviews | Exclude review suggestions |
--no-graph | Exclude relationship graph data |
--json | Output export report as JSON |
gnosys audit
View the structured audit trail of memory operations. Shows reads, writes, recalls, maintenance, and more.
gnosys audit [--days <n>] [--operation <op>] [--limit <n>] [--json]
--days <n> | Show entries from the last N days (default: 7) |
--operation <op> | Filter by operation type (read, write, recall, etc.) |
--limit <n> | Max entries to show |
--json | Output raw JSON instead of formatted timeline |
gnosys migrate
Migrate data. Use --to-central to move per-project stores into the central DB.
gnosys migrate [--dry-run] [--to-central] [--json]
--dry-run | Preview what would be migrated without making changes |
--to-central | Migrate per-project data into central DB |
--json | Output migration report as JSON |
gnosys backup
Create a backup of the central Gnosys database. Backups are stored in ~/.gnosys/backups/ with automatic daily snapshots.
gnosys backup [--json]
gnosys restore
Restore the central Gnosys database from a backup file.
gnosys restore <backupFile> [--json]
gnosys projects
List all registered projects in the central database. Shows project name, working directory, and memory count.
gnosys projects [--json]
gnosys register
Register a project directory in the central database. Auto-detects project identity from .git, package.json, Cargo.toml, etc.
gnosys register [-d <directory>] [--json]
gnosys pref
Manage user-level preferences that persist across all projects. Preferences are stored as scope='user' memories in the central DB.
gnosys pref set <key> <value>
gnosys pref get [key]
gnosys pref delete <key>
set <key> <value> | Set a user preference. Key should be kebab-case (e.g. 'commit-convention') |
get [key] | Get a preference by key, or list all preferences if no key given |
delete <key> | Delete a user preference |
gnosys sync
Regenerate agent rules files from user preferences and project conventions. Injects a GNOSYS:START/GNOSYS:END block into your IDE's rules file. User content outside the markers is preserved.
gnosys sync [options]
-d, --directory <dir> | Project directory (default: cwd) |
-t, --target <target> | Target: claude, cursor, codex, all, or global |
--global | Sync to global ~/.claude/CLAUDE.md (applies to all projects) |
claude | → CLAUDE.md |
cursor | → .cursor/rules/gnosys.mdc |
codex | → .codex/gnosys.md |
all | → all detected IDEs in the project |
global | → ~/.claude/CLAUDE.md |
gnosys ambiguity
Check if a query matches memories in multiple projects. Useful for detecting cross-project knowledge conflicts.
gnosys ambiguity <query> [--json]
gnosys briefing
Generate a project briefing — categories, recent activity, top tags, and a human-readable summary. Use --all for all registered projects.
gnosys briefing [-p <project-id>] [-a|--all] [-d <directory>] [--json]
gnosys working-set
Show the implicit working set — recently modified memories for the current project. Useful for understanding active context.
gnosys working-set [-d <directory>] [-w <hours>] [--json]
Portfolio & Status
gnosys status
Show project readiness, blockers, and action items. Reads status memories written by agents and renders a human-readable dashboard.
gnosys status [options]
Options:
--global Show status for all registered projects
--web Open the HTML dashboard in your browser
--json Output as JSON
gnosys update-status
Get the guided 8-section checklist prompt for AI agents to write dashboard-compatible status memories. Run this command and paste the output into your agent conversation to trigger a structured status update.
gnosys update-status
gnosys portfolio
Generate a cross-project portfolio dashboard showing all registered projects with readiness scores, blockers, and recent activity. Output format is auto-detected from the file extension.
gnosys portfolio [options]
Options:
--output <file> Output file (auto-detects format: .html, .md, .json)
--html Force HTML output
--json Force JSON output
Multi-Machine Sync
gnosys setup remote
Configure or change your remote sync location. An interactive wizard validates the target path and checks SQLite compatibility. Use --path to set the path non-interactively.
Note (v5.4.2+): the previous gnosys remote configure form was removed. The wizard logic is unchanged — just renamed to fit the gnosys setup <subsection> pattern.
gnosys setup remote [options]
Options:
--path <path> Path to the remote gnosys.db (NAS or shared drive mount)
gnosys remote status
Show pending local changes, unresolved conflicts, and the timestamp of the last successful sync.
gnosys remote status [options]
Options:
--json Output as JSON
gnosys remote sync
Two-way sync: pushes local changes to the remote, then pulls remote changes back. Detects conflicts automatically. Use --auto for unattended runs (e.g. cron), or --newer-wins to resolve conflicts by modification time without prompting.
gnosys remote sync [options]
Options:
--auto Run non-interactively; skip conflicts for manual review later
--newer-wins Auto-resolve conflicts by keeping the more recently modified version
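For unattended syncing, --auto pairs naturally with cron. A minimal sketch of a nightly job follows; the 02:30 schedule, the PATH line, and the log location are illustrative, so adjust them for your machine:

```
# crontab entry: run a non-interactive two-way sync every night at 02:30.
# Conflicts are skipped and left for later manual review (gnosys remote resolve).
PATH=/usr/local/bin:/usr/bin:/bin
30 2 * * * gnosys remote sync --auto >> ~/.gnosys/sync.log 2>&1
```

After an unattended run, check `gnosys remote status` for any conflicts that were skipped.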
gnosys remote push
Push local changes to the remote database only. Does not pull remote changes. Useful when you want a one-way checkpoint before a destructive operation.
gnosys remote push [options]
Options:
--newer-wins Auto-resolve conflicts by keeping the more recently modified version
gnosys remote pull
Pull remote changes into the local database only. Does not push local changes. Useful for bootstrapping a new machine from an existing remote store.
gnosys remote pull [options]
Options:
--newer-wins Auto-resolve conflicts by keeping the more recently modified version
gnosys remote resolve <memoryId>
Resolve a specific sync conflict for a memory. Choose whether to keep the local or remote version. Without --keep, an AI-mediated diff is shown and you are prompted to decide.
gnosys remote resolve <memoryId> [options]
Options:
--keep <local|remote> Immediately keep the specified version without prompting
Backup & Restore
Gnosys provides comprehensive backup and restore functionality to protect your knowledge store and enable disaster recovery. Backups run automatically on a daily schedule, and can also be created manually or restored on demand.
Creating Backups
Create a manual backup of your centralized database:
gnosys backup
gnosys backup --to ~/backups/my-backup.db
Restoring from Backups
Restore your database from a backup file:
gnosys restore latest
gnosys restore --from ~/backups/my-backup.db
What Gets Backed Up
The following components are included in every backup:
- Central Database: All memories, relationships, and metadata from ~/.gnosys/gnosys.db
- Helper Library: Generated helper-*.ts files for agent integration
- Rules Files: Agent rules (.cursor/rules/gnosys.mdc, CLAUDE.md)
- Sandbox Log: Historical sandbox activity and performance metrics
Automatic Backups
Gnosys automatically creates daily backups in ~/.gnosys/backups/ using the timestamp format gnosys-YYYY-MM-DD-HH-mm-ss.db. Old backups are automatically pruned after 30 days.
Helper Library
The Gnosys helper library provides a simple TypeScript/JavaScript API for agents and applications to interact with your centralized knowledge store. Generate the helper library for your project with gnosys helper generate.
Generating the Helper
Generate a new helper library file for your project:
gnosys helper generate
This creates gnosys-helper.ts in your project root (or current directory). The file includes TypeScript types and integrates with your project's MCP server configuration.
Using the Helper in Agent Code
Import and use the helper in your agent code:
import { gnosys } from "./gnosys-helper";
// Add a new memory
await gnosys.add("New memory content");
// Search memories
const results = await gnosys.recall("search query");
// List all memories
const all = await gnosys.list();
// Update a memory
await gnosys.update("memory-id", "Updated content");
// Delete a memory
await gnosys.delete("memory-id");
Core API
The helper provides these main functions:
add(content: string) | Add a new memory to the store |
recall(query: string) | Search and recall relevant memories (sub-50ms) |
list(limit?: number) | List all memories (optionally with limit) |
read(id: string) | Read a specific memory by ID |
update(id: string, content: string) | Update an existing memory |
delete(id: string) | Delete a memory by ID |
search(query: string, options?: SearchOptions) | Advanced search with filters |
Integration with Cursor & Claude Code
Use the helper within agent rules to automatically save context and retrieve relevant memories:
## Using Gnosys Helper
When starting work on a task:
1. Import the helper: `import { gnosys } from "./gnosys-helper";`
2. Recall relevant context: `const context = await gnosys.recall("task topic");`
3. Save decisions: `await gnosys.add("Decision: choose X because Y");`
The helper connects you to the centralized knowledge store across all projects.
TypeScript Types
The helper includes full TypeScript types for memory objects:
interface Memory {
  id: string;
  title: string;
  content: string;
  category: string;
  tags: string[];
  created: Date;
  modified: Date;
  confidence: number;
  links: string[]; // wikilink references
}
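As a sketch of working with these types, the snippet below filters a list of Memory objects by tag and sorts by modification time. The interface is copied from above so the example is self-contained; the sample data is invented for illustration.

```typescript
// Memory interface as defined by the helper library (copied from above).
interface Memory {
  id: string;
  title: string;
  content: string;
  category: string;
  tags: string[];
  created: Date;
  modified: Date;
  confidence: number;
  links: string[]; // wikilink references
}

// Sample data, invented purely for illustration.
const memories: Memory[] = [
  {
    id: "m1", title: "Auth decision", content: "Use JWT for session tokens",
    category: "decision", tags: ["auth", "api"],
    created: new Date("2024-01-10"), modified: new Date("2024-03-01"),
    confidence: 0.9, links: ["[[api-design]]"],
  },
  {
    id: "m2", title: "DB choice", content: "SQLite for the local store",
    category: "decision", tags: ["storage"],
    created: new Date("2024-02-05"), modified: new Date("2024-02-20"),
    confidence: 0.8, links: [],
  },
];

// IDs of memories tagged "auth", most recently modified first.
const recentAuth = memories
  .filter((m) => m.tags.includes("auth"))
  .sort((a, b) => b.modified.getTime() - a.modified.getTime())
  .map((m) => m.id);

console.log(recentAuth); // prints [ 'm1' ]
```

The same filter/sort pattern applies to whatever `gnosys.list()` or `gnosys.search()` returns, since both yield Memory-shaped objects.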
Agent Rules
Agent rules allow you to configure custom behavior and preferences for your AI agent. Rules are context directives that shape how the agent operates within your IDE or application. They're particularly useful for enforcing coding standards, specifying project conventions, and providing the agent with important context about your workflow.
Rules are stored as markdown files in your IDE configuration and are automatically included in the agent's context. Here's how to set them up for your preferred IDE:
Save to .cursor/rules/gnosys.mdc — Uses Cursor's .mdc format with YAML frontmatter for alwaysApply: true.
Save to CLAUDE.md at your project root — Concise imperative format with grouped tool table.
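As an illustration of Cursor's format, a minimal gnosys.mdc might look like the following. The frontmatter keys (description, alwaysApply) are Cursor's; the rule text itself is only an example:

```
---
description: Gnosys knowledge-store usage
alwaysApply: true
---

# Gnosys
- Before starting a task, recall relevant project context with the gnosys tools.
- After making a significant decision, save it as a memory with a short rationale.
```

With alwaysApply set to true, Cursor includes the rule in every agent conversation for the project.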