Get started with Nemory
Low-friction context for high-performing agents!
Nemory builds a unified, searchable graph of your business data by extracting contextual information from your data sources. It supports databases and dbt projects today, with file connectivity and more coming soon.
This guide takes 5-10 minutes: you'll install Nemory, run the sample e-commerce project, build a context artifact, and use it dynamically in your AI agent via a local MCP server.
Instructions for Windows are not available at this time. We’re working on adding them soon.
Nemory is in closed alpha. Access is limited, features may change, and it is not intended for production use.
Stuck or have questions? Join us on Discord
Before you start
Unix / macOS
- Make sure your bash version is 4.2 or newer:

```shell
bash --version
```

- To evaluate the context built with Nemory, you need access to an LLM.
Ollama

If you haven’t already, download Ollama, launch it, and download a model. The sample project has been tested with gpt-oss:20b.

Note that models can be large, often over 10 GB.

- To use the context in Claude or Cursor through a local Nemory MCP server, make sure you have the tool installed and an active license.
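The Ollama step above can be scripted. A minimal sketch using Ollama's standard CLI (`ollama pull`, `ollama list`); it skips gracefully if Ollama isn't installed yet:

```shell
# Pull the model the sample project was tested with (a large download, often over 10 GB).
if command -v ollama >/dev/null 2>&1; then
  ollama pull gpt-oss:20b
  ollama list
else
  echo "ollama not installed - download it from ollama.com first"
fi
```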
Install Nemory
- To download and install the latest version of Nemory with its dependencies, run:

Unix / macOS

```shell
curl -L https://raw.githubusercontent.com/JetBrains/nemory-releases/refs/heads/main/install.sh | sh
```

- When prompted, review and accept the Terms and Conditions.

- When prompted, choose whether to add Nemory to your PATH. We recommend doing so, as it makes Nemory easier to run.

Nemory is installed to cli/bin/ in the directory you ran the installation command from.
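If you declined the PATH prompt, you can still add Nemory to your PATH manually. A sketch assuming you installed from your home directory; adjust the path if you installed elsewhere:

```shell
# Add the install location to PATH for this session (persist it in ~/.zshrc or ~/.bashrc).
export PATH="$HOME/cli/bin:$PATH"

# Confirm the shell can now find the binary.
if command -v nemory >/dev/null 2>&1; then
  echo "nemory found at $(command -v nemory)"
else
  echo "nemory not found - check the directory you installed from"
fi
```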
Download the sample project
You can use the sample project to build a Nemory context artifact and quickly validate the setup and its value before pointing Nemory at your own data.
The sample project is called Toastie winkel; it models a small e-commerce shop with customers and orders.
- Download the sample project from Google Drive.

- (Optional) Get familiar with the sample project. It includes the following assets:

| Asset | Description |
|---|---|
| dbt/ directory | A preconfigured dbt project |
| data/ directory | An embedded DuckDB database |
| nemory_project/ | An initialized Nemory project |
| .ipynb files in the root directory | Jupyter notebooks with questions about the dataset |
Build context
- In Terminal, navigate to the toastie_winkel/nemory_project/ directory.

- To build context from the dbt project and database, run:

```shell
nemory build
```

When you run Nemory for the first time, it also downloads an OpenJDK 25 binary.

What you should see:

- Messages about loading duckdb.yml and dbt artifacts (manifest.json and catalog.json)
- Once the build has finished, Nemory saves the results to the output/ directory. The results of every run are saved to a dedicated directory whose name contains the run timestamp.
- The main context artifact is output/run-timestamp/all_results.yaml.

- Inspect the context artifact and look for populated entries in the DBT and DATABASE sections.
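Because each run gets its own timestamped directory, a quick way to find the newest artifact is to sort output/ by modification time. A small sketch to run from nemory_project/ after a build (`ls -t` and the grep pattern are plain shell, not Nemory features):

```shell
if [ -d output ]; then
  latest=$(ls -t output | head -n 1)      # newest run directory
  echo "Latest run: output/$latest"
  # Spot-check that the DBT and DATABASE sections are populated.
  grep -nE 'DBT|DATABASE' "output/$latest/all_results.yaml" | head
else
  echo "No output/ directory yet - run 'nemory build' first"
fi
```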
Use context
You are now ready to use the Nemory context artifact in two ways:
- Dynamically
Nemory exposes the context through a local MCP Server, so your agent can access the latest context at runtime.
Model Context Protocol (MCP) lets your agent talk to local tools through a lightweight server. For more details on MCP, see the official documentation.
- Statically
You provide the generated context to the AI agent as a static artifact.
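Before wiring the MCP server into a client, you can start it by hand to confirm the binary and project path are correct. The paths below are assumptions based on a home-directory install; the --project-dir flag and mcp subcommand mirror the Claude configuration used later in this guide:

```shell
NEMORY="$HOME/cli/bin/nemory"
PROJECT="$HOME/toastie_winkel/nemory_project/"

if [ -x "$NEMORY" ]; then
  # Starts the MCP server in the foreground; stop it with Ctrl+C.
  "$NEMORY" --project-dir "$PROJECT" mcp
else
  echo "nemory binary not found at $NEMORY - adjust the path to your install"
fi
```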
Cursor and Claude can change their UI and dependencies. Some screenshots or steps in this guide may become outdated. Follow the current UI using the same values, or contact us on Discord for guidance.
Claude
- In Claude desktop, click your name at the bottom left and select Settings.
- Switch to Developer.
- Click Edit config.
Claude will open a new Finder window with the config file.
- Open the config file, add the Nemory MCP details as follows, and save the file:

claude_desktop_config.json

```json
{
  "mcpServers": {
    "nemory-tool-server": {
      "command": "/Users/username/cli/bin/nemory",
      "args": ["--project-dir", "/Users/path-to-sample-project/toastie_winkel/nemory_project/", "mcp"]
    }
  }
}
```

If you ran the command to install Nemory from your home directory, it’s installed in /Users/{username}/cli/bin/. If you installed it from a different directory, you can find it in cli/bin/ in that directory.

- In Claude, open a new chat.
- Enable the toggle next to all-results-tool-server.
- Ask a question that is related to the project context, for example:

  - Find the total amount spent by each customer across all their orders.
  - Write an SQL query that returns the average order value for orders paid by credit card.
  - Which customers have placed more than one order? How many of their orders have shipped?

Claude will use the context built by Nemory to answer your question.
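For comparison, here is roughly the SQL an agent might produce for the first question, runnable against the embedded database with the duckdb CLI. Everything specific here is a guess: the database filename under data/ and the orders table with customer_id and total columns are hypothetical, so read the real names from the context artifact or the dbt project first.

```shell
# Hypothetical filename and schema - adjust to what you find in data/ and the dbt project.
if command -v duckdb >/dev/null 2>&1 && [ -f data/toastie.duckdb ]; then
  duckdb data/toastie.duckdb -c "
    SELECT customer_id, SUM(total) AS total_spent
    FROM orders
    GROUP BY customer_id
    ORDER BY total_spent DESC;"
else
  echo "duckdb CLI or database file not found - skipping"
fi
```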
Evaluate context
Nemory’s Benchmark tool lets you assess the quality of your context by measuring how accurately LLMs answer questions based on it.
This step is optional but highly recommended. Running the benchmark gives you a framework to track performance over time as your business and data evolve.
Based on the results, you can enhance your context by adding new data sources or documentation, or by adjusting semantic information. Then re-run the evaluation to see how the quality improves over time.
- Navigate to the nemory_project/benchmark/ directory and open the models.yml file.

- Modify the LLM configuration for the provider you want to use:

Ollama

benchmark/models.yml

```yaml
- languageModel: OLLAMA
  modelName: "gpt-oss:20b"
  meta:
    url: "http://localhost:11434"
```

You can provide multiple LLM configurations. Later, you can select which of the LLMs to run benchmarking on.
- Start the Benchmark tool:

```shell
nemory benchmark
```

The tool will open in your browser.
- In Model, select the model you want to use to run benchmarking:
- In Context folder, select the context you built.
- Click Generate answers.

Nemory will send the questions and context to the LLM you selected and show you the answers.
- Review the answers and mark whether each one is helpful.
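If answer generation fails, a common cause is that Ollama isn't running. You can check reachability at the URL configured in models.yml; /api/tags, which lists installed models, is part of Ollama's HTTP API:

```shell
if curl -sf --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama is reachable on :11434"
else
  echo "Ollama not reachable - launch the Ollama app and retry"
fi
```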
Next steps
You did it! You now have a working context artifact built from a known dataset and a clear path to apply the same steps to your own use case and environment. Continue with the next steps below to use Nemory on your own data.
- Create a new Nemory project with your own data sources

  Start fresh and point Nemory at your dbt project and databases. Follow our Create project documentation to set up a new Nemory project, run a build, and review the entries.
- Keep improving the context
Run the benchmark on your data’s context and compare results across runs. Follow our Evaluate context documentation for guidance.
- Share it with your team
Export, version, and share your project and context so others can benefit.
Join our Discord server!
Stuck or have questions about Nemory? Join JetBrains’ Discord server to get help, connect with developers, and request features.
Join Discord