
Get started with Nemory

Low-friction context for high-performing agents!

Nemory builds a unified, searchable graph of your business data by extracting contextual information from your data sources. It supports databases and dbt projects today, with file connectivity and more coming soon.

This guide takes 5–10 minutes: you'll install Nemory, run the sample e-commerce project, build a context artifact, and use it dynamically in your AI agent via a local MCP server.

Instructions for Windows are not available at this time. We’re working on adding them soon.

Nemory is in closed alpha. Access is limited, features may change, and it is not intended for production use.

Stuck or have questions? Join us on Discord 

Before you start

  1. Make sure your bash version is 4.2 or newer.

    Terminal
    bash --version
  2. To evaluate the context built with Nemory, you need access to an LLM.

    If you haven’t already, download Ollama, launch it, and download a model. The sample project has been tested with gpt-oss:20b.

    Note that models can be large, often over 10 GB.
  3. To use the context in Claude or Cursor through a local Nemory MCP server, make sure the tool is installed and you have an active license.
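The first two checks above can be scripted. A minimal sketch, assuming a POSIX shell; the gpt-oss:20b model name comes from the sample project, and everything else is standard bash and Ollama usage:

```shell
#!/usr/bin/env bash
# Check the bash version (4.2 or newer is required)
major="${BASH_VERSINFO[0]:-0}"
minor="${BASH_VERSINFO[1]:-0}"
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 2 ]; }; then
  echo "bash $major.$minor OK"
else
  echo "bash $major.$minor is too old; 4.2+ required"
fi

# If the Ollama CLI is installed, pull the model the sample project was tested with
if command -v ollama >/dev/null 2>&1; then
  ollama pull gpt-oss:20b   # models can be large, often over 10 GB
else
  echo "ollama CLI not found; install Ollama first"
fi
```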

Install Nemory

  1. To download and install the latest version of Nemory with its dependencies, run:

    Terminal
    curl -L https://raw.githubusercontent.com/JetBrains/nemory-releases/refs/heads/main/install.sh | sh
  2. When prompted, review and accept the Terms and Conditions.

  3. When prompted, choose whether you want to add Nemory to your PATH. We recommend doing so to make Nemory easier to run.

Nemory will be installed to cli/bin/ in the directory you ran the installation command from.
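To confirm the installation, you can check for the binary and optionally put it on your PATH for the current shell session. A sketch, assuming you ran the installer from your home directory:

```shell
# The install location depends on where you ran the installer;
# $HOME/cli/bin covers only the home-directory case.
if [ -x "$HOME/cli/bin/nemory" ]; then
  echo "nemory installed at $HOME/cli/bin/nemory"
  export PATH="$HOME/cli/bin:$PATH"   # make `nemory` callable in this session
else
  echo "nemory not found under $HOME/cli/bin/"
fi
```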

Download the sample project

You can use the sample project to build a Nemory context artifact and quickly validate the setup and its value before pointing Nemory at your own data.

The sample project is called Toastie winkel, and it models a small e-commerce shop with customers and orders.

  1. Download the sample project from Google Drive.

  2. (Optional) Get familiar with the sample project. It includes the following assets:

    Asset                                Description
    dbt/ directory                       A preconfigured dbt project
    data/ directory                      An embedded DuckDB database
    nemory_project/                      An initialized Nemory project
    .ipynb files in the root directory   Jupyter notebooks with questions about the dataset

Build context

  1. In Terminal, navigate to the toastie_winkel/nemory_project/ directory.

  2. To build context from the dbt project and database, run:

    Terminal
    nemory build

    When you run Nemory for the first time, it also downloads the OpenJDK 25 binary.

What you should see:

  • Messages about loading duckdb.yml and the dbt artifacts (manifest.json and catalog.json)
  • Once the build finishes, Nemory saves the results to the output/ directory.
  • The results of every run are saved to a dedicated directory whose name contains the run timestamp.
  • The main context artifact is output/run-timestamp/all_results.yaml.

  3. Inspect the context artifact and look for populated entries in the DBT and DATABASE sections.
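Since each run writes to a timestamped directory, a small sketch like the following can locate the newest artifact. Run it from nemory_project/; the directory layout is as described above:

```shell
# Print the path of the newest run's context artifact, if any builds exist
latest="$(ls -dt output/*/ 2>/dev/null | head -n 1)"
if [ -n "$latest" ]; then
  echo "Latest context artifact: ${latest}all_results.yaml"
else
  echo "No builds found under output/"
fi
```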

Use context

You are now ready to use the Nemory context artifact in two ways:

  • Dynamically

    Nemory exposes the context through a local MCP Server, so your agent can access the latest context at runtime.

    Model Context Protocol (MCP) lets your agent talk to local tools through a lightweight server. For more details on MCP, see the official documentation.

  • Statically

    You provide the generated context to the AI agent as a static artifact.

Cursor and Claude may change their UI and dependencies, so some screenshots or steps in this guide may become outdated. Follow the current UI using the same values, or contact us on Discord for guidance.

  1. In Claude Desktop, click your name at the bottom left and select Settings.

  2. Switch to Developer.

  3. Click Edit config.

    Claude will open a new Finder window with the config file.

  4. Open the config file and add the Nemory MCP details as follows. Save the file after.

    claude_desktop_config.json
    {
      "mcpServers": {
        "nemory-tool-server": {
          "command": "/Users/username/cli/bin/nemory",
          "args": [
            "--project-dir",
            "/Users/path-to-sample-project/toastie_winkel/nemory_project/",
            "mcp"
          ]
        }
      }
    }

    If you ran the install command from your home directory, Nemory is installed in /Users/{username}/cli/bin/. If you installed it from a different directory, you can find it in cli/bin/ within that directory.

  5. In Claude, open a new chat.

  6. Enable the toggle next to all-results-tool-server.

  7. Ask a question that is related to the project context, for example:

    Find the total amount spent by each customer across all their orders
    Write an SQL query that returns the average order value for orders paid by credit card.
    Which customers have placed more than one order? How many of their orders have been shipped?

    Claude will use the context built by Nemory to answer your question.
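Before asking questions, you can sanity-check the setup from the terminal. A sketch, assuming the usual macOS location of the Claude Desktop config and the placeholder install path from above; adjust both if yours differ:

```shell
# Assumed macOS location of the Claude Desktop config; adjust if needed
config="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
if [ -f "$config" ]; then
  # Validate the JSON syntax (assumes python3 is available)
  python3 -m json.tool "$config" > /dev/null && echo "Config is valid JSON"
else
  echo "Config not found at $config"
fi

# Confirm the binary exists where the config's "command" points
nemory_bin="$HOME/cli/bin/nemory"
if [ -x "$nemory_bin" ]; then
  echo "Found nemory at $nemory_bin"
else
  echo "nemory binary not found at $nemory_bin; check the command path in the config"
fi
```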

Evaluate context

Nemory’s Benchmark tool lets you assess the quality of your context by measuring how accurately LLMs answer questions based on it.

This step is optional but highly recommended. Running the benchmark gives you a framework to track performance over time as your business and data evolve.

Based on the results, you can enhance your data sources by adding new sources or documentation, or by adjusting semantic information. Then re-run the evaluation to see how the quality improves over time.

  1. Navigate to the nemory_project/benchmark/ directory and open the models.yml file.

  2. Modify the LLM configuration for the provider you want to use:

    benchmark/models.yml
    - languageModel: OLLAMA
      modelName: "gpt-oss:20b"
      meta:
        url: "http://localhost:11434"

    You can provide multiple LLM configurations and later select which of them to benchmark.

  3. Start the Benchmark tool:

    Terminal
    nemory benchmark

    The tool will open in your browser.

  4. In Model, select the model you want to use for benchmarking.

    [Screenshot: Nemory benchmark tool]
  5. In Context folder, select the context you built.

  6. Click Generate answers.

    Nemory sends the questions and context to the LLM you selected and shows you the answers.

  7. Review the answers and mark whether each one is helpful.
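As noted in step 2, models.yml can list several providers. A sketch with two Ollama entries; the second model name is only an example, and the field names follow the sample above:

```yaml
- languageModel: OLLAMA
  modelName: "gpt-oss:20b"
  meta:
    url: "http://localhost:11434"
- languageModel: OLLAMA
  modelName: "llama3.1:8b"   # example second model; replace with any model you have pulled
  meta:
    url: "http://localhost:11434"
```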

Next steps

You did it! You now have a working context artifact built from a known dataset and a clear path to applying the same steps to your own use case and environment. Continue with the steps below to use Nemory on your own data.

  1. Create a new Nemory project with your own data sources

    Start fresh and point Nemory at your dbt project and databases. Follow our Create project documentation to set up a new Nemory project, run a build, and review the entries.

  2. Keep improving the context

    Run the benchmark on your data’s context and compare results across runs. Follow our Evaluate context documentation for guidance.

  3. Share it with your team

    Export, version, and share your project and context so others can benefit.

Join our Discord server!

Stuck or have questions about Nemory? Join JetBrains’ Discord server to get help, connect with developers, and request features.

Join Discord