I came across a product the other day which I think may be perfect for AI-related use cases. Ghost, from ghost.build, describes itself as “the first database built for agents.”

Ghost is an “agent-first” Postgres database platform that lets developers and AI agents create, fork, inspect, query, manipulate and delete entire databases with ease. 

It’s also completely free to use.

The developers who built it had a simple idea: if agents are going to build software, test migrations, inspect schemas, run SQL, and experiment with data, then they need databases that are as disposable and programmable as code sandboxes. That is where Ghost fits. 

When you create a Ghost database or fork existing ones, those databases live on Ghost’s Cloud infrastructure, not on your local system. Ghost is especially useful for testing, prototyping, agent workflows, branch databases, migration experiments, and disposable database environments. 

Traditional managed databases are designed around long-lived production infrastructure. You create an instance, configure networking, manage credentials, connect applications, and then treat the database as something valuable and fragile. Ghost keeps the power of Postgres, but adds a workflow that feels much closer to modern agentic development: create a database on demand, fork it when you need an isolated copy, run SQL against it, inspect the schema, try different table configurations, and throw it away when you are done.

This makes Ghost especially well-suited to AI tools like Codex and Claude Code. Those tools can reason about code, write migrations, debug queries, generate seed data, inspect logs, and use MCP tools. Ghost’s built-in MCP server gives Codex direct database management capabilities, rather than forcing the agent to rely on vague instructions, copied connection strings, or manual dashboard work.

In the rest of this article, I’ll take you through how to install Ghost on your local system. We’ll then look at four concrete examples of using Ghost with the Codex agent and its own CLI.

N.B. I have no affiliation or association with the company or team behind Ghost, or with the Ghost product itself.

Prerequisites

I’m assuming you already have one of Ghost’s supported coding agents installed on your system. These include:

Claude Code
Codex
Cursor
Gemini CLI
Google Antigravity
Kiro CLI
VS Code
Windsurf

You will also need a GitHub account, which you must grant Ghost access to when logging in.

Installing Ghost

On Linux, WSL on Windows, or macOS, you can use the following curl command.

$ curl -fsSL https://install.ghost.build | sh

If, like me, you’re on Windows, you can use this command from a PowerShell terminal.

PS C:\Users\thoma> irm https://install.ghost.build/install.ps1 | iex

Then run:

PS C:\Users\thoma> ghost login

Opening browser for authentication...
Found space: nj5scy2orp
Successfully logged in as [email protected]

You’ll be presented with a screen like this. Authorise Ghost to continue.


Once you’re logged in, the next thing you should do is make the Ghost MCP server available to your favourite coding agent. In my case, I’m using Codex.

Type the following into the command line and choose your agent from the displayed list.

PS C:\Users\thoma> ghost mcp install

Select an MCP client to configure:
  1. Claude Code
> 2. Codex
  3. Cursor
  4. Gemini CLI
  5. Google Antigravity
  6. Kiro CLI
  7. VS Code
  8. Windsurf

Typing: 2
Successfully installed Ghost MCP server configuration for codex
Configuration file: C:\Users\thoma\.codex\config.toml

Next steps:
   1. Restart codex to load the new configuration
   2. The Ghost MCP server will be available as 'ghost'

PS C:\Users\thoma>

Check that your setup is OK

Type the following to check that the Ghost MCP server is installed correctly.

PS C:\Users\thoma> ghost mcp list

TYPE    NAME
prompt  design-postgis-tables
prompt  design-postgres-tables
prompt  find-hypertable-candidates
prompt  migrate-postgres-tables-to-hypertables
prompt  pgvector-semantic-search
prompt  postgres
prompt  postgres-hybrid-text-search
prompt  setup-timescaledb-hypertables
tool    ghost_connect
tool    ghost_create
tool    ghost_create_dedicated
tool    ghost_delete
tool    ghost_feedback
tool    ghost_fork
tool    ghost_fork_dedicated
tool    ghost_list
tool    ghost_login
tool    ghost_logs
tool    ghost_password
tool    ghost_pause
tool    ghost_rename
tool    ghost_resume
tool    ghost_schema
tool    ghost_sql
tool    ghost_status
tool    search_docs
tool    view_skill

Check that your agent can use the Ghost MCP server. You should see output similar to this (for Codex).

PS C:\Users\thoma> codex mcp list

Name   Command                                                Args       Env  Cwd  Status   Auth
ghost  C:\Users\thoma\AppData\Local\Programs\Ghost\ghost.exe  mcp start  -    -    enabled  Unsupported

Assuming everything is OK, we can move on to some examples of using Ghost via its CLI and via agents through its MCP server.

Example 1 — Creating a sales data database

Open up your coding agent and type the following into it.

Codex> Create a ghost database called sales_data that stores 10,000 dummy sales 
records. In the database, create whichever tables are suitable for such 
data. Do not include any indexes, primary or foreign keys for now.

I got this output from Codex.

Worked for 1m 32s
Created a fresh Ghost database named sales_data.

It contains 10,000 dummy sales order records and supporting tables:

customers           1,000
products            150
sales_reps          30
sales_orders        10,000
sales_order_items   30,000
Per your instruction, I did not create any:

primary keys
foreign keys
indexes

Verification confirmed there are no primary key constraints, no foreign key 
constraints, and no indexes on the sales tables.

As you can see, on my system this took just over a minute and a half, which is pretty impressive for what was accomplished.
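If you want to verify the agent’s claims yourself, a quick catalog query does the trick. This is a sketch assuming the tables live in the default public schema; it counts only primary and foreign key constraints, since the generated tables do carry CHECK constraints.

```sql
-- Counts should all be zero for the freshly generated sales_data tables.
-- Run via: ghost sql sales_data "<query>"
SELECT
    (SELECT count(*) FROM pg_indexes WHERE schemaname = 'public') AS index_count,
    (SELECT count(*)
       FROM pg_constraint c
       JOIN pg_class t      ON t.oid = c.conrelid
       JOIN pg_namespace n  ON n.oid = t.relnamespace
      WHERE n.nspname = 'public'
        AND c.contype IN ('p', 'f')) AS pk_fk_count;
```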

Example 2 — Using the Ghost CLI

Now that we have a database to play with, let’s try out some of the Ghost CLI commands. Note that you won’t normally need these commands, since everything I’m about to show you can be done via Codex and the Ghost MCP server, but it’s useful to know them anyway.

PS C:\Users\thoma> ghost list

ID          NAME        STATUS   STORAGE
sip3qcs40a  sales_data  running  211MiB

See the database tables and structure:

PS C:\Users\thoma> ghost schema sales_data

DATABASE: sales_data (sip3qcs40a)

TABLE: customers
  customer_id    BIGINT GENERATED ALWAYS AS IDENTITY
  customer_name  TEXT NOT NULL
  email          TEXT NOT NULL
  country_code   TEXT NOT NULL CHECK ((length(country_code) = 2))
  segment        TEXT NOT NULL CHECK ((segment = ANY (ARRAY['consumer'::text, 'small_business'::text, 'enterprise'::text])))
  created_at     TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now()

TABLE: products
  product_id    BIGINT GENERATED ALWAYS AS IDENTITY
  sku           TEXT NOT NULL
  product_name  TEXT NOT NULL
  category      TEXT NOT NULL CHECK ((category = ANY (ARRAY['software'::text, 'hardware'::text, 'services'::text, 'training'::text, 'support'::text])))
  base_price    NUMERIC(10,2) NOT NULL CHECK ((base_price > (0)::numeric))
  active        BOOLEAN NOT NULL DEFAULT true

TABLE: sales_order_items
  sales_order_item_id  BIGINT GENERATED ALWAYS AS IDENTITY
  sales_order_id       BIGINT NOT NULL
  product_id           BIGINT NOT NULL
  quantity             INTEGER NOT NULL CHECK (((quantity >= 1) AND (quantity <= 6)))
  unit_price           NUMERIC(10,2) NOT NULL CHECK ((unit_price > (0)::numeric))
  discount_amount      NUMERIC(10,2) NOT NULL DEFAULT 0 CHECK ((discount_amount >= (0)::numeric))
  line_total           NUMERIC(12,2) NOT NULL CHECK ((line_total >= (0)::numeric))

TABLE: sales_orders
  sales_order_id   BIGINT GENERATED ALWAYS AS IDENTITY
  customer_id      BIGINT NOT NULL
  sales_rep_id     BIGINT NOT NULL
  order_status     TEXT NOT NULL CHECK ((order_status = ANY (ARRAY['pending'::text, 'completed'::text, 'shipped'::text, 'cancelled'::text, 'refunded'::text])))
  ordered_at       TIMESTAMP WITH TIME ZONE NOT NULL
  payment_method   TEXT NOT NULL CHECK ((payment_method = ANY (ARRAY['card'::text, 'bank_transfer'::text, 'paypal'::text, 'invoice'::text])))
  currency_code    TEXT NOT NULL DEFAULT 'USD'::text CHECK ((currency_code = 'USD'::text))
  subtotal_amount  NUMERIC(12,2) NOT NULL DEFAULT 0 CHECK ((subtotal_amount >= (0)::numeric))
  tax_amount       NUMERIC(12,2) NOT NULL DEFAULT 0 CHECK ((tax_amount >= (0)::numeric))
  shipping_amount  NUMERIC(12,2) NOT NULL DEFAULT 0 CHECK ((shipping_amount >= (0)::numeric))
  total_amount     NUMERIC(12,2) NOT NULL DEFAULT 0 CHECK ((total_amount >= (0)::numeric))

TABLE: sales_reps
  sales_rep_id  BIGINT GENERATED ALWAYS AS IDENTITY
  rep_name      TEXT NOT NULL
  region        TEXT NOT NULL CHECK ((region = ANY (ARRAY['north_america'::text, 'europe'::text, 'asia_pacific'::text, 'latin_america'::text])))
  hired_at      DATE NOT NULL

VIEW: pg_buffercache
  bufferid          INTEGER
  relfilenode       OID
  reltablespace     OID
  reldatabase       OID
  relforknumber     SMALLINT
  relblocknumber    BIGINT
  isdirty           BOOLEAN
  usagecount        SMALLINT
  pinning_backends  INTEGER

VIEW: pg_buffercache_numa
  bufferid     INTEGER
  os_page_num  BIGINT
  numa_node    INTEGER

Select some data from the sales_orders table:

PS C:\Users\thoma> ghost sql sales_data "SELECT * FROM sales_orders LIMIT 10;"

 sales_order_id │ customer_id │ sales_rep_id │ order_status │ ordered_at                    │ payment_method │ currency_code │ subtotal_amount │ tax_amount │ shipping_amount │ total_amount
────────────────┼─────────────┼──────────────┼──────────────┼───────────────────────────────┼────────────────┼───────────────┼─────────────────┼────────────┼─────────────────┼──────────────
 1              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 1796.88         │ 143.75     │ 0.00            │ 1940.63
 2              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 1622.63         │ 129.81     │ 0.00            │ 1752.44
 3              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 1867.25         │ 149.38     │ 0.00            │ 2016.63
 4              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 606.02          │ 48.48      │ 0.00            │ 654.50
 5              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 1450.11         │ 116.01     │ 0.00            │ 1566.12
 6              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 1227.85         │ 98.23      │ 0.00            │ 1326.08
 7              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 7086.55         │ 566.92     │ 0.00            │ 7653.47
 8              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 696.92          │ 55.75      │ 0.00            │ 752.67
 9              │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 1063.73         │ 85.10      │ 0.00            │ 1148.83
 10             │ 8           │ 9            │ completed    │ 2024-05-21 05:08:34.332998+00 │ paypal         │ USD           │ 2489.06         │ 199.12     │ 0.00            │ 2688.18
(10 rows)

Worried about your agent trashing your database? Easy: just set Ghost to read-only.

PS C:\Users\thoma> ghost config set read_only true

Now in Codex,

Codex> delete from sales_order_items where sales_order_item_id = 12

Worked for 5s
The DELETE was blocked, as expected:

ERROR: cannot execute DELETE in a read-only transaction (SQLSTATE 25006)

There are a whole bunch of useful commands in the ghost CLI. For a complete list, type ghost --help into your command line.

But, as I said before, one of the main reasons to use Ghost is that we can manipulate databases using our AI agent. Let’s see that in action.

Example 3 — Using the Ghost MCP with our agent

We already saw in Example 1 how quick and easy it is to create a database using Codex, but let’s show how easy it is to experiment with databases using an agent.

Add 500,000 records to the sales_order_items table:

Codex> Add 500,000 dummy sales order records to the sales_order_items table


Added 500,000 dummy rows to sales_data.sales_order_items.

Final counts:

sales_orders        10,000
products            150
sales_order_items   530,000

I also recalculated the 10,000 order totals from the expanded line-item table.

Totals now:

Subtotal sum:      1,530,141,808.73
Tax sum:             124,981,128.08
Total sales:       1,639,903,892.69
Avg order value:         163,990.39

Confirmed the tables still have 0 indexes.
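
Under the hood, the agent most likely used Postgres’s generate_series to create the rows in a single statement. A hand-rolled sketch of the same idea, with simplified (illustrative) column values; the agent’s actual generator was presumably more elaborate:

```sql
-- Insert 500,000 synthetic line items in one statement.
INSERT INTO sales_order_items
    (sales_order_id, product_id, quantity, unit_price, discount_amount, line_total)
SELECT
    1 + (g % 10000),               -- spread across the 10,000 orders
    1 + (g % 150),                 -- spread across the 150 products
    q.quantity,
    q.unit_price,
    0,                             -- no discount on the synthetic rows
    q.quantity * q.unit_price      -- line total consistent with the CHECKs
FROM generate_series(1, 500000) AS g,
LATERAL (SELECT 1 + (g % 6) AS quantity,                         -- 1..6, per CHECK
                round((10 + random() * 490)::numeric, 2) AS unit_price) AS q;
```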
Retrieve a single record from the sales_order_items table:

Codex> Retrieve the sales_order_items record for 
sales_order_item_id = 519932 and tell me how long it took to retrieve

Record found:

sales_order_item_id: 519932
sales_order_id:      9932
product_id:          15
quantity:            3
unit_price:          91.47
discount_amount:     0.00
line_total:          274.41

Timing:

Database execution time:     25.833 ms
Ghost MCP end-to-end time:  146.733 ms

Because there are no indexes, Postgres used a sequential scan and filtered through 529,999 rows to find the record.
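
You can reproduce that measurement yourself from the CLI with EXPLAIN ANALYZE (the exact timing will vary from run to run):

```sql
-- Without an index, the plan shows a Seq Scan over all ~530,000 rows.
-- Run via: ghost sql sales_data "<query>"
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM sales_order_items
WHERE sales_order_item_id = 519932;
```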

Now for some performance tuning:

Codex> Fork 3 duplicate sales_data databases in parallel, naming them 
sales_data-1, etc.... Using differing strategies in each fork, tune the 
database so that the above query runs more quickly. Output your results in 
an ascii table with database name, tuning strategy and database execution time.


I'll create three forks concurrently, then apply a different lookup strategy 
to each fork: a plain btree index, a covering btree index, and a physically 
clustered table plus index. Then I'll measure the same EXPLAIN ANALYZE 
execution time on each.

Used Ghost

Only the source sales_data exists, so there are no fork-name collisions. 
I'm launching the three fork/tune/measure workers now; each worker waits only 
for its own fork before tuning it.

Worked for 2m 40s

Created and tuned three forked databases in parallel.

Baseline on untuned sales_data was previously 25.833 ms with a sequential scan.

+--------------+---------------------------------------------------------------+--------------------------+
| Database     | Tuning strategy                                               | Database execution time  |
+--------------+---------------------------------------------------------------+--------------------------+
| sales_data-1 | Btree index on sales_order_item_id                            | 0.131 ms                 |
| sales_data-2 | Covering btree index including all projected query columns    | 0.143 ms                 |
| sales_data-3 | Hash index on sales_order_item_id for equality lookup         | 0.158 ms                 |
+--------------+---------------------------------------------------------------+--------------------------+

Best result: sales_data-1 with a standard btree index at 0.131 ms.

As you can see, the speedup after applying the tuning was significant, as we would expect: a roughly 200x improvement in query runtime is not too shabby.
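
For reference, the three strategies in the results table boil down to three index definitions, one per fork. A sketch, assuming each fork keeps the same table names; the index names here are my own, not necessarily what the agent chose:

```sql
-- sales_data-1: plain btree index on the lookup column
CREATE INDEX idx_soi_id ON sales_order_items (sales_order_item_id);

-- sales_data-2: covering btree index that INCLUDEs the other projected
-- columns, allowing an index-only scan for the SELECT * lookup
CREATE INDEX idx_soi_covering ON sales_order_items (sales_order_item_id)
    INCLUDE (sales_order_id, product_id, quantity, unit_price,
             discount_amount, line_total);

-- sales_data-3: hash index, usable only for equality lookups like this one
CREATE INDEX idx_soi_hash ON sales_order_items USING hash (sales_order_item_id);
```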

Example 4 — Building a dashboard app for our database

This example shows how easy it is to blend an app built with coding agents with a database built with coding agents.

Codex> Use your performance tuning findings to upgrade the sales_data 
database. Then construct a dashboard using javascript and HTML for the 
database. The dashboard should be dynamic. I want to be able to filter on data 
and the dashboard should only show data that is inside the database.


I'll turn this into a small live dashboard app: an Express backend with 
read-only Postgres queries, and a browser UI that fetches filtered data 
from /api/dashboard. I'll keep the database URL in an environment file 
rather than putting credentials in the HTML.

Ran 3 commands
The workspace already has pg; I'll add express and dotenv, then fetch a 
read-only Ghost connection string for the app.

Ran 1 command, used Ghost
I'll avoid putting the database password into the HTML. The server will get a 
read-only connection from the Ghost CLI at startup if DATABASE_URL is not 
set, and all browser requests will go through parameterized API endpoints.

Ran & 'C:\Users\thoma\AppData\Local\Programs\Ghost\ghost.exe' connect --read-only sales_data
The read-only Ghost connection string works and includes sslmode=require 
plus Ghost's protected read-only option. I'm adding the backend and replacing 
the HTML with a live API-driven dashboard.

The backend API is in place. I'm replacing the dashboard HTML with a 
frontend that calls /api/options and /api/dashboard whenever filters change.

Created 1 file
The live frontend and backend are written. I'm starting the server now and 
checking the API endpoints before I hand you the URL.

The server is now running on port 5101. I'm verifying the browser HTML and a 
filtered API call against the live database.

Worked for 7m 28s
Done. The dashboard is now a live browser app backed by the Ghost sales_data 
database.

Run/open it here:

http://localhost:5101/sales_dashboard.html

What changed:

Added server.js with an Express API.
Updated sales_dashboard.html so filters call the API and redraw charts/tables.
Updated package.json with npm start.
The backend gets a Ghost read-only connection via ghost connect --read-only sales_data.
Browser JavaScript never sees the database password.

When I clicked the HTML link, I got a dynamic dashboard that responded accurately to different filter values.
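
The transcript doesn’t show the backend’s SQL, but the shape of a filtered dashboard query is easy to sketch. The placeholders ($1, $2) would be bound by the Express API as query parameters, never interpolated into the string; the specific filters here are illustrative:

```sql
-- Revenue by payment method, optionally filtered by order status and start date.
SELECT payment_method,
       count(*)          AS orders,
       sum(total_amount) AS revenue
FROM sales_orders
WHERE ($1::text IS NULL OR order_status = $1)
  AND ($2::timestamptz IS NULL OR ordered_at >= $2)
GROUP BY payment_method
ORDER BY revenue DESC;
```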

Image by Author

Summary

Hopefully, I’ve shown you enough of Ghost’s capabilities to whet your appetite to find out more about this useful product.

The ease and speed with which it’s possible to create, destroy, copy and manipulate Postgres databases using this tool is remarkable. 

My favourite aspect of Ghost is how easy it is to run what-if scenarios with real databases. Need some new copies of an existing database to try things out? It’s a snap. You can chop and change indexes and primary keys, or create materialised views, just by describing what you want in plain English using your favourite agentic coding tools, with a fallback to a CLI if you really need to.

As I showed in my final example, because Ghost works so well with agentic coding tools, it’s pretty straightforward to have those agents create code that integrates with Ghost databases to produce useful apps like dashboards and other SaaS-style applications.

Now, although you might want to pause before storing any production data on Ghost, it’s a great platform for Proofs of Concept and general experimentation. And, once you have a database structure you’re happy with using Ghost, replicating that on your production database should be straightforward.

For more information on using Ghost and full documentation, please visit the official home page below.

https://ghost.build/docs/#introduction