Show HN: Pylar – Fix over-querying, data leaks, and governance for AI agents

pylar.ai

1 point by Hoshang07 5 hours ago

Hey HN! We're Hoshang & Vishal, the team behind Pylar, a governed access layer between databases and LLMs. We previously led data and AI teams, and we kept seeing the same problem across teams using LLMs internally: agents are great with unstructured data, but the moment you want them touching your actual systems of record (Snowflake, Postgres, CRMs, product DBs), everything becomes fragile, risky, or outright unsafe.

Two issues show up every single time:

1. Agents over-querying. They don't understand cost, and they'll happily generate queries that blow up your warehouse bill.

2. Accidental data exposure. PII, financials, and customer history leak through prompt injection or poorly scoped access. Most teams we've spoken to don't feel comfortable letting an agent anywhere near production tables.

The options today aren’t great:

Off-the-shelf MCP servers: there are thousands out there, most are too generic for production, and a surprising number are outright malicious.

Hand-rolled API wrappers: they take months to build, spread governance logic across repos, and leave you maintaining a brittle patchwork of endpoints and policies.

ACLs and row-level permissions weren’t designed for autonomous systems. Locking agents down neuters them; opening things up puts your data at risk. We kept seeing this tradeoff.

So we built Pylar.

It sits between your agents and your databases. You connect your sources, create sandboxed SQL views that define exactly what an agent is allowed to see, convert those views into deterministic MCP tools, and publish them to any agent builder through one secure link.
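As a rough sketch of the sandboxed-view idea (using SQLite purely for illustration; the table, columns, and view name here are made up, and this is not Pylar's actual implementation): the agent's connection only ever sees a view that exposes the allowed columns, so raw PII never enters its result sets.

```python
import sqlite3

# Illustrative only: a scoped view as the agent's entire data surface.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    id    INTEGER PRIMARY KEY,
    name  TEXT,
    email TEXT,   -- PII: must never reach the agent
    plan  TEXT,
    mrr   REAL
);
INSERT INTO customers VALUES
    (1, 'Acme', 'cto@acme.com', 'enterprise', 4000.0),
    (2, 'Beta', 'ops@beta.io',  'starter',      99.0);

-- The governed surface: scoped columns only, no email.
CREATE VIEW agent_customers AS
    SELECT id, name, plan, mrr FROM customers;
""")

rows = conn.execute("SELECT name, plan FROM agent_customers").fetchall()
print(rows)  # [('Acme', 'enterprise'), ('Beta', 'starter')]
```

In a real deployment the view would live in the warehouse and the agent's credentials would grant access to the view alone, not the base table.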

From one place, you can:

- Give agents scoped, sandboxed access (never raw tables)

- Apply consistent governance across all data sources

- Get observability into agent behavior and queries

- Contain misuse before it becomes a breach

- Plug into anything: Claude, Cursor, LangGraph, n8n, etc.
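A "deterministic tool" in the sense above can be sketched as a function that runs one fixed, parameterized query against the governed view, so the agent supplies only validated arguments and never writes free-form SQL. This is a hypothetical illustration, not Pylar's implementation or the real MCP SDK; the tool name `lookup_customer_plan` is made up.

```python
import sqlite3

# Illustrative setup: a base table hidden behind a scoped view.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT, plan TEXT);
INSERT INTO customers VALUES (1, 'Acme', 'cto@acme.com', 'enterprise');
CREATE VIEW agent_customers AS SELECT id, name, plan FROM customers;
""")

def lookup_customer_plan(customer_id: int) -> list:
    """Made-up tool: one fixed query, with the argument bound via placeholder."""
    if not isinstance(customer_id, int):
        raise TypeError("customer_id must be an int")
    return conn.execute(
        "SELECT name, plan FROM agent_customers WHERE id = ?", (customer_id,)
    ).fetchall()

print(lookup_customer_plan(1))  # [('Acme', 'enterprise')]
```

Because the SQL text is fixed, the tool's cost and blast radius are bounded up front, which is what makes it safe to hand to an autonomous agent.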

We’ve been working with a few early teams already, across internal analytics agents and customer-facing AI features driven directly by production data.

If you're solving similar problems around safe structured-data access for agents, we'd love your thoughts.

Here are our links:

- Docs: https://docs.pylar.ai

- Website: https://www.pylar.ai

- Demo: https://youtu.be/w8DPxS5RP2Y?si=4xyO_B4UgjPlIFvM

You can try the product on a 14-day trial here: https://app.pylar.ai/signup

We're excited to launch here and get feedback on how we're approaching this.