Show HN: Agent-desktop – Native desktop automation CLI for AI agents

github.com

83 points by lahfir 8 hours ago

I've been building computer-use tools for a while, and I quietly launched this about a month ago (122 stars on GitHub). I figured it was worth sharing here.

Over the last few months, a lot of computer-use agents have come out: Codex, Claude Code, CUA, and others. Most of them seem to work roughly like this:

  1. Take a screenshot
  2. Have the model predict pixel coordinates
  3. Click x,y
  4. Take another screenshot
  5. Repeat

That works, but it's slow, expensive in tokens, and fragile. If the UI shifts a few pixels, things break. And the model still doesn't know what any element actually is.

But the OS already exposes structured UI information:

  - macOS: Accessibility API
  - Windows: UI Automation
  - Linux: AT-SPI

Screen readers have used these APIs for years. On the web, Playwright beat screenshot scraping for the same reason: structured access is just a better abstraction than pixels.

So I built a desktop equivalent: agent-desktop.

It's a cross-platform CLI for structured desktop automation through the accessibility tree. One Rust binary, about 15 MB, no runtime dependencies. It exposes 53 commands with JSON output, so an LLM can inspect and operate native apps without screenshots or vision models. Inspired by agent-browser by Vercel Labs.

A typical loop looks like this:

  agent-desktop snapshot --app Slack -i --compact
  agent-desktop click @e12
  agent-desktop type @e5 "ship it"
  agent-desktop press cmd+return

So the loop becomes:

  1. Snapshot
  2. Decide
  3. Act
  4. Snapshot again
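A thin driver for that loop is easy to sketch. The Python wrapper below is hypothetical (my illustration, not part of the project); it assumes only the commands shown above and that each command prints JSON to stdout:

```python
import json
import subprocess

def build_cmd(*args):
    """Build the argv for one agent-desktop invocation."""
    return ["agent-desktop", *args]

def run(*args):
    """Shell out to the CLI and parse its JSON output.
    Assumes the binary is on PATH and emits JSON on stdout."""
    proc = subprocess.run(build_cmd(*args),
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

def step(app, decide):
    """One snapshot -> decide -> act -> snapshot iteration.
    `decide` maps a snapshot to an action tuple, e.g. ("click", "@e12")
    or ("type", "@e5", "ship it"), or None to do nothing."""
    tree = run("snapshot", "--app", app, "-i", "--compact")
    action = decide(tree)
    if action:
        run(*action)
    return run("snapshot", "--app", app, "-i", "--compact")
```

The same loop works from any language that can spawn a process, which is the point of shipping this as a CLI rather than a library.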

The main design problem was context size.

A naive approach would dump the full accessibility tree into the model, but real apps get huge. Slack can easily exceed 50,000 tokens for a full tree dump, which makes the approach impractical.

The approach I ended up using is progressive skeleton traversal:

  - First pass: return a shallow tree, typically depth 3, with deeper containers truncated and annotated with children_count
  - Named containers get references so the agent can request only that subtree
  - The agent drills down into the relevant region with --root @e3
  - References are scoped and invalidated only for that subtree
  - After acting, the agent can re-query just that region instead of re-snapshotting the whole app
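The pruning itself is independent of any platform API. Here is a toy sketch of the first shallow pass (my illustration, not the tool's actual code; it assumes nodes are dicts with an optional "children" list):

```python
def skeleton(node, depth=3):
    """Depth-limited first pass over an accessibility tree.

    `node` is a dict like {"role": ..., "name": ..., "children": [...]}.
    Subtrees below the depth budget are truncated and annotated with
    children_count so the agent knows where it can drill down.
    """
    children = node.get("children", [])
    out = {k: v for k, v in node.items() if k != "children"}
    if children:
        if depth <= 0:
            out["truncated"] = True
            out["children_count"] = len(children)
        else:
            out["children"] = [skeleton(c, depth - 1) for c in children]
    return out
```

The agent then requests the full subtree of any node marked truncated, which is what drilling down with `--root @e3` does in the CLI.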

In practice, this reduced token usage by about 78% to 96% versus full-tree dumps in Electron apps like Slack, VS Code, and Notion.

A few implementation details that may be interesting here:

  - Rust workspace with strict platform/core separation through a PlatformAdapter trait
  - Accessibility-first activation chain; mouse synthesis is the fallback, not the default
  - Deterministic element refs like @e1, @e2, with optimistic re-identification across UI shifts
  - Structured errors with machine-readable codes plus retry suggestions
  - C ABI via cdylib, so it can be loaded directly from Python, Swift, Go, Node, Ruby, or C without shelling out
  - Batch operations in a single call
  - Support for windows, menus, sheets, popovers, alerts, and notifications
  - Special handling for Chromium/Electron accessibility trees, which can get very deep and noisy
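On the C-ABI point: a cdylib can be loaded in-process from Python with ctypes. Everything below except the ctypes mechanics is an assumption — the library filename, the symbol name `ad_snapshot`, and its signature are placeholders, not the crate's real exported interface:

```python
import ctypes

def load(path="libagent_desktop.dylib"):
    """Load the cdylib and declare one (hypothetical) entry point.
    The symbol name and signature here are illustrative only."""
    lib = ctypes.CDLL(path)
    lib.ad_snapshot.argtypes = [ctypes.c_char_p]  # assumed: app name in
    lib.ad_snapshot.restype = ctypes.c_char_p     # assumed: JSON string out
    return lib
```

The win over shelling out is avoiding a process spawn per call; the same FFI pattern applies from Swift, Go, Node, Ruby, or C.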

Why I think this matters: pixel-based desktop control feels like a leaky abstraction. The OS already knows the UI semantically. Accessibility APIs give you roles, names, actions, hierarchy, focus, selection, and state directly. That seems like a much better substrate for desktop agents than screenshot loops.

If you're building your own desktop agent, internal automation tool, or research prototype, this may be useful.

Install:

  npm install -g agent-desktop
  agent-desktop snapshot --app Finder -i

Repo: https://github.com/lahfir/agent-desktop

I'd especially love feedback from people who've built desktop automation before. What are the biggest pain points you've run into, and what would you want a tool like this to support?

TheFragenTaken 3 hours ago

I've long thought about why the tools we have operate on screenshots, and not the accessibility tree. To me the latter would have seemed like the obvious choice from the beginning (structured data), and yet here we are with pixels. Happy to see progress being made here.

  • tidbeck 2 hours ago

    While the accessibility tree is great in many respects, it has its own limitations, for example when it comes to stacked views or lazy loading outside the viewport.

    • nlitened 1 hour ago

      I think screenshots also don't help with stacked views and lazy loading outside the viewport

jstanley 6 hours ago

lahfir, I vouched your (currently still dead) comment because it was interesting to me.

I expect the reason it is dead is that it seems LLM-generated (you "quietly" launched it on github? Who says that?).

Also, your comment claims that the tool is cross-platform and implies that it works on Mac, Windows, and Linux, but the graphic on the github README says it only works on Mac.

  • nerdsniper 4 hours ago

    It looks hybrid human/LLM at best, but definitely possible that it's mostly human, from someone who is earnestly learning how to use "pitch" language. I got the feeling that some parts, like the bullet points, maybe originated from AI-generated documentation/READMEs.

    My intuition tells me that it could have been AI-generated, but if that's the case then it was heavily edited by a human. I think anyone who went through it for that would have changed other things as well. That's why I suspect it's pseudo-artificial pitch "coded" human writing with some (mostly, lightly edited) copy/paste of AI bullet points.

    Then again, I can't find snippets of this language in the repo, so maybe I'm losing my discernment as LLMs advance (as well as the humans who are learning how to use them).

  • preommr 4 hours ago

    Wouldn't the opposite be true? That an llm would use well-known terms for general purpose writing. I think it's much more likely that a human would remember 'silent' launch, or 'stealth' launch, and use silent as a substitute.

    I feel very strongly that comment wasn't AI generated.

    Also, there's a bunch of normal comments that seem to be wrongfully flagged.

  • vasco 4 hours ago

    3 fake comments in the thread also

  • handfuloflight 4 hours ago

    Why is Claude always pointing out or assuming what is done quietly?

esperent 6 hours ago

Looks interesting but like every single one of these computer use apps I've seen, it's macOS only.

Does anyone know of a linux one?

  • Zetaphor 5 hours ago

    I don't think the accessibility story on Linux is comprehensive enough to make this possible unfortunately. Especially with Wayland. One advantage Mac apps have is they're all targeting the same underlying OS primitives, which is the layer their accessibility platform lives at.

    • tuukkah 4 hours ago

      Quote from a sibling comment:

        - macOS: Accessibility API
        - Windows: UI Automation
    - Linux: AT-SPI

      • Arainach 3 hours ago

        The levels of support are radically different. Compositors, window managers, UI frameworks, and apps all have mixed and inconsistent levels of support such that the overall experience is that you simply cannot rely on using a Linux system via accessibility.

someone654 4 hours ago

Looks very interesting. Especially like that the language environment is abstracted away through the CLI, so you're not stuck with, for example, Python to write your UI logic (or creating your own CLI wrapper around PyAutoGUI).

How can one help with implementing Linux and Windows support?

xnx 3 hours ago

The best desktop automation system would take HDMI input and output USB keystrokes and mouse movements so that it can be plugged into any computer transparently, including work computers.

  • ActorNightly 3 hours ago

    You don't need HDMI out, just the ability to do screenshots, which is easy to script.

    Arguably though, browser automation gets you 95% of the way there for most things.

    • xnx 3 hours ago

      Many systems won't allow the end user to install any software (e.g. work issued laptops), but you can plug in HDMI and USB.

  • lukewarm707 21 minutes ago

    if you can attach a local llm...hdmi is airgapped (sort of)...

    the operating computer requires no processing power or install....

    it plugs into any interface............

    i plug it into a scada...............

    $$$$$$

z3ratul163071 5 hours ago

i knew it... macos

  • dotancohen 2 hours ago

    OP claims cross platform.

      > It's a cross-platform CLI for structured desktop automation through the accessibility tree.

rado 4 hours ago

Interesting, would be nice to see a demo video apart from that unclear GIF

DeathArrow 6 hours ago

This is big if it works. Nice job!