jryio 15 hours ago

Notion did it first and arguably better[1]. Shared agents benefit from shared context.

The hardest part is ensuring that the shared context is maintained and converges on a representation of reality and of the people in the company.

[1] https://www.notion.com/help/custom-agents

  • gavinray 14 hours ago

    At promptql, our solution to this was a wiki. You get knowledge-graph/relations for free through page links.

    New knowledge additions are proposed when agents decide it would be relevant to retain, humans confirm/deny or create wiki modifications themselves.

  • Jayakumark 14 hours ago

In the demo videos, it shows Memory under Files, so I assume it holds learnings and shared context.

    • defjosiah 12 hours ago

      Yeah, the memory is cool, just a file store that you can instruct the agent to use however you see fit.

  • jeswin 13 hours ago

Notion, like any other thin-AI product out there, is now in Anthropic/OpenAI/Google's crosshairs. Unless one has a moat the size of SharePoint or Google Docs or OneDrive, it's just a feature away.

    • baxtr 12 hours ago

      I really like Notion's UI. I wish they would focus only on that and let me access my Notion DB as .md files with Claude.

      • jorl17 12 hours ago

        Yes, please. Their MCP suuuuuuuucks

        • artdigital 6 hours ago

          How does it suck? I use it almost daily and love their Notion MCP

          • theshrike79 2 hours ago

            Can't limit access easily. You can do per-workspace permissions and that's about it.

      • nxobject 11 hours ago

        I know this is probably out of scope, but I'd love it as well if Notion could slowly accrete the features of Airtable... at least expose some form of programmatic access to tables!

      • dimitri-vs 7 hours ago

Take a look at Outline! I use it almost exactly like a cloud-based Obsidian vault, and they have been very responsive to MCP feature requests.

        • bryanhogan 3 hours ago

I don't think they have added an Obsidian Bases / Notion Database-like feature yet, right? Saw some discussion of adding a NocoDB integration, but haven't seen that happen yet either.

skybrian 6 hours ago

From the announcement, I'm getting only a vague idea about what this is. It sounds like this Agent might run in some kind of sandbox with access to files? It would be nice if there were some documentation for the environment it runs in and what tools it has.

  • Gareth321 1 hour ago

    Yeah it looks like it's just a shared project. I thought we could already do this?

neosat 13 hours ago

Tried it to automate something that was on my to-do list for the day. I had blocked off a few hours for this and managed to get the agent working reasonably well (85% of the way there) in < 15 mins.

The main remaining issue is the poor docx/pdf final output, but I'll create a skill/workflow to get around that.

Worked really well end-end!

  • defjosiah 12 hours ago

    Cool to hear, glad you enjoyed it! If you want to send any feedback on the output, feel free to send it to josiah at openai and I can take a look.

    • neosat 11 hours ago

      Great work on the feature and sure I'll do that. :)

10keane 2 hours ago

I actually like the concept of a workspace agent, because I'm feeling some real pain running long-term projects while retaining context for each instance of an agent. But based on the demo, it seems to be more for cooperation than for preserving long-term project state: decisions made, actions taken, approvals given, a history of what each agent did and why. It's then just a more convenient ChatGPT entry in a group chat.

Another thing: this is all on OpenAI's servers, which is fine if that's what you want. But there's a real class of user (technical, working on actual production code, security-conscious) for whom "my workspace lives on my machine, in my git repo, under my version control, and works with my other non-OpenAI tools" is a hard requirement, not a preference.

mhitza 15 hours ago

This is the LLM integration approach I was pitching last year to some companies. Though in my case it was strictly tied to self-hosted inference.

Agents at the edge of business where they can work independently, asynchronously, is an approach that I don't feel was explored enough in business environments.

Sending your entire communication and documents to OpenAI would be a very bold choice.

  • linkjuice4all 14 hours ago

    Not only are businesses already doing that - they're not even cleaning up their source material so LLMs are generating garbage outputs from the old inconsistent trash that haunts Confluence, Google Drive, and all of the other dumping grounds for enterprise ephemera. Oftentimes "AI transformation" is just a slightly better search engine that regurgitates your old strategy (that didn't work the first time) and wraps it up in new sycophantic language that C-levels use to bulldoze the budgets and timelines of actual skilled front line employees.

I do believe that LLMs and AI provide actual value, but the "workspace" is usually the passive-aggressive CYA battleground where employees appear productive in spite of leadership's blind spots, ossified business practices, and "aligned" decision-making that doesn't actually fix a broken org. Maybe this release will be the one that finally challenges nepo-hires, not-invented-here, and all of the other corpo crap that defines "enterprise" business.

    • pixl97 13 hours ago

Cleaning up source material is not easy work in companies that have massive piles of it and don't exactly know which parts are wrong. Quite often these documents are poorly versioned and do work for something, but not exactly what you're looking for.

      With this said, you can use your incorrect AI answers to find and then purge or repair this old and/or poorly written documentation and improve the output.

      • linkjuice4all 13 hours ago

        I agree - and I've noticed that these AI transformations tend to lay bare the many issues, inconsistencies, and other problems with workspace functions and data. Unfortunately the people that are usually in charge of these projects do not have the seniority or sway to actually change the broken processes or aren't on the right team to remove cruft. Usually you have to wait until a salesperson misquotes something from an AI summary before these issues get unblocked because they actually affected revenue.

Jayakumark 14 hours ago

Looks like ChatGPT's answer to Claude's managed agents, but using an existing ChatGPT Business subscription rather than API keys. One caveat: it needs to be invoked from ChatGPT or Slack; it doesn't support invocation via APIs, so you can't embed it. Also, Google launched an agents CLI today to build your own and integrate with Gemini Enterprise: https://developers.googleblog.com/agents-cli-in-agent-platfo...

ANaimi 2 hours ago

I guess I waited too long to share my side project: sadeem.ai

Funny we landed on the same terminology. Will need to connect Stripe.

zenapollo 9 hours ago

I’m helping a client move data from dozens of spreadsheets to an aggregate one. The elegantish solution is to use Python; each run takes about 5s and 1 cal of energy. If I hadn’t helped her write the script, she, as a non-techie, could have started with something like this tool, and it would take 90s and use 200 cal of energy. The numbers are fudged a bit, but still: how can this be profitable, or ethical? To say nothing of the spontaneous hallucinations that sneak in from time to time, especially when the model gets silently lobotomized.
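For reference, the kind of script involved is tiny. A minimal sketch using only the standard library, assuming the sheets are exported as CSVs with a shared header (the `aggregate` function name and the `source_file` column are my own, not from the comment above):

```python
import csv
import glob

def aggregate(pattern, out_path):
    """Concatenate rows from every CSV matching `pattern` into one file,
    writing the shared header once and tagging each row with its source."""
    rows, header = [], None
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)  # assume every sheet shares this header
            if header is None:
                header = file_header + ["source_file"]
            rows.extend(r + [path] for r in reader)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return len(rows)
```

A deterministic script like this runs in seconds on dozens of files, whereas the LLM route re-derives (and can mis-derive) the same logic on every run.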

  • stingraycharles 8 hours ago

    Because maybe the value she gets out of it is more than what it costs, even at the higher price?

  • senordevnyc 7 hours ago

    Or she could have done it manually, and spent orders of magnitude more time and energy.

    The scarce resource preventing more people from the ideal solution of using a script in your scenario is you. Most people can’t write a script, so their options are slow and “expensive” manual process, or the 100x as efficient AI. The 1000x as efficient script isn’t an option (well, until the model is good enough to know it should obviously just write the script too).

  • skybrian 7 hours ago

    To make this a fair comparison, you'd also need to include how much they paid you to write the script, how long it took, and how much energy you used.

dennisy 13 hours ago

I feel for the startups sweating each one of these frontier lab releases.

How many more are thinking “am I next?”

  • gitmagic 12 hours ago

Yeah, I’m happy I gave up; it’s just not possible to compete, and the constant stress of trying to keep up isn’t worth it.

    (I built https://nelly.is as a solo founder without funding)

    • weird-eye-issue 3 hours ago

      The trick is niching down, this has always been the trick.

anthuswilliams 13 hours ago

Without commenting on the product itself (I haven't tried it), the marketing copy around this release commits the same sins I have seen from Anthropic and Grok and all the rest of them.

I'm so tired of seeing these companies trivializing other people's work! Nobody's job is "edit files" and "respond to messages"! People have jobs like "find and close leads" and "reconcile accounts" and "arrange student field trips" and "make sure the hospital has enough inventory", not "generate reports" and "write code".

Editing files, producing reports, even writing code is just a byproduct. This is like the idiotic "lines of code produced" metric, but now they apply it to all of society.

  • iugtmkbdfil834 13 hours ago

    But.. to your point.. will it not be interesting once the management finds out there may be a little more to what we do?:D

  • infecto 13 hours ago

While there is definitely a healthy dose of trivializing work, I think once you scratch the surface the real message is that these parts of a current workflow can be automated or optimized to free people up for higher-value tasks.

    • anthuswilliams 10 hours ago

      That sort of messaging has been done for decades with business process orchestration companies, RPA vendors, etc. All the way back to the original business software vendors like Lotus and Excel. It's only big LLM labs that adopt this tone of dismissive trivialization of other people's work.

  • inerte 11 hours ago

They have to be generic because it's a generic tool. If they wrote "this tool can arrange student field trips", people might ignore it, thinking it has a narrow purpose.

    Yes, work is being trivialized, but the symptom here isn't caused by that.

    • anthuswilliams 10 hours ago

The issue is not that they are generic. You could still be generic with phraseology that actually acknowledges the contributions and ownership involved in the jobs being done. For example, you could write "monitor for outages", "manage projects", "arrange community events", "handle logistics", and so on.

      But the problem is LLMs can't do those things. All they can do is "edit files" and "send messages".

brianjking 11 hours ago

So this only works for Team/Enterprise accounts? No Pro?

throwaway2016a 9 hours ago

What am I missing? I don't see it in either the MacOS app or the web app. I have a "Plus" plan. Do I need "Business" or "Pro"?

Edit: To answer my own question "Workspace agents are available in research preview in ChatGPT Business, Enterprise, Edu, and Teachers plans."

pzo 15 hours ago

I think I enjoyed OpenAI releases more ~1 year ago, when they did a video and a presentation. These days, with so many mini features/releases, it's hard to stay up to date or even figure out the use cases.

  • eieke 8 hours ago

    It’s a reflection of where they are at - spray and pray, pushing forward till the wheels fall off.

faxuss 6 hours ago

So this only works for Team/Enterprise accounts? No Pro?

hereme888 9 hours ago

I just want a guarantee that OpenAI isn't just going to steal my ideas as I design my own agents. And if they did, I want compensation. I think of my custom skills, MCP servers, agents, etc, as intellectual property.

  • spacechild1 9 hours ago

    Now that's a good joke! Do you think any of the writers, artists, software developers, etc. whose work has been unwittingly used for training these models received any compensation whatsoever? If you are so concerned with IP, you should immediately stop using this technology.

  • avaer 9 hours ago

They will compensate agent engineers just like they compensated all of the developers, artists, academics, and editors who created the data the models are trained on.

  • hereme888 7 hours ago

Oh, I knew I was begging for someone to bring up how OpenAI didn't compensate everyone who came before me. My concern still stands.

m4rkuskk 13 hours ago

OpenAI and Anthropic are killing startups and mature companies left and right. They will always have the cost advantage.

eieke 8 hours ago

More speculative stuff in the desperate search for revenue.

Zzz this is boring. So much for scaling up compute and data = intelligence.

TIPSIO 13 hours ago

Beautiful design and UX for the bot layouts. Kudos, this is really clean.

nazca 11 hours ago

Is this an RIP Zapier feature?

therobots927 6 hours ago

“Altman anticipates that by 2027, AGI will have materialised.” - Feb 17 2025: https://yourstory.com/2025/02/sam-altmans-2035-ai-prediction

So we’ve got about a year and a half, max, until we have AGI, and OpenAI is launching a bunch of in-house harnesses.

They must have some crazy shit cooking in the back rooms. So super duper top secret they can’t even announce it. Because if their public models are any hint, you would never think we were 18 months away from human level machine intelligence.