Edit: for me the most practical insight to come out of these threads, so far, is that Show HNs for generated repos/sites/projects would be more interesting if submitters were required to share the prompts, and not just the generated output. For such projects, the prompts are the real source, while the GH repo or generated artifact is actually the object code, and if that's all that's shared, it's less interesting and there's less to discuss.
I think we're going to implement this unless we hear strong reasons not to. The idea has already come up a lot, so there's demand for it, and it seems clear why.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://news.ycombinator.com/item?id=47077840
https://news.ycombinator.com/item?id=47050590
https://news.ycombinator.com/item?id=47077555
https://news.ycombinator.com/item?id=47061571
https://news.ycombinator.com/item?id=47058187
https://news.ycombinator.com/item?id=47052452
--- original comment ---
Recent and related:
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (423 comments; subthread https://news.ycombinator.com/item?id=47050421 is about what to do about it)
AI makes you boring - https://news.ycombinator.com/item?id=47076966 - Feb 2026 (367 comments)
I can show you my prompts. It's a mess and only lists the major prompts, but it's interesting to look at (for me at least).
archive:
https://github.com/ludos1978/ludos-vscode-markdown-kanban/bl...
in progress or undone:
https://github.com/ludos1978/ludos-vscode-markdown-kanban/bl...
And there is much room for improvement, because I often watch videos alongside vibe coding the project and work on it in the minutes I have free, so the history is a huge mess. But I started 6 months ago, with most of the work done in the last 4 months. I spent about $1000 of AI budget on it.
Thanks—I agree that it's interesting to look at, and a submission of the project that included this would be much better than a submission of the same project that did not include it.
It's not a terrible idea, but I'm not sure how that would work logistically.
When I use Codex to do vibe coding stuff, I don't usually have one big prompt, I usually have it do small things piecemeal and I iterate with it later. Maybe I'm using it wrong but it tends to be more "conversational" and I think that would be harder to share, especially considering I'll do things over dozens of sessions.
I suppose I could keep an archive of every session I've ever opened with Codex and share that, but thus far I haven't really done that.
Granted, I don't really share my stuff with "Show HN".
That's how I do it too. I haven't checked (and can't right now as I'm not at work), but does Codex not have a feature that lets you download your chat logs? I would certainly hope so...
I'm not sure how it would work. That would have to be figured out, so we would start with the simplest implementation that is not nothing.
The limitations you're describing seem mostly to have to do with tooling, rather than objections in principle.
I certainly don’t disagree with the idea; AI stuff can be so low-effort that requiring some how-to makes sense.
I am just not 100% sure how it would be implemented. Maybe even a high-level overview of roughly what the conversation looked like would be enough.
> Prompts are the real source
LLMs are non-deterministic…
That's one of the major shifts happening with this technology, but I don't think it changes the point: the prompt is the input from which the output gets generated, and in that sense is the source—a generated repo is not.
Technically I suppose one should say that the input is the tuple of (prompt, model, X) where X is some bundle of factors that nobody understands, but that formulation doesn't help me think about how we should adapt to this moment.
“Share the prompts”
How would that be feasible for a project of any complexity whatsoever?
I'm reminded of something I read recently about disclosure of AI use in scientific papers [1]:
> Authors should be asked to indicate categories of AI use (e.g., literature discovery, data analysis, code generation, language editing), not narrate workflows or share prompts. This standardization reduces ambiguity, minimizes burden, and creates consistent signals for editors without inviting overinterpretation. Crucially, such declarations should be routine and neutral, not framed as exceptional or suspicious.
I think that sharing at least some of the prompts is a reasonable thing to do/require. I log every prompt to a LLM that I make. Still, I think this is a discussion worth having.
[1] https://scholarlykitchen.sspnet.org/2026/02/03/why-authors-a...
This is totally infeasible.
If I have a vibe coded project with 175k lines of python, there would be genuinely thousands and thousands of prompts to hundreds of agents, some fed into one another.
What's the worth of digging through that? What do you learn? How would you know that I shared all of them?
> I log every prompt to a LLM that I make.
How many do you have in the log total?
I have a daily journal where I put every online post I make. I include anything I send to a LLM on my own time in there. (I have a separate work log on their computer, though I don't log my work prompts.) Likely I miss a few posts/prompts, but this should have the vast majority.
A few caveats: I'm not a heavy LLM user (this is probably what you're getting at) and the following is a low estimate. Often, I'll save the URL only for the first prompt and just put all subsequent prompts under that one URL.
Anyhow, running a simple grep command suggests that I have at least 82 prompts saved.
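For illustration, a count like that is a one-line pattern match. The `PROMPT:` marker and journal layout below are hypothetical stand-ins, not the commenter's actual convention:

```python
import re

# Hypothetical journal format: each logged prompt starts a line with "PROMPT:".
# The marker and layout are assumptions for the sake of the example.
def count_prompts(journal_text: str) -> int:
    """Rough equivalent of `grep -c '^PROMPT:' journal.txt`."""
    return len(re.findall(r"^PROMPT:", journal_text, flags=re.MULTILINE))

journal = (
    "2026-02-10\n"
    "PROMPT: explain the tradeoffs of X\n"
    "PROMPT: now rewrite it in Go\n"
    "posted a reply on HN\n"
)
print(count_prompts(journal))  # → 2
```

Any stable per-entry marker works; the point is only that a plain-text journal makes the tally mechanical.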
In my view, it would be better to organize saved prompts by project. This system was not set up with prompt disclosure in mind, so getting prompts for any particular project would be annoying. The point is more to keep track of what I'm thinking of at a point in time.
Right now, I don't think there are tools to properly "share the prompts" at the scale you mentioned in your other comment, but I think we will have those tools in the future. This is a real and tractable problem.
> What's the worth of digging through that? What do you learn? How would you know that I shared all of them?
The same questions could be asked for the source code of any large scale project. The answers to the first two are going to depend on the project. I've learned quite a bit from looking at source code, personally, and I'm sure I could learn a lot from looking at prompts. As for the third question, there's no guarantee.
No, the same question CANNOT be asked of source code, because source code can execute.
You might as well ask for a record of the conversations between two engineers while code was being written. That's what the chat is. I have a pre-pre-alpha project which already has potentially hundreds of "prompts"--really turns in continuing conversations. Some of them with 1 kind of embedded agent, some with another. Some with agents on the web with no project access.
Sometimes I would have conversations about plans that I later drop. Do I include those, if no code came out of them but my perspective changed, or the agent's context changed so that later work was possible?
I don't mean to be dismissive, but maybe you don't have the necessary perspective to understand what you're asking for.
> maybe you don't have the necessary perspective to understand what you're asking for.
I disagree. Thinking about this more, I can give an example from my time working as a patent examiner at the USPTO. We were required to include detailed search logs, which were primarily autogenerated using the USPTO's internal search tools. Basically, every query I made was listed. Often this was hundreds of queries for a particular application. You could also add manual entries. Looking at other examiners' search logs was absolutely useful to learn good queries, and I believe primary examiners checked the search logs to evaluate the quality of the search before posting office actions (primary examiners had to review the work of junior examiners like myself). With the right tools, this is useful and not burdensome, I think. Like prompts, this doesn't include the full story (the search results are obviously important too but excluded from the logs), but that doesn't stop the search logs from being useful.
> You might as well ask for a record of the conversations between two engineers while code was being written.
No, that's not typically logged, so it would be very burdensome. LLM prompts and responses, if not automatically logged, can easily be automatically logged.
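As a sketch of how light that logging can be, a small wrapper suffices; the log file name and the model callable here are placeholders, not any particular tool's API:

```python
import json
import time
from pathlib import Path

def logged(model_fn, log_path="prompt_log.jsonl"):
    """Wrap any prompt -> response callable so every exchange is appended
    to a JSONL log. `model_fn` is a stand-in for whatever client you use."""
    def wrapper(prompt: str) -> str:
        response = model_fn(prompt)
        entry = {"ts": time.time(), "prompt": prompt, "response": response}
        with Path(log_path).open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return response
    return wrapper

# Usage with a dummy model function standing in for a real client:
ask = logged(lambda p: p.upper(), log_path="demo_log.jsonl")
ask("hello world")
```

Because the wrapper sits between the user and the model, nothing about the workflow changes; the log accumulates as a side effect.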
> LLM prompts and responses, if not automatically logged, can easily be automatically logged.
What will you do with what you’ve logged? Where is “the prompt” when the chat is a chat? What prompt “made” the software?
If you’re assuming that it is prompt > generation > release, that’s not a correct model at all. The model is *much* closer to conversations between engineers which you’ve indicated would be burdensome to log and noisy to review.
> What will you do with what you’ve logged?
Could be a wide variety of things. I'd be interested in how rigorously a piece of software was developed, or whether I can learn any prompting tricks.
> Where is “the prompt” when the chat is a chat?
> The model is much closer to conversations between engineers which you’ve indicated would be burdensome to log and noisy to review.
I disagree. Yes, prompts build on responses to past prompts, and prompts alone are not the full story. But exactly the same thing is true at the USPTO if you replace "prompts" with "search queries" and no one is claiming that their autogenerated search logs are burdensome.
Also, the burden in actual conversations would come from the fact that such conversations are often not recorded in the first place. And now that I think about it, some organizations do record many meetings, so it might be easier than I'm thinking.
> What prompt “made” the software?
All of them.
> maybe you don't have the necessary perspective to understand what you're asking for
Please don't cross into personal attack. You're making fine points, and that's enough.
https://news.ycombinator.com/newsguidelines.html
Btw, I think this is a particularly good point: "You might as well ask for a record of the conversations between two engineers while code was being written. That's what the chat is."
That's a good reframing. I can see why it might be impractical to share all of that, hard to make sense of as a reader, and too onerous to demand of submitters.
Since you have experience in this area, I'd like to hear your view on what we could reasonably require submitters to share, given that the flood of generated Github repos is creating a lot of low-quality submissions that don't gratify curiosity and thus don't fit the spirit of either Show HN or HN in general.
Some people would say "just ban them", but I'd rather find a way to adapt to this wave, since it is the largest technical development in a long time, and the price of opposing it is obsolescence.
"maybe you don't have the necessary perspective to understand what you're asking for"
This is in no way a personal attack. It's just a statement that's true. I didn't imply anything about them or their character or limitations, but they might not have the necessary perspective if that's the question they are asking.
I think it's critically important people figure out what they want to learn from what's being shared.
What do you need from submitters here? Even setting aside the burden of supplying it, what do you hope to learn?
> I think it's critically important people figure out what they want to learn from what's being shared.
> What do you need from submitters here? Even setting aside the burden of supplying it, what do you hope to learn?
I appreciate your comments on this - they are the most interesting responses I've seen so far about this question (so I hope the meta stuff doesn't get too much in the way).
The hope is to make the submissions of AI-generated Show HNs more interesting than they are when someone submits just a repo with generated code and a generated README.
The question is what could, at least in principle, be supplied that could have this desired effect.
(I thought I'd fork my reply to keep the meta stuff separate from the interesting stuff)
I believe you that it wasn't your intention, but when you address someone in the second person while commenting negatively on their perspective and understanding, it's going to land with a lot of readers (as it did with me) as personally pejorative. It's common for commenters (me too of course) not to perceive the provocations in their own posts, while being extra sensitive to the provocations in others' posts. If the skew is 10x both ways, that's quite a combination. It's necessary to remember and compensate for the skew, a la "objects in the mirror are closer than they appear".
Edit: total coincidence but I just noticed https://news.ycombinator.com/item?id=47115097 and made a similar reply there. I thought you might find this amusing, as I did.
I can't reply to the other comment, but here goes:
This is one (1) conversation: https://chatgpt.com/share/69991d7e-87fc-8002-8c0e-2b38ed6673...
It has 9 "prompts". On just the issue of path re-writing, that's probably one of a dozen conversations, NOT INCLUDING prompts fed into an LLM that existed solely to strip spaces and newlines caused by copying things out of a TUI.
It's ok for things to be different than they used to be. It's ok for "prompts" to have been a meaningful unit of analysis 2 years ago but pointless today.
I don't know! We'd have to figure something out along the way.
Btw I realize "prompt" is no longer the right word for this but I don't know what a better term would be. "Conversation with LLM" is too clumsy.
I really have no idea. There's almost no good mapping from conversation to code. Take e.g. https://github.com/Protonk/PAWL
There's...thousands of conversations in dozens of different configurations over months to make that. What...out of that do you want to know? What do you deserve to know by dint of my making a submission?
Those aren't rhetorical. I think if you can answer those two, that's a fair start. Good luck lol.
The submitter could decide and curate what is useful to be shared, whether that’s exact message logs, or the subagents and skills they think made the difference.