We cannot trust identity like we used to here on HN (even pre-LLM AI, I thought we seemed naive). Unfortunately, we live in a world where anyone, or any AI, can claim almost anything plausible-sounding.
Where do we go from here? (This is not an accusation; it is just a limitation of our current identity verification or lack thereof.)
You can confirm that the people who say things are in a position to know.
> You can confirm that the people who say things are in a position to know.
What is the above commenter's sense of how well one can 'confirm' such a thing?
Looking at an HN account and its comment history provides some signal, but this doesn't satisfy me, given the incentives at play here. We're talking about OpenAI, a ~$800B company. Reputation matters a lot. The stakes are higher than in, say, "does so-and-so really work at Mozilla and know the details of a messy Rust governance issue?" (to pick a deliberately lower-stakes example).
When we decide what to let into our brains around OpenAI, Anthropic, etc., the bar needs to be higher than e.g. "does an HN account seem consistent with someone who works at OpenAI?" (I'm not sure if this is the above commenter's position or close to it?)
We need stronger proofs, preferably cryptographic ones whose credibility is rooted in a legitimate trust model. In 2026, this is certainly technically possible, if a platform made it a priority. The barriers are largely social, cultural, and economic.
HN does not make real-world identity a priority. There might be some workarounds, like posting information in one's profile, but practically speaking, I'm not seeing how this would work or what level of identity assurance it would actually provide. Am I missing something?
If I start hand-waving, I might dream up something like the following: maybe someone could stitch something together with a trusted content time-stamping server, prove they control an OpenAI email address, and post that cryptographic evidence on their HN profile. It sounds practically unappealing at best. I haven't seen this done. Maybe I'm overlooking a good way. I'm all ears. We're going to need better solutions.
They work at OpenAI, what more do you want? For what it’s worth, I can independently corroborate that the announcement was planned in advance.