Seems like a wild overreaction to bitch about LLMs in this instance.
https://seattlein2025.org/2025/04/30/statement-from-worldcon...
The LLMs used were likely trained on the works of science fiction authors without reimbursement to those authors. For the leadership to use them is seen as tacit approval of that ripoff, from the perspective of authors who see OpenAI raking in the cash while paying $0 to authors anyway. Anyone who approves of authors not getting paid is unlikely to last long at the head of an authors' award group. It's not about whether people were judged by AI.
It's not clear to me what LLMs were used for, exactly?
Was it that the judges responsible for picking the finalists didn't read the books and instead had AI suggest them? Or something else?
They weren't used for anything to do with the awards. There are no judges involved in picking the award finalists; they are selected based on votes submitted by anyone who paid for a membership in the convention. The administrators only have to count the votes and make sure that the finalists selected are actually eligible (right publication date, length, and so on for the category).
They were used to look for red flags about people who applied to be part of one of the hundreds of panels that take place during the convention. Frankly, I doubt this would have been a problem normally, but Worldcon's core audience is very sensitive about LLM usage right now due to all of the rhetoric about LLMs replacing writers.
Online vetting, it looks like. They used it to retrieve and summarize articles about each applicant from the internet based on name alone. I guess to make sure they didn't make spicy posts online or have some controversy in their past.