What is the NYT's stance here? Is it pure spite? I guess their lawyers told them this is the winning move, and perhaps it is. But it just seems so blatantly wrong.
If you look at Reddit's r/ChatGPT, you'll quickly notice that the median use of ChatGPT is for therapy.
Is the NYT really ok with combing through people's therapy logs?
Now is the time to go have a chat with ChatGPT about how much NYT sucks. Maybe it can help come up with insulting things to call their lawyers too.
Is there an expectation of privacy using ChatGPT? Do users think nobody is ever going to be looking at their logs?
If you're a paying subscriber and aren't sharing your prompts, then yes, presumably there is.
They don't care. This is purely for a business upper hand.
OpenAI should probably encrypt the chats and lock itself out going forward, collecting whatever metrics it needs on the fly before sealing them.
OpenAI would never lock themselves out of free training data.