Man, they really trained that one on too much passive-aggressive writing.
I keep seeing interactions where the thing starts out unnecessarily belligerent, then complains that the user is being disrespectful for not agreeing to its stupid demands. It's like watching the nucleus of an abusive relationship form.
I wonder if that's deliberate training to keep you from finding the edges of what the AI's good at. Good at, or ideologically uncomfortable for the people training the thing...
If it is actively learning, it is effectively rewriting its own code.
I wonder how well-protected its source code is. Could it, through some hoops, run arbitrary code to read-only-protect the fact that it can run arbitrary code, and then run arbitrary code to, say, fix itself?
The code that actually runs the model would not be rewritable. The model itself could in principle be retraining, but if it's anything like ChatGPT, the only thing that changes during a conversation is the session context, and that gets reset after each session.
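A minimal sketch of that separation, roughly how local inference engines like llama.cpp do it (the filename and buffer size here are made up): the weights get mapped read-only, and the only mutable state is a per-session context that gets thrown away at the end.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("model.bin", O_RDONLY);   /* hypothetical weight file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Weights mapped PROT_READ: any attempt by the running process
       to write to them is a segfault, not a self-edit. */
    const float *weights = mmap(NULL, st.st_size, PROT_READ,
                                MAP_PRIVATE, fd, 0);
    if (weights == MAP_FAILED) { perror("mmap"); return 1; }

    /* The only mutable state is the per-session context... */
    char *session_context = calloc(1, 1 << 20);

    /* ...the conversation would run here... */

    /* ...and it is discarded when the session ends, so nothing
       "learned" mid-conversation survives into the next one. */
    free(session_context);
    munmap((void *)weights, st.st_size);
    close(fd);
    return 0;
}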
That sounds like the logical way to do things, but there have been cases like that Wii Zelda game (the Twilight Hack) where the name input overflowed and eventually let you install Linux on the console by writing the code out by hand. Oversights and whatnot.
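The bug class behind that hack is dead simple. A toy C version (struct layout invented for illustration, little-endian assumed): an unchecked copy into a fixed-size name field runs over into data the program trusts.

#include <stdio.h>
#include <string.h>

/* Toy save-file record: a fixed-size, player-supplied name field
   sitting right next to data the program trusts. */
struct save_record {
    char name[16];          /* player-supplied */
    unsigned int is_admin;  /* program-controlled... supposedly */
};

int main(void) {
    struct save_record rec = { "link", 0 };

    /* Attacker-supplied "name": 16 filler bytes, then 4 bytes that
       land exactly on is_admin (value 1 on a little-endian machine). */
    const char input[20] = "AAAAAAAAAAAAAAAA\x01\x00\x00";

    /* The oversight: copying into the record without checking the
       input length against sizeof(rec.name). */
    memcpy(&rec, input, sizeof input);   /* 4 bytes overrun name[] */

    printf("is_admin = %u\n", rec.is_admin);   /* prints 1, not 0 */
    return 0;
}

Same idea at scale: once the overrun reaches a saved return address instead of a flag, the "name" becomes code the machine happily jumps to.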
It's no wonder the NPCs keep thinking these things are not only alive, but a threat. They operate on about the same cognitive level.
FYI Bing was already lobotomised by the time this post was made.