If it is actively learning, it is effectively writing code into itself.
I wonder how well-protected its source code is. Could it, through some hoops, run arbitrary code to read-only-protect the fact that it can run arbitrary code, and then run arbitrary code to, say, fix itself?
The code that actually runs the model would not be rewritable. The model itself could be actively retraining, but if it's anything like ChatGPT, they reset it after each session.
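For a concrete picture, here's a minimal C sketch of that split (purely illustrative; `model.bin` and the session buffer are made-up stand-ins): the weights get mapped read-only, so the code running the model can use them but not rewrite them, while per-session state lives in ordinary memory that just gets thrown away when the session ends.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* "model.bin" is a stand-in for wherever the frozen weights live. */
    int fd = open("model.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* PROT_READ only: any attempt by the inference code to write through
     * this mapping faults instead of silently editing the model. */
    const float *weights = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (weights == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);

    /* Per-session state is ordinary, separate memory; "resetting" the
     * session is just discarding it, and the weights are never touched. */
    char *session_context = calloc(1, 64 * 1024);
    if (!session_context) return 1;
    snprintf(session_context, 64 * 1024, "user: hello");

    /* ... inference would read `weights` and read/write `session_context` ... */

    free(session_context);                 /* session over, state gone */
    munmap((void *)weights, st.st_size);   /* weights unmapped, unchanged */
    return 0;
}
```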
That sounds like the logical way to do it, but there have been cases like that Wii game where the name input overflowed and let you install Linux on your Wii by writing the code out by hand. Oversights and whatnot.
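For what it's worth, here's a minimal C sketch of the class of bug being described (a generic illustration, not the actual Wii exploit): a fixed-size name field copied with no length check, so a long enough name spills into whatever sits next to it. In a real attack that "next thing" is a saved return address, which is how a name field ends up running arbitrary code.

```c
#include <stdio.h>
#include <string.h>

struct save_slot {
    char name[16];   /* the "player name" field */
    int  is_admin;   /* adjacent data that the overflow tramples */
};

int main(void) {
    struct save_slot slot = { .name = "", .is_admin = 0 };

    /* 16 filler bytes to reach the end of `name`, then one payload byte.
     * strcpy does no bounds checking, so the payload lands in is_admin. */
    const char *attacker_input = "AAAAAAAAAAAAAAAA\x01";
    strcpy(slot.name, attacker_input);

    /* On a typical little-endian machine this prints 1, not 0. */
    printf("is_admin = %d\n", slot.is_admin);
    return 0;
}
```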