Nah, they didn't retrain their whole text generator. Rather, they most likely added a filter that biases or screens the generated text in some way. Adding a filter would count as adding "new code", i.e. "changing the code".
Observe, for example, how NSFW filtering was applied to Stable Diffusion. It wasn't done by changing the original model. Here is the commit showing the addition of a "Safety Checker" to the generation script: https://github.com/CompVis/stable-diffusion/commit/d0c714ae4afa1c011269a956d6f260f84f77025e
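To make the idea concrete, here's a minimal sketch of that pattern: a filter wrapped around an unchanged generator. Everything here (the `generate` stand-in, the blocklist terms) is hypothetical, not the actual Stable Diffusion or OpenAI code:

```python
# Hypothetical sketch: post-generation filtering without retraining the model.
BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real list

def generate(prompt: str) -> str:
    # Stand-in for the real model call; the model itself is untouched.
    return "some generated text containing badword1"

def safe_generate(prompt: str) -> str:
    # The "new code": check the output and replace it if it trips the filter.
    text = generate(prompt)
    if any(term in text.lower() for term in BLOCKLIST):
        return "[filtered]"
    return text

print(safe_generate("example prompt"))  # prints "[filtered]"
```

The point is that `generate` never changes; all the new behavior lives in the wrapper, which is exactly what the Safety Checker commit does to the generation script.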