Win / KotakuInAction2

Again, this is an almost 200 billion parameter ML model. There's no manual coding or rule possible to censor it conceptually.

This just proves you didn't even look at the Stable Diffusion code I quoted, or have any idea how these text generation pipelines actually work.

Yes, the base GPT-3 model is an almost-200-billion-parameter ML model, but that by itself is not the entirety of "ChatGPT". ChatGPT is a manually-coded pipeline: it takes a prompt as input, runs it through an opaque manually-coded block ("input preprocessing"), feeds the result into the GPT-3 model, processes the model's output through another opaque manually-coded block ("output postprocessing", which can feed back into GPT-3 to trigger another round of text generation), and only then produces the final output. I'm not saying the GPT-3 model itself is manually coded; the input/output processing blocks no doubt are, even if those blocks themselves include other AI models to filter or bias the input/output.
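To make the shape of that argument concrete, here is a minimal sketch of such a wrapper pipeline. None of these function names or rules are OpenAI's actual code (their implementation is not public); this only illustrates how hand-written pre/post-processing blocks can surround an opaque model and censor it without touching the model's weights at all.

```python
# Hypothetical sketch: a manually-coded pipeline wrapped around an opaque model.
# The blocked-topic list and every function name here are illustrative assumptions.

BLOCKED_TOPICS = {"example_banned_topic"}  # hand-maintained rule list

def preprocess(prompt: str):
    """Manually-coded input filter: refuse (or rewrite) prompts before the model sees them."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return None  # vetoed before the model is ever called
    return prompt

def call_model(prompt: str) -> str:
    """Stand-in for the opaque ~200B-parameter model call."""
    return f"model output for: {prompt}"

def postprocess(text: str):
    """Manually-coded output filter: may veto output and trigger regeneration."""
    if any(topic in text.lower() for topic in BLOCKED_TOPICS):
        return None
    return text

def chat_pipeline(prompt: str, max_retries: int = 2) -> str:
    cleaned = preprocess(prompt)
    if cleaned is None:
        return "I can't help with that."  # canned refusal, model never runs
    for _ in range(max_retries):
        out = postprocess(call_model(cleaned))
        if out is not None:
            return out  # passed the output filter
    return "I can't help with that."  # every generation attempt was filtered
```

Note that the censorship lives entirely in `preprocess` and `postprocess`, which are ordinary code; the model in the middle is untouched.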

1 year ago
1 score