Good luck to those of you who have the hardware
WRONG!
It's CHEAP and fast on Macs.
A cheap M1 Mac mini from two or three years ago (released November 17, 2020) runs the AUTOMATIC1111 Stable Diffusion web UI fine (under 80 seconds per image), including the 7.3-gigabyte model file!
It works fine with no GPU card needed, using the 16 GB of internal RAM in an M1 Mac mini, if you either update to the latest macOS or manually build a few libraries and update Python.
Even on the oldest M1 macOS release (on a machine that is under $500 used, $800 when new), you can still install and run AUTOMATIC1111:
you just need to install a modern Fortran for ARM and compile the GFPGAN library manually, because the AUTOMATIC1111 web interface's install script fails on non-latest Mac toolchains.
https://stable-diffusion-art.com/install-mac/#Pros_and_Cons_of_AUTOMATIC1111
and the huge AI image model file for the APU:
https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt
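If you would rather script it than use the web UI, here is a minimal sketch of the same idea: Stable Diffusion v1.5 running on the M1's built-in GPU through the MPS backend, using the Hugging Face diffusers library rather than AUTOMATIC1111 itself. The prompt and the attention-slicing call are just illustrative assumptions, not anything from the install guide above.

# Minimal sketch: Stable Diffusion v1.5 on an M1 Mac via the MPS backend,
# using Hugging Face diffusers instead of the AUTOMATIC1111 web UI.
# Assumes a recent: pip install torch diffusers transformers
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")              # Apple-silicon GPU, no discrete card needed
pipe.enable_attention_slicing()    # keeps peak memory inside 16 GB of unified RAM

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("astronaut.png")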
= = = = = =
EleutherAI/gpt-neox-20b (2022), trained on 825 GB of publicly available text data (The Pile)
I NOW MAINLY USE a 48-GIGABYTE GPU/APU pre-trained model though, with 20 billion parameters, and not on a cheap Mac, so you are correct that it's a pricey hobby. You need either TWO $8,000 cards:
- Nvidia Tesla A100 GPU 900-21001-0000-000 40GB
or ONE giant 80-gigabyte-RAM AI card: https://archive.ph/3xiNL
See? One single $11,445 card on eBay can hold all of the 48-gigabyte file for EleutherAI/gpt-neox-20b,
and it is provably slightly smarter than GPT-J-6B on the published benchmarks:
https://blog.eleuther.ai/announcing-20b/
anyone can download the 48-gigabyte trained model without an email address this month from: https://huggingface.co/EleutherAI/gpt-neox-20b
or : https://github.com/EleutherAI/gpt-neox
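Once the weights are on disk, a minimal loading sketch with the Hugging Face transformers library looks roughly like this; the fp16 cast and the device_map="auto" offloading are my assumptions for squeezing it onto one big card, and the prompt is illustrative.

# Minimal sketch: loading GPT-NeoX-20B from the Hugging Face hub with transformers.
# In float16 the weights alone are ~40 GB, hence the 80 GB A100 (or CPU/disk offload).
# Assumes: pip install transformers accelerate torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # halves the float32 checkpoint size at load time
    device_map="auto",           # spreads layers over GPU(s) and CPU via accelerate
)

inputs = tokenizer("GPT-NeoX-20B is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))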
Full weights of 268GB can be downloaded if merging 7 years of voat.co comments or folding in 4.5 years of 4Chan comments. Both of those corpus sets are trivially passed around by Israeli researchers and all good GPT is better on REAL WORLD BENCHMARKS of general knowledge when trained using voat and 4Chan folded into your GPT3. Its not all racist jokes.
GPT-NeoX-20B was, at its release, the largest open-source pre-trained autoregressive language model available.
To run it locally you need a 64- or 128-gigabyte-RAM M2 Pro/Max Mac, or an Apple M1 Ultra Mac Studio for just $3,999.00 pre-tax, retail: https://www.apple.com/mac-studio/specs/
= = = = = =
LOOKING FOR UNCENSORED ChatGPT (OpenGpt3.5) or uncensored latest stable diffusion? Then just do a MERGE of two built in models, with one model trained with : Jooo faces, Biden and Democrat faces, Nude Women. It sadly gets repetitive after you hack a model merge, but the results of uncensored and upscaled and face-repaired output is STUNNING!!!!
LOOK AT 50 PORNO NUDES made using a free to download AI hack on a cheap computer :
WOW WOW WOW! All prompts for those 50 AI nudes are in :
https://np.reddit.com/r/sdnsfw/
https://www.reddit.com/r/StableDiffusion/comments/11buf3o/psa_deliberate_v2_has_been_released_today/
The reason anime gets blended in is that this model, AbyssOrangeMix2, was trained on a local image corpus stocked with too much anime, and the AI seems to LOVE anime more than the real world.
AbyssOrangeMix2 is free to download and install:
https://huggingface.co/WarriorMama777/OrangeMixs/tree/main/Models/AbyssOrangeMix2
The other problem is a root design flaw: all of the merged checkpoints, including AbyssOrangeMix2, are logically defective when it comes to lateral thought.
To PROPERLY uncensor open forks of GPT and Stable Diffusion 1.5 you need months of compute time and 60 TERABYTES of pre-labelled LAION-5B images:
LAION-5B is 5.85 BILLION CLIP-filtered, tagged image-text pairs (the captions come from web alt-text, not human labelling) scraped from the web for researchers
5.85 BILLION PHOTOS! : https://laion.ai/blog/laion-5b/
https://arxiv.org/abs/2210.08402
The first thing they do at OpenAI and StableDiffusion BEFORE training for 3 weeks, is to run a script to delete MOST erotica and pron, and DELETE ALL Jooo FACES, and most negative faces of biden and delete all POSITIVE portrait photos of Trump or WW2 leader of Germany.
= = = = =
I now toy with very large image models, and music generation. But at least all my test bed machines know what a Joooo is and are not censored.
I also now use vision-recognition training on retinal aberrations, using AI data sets:
https://www.aao.org/eyenet/article/ai-and-retina-finding-patterns-of-systemic-disease
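For the retinal work, a bare-bones transfer-learning sketch is enough to get a fundus-image classifier started. This is NOT the AAO pipeline from the article above; the "fundus_dataset/" folder layout and the class labels are purely hypothetical.

# Minimal transfer-learning sketch for retinal-image classification (illustrative only).
# Assumes: pip install torch torchvision, and one subfolder per label under fundus_dataset/train
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("fundus_dataset/train", transform=tf)  # hypothetical path
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # e.g. normal / drusen / DR

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(3):
    for x, y in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()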
To "instruct" OpenGPT3 and GPT 3.5 you choose these methods to customize a more sentient engine :
- https://doi.org/10.48550/arXiv.2211.01786
- https://doi.org/10.48550/arXiv.2301.12726
- https://doi.org/10.48550/arXiv.2212.12017
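Those papers describe instruction-tuning recipes. A minimal supervised instruction-tuning sketch with the transformers Trainer looks roughly like this; the instructions.jsonl file, the prompt template, and the small EleutherAI/pythia-410m stand-in model are my assumptions, not anything taken from those papers.

# Minimal sketch of supervised instruction tuning (illustrative assumptions throughout).
# Assumes: pip install transformers datasets accelerate torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "EleutherAI/pythia-410m"   # small stand-in; swap in gpt-neox-20b if you have the VRAM
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# instructions.jsonl: one {"instruction": ..., "response": ...} object per line (hypothetical file)
ds = load_dataset("json", data_files="instructions.jsonl", split="train")

def fmt(ex):
    text = f"### Instruction:\n{ex['instruction']}\n### Response:\n{ex['response']}{tok.eos_token}"
    return tok(text, truncation=True, max_length=512)

ds = ds.map(fmt, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("instruct-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=1e-5),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM loss, no masking
)
trainer.train()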
But for retinal classification, free-form text input is overkill for medical pathology imagery if it is only run by you, the creator.
= = = = = =
TL;DR: AUTOMATIC1111 and hacked image models can run on an old $500 Mac in under 80 seconds per image. A $5,000 Mac can hold 50-gigabyte VRAM-sized models entirely in unified memory.