On Arch Linux, I installed opencl-amd and opencl-amd-dev from the AUR. They provide both the proprietary OpenCL driver and the ROCm stack. You'll also have to use `export HSA_OVERRIDE_GFX_VERSION=10.3.0` (it's the workaround linked by yahma).

BUT: the RX 5700 XT has only 8 GB of VRAM. I tried playing with Stable Diffusion's arguments, but I wasn't able to make it work; it always crashed because it couldn't allocate enough VRAM. Maybe there's a way to still use it, but it probably just isn't worth it.

EDIT: Seems like someone made a fork of stable-diffusion which is able to use less VRAM.

EDIT: I realized it was just a fork of this project. The project does not work as intended, but I found a workaround: open the optimizedSD/v1-inference.yaml file with any text editor and remove every 'optimizedSD.'. For example, target: optimizedSD.openaimodelSplit.UNetModelEncode must become target: openaimodelSplit.UNetModelEncode. Also, I added the '--precision full' argument; without it I got only grey squares in the output.

It looks like we're going to need to build the actual ROCm driver using xuhuisheng's GitHub project in order to use Stable Diffusion. u/MsrSgtShooterPerson, you - like me - have an RX 5700 XT card, so we're in for a lot of work ahead of us.
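The yaml edit in the workaround is purely mechanical, so here's a small sketch that automates it. The script itself is mine, not from the original post; only the filename (optimizedSD/v1-inference.yaml) and the example target: line come from the text above.

```python
# Sketch: strip every 'optimizedSD.' prefix from the optimizedSD/v1-inference.yaml
# config, as described in the workaround. Plain text replacement; no YAML parsing needed.
from pathlib import Path

def strip_prefix(text: str, prefix: str = "optimizedSD.") -> str:
    """Remove every occurrence of the module prefix from the config text."""
    return text.replace(prefix, "")

# Example: the 'target:' line quoted in the post.
line = "target: optimizedSD.openaimodelSplit.UNetModelEncode"
print(strip_prefix(line))  # prints: target: openaimodelSplit.UNetModelEncode

# To apply it in place (path as named in the post), uncomment:
# cfg = Path("optimizedSD/v1-inference.yaml")
# cfg.write_text(strip_prefix(cfg.read_text()))
```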