Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
GNU General Public License v3.0

Error output thrown to UI:

Error Traceback (most recent call last):
File "convert_model.py", line 42, in <module>
AssertionError: ('shape mismatch at', 'model.diffusion_model.input_blocks.1.1.proj_in.weight', [320, 320], (320, 320, 1, 1))
[27993] Failed to execute script 'convert_model' due to unhandled exception!

Trying to load the new Stable Diffusion model as a Custom Model.

More info:

  • Apple M1 Pro
  • 16 GB
  • MacOS 13.0.1 (22A400)
  • DiffusionBee Version 1.5.1 (0016)

Has anyone seen this, and is there a workaround? Or is this a known problem that requires code changes to fully support this model?

Outpainting only works outside the bounds of the PNG image, not in its transparent areas. Could this be fixed?

Thanks for making a really nice distribution for this, very well done! Looking forward to seeing what happens in future updates.

I noticed in 0.3.0 that image history is saved in "~/.diffusionbee/images" in the home directory, but many users may not know this. As they use SD over time, this directory will keep growing.

I would suggest that pressing "delete" on an image in the "History" tab also deletes the corresponding files in this folder. Alternatively, you may want to place the images in subdirectories named after the date they were created, so it's easier to find and delete old content.
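A minimal sketch of the suggested cleanup, assuming files live flat under ~/.diffusionbee/images and the History tab knows each entry's file name (the function name and layout are assumptions, not the app's actual code):

```python
from pathlib import Path

def delete_history_image(name, root=None):
    """Remove the on-disk file backing a deleted History entry."""
    root = Path(root) if root else Path.home() / ".diffusionbee" / "images"
    target = root / name
    if target.is_file():
        target.unlink()
        return True
    return False
```

The boolean return lets the UI report whether the file was actually found, which helps if the folder was already cleaned out by hand.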

Would it be possible to add negative prompts to the inpainting feature? 🙏


I have an Intel Mac: a 2009/2010 Mac Pro running 10.14.6, with 2 x 3.46 GHz 6-core Intel Xeon CPUs, 64 GB of 1333 MHz DDR3 RAM, a Radeon RX 580 8GB, etc. The download page ( https://diffusionbee.com/download ) states that it is "Good with any Intel based Mac", and I thought: heck yeah, mine should rock! But it doesn't meet the minimum OS requirement (12.3). Could this possibly be better reflected on the download page?

Thanks much!

I noticed tqdm progress bars in the logs:


They pollute the logs by being too noisy. I had similar issues with my own applications and created the tqdm-loggable package to address them.

It detects non-interactive terminals and turns tqdm bars into logging output.


The integration is just a change to the tqdm import.

If you are interested in fixing this issue, I can see about getting the code patched.
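For reference, the integration really is just the import line; here is a minimal sketch (the fallbacks are only there so the snippet runs even where the packages are missing, and `sample_steps` is a made-up stand-in for a sampling loop):

```python
# Drop-in replacement: only the import changes, call sites stay the same.
try:
    from tqdm_loggable.auto import tqdm  # emits logging output on non-TTYs
except ImportError:
    try:
        from tqdm import tqdm            # plain tqdm as a stand-in
    except ImportError:
        def tqdm(iterable, **kwargs):    # no-op fallback so the sketch still runs
            return iterable

def sample_steps(n):
    total = 0
    for i in tqdm(range(n), desc="sampling"):
        total += i
    return total

print(sample_steps(5))  # → 10
```

On an interactive terminal the bar renders as usual; in captured logs it becomes periodic log lines instead of carriage-return spam.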

Version 1.5.1 (0016)

I tried a few models and it produced nothing. Could there be something I'm doing that causes it to fail?

Error Traceback (most recent call last):
File "convert_model.py", line 42, in <module>
AssertionError: ('shape mismatch at', 'model.diffusion_model.input_blocks.1.1.proj_in.weight', [320, 320], (320, 320, 1, 1))
[3356] Failed to execute script 'convert_model' due to unhandled exception!

Dreambooth uses 4-10 images to train a special instance identity so you can get the same subject over and over in different scenarios. A Dreambooth implementation of Stable Diffusion has already been brought to the Mac. It would be awesome to see it incorporated into DiffusionBee.

Apple has just released Stable Diffusion for Mac with better performance: https://github.com/apple/ml-stable-diffusion



Error when loading 768-v-ema.ckpt.

It seems that a few pixels stay white during diffusion.

Please see the screenshot and screen recording below
[Screenshot 2022-12-02 19:41:05]


It would be nice if I could break my dependency on my manual install. One feature that is super useful is being able to manually batch images for img2img/inpaint to create animations. If DiffusionBee let me select or drop a folder and then process the images sequentially, with or without mask images, that would be really great. Thanks for making this sweet GUI for SD.
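The requested flow could look something like this sketch, where `run_img2img` stands in for DiffusionBee's internal pipeline (an assumption, not its real API) and masks are matched to their images by a `_mask` filename suffix:

```python
from pathlib import Path

def batch_img2img(folder, run_img2img, mask_suffix="_mask"):
    """Process every PNG in `folder` sequentially, pairing optional masks."""
    outputs = []
    for img in sorted(Path(folder).glob("*.png")):
        if img.stem.endswith(mask_suffix):
            continue  # mask files are picked up alongside their image
        mask = img.with_name(img.stem + mask_suffix + img.suffix)
        outputs.append(run_img2img(img, mask if mask.is_file() else None))
    return outputs
```

Sorting the file list keeps frames in order, which matters for the animation use case.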

I tried a text-to-image prompt using the standard image settings; the image generated, but it is just a solid black PNG. Let me know where to find the logs and I will upload them. 14" MacBook Pro M1 with 16 GB of RAM.

If I understand correctly (please correct me if I am wrong), the app currently runs Python in the background, which is very bad from a performance perspective.

I would suggest converting some of the models (e.g. the PyTorch ones), or even all of them, to Core ML and using the more native implementation on an M1/M2 Mac. I hope this would make better use of the Neural Engine.
Any comments from the experts?

Right now, to mask off large areas you have to paint out the entire area. Bucket infill would be really great for filling larger areas: circle the area with the brush, then select the bucket and infill. Click, click, done. Thanks again.

I remember using the TensorFlow version of DiffusionBee on Mac, and when copy-pasting prompts from my history, I somehow kept getting "Prompt Too Long".

My feeling is that when copy-pasting, I also paste some hidden characters. Try copy-pasting the prompt into Notes and you can see the issue.

It might be a bug.

NOTE: The prompt text field is also too small.

I'm looking for a way to prevent my Mac from sleeping while DiffusionBee is actively working, but still let it sleep when DiffusionBee is open and idle. I tried automating this with Amphetamine/caffeinate, but CPU usage is near zero during generation and these tools have no way to use GPU utilization as a trigger.