pbakaus
Events

issue comment
fix img2img by working around pytorch bug

Awesome, any workaround is better than the current state (img2img / canvas being broken)!

Curious - what's the expected performance hit for this? How much of the pipeline does this MPS->CPU change represent?

Created at 2 days ago
issue comment
[bug]: img2img (and inpainting) produces only terrible noise on Mac

@damian0815 running into the exact same issue. How did you install the patched pytorch? Tried today but ran into too many hiccups, getting a lot of package inconsistency warnings in conda (granted, I have little idea what I'm doing).

Created at 3 days ago
Feature request: Ability to use wildcards in negative prompt field

Maybe this already works but I couldn't get it to work as expected. I tried to create a folder named negative and put a photo.txt into it. Then, I simply put the following into the negative prompt:

Negative prompt:

__negative/photo__

Unfortunately, looking at the metadata, the field was simply left untouched by the extension. Not sure if this was intended to work and this is technically a bug report, or whether it never worked in the first place, in which case consider this a feature request :)
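For reference, here is roughly how I'd expect the lookup to work. This is only a sketch, not the extension's actual code, and the throwaway folder below stands in for the real wildcards directory:

```python
import random
import tempfile
from pathlib import Path

def resolve_wildcard(token: str, root: Path, rng: random.Random) -> str:
    # Map __negative/photo__ -> <root>/negative/photo.txt,
    # then pick one random non-empty line from that file.
    name = token.strip("_")
    lines = (root / name).with_suffix(".txt").read_text().splitlines()
    return rng.choice([ln for ln in lines if ln.strip()])

# Demo with a throwaway wildcard folder mirroring the layout described above.
root = Path(tempfile.mkdtemp())
(root / "negative").mkdir()
(root / "negative" / "photo.txt").write_text("blurry\njpeg artifacts\nwatermark\n")
print(resolve_wildcard("__negative/photo__", root, random.Random(0)))
```

If that matches the intended behaviour, then the negative prompt field is apparently never being passed through this resolution step at all.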

Created at 1 month ago
Bug with wildcard file + combination

Describe the bug

This used to work:

{__wildcard1__|__attr1__ __wildcard2__}

but now, only one wildcard include can appear per segment inside the {} brackets. It's the __attr1__ __wildcard2__ segment that breaks it. In the broken state, the process throws an error and all wildcards are ignored. The following works:

__attr1__ {__wildcard1__|__wildcard2__}

Pretty sure I never ran into problems with that until recently.

To Reproduce

Steps to reproduce the behaviour:

  1. Create three wildcard files with the right names
  2. Put {__wildcard1__|__attr1__ __wildcard2__} into the prompt
  3. Observe the command-line error

Happens with default configuration for me (no advanced settings used).

Expected behaviour

One should be able to use unlimited wildcard inclusions inside an "OR" selector.
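To make the expectation concrete, here is a minimal stand-in expander (pure Python, illustrative only; the extension's real parser is more involved, and the wildcard contents below are made up):

```python
import random
import re

# Made-up stand-ins for the three wildcard files from the repro steps.
WILDCARDS = {
    "wildcard1": ["castle", "forest"],
    "attr1": ["ornate", "rustic"],
    "wildcard2": ["bridge", "tower"],
}

def expand(prompt: str, rng: random.Random) -> str:
    # 1. Resolve {a|b|c} variant groups by picking one segment...
    def pick_variant(m: re.Match) -> str:
        return rng.choice(m.group(1).split("|"))
    prompt = re.sub(r"\{([^{}]+)\}", pick_variant, prompt)
    # 2. ...then resolve __name__ includes; a segment may contain several.
    def pick_wildcard(m: re.Match) -> str:
        return rng.choice(WILDCARDS[m.group(1)])
    return re.sub(r"__([\w/]+)__", pick_wildcard, prompt)

# Both the broken and the working prompt expand fine in this ordering:
print(expand("{__wildcard1__|__attr1__ __wildcard2__}", random.Random(0)))
print(expand("__attr1__ {__wildcard1__|__wildcard2__}", random.Random(0)))
```

The point is the ordering: variant selection first, wildcard substitution second, with no limit on how many includes a segment contains.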

Created at 1 month ago
Feature request: Jinja2 variable or global way to act based on current model

aaaah perfect. Thank you, will give this a try! Feel free to close this issue.

Created at 1 month ago
Feature request: Jinja2 variable or global way to act based on current model

Very commonly, the tags you would use for e.g. a Danbooru model vs vanilla 1.5 model are completely different. One option is to simply create different wildcard files for each, but it makes organization somewhat difficult. It would be pretty handy to be able to switch dynamically based on which model is currently active.

For Jinja2, I imagine it could look something like this:

{% if model == "waifu.cpt" %}
    1girl
    2girls
    ...
{% else %}
    one woman
    two women
{% endif %}

Even better would be a general syntax that makes it easy to prefix a line in a wildcard file so it gets ignored if it doesn't match a certain model, maybe like so (imagine this is a file called styles.txt):

{@sd-v1-4,v1-5-pruned-emaonly@} half body portrait
{@arcane*, comic*@} high detail illustration
# alternate hash based lookup
{@318a302e@} high detail illustration

Obviously this is all pseudo syntax quickly hacked together, but regardless of the exact mechanism, being able to match on the model would be amazing!
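That prefix idea could be sketched as a simple line filter, assuming glob-style matching on the model name (everything here is illustrative; none of it exists in the extension today):

```python
import fnmatch
import re

# Made-up styles.txt contents using the {@...@} pseudo syntax from above.
STYLES_TXT = """\
{@sd-v1-4,v1-5-pruned-emaonly@} half body portrait
{@arcane*, comic*@} high detail illustration
no prefix, always active
"""

PREFIX = re.compile(r"^\{@([^@]+)@\}\s*(.*)$")

def active_lines(text: str, model: str) -> list[str]:
    # Keep unprefixed lines; keep prefixed lines only if one of the
    # comma-separated glob patterns matches the active model name.
    out = []
    for line in text.splitlines():
        m = PREFIX.match(line)
        if m is None:
            out.append(line)
        elif any(fnmatch.fnmatch(model, p.strip()) for p in m.group(1).split(",")):
            out.append(m.group(2))
    return out

print(active_lines(STYLES_TXT, "v1-5-pruned-emaonly"))
print(active_lines(STYLES_TXT, "arcane-diffusion"))
```

The hash-based variant would just add hashes as extra patterns to match against; the filtering logic stays the same.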

Created at 1 month ago
issue comment
[enhancement]: Integrate Apple's CoreML optimizations for SD

Just looked into this a little. It looks really promising, but there are some issues as of now:

  • Models have to be converted in order to work. Not a big deal, but worth calling out.
  • Converted models do not seem to accept any width/height input other than 512x512 (or whatever the model was originally trained on; see https://github.com/apple/ml-stable-diffusion/blob/main/python_coreml_stable_diffusion/pipeline.py#L225). That looks like a blocker until they fix it.
  • The Python generation pipeline currently has to load the model from scratch every time (2-3 mins!) and is unable to cache it. Their FAQ describes this in more detail. There's a Swift pipeline that can avoid it, but I'm not sure that helps here.

I'd love to see this in action here. Hope my digging helps a bit, and I hope their repo advances quickly to fix these shortcomings.

Created at 2 months ago
[Bug]: Having an "open_clip.transformer" error after updating webui.

nvm - found the other bug report, and some instructions that worked.

Created at 2 months ago
[Bug]: Having an "open_clip.transformer" error after updating webui.

Re-installing on a mac, running into a similar variant of this issue:

Traceback (most recent call last):
  File "/Users/paulbakaus/code/stable-diffusion-webui/webui.py", line 14, in <module>
    from modules import shared, devices, sd_samplers, upscaler, extensions, localization, ui_tempdir
  File "/Users/paulbakaus/code/stable-diffusion-webui/modules/sd_samplers.py", line 11, in <module>
    from modules import prompt_parser, devices, processing, images
  File "/Users/paulbakaus/code/stable-diffusion-webui/modules/processing.py", line 15, in <module>
    import modules.sd_hijack
  File "/Users/paulbakaus/code/stable-diffusion-webui/modules/sd_hijack.py", line 14, in <module>
    from modules import sd_hijack_clip, sd_hijack_open_clip
  File "/Users/paulbakaus/code/stable-diffusion-webui/modules/sd_hijack_open_clip.py", line 1, in <module>
    import open_clip.tokenizer
ModuleNotFoundError: No module named 'open_clip'
Created at 2 months ago