flesler
Repos
60
Followers
416
Following
3

Lightweight, cross-browser and highly customizable animated scrolling with jQuery

3633
1003

HashMap JavaScript class for Node.js and the browser. The keys can be anything and won't be stringified

382
67

A connectivity monitoring app, built on Electron

Animated anchor navigation made easy with jQuery

620
199

jQuery JavaScript Library

57152
20142

Web-based Rock-Paper-Scissors vs. an LSTM Neural Network

Events

Fails to download the model from Huggingface

You can create the file as described there, or add a cell that has:

token = '...your token...'

and run it before downloading the model
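
If you go the file route instead, a minimal sketch of a cell that writes the token where the hub usually looks for it (the path is the hub's legacy default; adjust if yours differs):

from pathlib import Path

token = '...your token...'  # placeholder, paste your HF token
token_file = Path('~/.huggingface/token').expanduser()
token_file.parent.mkdir(parents=True, exist_ok=True)
token_file.write_text(token)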

Created at 1 week ago
Fails to download the model from Huggingface

Yeah, I noticed that and patched the code for myself. Having it in the notebook was way more practical IMO. Even if that gets fixed, the way it fails right now is super cryptic, rather than checking whether the token is missing when needed and showing a clear error message.

Created at 1 week ago
Fails to download the model from Huggingface

Fixed it on my end with a quick, ugly patch:

origin = f"https://USER:{token}@huggingface.co/{Path_to_HuggingFace}"
!git remote add -f origin $origin
Created at 1 week ago
Save Checkpoint every n steps fails to save file

Gave it a quick try, couldn't even get there. Reported #1436

Created at 1 week ago
Fails to download the model from Huggingface

[screenshot attachment]

Created at 1 week ago
Save Checkpoint every n steps fails to save file

Is it working right now? If so, do the intermediate versions get saved or only the last?

Created at 1 week ago
Save Checkpoint every n steps fails to save file

[screenshot of the error]

Made me waste a lot of time updating, and then this.

Created at 2 weeks ago
Save Checkpoint every n steps fails to save file

And if you're not using Colab Pro, bro? You just broke it for everyone using free Colab, without warning?

Created at 2 weeks ago
Training on a custom (huggingface) model is broken

The V3 alpha isn't. That's the one I'm REALLY looking to try :)

Created at 2 weeks ago
Training on a custom (huggingface) model is broken

Got this error trying a ckpt from civitai (it's not on huggingface). I downloaded it to GDrive first and set the path to it:

Here's the ckpt URL: https://civitai.com/api/download/models/1292?type=Model&format=PickleTensor
This is the model's page (V3 Alpha): https://civitai.com/models/1102/synthwavepunk

Converting to Diffusers ...
Traceback (most recent call last):
  File "/content/convertodiff.py", line 1115, in <module>
    convert(args)
  File "/content/convertodiff.py", line 1066, in convert
    text_encoder, vae, unet = load_models_from_stable_diffusion_checkpoint(v2_model, args.model_to_load)
  File "/content/convertodiff.py", line 847, in load_models_from_stable_diffusion_checkpoint
    info = unet.load_state_dict(converted_unet_checkpoint)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
	Missing key(s) in state_dict: "up_blocks.0.upsamplers.0.conv.weight", "up_blocks.0.upsamplers.0.conv.bias", "up_blocks.1.upsamplers.0.conv.weight", "up_blocks.1.upsamplers.0.conv.bias", "up_blocks.2.upsamplers.0.conv.weight", "up_blocks.2.upsamplers.0.conv.bias". 
	Unexpected key(s) in state_dict: "up_blocks.0.attentions.2.conv.bias", "up_blocks.0.attentions.2.conv.weight". 
rm: cannot remove '/content/stable-diffusion-custom': No such file or directory

Any idea @TheLastBen ?

Created at 2 weeks ago
Training on a custom (huggingface) model is broken

That one shouldn't work, it's fp16. Protogen worked for me by using the Hugging Face path instead (darkstorm2150/Protogen_v2.2_Official_Release).

Created at 2 weeks ago
Training on a custom (huggingface) model is broken

Protogen 3.whatever worked for me using the huggingface path, both times I tried. There are clearly many different errors going on here; maybe this needs a separate ticket for each, or whatever.

Created at 3 weeks ago
Training on a custom (huggingface) model is broken

BTW, I see the notebook replaces the VAE with 1.5's for custom models. Don't they sometimes have their own custom VAE (and rarely none at all) and actually count on it not changing?
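
What I'd expect instead, roughly (a sketch using the diffusers AutoencoderKL API the trainer already calls; the fallback repo id is my assumption):

import os
from diffusers import AutoencoderKL

model_dir = '/content/stable-diffusion-custom'
if os.path.isdir(os.path.join(model_dir, 'vae')):
    # Keep the custom model's own VAE when it ships one
    vae = AutoencoderKL.from_pretrained(model_dir, subfolder='vae')
else:
    # Only then fall back to 1.5's VAE (assumed repo id)
    vae = AutoencoderKL.from_pretrained('runwayml/stable-diffusion-v1-5', subfolder='vae')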

Created at 3 weeks ago
Training on a custom (huggingface) model is broken

@TheLastBen For the fp16 bit, is it possible to know without downloading? I expected the size to reflect it (like ~2GB for fp16, >4GB otherwise), but both the .bin and the .ckpt are ~2GB, and yet the ckpt is fine.

Also, if not, would it be easy to check for fp16 at the download stage? That way we don't get all the way to the bottom and create useless directories before realizing it's wrong.
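
Something like this is what I have in mind (my own sketch, not the notebook's code), assuming the file is a torch-loadable .ckpt:

import torch

def looks_fp16(ckpt_path):
    # Load on CPU and inspect the stored dtypes; fp16 checkpoints keep float16 weights
    sd = torch.load(ckpt_path, map_location='cpu')
    sd = sd.get('state_dict', sd)  # some checkpoints nest weights under 'state_dict'
    return any(torch.is_tensor(t) and t.dtype == torch.float16 for t in sd.values())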

Created at 3 weeks ago
Training on a custom (huggingface) model is broken

It blows up with this right now, first at the text encoder and then at the UNet:

File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 852, in <module>
    main()
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 726, in main
    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 920, in clip_grad_norm_
    self.unscale_gradients()
  File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 904, in unscale_gradients
    self.scaler.unscale_(opt)
  File "/usr/local/lib/python3.8/dist-packages/torch/cuda/amp/grad_scaler.py", line 282, in unscale_
    optimizer_state["found_inf_per_device"] = self._unscale_grads_(optimizer, inv_scale, found_inf, False)
  File "/usr/local/lib/python3.8/dist-packages/torch/cuda/amp/grad_scaler.py", line 210, in _unscale_grads_
    raise ValueError("Attempting to unscale FP16 gradients.")
ValueError: Attempting to unscale FP16 gradients.
  0% 0/4999 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--stop_text_encoder_training=300', '--image_captions_filename', '--train_only_unet', '--save_starting_step=2000', '--save_n_steps=1000', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v2-photoreal2.0', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v2-photoreal2.0/instance_images', '--output_dir=/content/models/jmilei-v2-photoreal2.0', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v2-photoreal2.0/captions', '--instance_prompt=', '--seed=48872', '--resolution=768', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--use_8bit_adam', '--learning_rate=4e-06', '--lr_scheduler=polynomial', '--lr_warmup_steps=0', '--max_train_steps=4999']' returned non-zero exit status 1.
Something went wrong

Different error, though. Using Path_to_HuggingFace: dreamlike-art/dreamlike-photoreal-2.0, with 768x768 images (chosen in both places). It fails somewhere in the text encoding.
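
FWIW, "Attempting to unscale FP16 gradients" usually means the trainable weights themselves are fp16; --mixed_precision=fp16 only covers the autocast, while the master weights need to stay fp32. A sketch of the kind of guard I mean (my guess at the cause, not a confirmed fix):

import torch
from diffusers import UNet2DConditionModel

# Force fp32 parameters so GradScaler can unscale the gradients (sketch)
unet = UNet2DConditionModel.from_pretrained(
    '/content/stable-diffusion-custom', subfolder='unet', torch_dtype=torch.float32
)
assert next(unet.parameters()).dtype == torch.float32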

Created at 3 weeks ago
Training on a custom (huggingface) model is broken

I'm doing a test run right now with the latest version

Created at 3 weeks ago
Training on a custom (huggingface) model is broken

> some models require a specific python version, but most of them work fine, do you have a link to the model?

This one reproduced it for me @TheLastBen https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0

Created at 3 weeks ago
Training on a custom (huggingface) model is broken

I tried several different base models based on 1.5. I pasted the following into Path_to_HuggingFace (no path or link), with 1.5 selected as the custom model version:

  • darkstorm2150/Protogen_v5.3_Official_Release
  • 22h/vintedois-diffusion-v0-1
  • dreamlike-art/dreamlike-photoreal-2.0
  • devilkkw/KKW_FANTAREAL_V1.0

All of them crash when it gets to training the UNet; I get:

Training the UNet...
Traceback (most recent call last):
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 852, in <module>
    main()
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 522, in main
    vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
  File "/usr/local/lib/python3.8/dist-packages/diffusers/modeling_utils.py", line 388, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named diffusion_pytorch_model.bin found in directory /content/stable-diffusion-custom.
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--stop_text_encoder_training=300', '--image_captions_filename', '--train_only_unet', '--save_starting_step=1000', '--save_n_steps=1000', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-photoreal', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-photoreal/instance_images', '--output_dir=/content/models/jmilei-photoreal', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-photoreal/captions', '--instance_prompt=', '--seed=643601', '--resolution=768', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--use_8bit_adam', '--learning_rate=3e-06', '--lr_scheduler=polynomial', '--lr_warmup_steps=0', '--max_train_steps=4999']' returned non-zero exit status 1.

I tried to patch it by copying the contents of /unet/ into the parent directory, as it expected. I then got this other error and rage-quit:

Training the UNet...
Traceback (most recent call last):
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 852, in <module>
    main()
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 522, in main
    vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
  File "/usr/local/lib/python3.8/dist-packages/diffusers/modeling_utils.py", line 451, in from_pretrained
    model, unused_kwargs = cls.from_config(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/configuration_utils.py", line 202, in from_config
    model = cls(**init_dict)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/configuration_utils.py", line 516, in inner_init
    init(self, *args, **init_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/vae.py", line 544, in __init__
    self.encoder = Encoder(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/vae.py", line 94, in __init__
    down_block = get_down_block(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_blocks.py", line 67, in get_down_block
    raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock2D")
ValueError: cross_attention_dim must be specified for CrossAttnDownBlock2D
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--stop_text_encoder_training=300', '--image_captions_filename', '--train_only_unet', '--save_starting_step=1000', '--save_n_steps=1000', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v3-protogen5.3', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v3-protogen5.3/instance_images', '--output_dir=/content/models/jmilei-v3-protogen5.3', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/jmilei-v3-protogen5.3/captions', '--instance_prompt=', '--seed=425318', '--resolution=512', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--use_8bit_adam', '--learning_rate=4e-06', '--lr_scheduler=polynomial', '--lr_warmup_steps=0', '--max_train_steps=4999']' returned non-zero exit status 1.

Created at 3 weeks ago
[Feature Request] Greater efficiency when using "Steps" in X/Y plot

@AUTOMATIC1111 The UI has a setting to enable live preview every X steps. Isn't this basically the same thing? Does enabling the partial previews change the final result?

Created at 3 weeks ago
'type' object is not subscriptable

Seems to be fixed now 👍

Created at 4 weeks ago
Preserve dashes and spaces in filenames when using them for prompts

I realized this would have to go into diffusers, so I created a quick PR.

Created at 4 weeks ago
pull request opened
Remove spaces, parens and numbers from prompts only when trailing

Initially reported here https://github.com/TheLastBen/fast-stable-diffusion/issues/1174#issuecomment-1366085743

Created at 4 weeks ago

'type' object is not subscriptable

Describe the bug: Getting this error on every run

Error running process: /content/stable-diffusion-webui/extensions/sd-dynamic-prompts/scripts/dynamic_prompting.py
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/scripts.py", line 338, in process
    script.process(p, *script_args)
  File "/content/stable-diffusion-webui/extensions/sd-dynamic-prompts/scripts/dynamic_prompting.py", line 412, in process
    all_prompts = generator.generate(num_images)
  File "/content/stable-diffusion-webui/extensions/sd-dynamic-prompts/prompts/generators/randomprompt.py", line 231, in generate
    prompts = self._generator.generate_prompts(self._template, max_prompts)
  File "/content/stable-diffusion-webui/extensions/sd-dynamic-prompts/prompts/parser/random_generator.py", line 117, in generate_prompts
    tokens = cast(list[Command], tokens)
TypeError: 'type' object is not subscriptable

To Reproduce: I ran this on a Google Colab. Install the add-on, run A1111, and it just happens. If the stack trace is not enough, let me know and I'll describe more about it.
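
For context, my read of the trace: list[Command] subscripting only works on Python 3.9+, and this Colab runtime is 3.8, so the typing module's generics would be needed, e.g.:

from typing import List, cast

def fixed(tokens):
    # 3.8-compatible version of the failing line ('Command' kept as a forward reference):
    #     cast(list[Command], tokens)  # TypeError on 3.8: 'type' object is not subscriptable
    return cast(List['Command'], tokens)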

Created at 1 month ago
Preserve dashes and spaces in filenames when using them for prompts

You could replace /[0-9 ()]+$/ with an empty string. I also mentioned on Reddit that numbers in the middle of the prompt are removed, which... is also very unexpected IMO.
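
i.e., something like this (a sketch; the function name is mine):

import re

def strip_trailing(stem):
    # Drop digits, spaces and parens only when trailing: "blue t-shirt (3)" -> "blue t-shirt"
    return re.sub(r'[0-9 ()]+$', '', stem)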

Created at 1 month ago
Preserve dashes and spaces in filenames when using them for prompts

> I added a manual captioning feature with which you can either use an existing txt file or create one that contains the captions.

Nice feature, but that's a ton of work. The images are already captioned; this is just a request to preserve them.

> also, using class names like "t-shirt" will make the model converge too fast and cause extravagant overfitting, when using captions, I recommend using proper names, like the brand name and let the model automatically assign the subject to it.

Not sure if you mean using that word as the whole prompt. I meant "someStyle a man wearing a blue t-shirt". You can find many high-profile models that are trained with a dash in the prompt, like "wa-vyormdjrny-v4 style"; it's the logical character when underscore is not an option. Not sure what the downside of making the change would be.

> when using captioning, the text encoder steps will become the number of steps to use the captions, after that, the captions will be disabled and falls back to the filename, this will prevent overfitting.

That's... using the new manual captioning?

Created at 1 month ago
Preserve dashes and spaces in filenames when using them for prompts

Many words naturally have dashes, like t-shirt or see-through. They are getting removed, changing prompts incorrectly. Also, it'd be nice to get back the space-to-underscore pre-processing, so the original filenames can keep spaces, which is more readable.
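
Roughly the pre-processing I'm asking to bring back (my sketch):

def filename_to_prompt(stem):
    # Map underscores back to spaces but leave dashes alone:
    # "a_man_wearing_a_blue_t-shirt" -> "a man wearing a blue t-shirt"
    return stem.replace('_', ' ')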

Created at 1 month ago