
hyperfusion_vpred finetune 3.3m images

Versions: v9 vpred, v8, v7
Last Update: 2024-12-16
#Anime
#thighs
#Hyperass
#belly
#pregnancy
#Pregnant
#hyperpreg
#Base Model
#booty
#fat
#chubby
#fetish
#danbooru
#thicc
#bbw
#nipples
#bubble butt
#Vore
#huge ass
#bottom heavy
#navel
#kink
#Breasts
#Ass
#hyper
#huge nipples
#stuffing
#r34
#NSFW
#finetune
#e621
#stuffed
#inflation
#bellybutton
#belly stuffing
#Belly Inflation

This checkpoint was trained on 3.3m images of normal to hyper sized anime characters. It focuses mainly on breasts/ass/belly/thighs, but now handles more general tag topics as well. It's about 50%/50% anime and furry images as of v8. See the changelog article below for more version details and future plans.

Note: This will be my final SD1x model. I wanted to see what the hyperfusion dataset was really capable of on SD1.5, so I let it train on 2x 3090s for 10 months to squeeze every bit of concept knowledge out of it. This is the best concept model I've trained so far, but it still has the usual SD1x jankiness. I'm deciding what to do next, and will update the changelog article when I have concrete plans.

Big shoutout to stuffer.ai for letting me host my model on their site to gather feedback. It was critical for resolving issues with the model early on, and a great way to see what needed improvement over time.


V9 is a v_pred model, so you will need to use the YAML file in A1111, or the vpred node in Comfy, along with cfg_rescale=0.8 in both. A1111 will need the CFG Rescale extension installed.

The OG hyperfusion LoRAs can be found here https://civitai.com/models/16928
There is also a backup HuggingFace link for these models.

I've uploaded the 1.4 million custom tags used in hyperfusion here for integrating into your own datasets.

Changelog Article Link

Recommendations for v9_vpred:

sampler: Anything that is not a Karras sampler. Don't use Karras! Training with --zero_terminal_snr makes those samplers problematic. You will also need to use the uniform scheduler in A1111, or at least the "simple" or "normal" scheduler in Comfy.

negative: I tested each of these tags separately to make sure they had a positive effect:

worst quality, low rating, signature, artist name, artist logo, logo, unfinished, jpeg artifacts, artwork \(traditional\), sketch, horror, mutant, flat color, simple shading

positive: "best quality, high rating" for the base style I trained into this model, more details in Training Data docs

cfg: 7-9

cfg_rescale: 0.8 (CFG rescale is required for this v_pred model)

resolution: 768-1024 (closer to 896 for less body horror)

clip skip: 2

zero_terminal_snr: Enabled

styling: You will want to choose a style first. The default style is pretty meh. Try the new artist tags included in v8+; all tags can be found in tags.csv by searching for "(artist)". See example images for art styles.

Lora/TI: LoRAs trained on other models will not work with this model; even LoRAs trained on other v_pred models are not guaranteed to work here.
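
If you'd rather script generation than use a UI, here is a rough sketch of how those v9 recommendations could map onto diffusers. The checkpoint path and prompts are placeholders, and the scheduler/argument names reflect my understanding of the current diffusers API; this is not something shipped with the model.

```python
# Hypothetical sketch: applying the v9_vpred recommendations with diffusers.
# Paths and prompts are placeholders; argument names are my assumptions about
# the diffusers API, not part of this model's release.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "hyperfusion_v9_vpred.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

# v_pred + zero terminal SNR: switch the scheduler to v_prediction and
# rescale the betas so the noise schedule actually reaches zero SNR.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",
)

image = pipe(
    prompt="best quality, high rating, 1girl",  # placeholder prompt
    negative_prompt="worst quality, low rating, signature, artist name, "
                    "logo, unfinished, jpeg artifacts, sketch, horror, "
                    "mutant, flat color, simple shading",
    width=896, height=896,          # 768-1024, ~896 for less body horror
    guidance_scale=8.0,             # cfg 7-9
    guidance_rescale=0.8,           # cfg_rescale 0.8, required for v_pred
    clip_skip=2,
    num_inference_steps=30,
).images[0]
image.save("sample.png")
```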

Recommendations for v8:

sampler: Anything that is not a Karras sampler. Don't use Karras! Training with --zero_terminal_snr makes those samplers problematic.

Lora/TI: If you are using LoRAs/TIs trained on NovelAI-based models, they might do more harm than good. Try without them first.

negative: low rating, lowres, text, signature, watermark, username, blurry, transparent background, ugly, sketch, unfinished, artwork \(traditional\), multiple views, flat color, simple shading, rough sketch

cfg: 8 (it needs less than LoRA hyperfusion)

resolution: 768-1024 (closer to 768 for less body horror)

clip skip: 2

styling: Try the new artist tags included in v8; all tags can be found in tags.csv by searching for "(artist)".


Tag Info (you definitely want to read the tag docs; see the Training Data section)


Because hyperfusion is a conglomeration of multiple tagging schemes, I've included a tag guide in the training data download section. It will describe the way the tags work (similar to Danbooru tags), which tags the model knows best, and all my custom labeled tags.
For the most part, you can use the majority of tags from Danbooru, Gelbooru, r-34, and e621 related to breasts/ass/belly/thighs/nipples/vore/body_shape.

The best method I have found for tag exploration is going to one of the booru sites above, copying the tags from any image you like, and using them as a base, because there are just too many tags trained into this model to test them all.
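
If you want to script that workflow instead of copy-pasting, here is a small sketch that pulls the tag string from a Danbooru post via its public JSON API and turns it into a comma-separated prompt base. The post ID is a placeholder, and the field names are just how I understand the Danbooru API; other boorus use different endpoints.

```python
# Hypothetical helper: grab the tags from a Danbooru post and use them as a
# prompt base. The post ID is a placeholder; field names follow the public
# Danbooru JSON API as I understand it.
import requests

def danbooru_tags_to_prompt(post_id: int) -> str:
    resp = requests.get(f"https://danbooru.donmai.us/posts/{post_id}.json", timeout=30)
    resp.raise_for_status()
    post = resp.json()
    # "tag_string" is space-separated with underscores; most SD prompts use
    # comma-separated tags with spaces instead.
    tags = post["tag_string"].split()
    return ", ".join(tag.replace("_", " ") for tag in tags)

print(danbooru_tags_to_prompt(1234567))  # placeholder post ID
```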

Tips

  • Because of the size and variety of this dataset, tags tend to behave differently than in most NovelAI-based models. Keep in mind that prompts from other models might need to be tweaked.

  • If you are not getting the results you expect from a tag, find other similar tags and include those as well. I've found that this model tends to spread its knowledge of a tag around to other related tags, so including more will increase your chances of getting what you want.

  • Using the negative "3d" does a good job of making the image more anime-like if it starts veering too much into a rendered model look.

  • Ass related tags have a strong preference for back shots, try a low strength ControlNet pose to correct this, or try one or more of these in the negatives "ass focus, from behind, looking back". The new "ass visible from front" tag can help too.

  • ...more tips in tag docs

Extra


This model took me months of failures and plenty of lessons learned (hence v7)! I would eventually like to train a few more image classifiers to improve certain tags, but that's all future dreams for now.

As usual, I have no intention of monetizing any of my models. Enjoy the thickness!

-Tagging-

The key to tagging a large dataset is to automate it all. I started with the wd-tagger (or a similar danbooru tagger) to append some common tags on top of the original tags. Eventually I added an e621 tagger too, but I generally only tag with a limited set of tags and not the entire tag list (some tags are not accurate enough). Then I trained a handful of image classifiers for breast size, breast shape, innie/outie navel, directionality, motion lines, and about 20 others, and let those tag for me. They not only improve on existing tags, but add completely new concepts to the dataset. Finally, I converted similar tags into one single tag as described in the tag docs (I have since stopped doing this; with 3m images it doesn't matter as much).

Basically any time I find it's hard to prompt for a specific thing, I throw together a new classifier. So far the only ones that don't work well are the ones that try to classify small details in the image, like signatures.

Starting in v9 I will be including ~10% captions alongside the tags. These captions are generated with CogVLM.

I used this to train my image classifiers
https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification

Ideally, I should train a multi-class-per-image classifier like the Danbooru tagger, but for now these single class-per-image classifiers work well enough.
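
As a concrete illustration of how one of those single-class classifiers could be applied, here is a rough sketch that runs a trained image-classification model over a dataset folder and appends the winning label to each image's caption file. The model path, folder layout, and confidence threshold are placeholders, not the actual classifiers or scripts used for this model.

```python
# Hypothetical sketch: apply a single-class-per-image classifier (e.g. one
# trained with the transformers image-classification example above) to a
# dataset folder and append the predicted label as a tag.
# Paths, label names, and the threshold are placeholders.
from pathlib import Path
from transformers import pipeline

classifier = pipeline("image-classification", model="./breast_size_classifier")  # placeholder

dataset_dir = Path("./dataset")  # placeholder: images plus matching .txt caption files
for image_path in sorted(dataset_dir.glob("*.png")):
    prediction = classifier(str(image_path), top_k=1)[0]
    if prediction["score"] < 0.8:          # skip low-confidence predictions
        continue
    caption_path = image_path.with_suffix(".txt")
    existing = caption_path.read_text().strip() if caption_path.exists() else ""
    tags = [t.strip() for t in existing.split(",") if t.strip()]
    if prediction["label"] not in tags:    # avoid duplicate tags
        tags.append(prediction["label"])
    caption_path.write_text(", ".join(tags))
```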

-Software/Hardware-

The training was all done on a 3090 on Ubuntu. The software used is Kohya's trainer, since it currently has the most options to choose from.

Model Details
Type: Checkpoint
Rating: 4.1
Publish Time: 2024-12-16
Base Model: SD 1.5
Model Source: civitai
Version Introduction

This version of hyperfusion was trained on 3.3 million images over 10 months, and is a v_prediction + zero_snr model based on SD1.5.

This version was trained on SD 1.5, so there is no NovelAI influence in this checkpoint.

More image classifiers trained, and existing classifiers improved (list of classified tags under Training Data section)

Training Notes (a rough command-line sketch of these settings follows the list):

  • ~3.3m images

  • LR 4e-6

  • TE_LR 1e-6, dropped to 1e-7 (after epoch 10)

  • batch 8

  • GA 16

  • 2x 3090s, so 2x the base batch size; total virtual batch = 8 * 16 * 2 = 256

  • total images seen: 190_000 steps * 256 ≈ 48_600_000

  • AdamW-8bit (ADOPT for the last epoch as a test)

  • scheduler: linear

  • base model SD1.5

  • No custom VAE; I usually use the original SD1.5 VAE

  • flip aug

  • clip skip 2

  • 525 token length (appending captions + tags made this necessary)

  • bucketing at 768 max 1024

    • bucket resolution steps 32 for more buckets

    • trained at 768 for the first 10 epochs, and 1024 for the last 6

  • tag drop chance 0.15

  • caption_dropout 0.1

  • tag shuffling

  • --min_snr_gamma 3

  • --ip_noise_gamma 0.02

  • --zero_terminal_snr

  • about 10 months training time
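
For reference, the notes above could roughly translate into a kohya sd-scripts run like the sketch below. This is a reconstruction, not the actual training command: paths are placeholders, the flag set is my reading of stock sd-scripts (the custom patches listed in the next section are not covered by it), and multi-GPU launching is omitted.

```python
# Hypothetical reconstruction of the training run using kohya sd-scripts'
# fine_tune.py. Paths are placeholders; flag names are my assumptions about
# stock sd-scripts and may differ from the exact version or fork used here.
import subprocess

args = [
    "python", "fine_tune.py",
    "--pretrained_model_name_or_path", "sd-v1-5.safetensors",  # placeholder
    "--train_data_dir", "./dataset",                            # placeholder
    "--in_json", "./metadata.json",                             # placeholder
    "--output_dir", "./output",
    "--v_parameterization", "--zero_terminal_snr",
    "--learning_rate", "4e-6",
    "--train_text_encoder", "--learning_rate_te", "1e-6",
    "--train_batch_size", "8",
    "--gradient_accumulation_steps", "16",
    "--optimizer_type", "AdamW8bit",
    "--lr_scheduler", "linear",
    "--clip_skip", "2",
    "--flip_aug",
    "--shuffle_caption",
    "--caption_dropout_rate", "0.1",
    "--caption_tag_dropout_rate", "0.15",
    "--resolution", "768,768",        # 1024,1024 for the later epochs
    "--enable_bucket",
    "--max_bucket_reso", "1024",
    "--bucket_reso_steps", "32",
    "--min_snr_gamma", "3",
    "--ip_noise_gamma", "0.02",
    "--mixed_precision", "fp16",
]
subprocess.run(args, check=True)
```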

Custom training configs:

I have implemented a number of things into Kohya's training code that have been suggested to improve training, and kept the things that seemed to make improvements.

  • drop out 75% of tags 5% of the time to hopefully improve short tag length results

  • soft_min_snr instead of min_snr

  • --no_flip_when_cap_matches: Prevent flipping images when certain tags exist, like "sequence, asymmetrical, before and after, text on*, written, speech bubble", etc. This should help with text and with characters that have asymmetrical features.

  • --important_tags: move important tags to the beginning of the list, and sort them separately from the unimportant ones (suggested from NovelAI if I remember correctly).

  • --tag_implication_dropout: Drop out similar tags to prevent the model from requiring both to be present when generating. For example, with "breasts, big breasts", "breasts" will be dropped out 30-50% of the time. I used the tag implications csv from e621 as a base and added tags as needed. Even with 10%-15% tag dropout, some tag pairs were still being associated too often, so this definitely made a difference. I think there were about 5k tags in total on the dropout list (see the rough sketch after this list).

  • 12% of the dataset is captioned with CogVLM; many of the captions were also cleaned up with custom scripts that correct common problems.

  • Tags vs Captions: 70% of the time use tags, ~20% of the time use captions (if they exist), 10% of the time combine tags with captions in different orders.
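
To make the implication-dropout idea concrete, here is a small sketch of the kind of logic involved. The implication table and dropout rate are illustrative only; this is not the actual patch to the training code.

```python
# Hypothetical sketch of tag implication dropout: when a more specific tag is
# present, sometimes drop the general tag it implies so the model doesn't learn
# to require both. Table and rate are illustrative, not the real dropout list.
import random

# specific tag -> general tag it implies (in practice built from e621's
# tag implications csv plus hand-added pairs)
TAG_IMPLICATIONS = {
    "huge breasts": "breasts",
    "big breasts": "breasts",
    "huge ass": "ass",
}

def apply_implication_dropout(tags: list[str], drop_rate: float = 0.4) -> list[str]:
    # general tags that are implied by a specific tag also present in the list
    implied = {
        TAG_IMPLICATIONS[t]
        for t in tags
        if t in TAG_IMPLICATIONS and TAG_IMPLICATIONS[t] in tags
    }
    # drop each implied general tag with probability drop_rate
    return [t for t in tags if not (t in implied and random.random() < drop_rate)]

print(apply_implication_dropout(["1girl", "breasts", "huge breasts"]))
```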

If I remember more custom changes, I'll add them later.

License Scope
Creative License Scope: Online Image Generation, Merge, Allow Downloads
Commercial License Scope: Sale or Commercial Use of Generated Images, Resale of Models or Their Sale After Merging