Saves Quantized Model to disk for loading next time via "--int8" #61


Open · wants to merge 5 commits into base: main
Conversation

petermg

@petermg petermg commented May 24, 2025

Saves the quantized model to disk when running `python app.py --int8`; on subsequent launches it loads the quantized model from disk, so it no longer has to re-quantize on every launch. You still need to pass `--int8` to tell it to use the quantized version.
Also exposed a few other options in the UI.

Modified from the original:

- Saves quantized models to disk.
- Loads quantized models from disk if found, so there is no need to quantize on every run.
- Support for LoRAs.
- Added the ability to specify the number of images to generate per run.
- Exposed and/or added the following options in the UI: "Face Upscale Factor", "Face Crop Size", "resolution for ref image", "Neg Prompt", and some others that were previously hidden in the "Advanced Options" accordion.
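The quantized-model caching described above follows a simple pattern: quantize once, write the result to disk, and on later launches load the cached copy instead of re-quantizing. A minimal sketch of that pattern, with `quantize_model` and the cache filename as hypothetical stand-ins for the project's actual quantization routine and path:

```python
import os
import pickle

# Hypothetical cache path; the PR's actual filename may differ.
QUANT_CACHE = "model_int8.pkl"

def quantize_model(model):
    # Placeholder for the real int8 quantization step.
    return {k: int(v) for k, v in model.items()}

def load_or_quantize(model, use_int8, cache_path=QUANT_CACHE):
    """Return the int8 model, quantizing and caching it only on the first run."""
    if not use_int8:
        return model  # --int8 not passed: use the full-precision model
    if os.path.exists(cache_path):
        # Cache hit: skip the expensive quantization step entirely.
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    # Cache miss: quantize once and persist the result for the next launch.
    quantized = quantize_model(model)
    with open(cache_path, "wb") as f:
        pickle.dump(quantized, f)
    return quantized
```

Note that even with a cached copy on disk, the caller still has to opt in (here, `use_int8=True`), matching the PR's behavior of still requiring the `--int8` flag.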