# Lama-cleaner: Image inpainting tool powered by SOTA AI model
https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4
- Support multiple model architectures
- Support CPU & GPU
- High-resolution support
- Run as a desktop app
- Multi-stroke support: press and hold the `cmd`/`ctrl` key to enable multi-stroke mode
- Zoom & pan
## Install

```bash
pip install lama-cleaner
lama-cleaner --device=cpu --port=8080
```
Available commands:
| Name | Description | Default |
|---|---|---|
| --model | lama or ldm. See details in Model Comparison | lama |
| --device | cuda or cpu | cuda |
| --gui | Launch lama-cleaner as a desktop application | |
| --gui_size | Set the window size for the application | 1200 900 |
| --input | Path to image you want to load by default | None |
| --port | Port for flask web server | 8080 |
| --debug | Enable debug mode for flask web server | |
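When the same launch script runs on both GPU and CPU machines, the `--device` value can be picked at runtime. This is only a sketch: probing CUDA availability via `nvidia-smi` is an assumption about the host, not something lama-cleaner provides, and the last line echoes the command instead of running it:

```bash
#!/bin/sh
# Pick a --device value automatically: cuda when an NVIDIA driver is visible,
# cpu otherwise. Probing with nvidia-smi is an assumption about the host.
if command -v nvidia-smi >/dev/null 2>&1; then
  DEVICE=cuda
else
  DEVICE=cpu
fi

# Echo the launch command; replace `echo` with the real invocation to run it.
echo lama-cleaner --device="$DEVICE" --port=8080
```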
## Model Comparison

The diffusion model (ldm) is much slower than the GAN model (lama): a 1080x720 image takes about 8 s on a 3090. However, it can produce better results, as in the example below:
| Original Image | LaMa | LDM |
|---|---|---|
Blogs about diffusion models:
- https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
- https://yang-song.github.io/blog/2021/score/
## Development

This section is only needed if you plan to modify the frontend and recompile it yourself.
### Frontend

The frontend code is modified from cleanup.pictures; you can try their great online service there.
- Install dependencies:

```bash
cd lama_cleaner/app/ && yarn
```

- Start a development server:

```bash
yarn start
```

- Build:

```bash
yarn build
```
## Docker

Run within a Docker container. Set `CACHE_DIR` to the path where models are stored. Optionally add the `-d` option to the `docker run` commands below to run as a daemon.
### Build Docker image

```bash
docker build -f Dockerfile -t lamacleaner .
```
### Run Docker (CPU)

```bash
docker run -p 8080:8080 -e CACHE_DIR=/app/models \
  -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner \
  python3 main.py --device=cpu --port=8080
```
### Run Docker (GPU)

```bash
docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models \
  -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner \
  python3 main.py --device=cuda --port=8080
```
Then open http://localhost:8080
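When the container is started with `-d`, it detaches before the server is ready, so opening the browser immediately may fail. A small readiness poll can bridge that gap; port 8080 and the plain `GET /` probe are assumptions taken from the commands above, and the attempt count is arbitrary:

```bash
#!/bin/sh
# Poll the web server until it answers, up to 5 one-second attempts.
# Port 8080 matches the docker run commands above; adjust if you changed it.
UP=0
for attempt in 1 2 3 4 5; do
  if curl -sf http://localhost:8080/ >/dev/null 2>&1; then
    UP=1
    break
  fi
  sleep 1
done
echo "server ready: $UP"
```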
## Like My Work?
Hashes for lama_cleaner-0.12.0-py3-none-any.whl:

| Algorithm | Hash digest |
|---|---|
| SHA256 | bf9e0358ac1d1fb7c6cb5c3676c4ddcadcefd44988b0d88a4b980f2904c5615f |
| MD5 | 0ca4bc8919343eea5f8d71bee23a42e5 |
| BLAKE2b-256 | 0d13570cef9352703a5f675653a691d607b4fc53d92d38ba40acb5fd086bfbf5 |