# Lama-cleaner: Image inpainting tool powered by SOTA AI model
https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4
- Support multiple model architectures
- High resolution support
- Run as a desktop app
- Multi-stroke support. Press and hold the `cmd/ctrl` key to enable multi-stroke mode.
- Zoom & Pan
- Keep image EXIF data
## Quick Start

- Install requirements: `pip3 install -r requirements.txt`
- Start the server: `python3 main.py`, then open http://localhost:8080
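Once the server is started, you can verify it is accepting connections before opening the browser. A minimal sketch using only the standard library; `server_ready` is a hypothetical helper for illustration, not part of lama-cleaner (the host and port match the defaults above):

```python
import socket

def server_ready(host="127.0.0.1", port=8080, timeout=1.0):
    """Return True if something is accepting TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("server up:", server_ready())
```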
## Available commands for main.py

| Name | Description | Default |
|---|---|---|
| `--model` | lama or ldm. See details in Model Comparison | lama |
| `--device` | cuda or cpu | cuda |
| `--gui` | Launch lama-cleaner as a desktop application | |
| `--gui_size` | Set the window size of the application | 1200 900 |
| `--input` | Path to the image to load by default | None |
| `--port` | Port for the flask web server | 8080 |
| `--debug` | Enable debug mode for the flask web server | |
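The table above can be expressed as a small `argparse` sketch. This mirrors the documented flags and defaults, not the actual implementation in `main.py`:

```python
import argparse

# Sketch of the CLI described in the table above; defaults taken from the docs.
def build_parser():
    parser = argparse.ArgumentParser(description="lama-cleaner")
    parser.add_argument("--model", default="lama", choices=["lama", "ldm"])
    parser.add_argument("--device", default="cuda", choices=["cuda", "cpu"])
    parser.add_argument("--gui", action="store_true",
                        help="launch lama-cleaner as a desktop application")
    parser.add_argument("--gui_size", nargs=2, type=int, default=[1200, 900],
                        help="window size of the application")
    parser.add_argument("--input", default=None,
                        help="path to the image to load by default")
    parser.add_argument("--port", type=int, default=8080)
    parser.add_argument("--debug", action="store_true")
    return parser

args = build_parser().parse_args(["--model", "ldm", "--device", "cpu"])
print(args.model, args.device, args.port)
```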
## Model Comparison

The diffusion model (ldm) is MUCH slower than the GAN model (lama): a 1080x720 image takes 8s on a 3090. However, it can produce better results; see the example below:
*(Comparison images showing Original Image / LaMa / LDM side by side are not included here.)*
Blogs about diffusion models:
- https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
- https://yang-song.github.io/blog/2021/score/
## Development

Only needed if you plan to modify the frontend and recompile it yourself.

### Frontend

The frontend code is modified from cleanup.pictures; you can try their great online service there.
- Install dependencies: `cd lama_cleaner/app/ && yarn`
- Start development server: `yarn start`
- Build: `yarn build`
## Docker

Run within a Docker container. Set `CACHE_DIR` to the path where models are stored. Optionally add a `-d` option to the `docker run` commands below to run as a daemon.
### Build Docker image

```bash
docker build -f Dockerfile -t lamacleaner .
```

### Run Docker (cpu)

```bash
docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080
```

### Run Docker (gpu)

```bash
docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080
```

Then open http://localhost:8080
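The same setup can be written as a Docker Compose file so the flags don't have to be retyped. A minimal sketch for the CPU case, assuming the `lamacleaner` image built above; the service name and file layout are illustrative:

```yaml
# docker-compose.yml (illustrative; assumes the lamacleaner image built above)
services:
  lamacleaner:
    image: lamacleaner
    command: python3 main.py --device=cpu --port=8080
    ports:
      - "8080:8080"
    environment:
      - CACHE_DIR=/app/models
    volumes:
      - ./models:/app/models
      - .:/app
```

Start it with `docker compose up`, then open http://localhost:8080 as before.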