
Project description

Lama-cleaner: Image inpainting tool powered by SOTA AI model

https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4

  • Support multiple model architectures
    1. LaMa
    2. LDM
  • High resolution support
  • Run as a desktop APP
  • Multi-stroke support. Press and hold the Cmd/Ctrl key to enable multi-stroke mode.
  • Zoom & Pan
  • Keep image EXIF data

Quick Start

  1. Install requirements: pip3 install -r requirements.txt
  2. Start the server: python3 main.py, then open http://localhost:8080 in a browser
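
For reference, the two steps above as a shell session (a sketch, assuming you run them from a checkout of the repository, where requirements.txt lives):

pip3 install -r requirements.txt   # install Python dependencies
python3 main.py                    # start the server, then open http://localhost:8080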

Available command-line arguments for main.py

Name        Description                                      Default
--model     lama or ldm. See details in Model Comparison     lama
--device    cuda or cpu                                      cuda
--gui       Launch lama-cleaner as a desktop application
--gui_size  Set the window size for the application          1200 900
--input     Path to image you want to load by default        None
--port      Port for flask web server                        8080
--debug     Enable debug mode for flask web server
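
For example, a typical invocation combining several of the arguments above (./photo.jpg is only a placeholder path):

python3 main.py --model=lama --device=cuda --port=8080 --input=./photo.jpg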

Model Comparison

The diffusion model (LDM) is much slower than the GAN model (LaMa): a 1080x720 image takes about 8 s on an RTX 3090. However, it can produce better results, as the example below shows:

[Example images: Original Image / LaMa result / LDM result]
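
To try such a comparison yourself, start the server once with each backend via the --model argument documented above:

python3 main.py --model=lama   # GAN model (LaMa), fast
python3 main.py --model=ldm    # diffusion model (LDM), slower but can give better results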

Blogs about diffusion models:

Development

Only needed if you plan to modify the frontend and recompile yourself.

Frontend

The frontend code is modified from cleanup.pictures. You can try their great online service at cleanup.pictures.

  • Install dependencies: cd lama_cleaner/app/ && yarn
  • Start development server: yarn start
  • Build: yarn build
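
Put together, a typical frontend development session looks like this (the same commands as above):

cd lama_cleaner/app/
yarn          # install dependencies
yarn start    # run the development server
yarn build    # build the production bundle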

Docker

Run within a Docker container. Set CACHE_DIR to the path where the models are stored. Optionally add the -d option to the docker run commands below to run the container as a daemon.

Build Docker image

docker build -f Dockerfile -t lamacleaner .

Run Docker (cpu)

docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080

Run Docker (gpu)

docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080

Then open http://localhost:8080
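
For example, adding -d to the CPU command, as mentioned above, runs the container in the background:

docker run -d -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080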

Like My Work?

Sanster

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

lama-cleaner-0.9.0.tar.gz (466.8 kB)

Uploaded: Source

Built Distribution

lama_cleaner-0.9.0-py3-none-any.whl (486.4 kB)

Uploaded: Python 3
