Search for anything using Google, DuckDuckGo, or phind.com; interact with AI models; transcribe YouTube videos; generate temporary email addresses and phone numbers; use text-to-speech; run webai (terminal GPT and Open Interpreter) and offline LLMs; and more.
Webscout
Your All-in-One Python Toolkit for Web Search, AI Interaction, Digital Utilities, and More
Access diverse search engines, cutting-edge AI models, temporary communication tools, media utilities, developer helpers, and powerful CLI interfaces – all through one unified library.
📋 Table of Contents
- 🌟 Key Features
- ⚙️ Installation
- 🖥️ Command Line Interface
- 📚 Documentation Hub
- 🔄 OpenAI-Compatible API Server
- 🕸️ Scout: HTML Parser & Web Crawler
- 🎭 Awesome Prompts Manager
- 🔗 GitAPI: GitHub Data Extraction
- 🤖 AI Models and Voices
- 💬 AI Chat Providers
- 👨💻 Advanced AI Interfaces
- 🤝 Contributing
- 🙏 Acknowledgments
> [!IMPORTANT]
> Webscout supports three types of compatibility:
>
> - Native Compatibility: Webscout's own native API for maximum flexibility
> - OpenAI Compatibility: Use providers with OpenAI-compatible interfaces
> - Local LLM Compatibility: Run local models with OpenAI-compatible servers
>
> Choose the approach that best fits your needs! For OpenAI compatibility, check the OpenAI Providers README or see the OpenAI-Compatible API Server section below.
> [!NOTE]
> Webscout supports over 90 AI providers, including LLAMA, C4ai, Copilot, HuggingFaceChat, PerplexityLabs, DeepSeek, WiseCat, GROQ, OPENAI, GEMINI, DeepInfra, Meta, YEPCHAT, TypeGPT, ChatGPTClone, ExaAI, Claude, Anthropic, Cloudflare, AI21, Cerebras, and many more. All providers follow similar usage patterns with consistent interfaces.
🚀 Features
Search & AI
- Comprehensive Search: Access multiple search engines including DuckDuckGo, Yep, Bing, Brave, Yahoo, Yandex, Mojeek, and Wikipedia for diverse search results (Search Documentation)
- AI Powerhouse: Access and interact with various AI models through three compatibility options:
- Native API: Use Webscout's native interfaces for providers like OpenAI, Cohere, Gemini, and many more
- OpenAI-Compatible Providers: Seamlessly integrate with various AI providers using standardized OpenAI-compatible interfaces
- Local LLMs: Run local models with OpenAI-compatible servers (see Inferno documentation)
- AI Search: AI-powered search engines with advanced capabilities
- OpenAI-Compatible API Server: Run a local API server that serves any Webscout provider through OpenAI-compatible endpoints
- Python Client API: Use Webscout providers directly in Python with OpenAI-compatible format
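As a sketch of what "OpenAI-compatible" means in practice: clients send a standard JSON body to a `/v1/chat/completions` endpoint, and any server speaking that format works with off-the-shelf OpenAI clients. The snippet below builds such a request body with the standard library only; the model name is a placeholder, not a value confirmed by this README.

```python
import json

def build_chat_request(model: str, user_message: str, stream: bool = False) -> str:
    """Build the JSON body an OpenAI-compatible server expects at
    POST /v1/chat/completions."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "stream": stream,
    }
    return json.dumps(payload)

# Placeholder model name -- substitute whatever the local server exposes.
body = build_chat_request("gpt-4o-mini", "Hello!")
print(body)
```

Because the body shape is standardized, the same payload works against any of the compatibility options above.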
Media & Content Tools
- YouTube Toolkit: Advanced YouTube video and transcript management with multi-language support
- Text-to-Speech (TTS): Convert text into natural-sounding speech using multiple AI-powered providers
- Text-to-Image: Generate high-quality images using a wide range of AI art providers
- Weather Tools: Retrieve detailed weather information for any location
Developer Tools
- GitAPI: Powerful GitHub data extraction toolkit without authentication requirements for public data
- SwiftCLI: A powerful and elegant CLI framework for beautiful command-line interfaces
- LitPrinter: Styled console output with rich formatting and colors
- LitAgent: Modern user agent generator that keeps your requests undetectable
- Scout: Advanced web parsing and crawling library with intelligent HTML/XML parsing
- GGUF Conversion: Convert and quantize Hugging Face models to GGUF format
- Utility Decorators: Easily measure function execution time (`timeIt`) and add retry logic (`retry`) to any function
- Stream Sanitization Utilities: Advanced tools for cleaning, decoding, and processing data streams
- Command Line Interface: Comprehensive CLI for all search engines and utilities
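For illustration, decorators like the `timeIt`/`retry` utilities mentioned above typically work along these lines. This is a stdlib-only approximation of the pattern, not Webscout's actual implementation:

```python
import functools
import time

def time_it(func):
    """Print how long the wrapped function took to run."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            print(f"{func.__name__} took {elapsed:.4f}s")
    return wrapper

def retry(max_attempts=3, delay=0.1):
    """Retry the wrapped function on any exception, with a fixed delay."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(max_attempts=3, delay=0.01)
def flaky(state={"calls": 0}):
    """Fails twice, then succeeds -- simulates a transient error."""
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(flaky())  # succeeds on the third attempt
```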
Privacy & Utilities
- Tempmail & Temp Number: Generate temporary email addresses and phone numbers
- Awesome Prompts Manager: Curated collection of system prompts for specialized AI personas with comprehensive management capabilities
⚙️ Installation
Webscout supports multiple installation methods to fit your workflow:
📦 Standard Installation
```bash
# Install from PyPI
pip install -U webscout

# Install with API server dependencies
pip install -U "webscout[api]"

# Install with development dependencies
pip install -U "webscout[dev]"
```
⚡ UV Package Manager (Recommended)
UV is a fast Python package manager. Webscout has full UV support:
```bash
# Install UV first (if not already installed)
pip install uv

# Install Webscout with UV
uv add webscout

# Install with API dependencies
uv add "webscout[api]"

# Run Webscout directly with UV (no installation needed)
uv run webscout --help

# Run with API dependencies
uv run --extra api webscout-server

# Install as a UV tool for global access
uv tool install webscout

# Use UV tool commands
webscout --help
webscout-server
```
🔧 Development Installation
```bash
# Clone the repository
git clone https://github.com/OEvortex/Webscout.git
cd Webscout

# Install in development mode with UV
uv sync --extra dev --extra api

# Or with pip
pip install -e ".[dev,api]"

# Or with uv pip
uv pip install -e ".[dev,api]"
```
🐳 Docker Installation
```bash
# Pull and run the Docker image (image names must be lowercase)
docker pull oevortex/webscout:latest
docker run -it oevortex/webscout:latest
```
📱 Quick Start Commands
After installation, you can immediately start using Webscout:
```bash
# Check version
webscout version

# Search the web
webscout text -k "python programming"

# Start API server
webscout-server

# Get help
webscout --help
```
🖥️ Command Line Interface
Webscout provides a comprehensive command-line interface with support for multiple search engines and utilities. You can use it in multiple ways:
🚀 Direct Commands (Recommended)
After installing with `uv tool install webscout` or `pip install webscout`:
```bash
# Get help and list all commands
webscout --help

# Show version
webscout version

# Start API server
webscout-server

# Web search commands
webscout text -k "python programming"     # DuckDuckGo text search
webscout images -k "mountain landscape"   # DuckDuckGo image search
webscout news -k "AI breakthrough" -t w   # News from last week
webscout weather -l "New York"            # Weather information
webscout translate -k "Hello" -to es      # Translation

# Alternative search engines
webscout yahoo_text -k "machine learning" -r us   # Yahoo search
webscout bing_text -k "climate change"            # Bing search
webscout yep_text -k "latest news"                # Yep search

# Search with advanced options
webscout images -k "cat" --size large --type-image photo --license-image any
webscout maps -k "coffee shop" --city "New York" --radius 5
```
🔧 UV Run Commands (No Installation Required)
```bash
# Run directly with UV (downloads and runs automatically)
uv run webscout --help
uv run webscout text -k "latest news"
uv run --extra api webscout-server
```
📦 Python Module Commands
```bash
# Traditional Python module execution
python -m webscout --help
python -m webscout text -k "search query"
python -m webscout-server
```
🌐 Supported Search Providers
Webscout CLI supports multiple search backends:
- DuckDuckGo (default): `text`, `images`, `videos`, `news`, `answers`, `maps`, `translate`, `suggestions`, `weather`
- Yahoo: `yahoo_text`, `yahoo_images`, `yahoo_videos`, `yahoo_news`, `yahoo_answers`, `yahoo_maps`, `yahoo_translate`, `yahoo_suggestions`, `yahoo_weather`
- Bing: `bing_text`, `bing_images`, `bing_news`, `bing_suggestions`
- Yep: `yep_text`, `yep_images`, `yep_suggestions`
For detailed command reference and all available options, see CLI Documentation.
🤖 AI Models and Voices
Webscout provides easy access to a wide range of AI models and voice options.
LLM Models
Access and manage Large Language Models with Webscout's model utilities.
```python
from webscout import model
from rich import print

# List all available LLM models
all_models = model.llm.list()
print(f"Total available models: {len(all_models)}")

# Get a summary of models by provider
summary = model.llm.summary()
print("Models by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} models")

# Get models for a specific provider
provider_name = "PerplexityLabs"
available_models = model.llm.get(provider_name)
print(f"\n{provider_name} models:")
if isinstance(available_models, list):
    for i, model_name in enumerate(available_models, 1):
        print(f"  {i}. {model_name}")
else:
    print(f"  {available_models}")
```
TTS Voices
Access and manage Text-to-Speech voices across multiple providers.
```python
from webscout import model
from rich import print

# List all available TTS voices
all_voices = model.tts.list()
print(f"Total available voices: {len(all_voices)}")

# Get a summary of voices by provider
summary = model.tts.summary()
print("\nVoices by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} voices")

# Get voices for a specific provider
provider_name = "ElevenlabsTTS"
available_voices = model.tts.get(provider_name)
print(f"\n{provider_name} voices:")
if isinstance(available_voices, dict):
    for voice_name, voice_id in list(available_voices.items())[:5]:  # Show first 5 voices
        print(f"  - {voice_name}: {voice_id}")
    if len(available_voices) > 5:
        print(f"  ... and {len(available_voices) - 5} more")
```
💬 AI Chat Providers
Webscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.
Popular AI Providers
| Provider | Description | Key Features |
|---|---|---|
| `OPENAI` | OpenAI's models | GPT-3.5, GPT-4, tool calling |
| `GEMINI` | Google's Gemini models | Web search capabilities |
| `Meta` | Meta's AI assistant | Image generation, web search |
| `GROQ` | Fast inference platform | High-speed inference, tool calling |
| `LLAMA` | Meta's Llama models | Open weights models |
| `DeepInfra` | Various open models | Multiple model options |
| `Cohere` | Cohere's language models | Command models |
| `PerplexityLabs` | Perplexity AI | Web search integration |
| `YEPCHAT` | Yep.com's AI | Streaming responses |
| `ChatGPTClone` | ChatGPT-like interface | Multiple model options |
| `TypeGPT` | TypeChat models | Multiple model options |
Example: Using Meta AI
```python
from webscout import Meta

# For basic usage (no authentication required)
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# For authenticated usage with web search and image generation
meta_ai = Meta(fb_email="your_email@example.com", fb_password="your_password")

# Text prompt with web search
response = meta_ai.ask("What are the latest developments in quantum computing?")
print(response["message"])
print("Sources:", response["sources"])

# Image generation
response = meta_ai.ask("Create an image of a futuristic city")
for media in response.get("media", []):
    print(media["url"])
```
Example: GROQ with Tool Calling
```python
import json

from webscout import GROQ, DuckDuckGoSearch

# Initialize GROQ client
client = GROQ(api_key="your_api_key")

# Define helper functions
def calculate(expression):
    """Evaluate a mathematical expression."""
    try:
        # WARNING: eval() executes arbitrary code -- only use with trusted input
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

def search(query):
    """Perform a web search."""
    try:
        ddg = DuckDuckGoSearch()
        results = ddg.text(query, max_results=3)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Register functions with GROQ
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define tool specifications
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a web search",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

# Use the tools
response = client.chat("What is 25 * 4 + 10?", tools=tools)
print(response)

response = client.chat("Find information about quantum computing", tools=tools)
print(response)
```
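A note on the `calculate` helper above: calling `eval` on model-supplied strings is risky, since a tool-calling model could be coaxed into emitting arbitrary code. A safer sketch (an illustrative stdlib replacement, not part of Webscout) walks the expression's AST and only accepts arithmetic nodes:

```python
import ast
import operator

# Whitelist of arithmetic operators the evaluator will accept.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a purely arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Anything else (names, calls, attributes) is rejected outright
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval").body)

print(safe_calculate("25 * 4 + 10"))  # 110
```

Function calls, attribute access, and names never reach `eval`, so inputs like `__import__('os')` raise `ValueError` instead of executing.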
GGUF Model Conversion
Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.
Note (2026.01.01): GGUF conversion now uses lazy imports for `huggingface_hub`. The library can be imported without requiring `huggingface_hub`; it is only loaded when GGUF features are actually used. Install it with `pip install huggingface_hub` if you need GGUF conversion.
```python
from webscout.Extra.gguf import ModelConverter

# Create a converter instance
converter = ModelConverter(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # Hugging Face model ID
    quantization_methods="q4_k_m"                   # Quantization method
)

# Run the conversion
converter.convert()
```
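The lazy-import behavior described in the note above can be sketched with `importlib`: the heavy dependency is resolved only when a feature first needs it, with a clearer error if it is missing. The function and names below are illustrative, not Webscout's actual internals:

```python
import importlib

def lazy_import(module_name: str, install_hint: str):
    """Import a module on first use, raising a helpful error if absent."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name} is required for this feature. "
            f"Install it with: {install_hint}"
        ) from exc

# The dependency is only touched when the feature actually runs:
def download_model(model_id: str):
    hf = lazy_import("huggingface_hub", "pip install huggingface_hub")
    return hf.snapshot_download(model_id)
```

With this pattern, `import webscout` itself never pulls in `huggingface_hub`; only calling a GGUF feature does.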
Available Quantization Methods
| Method | Description |
|---|---|
| `fp16` | 16-bit floating point: no quantization, maximum accuracy, largest size |
| `q2_k` | 2-bit quantization: smallest size, lowest accuracy |
| `q3_k_l` | 3-bit quantization (large): balanced size/accuracy |
| `q3_k_m` | 3-bit quantization (medium): good balance for most use cases |
| `q3_k_s` | 3-bit quantization (small): optimized for speed |
| `q4_0` | 4-bit quantization (version 0): standard 4-bit compression |
| `q4_1` | 4-bit quantization (version 1): improved accuracy over q4_0 |
| `q4_k_m` | 4-bit quantization (medium): balanced choice for most models |
| `q4_k_s` | 4-bit quantization (small): optimized for speed |
| `q5_0` | 5-bit quantization (version 0): high accuracy, larger size |
| `q5_1` | 5-bit quantization (version 1): improved accuracy over q5_0 |
| `q5_k_m` | 5-bit quantization (medium): best balance of quality and size |
| `q5_k_s` | 5-bit quantization (small): optimized for speed |
| `q6_k` | 6-bit quantization: near-fp16 accuracy, larger size |
| `q8_0` | 8-bit quantization: highest accuracy among quantized methods, largest quantized size |
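As a rough rule of thumb, the on-disk size of a quantized model scales with bits per weight. The back-of-the-envelope estimate below ignores metadata overhead and the mixed-precision layers that the `_k` methods actually use, so treat it as an approximation only:

```python
def estimate_gguf_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate file size in GB: parameters * bits per weight / 8 bytes."""
    size_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return size_bytes / 1e9

# A 7B model at ~4.5 bits/weight (roughly q4_k_m) vs. unquantized fp16:
print(round(estimate_gguf_size_gb(7, 4.5), 2))  # ~3.94 GB
print(round(estimate_gguf_size_gb(7, 16), 2))   # 14.0 GB
```

This is why a q4-class quantization cuts a 7B model from ~14 GB to under 4 GB, at the cost of some accuracy.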
Command Line Usage
```bash
python -m webscout.Extra.gguf convert -m "mistralai/Mistral-7B-Instruct-v0.2" -q "q4_k_m"
```
🤝 Contributing
Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:
- Fork the repository
- Create a new branch for your feature or bug fix
- Make your changes and commit them with descriptive messages
- Push your branch to your forked repository
- Submit a pull request to the main repository
🙏 Acknowledgments
- All the amazing developers who have contributed to the project
- The open-source community for their support and inspiration
Made with ❤️ by the Webscout team