VoiceProcessingToolkit
Introduction
VoiceProcessingToolkit is a comprehensive Python library for voice processing tasks such as wake word detection, transcription, and synthesis, simplifying the development of voice-activated applications.
Features
- Wake word detection using Picovoice Porcupine.
- High-quality voice recording with adjustable settings for Voice Activation Detection.
- Fast and accurate speech-to-text transcription with OpenAI's Whisper.
- Customizable text-to-speech synthesis via ElevenLabs' API.
- Secure API key management with environment variables.
- Example scripts for easy demonstration and usage.
- Extensible architecture for feature additions and customization.
Installation
The VoiceProcessingToolkit is available on PyPI. To install, run the following command:
pip install VoiceProcessingToolkit
Usage
Basic Example
Here is a simple example that listens for a wake word, records and transcribes speech, and speaks the result back using speech synthesis:
from VoiceProcessingToolkit.VoiceProcessingManager import VoiceProcessingManager
import os
# Set environment variables for API keys
os.environ['PICOVOICE_APIKEY'] = 'your-picovoice-api-key'
os.environ['OPENAI_API_KEY'] = 'your-openai-api-key'
os.environ['ELEVENLABS_API_KEY'] = 'your-elevenlabs-api-key'
# Create a VoiceProcessingManager instance with default settings
vpm = VoiceProcessingManager.create_default_instance(wake_word='jarvis')
# Run the voice processing manager with transcription and text-to-speech
text = vpm.run(tts=True)
print(f"Processed text: {text}")
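Since the toolkit reads its keys from environment variables, a missing key is a common source of startup failures. Before running, it can help to confirm that all three keys are actually set; here is a small stdlib-only helper for that check (not part of the toolkit itself, just a sketch):

```python
import os

REQUIRED_KEYS = ["PICOVOICE_APIKEY", "OPENAI_API_KEY", "ELEVENLABS_API_KEY"]

def missing_api_keys(environ=os.environ):
    """Return the names of any required API keys that are unset or empty."""
    return [key for key in REQUIRED_KEYS if not environ.get(key)]

# With an empty environment, all three keys are reported missing.
print(missing_api_keys({}))  # → ['PICOVOICE_APIKEY', 'OPENAI_API_KEY', 'ELEVENLABS_API_KEY']
```

Calling `missing_api_keys()` (with no argument) checks the real process environment, so you can fail fast with a clear message instead of hitting an opaque authentication error later.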
Text-to-Speech Example
You can also use the toolkit without recording anything by supplying your own text for speech synthesis:
from VoiceProcessingToolkit.VoiceProcessingManager import text_to_speech_stream
from dotenv import load_dotenv
load_dotenv()
text = "Hello, welcome to the Voice Processing Toolkit!"
text_to_speech_stream(text=text)
The VoiceProcessingManager class is the central component of the toolkit, orchestrating the voice processing workflow. It is highly configurable, allowing you to tailor its behavior to your specific needs. Below are some of the key attributes and methods provided by this class.
Attributes of VoiceProcessingManager include:
- wake_word: The wake word for triggering voice recording.
- sensitivity: Sensitivity for wake word detection.
- output_directory: Directory for saving recorded audio files.
- audio_format, channels, rate, frames_per_buffer: Audio stream parameters.
- voice_threshold, silence_limit, inactivity_limit, min_recording_length, buffer_length: Voice recording parameters.
- use_wake_word: Flag to enable wake word detection.
- save_wake_word_recordings: Flag to save the audio buffer that triggered the wake word detection.
- play_notification_sound: Flag to play a sound on detection.
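To make these groupings concrete, the attributes could be collected into a settings mapping like the one below. The values shown are illustrative placeholders, not the toolkit's documented defaults, and the exact constructor signature may differ; treat this as a sketch:

```python
# Illustrative settings grouped as in the attribute list above.
# Values are placeholders, not the toolkit's documented defaults.
settings = {
    # Wake word configuration
    "wake_word": "jarvis",
    "sensitivity": 0.5,            # detection sensitivity, typically 0.0-1.0
    "output_directory": "recordings",
    # Audio stream parameters
    "audio_format": "wav",
    "channels": 1,                 # mono capture
    "rate": 16000,                 # sample rate in Hz
    "frames_per_buffer": 512,
    # Voice recording parameters (durations in seconds)
    "voice_threshold": 0.8,
    "silence_limit": 2,
    "inactivity_limit": 2,
    "min_recording_length": 3,
    "buffer_length": 2,
    # Behavior flags
    "use_wake_word": True,
    "save_wake_word_recordings": False,
    "play_notification_sound": True,
}
```

A mapping like this could then be unpacked into the manager (e.g. `VoiceProcessingManager(**settings)`) if the constructor accepts these names; check the inline documentation for the actual parameters.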
Methods of VoiceProcessingManager include:
- run(tts=False, streaming=False): Processes a voice command with optional text-to-speech output.
- setup(): Initializes the components of the voice processing manager.
- process_voice_command(): Processes a voice command using the configured components.
For a more detailed explanation of these attributes and methods, please refer to the inline documentation within the VoiceProcessingManager.py file.
Getting Started
To get started with the VoiceProcessingToolkit, follow these steps:
- Installation: Install the toolkit using pip:
pip install VoiceProcessingToolkit
- API Keys: Obtain the necessary API keys from Picovoice, OpenAI, and ElevenLabs.
- Environment Variables: Set the API keys as environment variables:
export PICOVOICE_APIKEY='your-picovoice-api-key'
export OPENAI_API_KEY='your-openai-api-key'
export ELEVENLABS_API_KEY='your-elevenlabs-api-key'
- Run an Example: Navigate to the example_usage directory and run one of the example scripts to see the toolkit in action.
- Customize: Customize the settings in the VoiceProcessingManager to fit your application's needs.
For a more detailed explanation of these steps, please refer to the inline documentation and example usage scripts provided in the toolkit. These resources provide detailed instructions on configuration, usage examples, and customization options.
Example Usage
The toolkit includes several example scripts that demonstrate different use cases and features. You can find these examples in the example_usage directory:
- Simple Setup: Demonstrates the basic setup and usage of the VoiceProcessingManager.
- Create Wake Word Data: Demonstrates how to create a wake word dataset using the VoiceProcessingManager.
- Wake Word Decorators: Demonstrates how to register actions with the VoiceProcessingManager that will be triggered when the wake word is detected.
- Custom Recording Logic: Demonstrates custom recording settings and runs the VoiceProcessingManager without the wake word detector.
- Text to Speech: Demonstrates the text to speech functionality with text as input using the VoiceProcessingManager.
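The "Wake Word Decorators" example follows a common registration pattern: functions are decorated so they run when the wake word fires. As a library-independent illustration of that pattern (the toolkit's actual decorator API may differ), it could look like this:

```python
# A minimal, library-independent sketch of the decorator-registration
# pattern used by the "Wake Word Decorators" example. The toolkit's
# actual API may differ; this only illustrates the mechanism.
class ActionRegistry:
    def __init__(self):
        self._actions = []

    def register_action(self, func):
        """Decorator: record a callback to run on wake word detection."""
        self._actions.append(func)
        return func  # return the function unchanged so it stays callable

    def on_wake_word(self):
        """Invoke every registered action in order and collect results."""
        return [action() for action in self._actions]

registry = ActionRegistry()

@registry.register_action
def greet():
    return "Hello!"

@registry.register_action
def log_detection():
    return "wake word detected"

# Simulate a wake word event: both registered actions run.
print(registry.on_wake_word())  # → ['Hello!', 'wake word detected']
```

Because the decorator returns the function unchanged, decorated actions remain directly callable and testable on their own, which is the main appeal of this registration style.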
Configuration
The toolkit can be configured with various settings such as wake word sensitivity, audio sample rate, and text-to-speech voice selection. For detailed configuration options, please see configuration.md or the documentation in the example_usage folder.
Contributing
Contributions to the VoiceProcessingToolkit are welcome! Please read the CONTRIBUTING.md file for guidelines on how to contribute.
Support
If you encounter any issues or have questions, please file an issue on the GitHub issue tracker.
License
VoiceProcessingToolkit is licensed under the MIT License. See the LICENSE file for more details.
Development Status
VoiceProcessingToolkit is still in development, and feedback is greatly appreciated. If you have suggestions or encounter any issues, please feel free to open an issue on the GitHub repository or contribute to the project.
Acknowledgements
I would like to extend my gratitude to OpenAI, ElevenLabs, and Picovoice for their exceptional tools; their innovative technologies have been instrumental in enabling the capabilities of the VoiceProcessingToolkit.
Hashes for VoiceProcessingToolkit-0.1.6.4.tar.gz
Algorithm | Hash digest
---|---
SHA256 | 330511b340453fd0860f2fc72efc6618aaf552a54ba0c79afc4f8636291423fb
MD5 | d50c60fc23b2ca0d2cba0a59a22183fe
BLAKE2b-256 | 9175306bdfdadc158c1296e576aff5877eaaba6698967cff1cec6bb3b47d46d1
Hashes for VoiceProcessingToolkit-0.1.6.4-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 9e5ce5633a9c2afd715fadba0b150adcb591bfea9a4c72a502c5e80865eba998
MD5 | 4a419cec0f555e737f0c31e69619d780
BLAKE2b-256 | 205d145a0d0d9e7bb2a62db4d629f776c69d7da63431d9fafb931f0e80ca7306