llmReflect
A package for LLM self-reflection
llmReflect is a Python package designed for large language model (LLM) applications. LLMs have shown numerous emergent abilities: given the right prompt, an LLM is capable of a wide variety of tasks, and how a prompt is written often determines how well the model performs. So is there a chance that we can use an LLM to evaluate and improve its own prompts?
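The reflect-and-improve idea can be sketched as a simple loop: ask a model to grade a prompt, let it rewrite the prompt, and repeat. The sketch below is hypothetical and offline-runnable; the grading and rewriting instructions and the `reflect_on_prompt` helper are illustrative, not llmReflect's actual prompts or API.

```python
# Hypothetical sketch of the self-reflection loop described above.
# `llm` is any text-completion callable; nothing here is llmReflect's API.

def reflect_on_prompt(llm, prompt, task, rounds=3, target=0.9):
    """Iteratively grade a prompt with the LLM, then let it rewrite it."""
    current = prompt
    for _ in range(rounds):
        # 1) Self-evaluation: ask the model to score its own prompt.
        raw = llm(f"Rate 0-1 how well this prompt suits '{task}': {current}")
        try:
            score = float(raw)
        except ValueError:
            score = 0.0  # an unparseable grade counts as a failing score
        if score >= target:
            break
        # 2) Self-improvement: ask the model to rewrite the prompt.
        current = llm(f"Rewrite to improve for '{task}': {current}")
    return current


# A toy stand-in LLM so the sketch runs offline: it grades everything 0.5
# and "improves" a prompt by appending a brevity instruction.
def toy_llm(text):
    if text.startswith("Rate"):
        return "0.5"
    return text.split(": ", 1)[1] + " Be concise."


improved = reflect_on_prompt(toy_llm, "Summarize the article", "summarization")
print(improved)
```

With a real LLM behind `llm`, the same loop lets the model critique and revise its own instructions, which is the question this package explores.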
Warning! This project is at a very early stage!
Installation
- llmReflect is on PyPI:
  pip install llmreflect
- Alternatively, clone the repository and install with pipenv:
  git clone https://github.com/Recherches-Neuro-Hippocampe/llmReflect.git
  cd llmReflect
  pipenv shell
  pipenv install
Basic usage
Case 1: Use a combined chain to retrieve information from a database based on a user's natural language description.
from llmreflect.LLMCore.LLMCore import LOCAL_MODEL, OPENAI_MODEL
from llmreflect.Utils.log import get_logger
from llmreflect.Chains.DatabaseChain import DatabaseModerateNAnswerNFixChain
from decouple import config

# Assume you have a .env file storing the OpenAI API key, database credentials, etc.
LOGGER = get_logger("test")


def example_chain_running(local=False):
    # If you have a local Llama.cpp-supported model, specify `local=True`
    MODEL_PATH = LOCAL_MODEL.upstage_70_b
    URI = (
        f"postgresql+psycopg2://{config('DBUSERNAME')}:{config('DBPASSWORD')}"
        f"@{config('DBHOST')}:{config('DBPORT')}/postgres"
    )
    INCLUDE_TABLES = [
        'tb_patient',
        'tb_patients_allergies',
        'tb_appointment_patients',
        'tb_patient_mmse_and_moca_scores',
        'tb_patient_medications'
    ]
    LOCAL_LLM_CONFIG = {
        "max_output_tokens": 512,
        "max_total_tokens": 5000,
        "model_path": MODEL_PATH,
        "n_batch": 512,
        "n_gpus_layers": 4,
        "n_threads": 16,
        "temperature": 0.0,
        "verbose": False
    }
    OPENAI_LLM_CONFIG = {
        "llm_model": OPENAI_MODEL.gpt_3_5_turbo_0613,
        "max_output_tokens": 512,
        "open_ai_key": config("OPENAI_API_KEY"),
        "temperature": 0.0
    }
    chain_config = {
        "DatabaseAnswerNFixChain": {
            "DatabaseAnswerChain": {
                "llm_config": LOCAL_LLM_CONFIG if local else OPENAI_LLM_CONFIG,
                "other_config": {},
                "retriever_config": {
                    "include_tables": INCLUDE_TABLES,
                    "max_rows_return": 500,
                    "sample_rows": 0,
                    "uri": URI
                }
            },
            "DatabaseSelfFixChain": {
                "llm_config": LOCAL_LLM_CONFIG if local else OPENAI_LLM_CONFIG,
                "other_config": {},
                "retriever_config": {
                    "include_tables": INCLUDE_TABLES,
                    "max_rows_return": 500,
                    "sample_rows": 0,
                    "uri": URI
                }
            },
        },
        "ModerateChain": {
            "llm_config": LOCAL_LLM_CONFIG if local else OPENAI_LLM_CONFIG,
            "other_config": {},
            "retriever_config": {
                "include_tables": INCLUDE_TABLES
            }
        },
    }
    # Initialize the chain from the nested config
    ch = DatabaseModerateNAnswerNFixChain.from_config(**chain_config)
    question = ("Show me the patients who have taken the medication "
                "Donepezil and are considered as overweight.")
    # Run the chain while monitoring token cost
    result, traces = ch.perform_cost_monitor(
        user_input=question,
        explain_moderate=True)
    # Present the execution results
    LOGGER.info(f"Question: {question}")
    LOGGER.info(f"LLM Moderate Decision: {result['moderate_decision']}")
    LOGGER.info(f"LLM Moderate Comment: {result['moderate_explanation']}")
    LOGGER.info(f"LLM Generated Postgresql: {result['cmd']}")
    LOGGER.info(f"Postgresql Execution Result: {result['summary']}")
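The example reads its secrets with python-decouple, so it expects a `.env` file alongside the script. A minimal sketch of that file, using the variable names from the `config(...)` calls above (all values are placeholders):

```
DBUSERNAME=postgres
DBPASSWORD=change-me
DBHOST=localhost
DBPORT=5432
OPENAI_API_KEY=sk-...
```

With the `.env` file in place, the example can be invoked from a `__main__` guard, e.g. `example_chain_running(local=False)`.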