Technology demonstrator: self-adapting code
Project description
Note
Hi all. I have been busy with life and other things, but this project is a work of passion. I am currently building all of the machinery required for self-adapting and self-modifying code. What better way to demonstrate the technology than to have it partially manage itself? All of that introspection, auto-building and adapting prompts, scoring models and their responses, figuring out how to evaluate responses, and having it all in a form that can be debugged is hard and time-consuming.
It is entirely possible that I bit off more than I can chew right now. I will get there eventually.
This is a simple demonstration project, a proof that more like this is possible. Use at your own risk: this code is ultimately non-deterministic, and nobody knows what might happen.
Importing this code gives a simple Python script a rudimentary way of fixing itself. The script registers itself as the global exception hook, retrieves the source code that failed, asks an LLM to fix the code, substitutes the repaired version back in, and continues execution as if nothing happened.
Simple, eh?
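The whole trick fits in a few lines. Below is a minimal, self-contained sketch of the mechanism; the model call is stubbed out (the real package talks to ollama), and all names are illustrative rather than the package's actual API:

```python
import sys

BROKEN_SOURCE = "answer = 1 / 0"  # the script we pretend just crashed

def ask_model_for_fix(broken_source: str) -> str:
    # Stand-in for the real LLM call (the project uses ollama).
    # Here we simply patch a known bug so the sketch runs without a model.
    return broken_source.replace("1 / 0", "1 / 1")

def resurrect(broken_source: str) -> dict:
    # Fetch a repaired version and re-execute it, as if nothing happened.
    namespace = {}
    exec(ask_model_for_fix(broken_source), namespace)
    return namespace

def healing_hook(exc_type, exc_value, tb):
    # Called only for unhandled exceptions, after the stack has unwound.
    resurrect(BROKEN_SOURCE)

sys.excepthook = healing_hook  # register as the global exception hook
```

The real implementation also has to recover the failing source from the traceback and parse usable code out of the model's reply, which is where most of the hard work lives.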
Components
Configuration
Configuration area for the module. This is simply a Python module that is loaded and referenced: easy and flexible. Security is not a consideration, since:
A) This project is meant to be a library / agent that is included within another project, with complete and total control of that project.
B) We are running AI within our code and allowing it to make decisions on the fly.
C) There is no end-user-facing configuration planned at this point.
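Since the configuration is just a Python module, it might look something like the following. The option names here are illustrative assumptions, not the package's actual settings:

```python
# caretaker/config.py -- illustrative only; the real option names may differ.
# Because this is plain Python, values can be expressions, not just literals.

MODEL = "llama3"          # ollama model to use for repairs (hypothetical name)
MAX_FIX_ATTEMPTS = 3      # give up after this many failed repair rounds
LOG_REPAIRS = True        # record every model response for later debugging

PROMPT = (
    "The following Python code raised an exception. "
    "Return a corrected version of the code only."
)
```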
AI
A system for listing and selecting available models; to be expanded later. Key functionality:
- Assume that capabilities may change over time. There has to be a method of re-examining capabilities and adding to them.
- Allow models to be deleted, even if they were previously discovered.
- Try to handle models (at least attempt to), even if they are new.
- Optionally, allow downloading models?
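One way to sketch such a registry, assuming a simple capability-tag model (the class and method names here are hypothetical, not the package's API):

```python
class ModelRegistry:
    """Tracks known models and their observed capabilities.
    Illustrative sketch -- the package's real interface may differ."""

    def __init__(self):
        self._capabilities = {}  # model name -> set of capability tags

    def discover(self, name):
        # New models start with no known capabilities; re-examination
        # can add to them over time.
        self._capabilities.setdefault(name, set())

    def record_capability(self, name, capability):
        self.discover(name)
        self._capabilities[name].add(capability)

    def forget(self, name):
        # Models can be deleted even if they were found before.
        self._capabilities.pop(name, None)

    def candidates(self, capability):
        # All models observed to handle a given capability.
        return [m for m, caps in self._capabilities.items() if capability in caps]
```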
Prompts
A system for storing and selecting among multiple prompts. Key functionality:
- Some prompts work with some models, and not with others.
- Some prompts may require different information. Thankfully, we can try different prompts and get an error if the information is not available.
- Each prompt should have some way of deciding whether a model can be used to evaluate it.
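A small sketch of how such a prompt object could work; the field and method names are assumptions, not the package's actual API:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    """One prompt template plus a predicate saying which models may run it.
    Illustrative sketch only."""
    template: str
    required_fields: tuple
    works_with: tuple = ()  # empty means "try any model"

    def supports(self, model_name):
        # Decide whether a given model can be used with this prompt.
        return not self.works_with or model_name in self.works_with

    def render(self, **info):
        # Raises KeyError if a required piece of information is missing,
        # which lets the caller fall back to a different prompt.
        return self.template.format(**{k: info[k] for k in self.required_fields})
```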
Decorators
Currently there is only one decorator; others are in the planning stages. Please have a look at the issues on GitHub if you are interested.
The decorator is capable of catching and monitoring a single function, while recording information about that section of code.
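As a rough sketch of what such a decorator can look like (an illustration, not the package's actual decorator):

```python
import functools
import time

def monitored(func):
    """Watch a single function: record outcomes and timings, re-raising
    any exception. Sketch only -- names here are illustrative."""
    history = []

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            history.append(("ok", time.perf_counter() - start))
            return result
        except Exception as exc:
            history.append(("error", repr(exc)))
            raise  # the failure is recorded, not swallowed

    wrapper.history = history  # inspectable record for this one function
    return wrapper
```

Because the decorator sits around a single call site, it can react to a failure while the surrounding program is still alive, which is what makes it more promising than a global hook.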
Necromancer
A last-ditch effort to respond to an unhandled exception. By the time the necromancer is activated, the program is effectively dead: the stack has fully unwound, and normally the only thing left to do is print the stack trace to the screen or a log. Instead, this code attempts to resurrect the program and restart execution, which is a very interesting concept.
While this mode of running is simple and impressive, it is included here primarily as a demonstration. More testing is needed to determine whether the necromancer can restore a complex program, but it should work for simple scripts. If at any point in the stack an exception is caught and then re-raised, the necromancer would likely get very confused.
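The re-raise problem is easy to demonstrate: when an exception is caught and wrapped, the traceback the hook receives ends at the re-raise site, not at the original fault, so a hook that walks to the innermost frame would target the wrong code. A small illustration (function names are invented for the example):

```python
def innermost_location(tb):
    # A naive necromancer walks the traceback to its deepest frame,
    # assuming that is where things went wrong.
    while tb.tb_next:
        tb = tb.tb_next
    return tb.tb_frame.f_code.co_name

def fragile():
    1 / 0  # the actual fault

def middleman():
    try:
        fragile()
    except ZeroDivisionError as exc:
        # Catch and re-raise: the new exception's traceback now ends
        # here, and the original fault is only reachable via __cause__.
        raise RuntimeError("wrapped") from exc

try:
    middleman()
except RuntimeError as err:
    print(innermost_location(err.__traceback__))  # prints "middleman", not "fragile"
```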
Installation
Ollama
You need Ollama installed, together with the Python bindings (pip install ollama).
You also need to download an appropriate model and set it inside caretaker/config.py. While you are at it, feel free to improve on the prompt.
Issues and future work
There are several issues with this code. Some (probably most) are fixable.
- This package may not install properly at this time. I am trying to resolve naming issues.
- It only modifies the code in memory; it does not write it back out to the filesystem.
- This code currently works with Ollama only, although it allows for other models and methods of execution to be captured (not fully implemented yet). A process for selecting the best model still needs to be created.
- Catching failures at the global exception hook means the program is already dead. Restarting execution (if the failure happened in a loop, etc.) is tricky. The necromancer is purely a demonstration of what might be possible; decorators are the future.
- Several different decorators are in the design / partial implementation stages. No guarantee any of them work.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file caretaker-0.1.2.tar.gz.
File metadata
- Download URL: caretaker-0.1.2.tar.gz
- Upload date:
- Size: 27.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3587ec49fb01ada89699a6729b82178ceafc2ce9db1482e88f960c1c165e7545 |
| MD5 | 9bf2f90b0bbc3c3bcd29a9081e8fef27 |
| BLAKE2b-256 | 5215680f95da3d272f4282a5755cab8aa938b66fae25703ca70d0ec72c92c036 |
File details
Details for the file Caretaker-0.1.2-py3-none-any.whl.
File metadata
- Download URL: Caretaker-0.1.2-py3-none-any.whl
- Upload date:
- Size: 29.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 68f7aedb7628aa8994cff19f2fd84b78ecc1e9f5c46a8b030678e6e0476554aa |
| MD5 | 1df0fab39c6660f77d455b6500668818 |
| BLAKE2b-256 | cec0b44cf4003964b4d065e5c96159a48cee7198ebb7118bb32b26a0909d98cc |