AI-Powered Ethical Hacking Assistant
Nebula – AI-Powered Penetration Testing Assistant
Nebula is an advanced, open-source, AI-powered penetration testing tool that integrates state-of-the-art AI models into your command-line interface. Designed for cybersecurity professionals, ethical hackers, and developers, Nebula automates vulnerability assessments and enhances security workflows with real-time insights and automated note-taking.
Acknowledgement
First, I would like to thank the Almighty God, who is the source of all knowledge; without Him, this would not be possible.
News
Introducing the Deep Application Profiler (DAP). DAP uses neural networks to analyze an executable's internal structure and intent, rather than relying on traditional virus signatures. This approach enables it to detect new, zero-day malware that conventional methods often miss. DAP also provides detailed breakdowns for rapid analyst review and is available as both a web service and an API. Learn More Here
Introducing Nebula Pro. Nebula Pro builds on Nebula 2.0 with additional features such as autonomous mode, code analysis, and more. Learn More Here
Nebula: AI-Powered Penetration Testing Platform
Nebula is a cutting-edge, AI-powered penetration testing tool designed for cybersecurity professionals and ethical hackers. It integrates advanced AI models, both commercial and open-source, including OpenAI's models (any model available via the API), Meta's Llama-3.1-8B-Instruct, Mistral AI's Mistral-7B-Instruct-v0.2, and DeepSeek-R1-Distill-Llama-8B, directly into the command-line interface (CLI). By leveraging these state-of-the-art models, Nebula not only enhances vulnerability assessments and penetration testing workflows but also supports any tool that can be invoked from the CLI.
Installation
System Requirements:
For CPU-Based Inference (Ollama) (note that Ollama supports GPU too):
- At least 16GB of RAM
- Python 3.10 – 3.13.9
- Ollama
Installation Command:
python -m pip install nebula-ai --upgrade
Running Nebula
Ollama Local Model Based Usage
Install Ollama and download your preferred models, for example:
ollama pull mistral
Then enter the model's exact name as it appears in Ollama in the engagement settings.
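Before entering a model name in the engagement settings, it can help to confirm exactly what Ollama has pulled. Ollama exposes a local HTTP API (by default at http://localhost:11434), and its /api/tags endpoint lists installed models. A minimal sketch; the helper name `installed_models` is ours, not part of Nebula:

```python
import json
from urllib.request import urlopen

def installed_models(payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response payload."""
    return [m["name"] for m in payload.get("models", [])]

# With Ollama running on its default port:
#   with urlopen("http://localhost:11434/api/tags") as resp:
#       print(installed_models(json.load(resp)))
```

The names printed (e.g. "mistral:latest") are exactly what the engagement settings expect.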
OpenAI Models Usage
To use OpenAI models, add your API key to your environment, like so:
export OPENAI_API_KEY="sk-blah-blaj"
Then enter the OpenAI model's exact name in the engagement settings.
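Since Nebula reads the key from the environment at run time, a quick preflight check can save a failed engagement launch. A small sketch (the function name `check_openai_key` is ours); it only verifies the variable is set and superficially well-formed, not that the key is actually valid:

```python
import os

def check_openai_key() -> str:
    """Return OPENAI_API_KEY from the environment, or raise with a hint."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key.startswith("sk-"):
        raise RuntimeError(
            "OPENAI_API_KEY is missing or malformed; export it first, "
            'e.g. export OPENAI_API_KEY="sk-..."'
        )
    return key
```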
Run nebula
nebula
Using Docker
First allow local connections to your X server:
xhost +local:docker
docker run --rm -it \
  -e DISPLAY=$DISPLAY \
  -v /home/YOUR_HOST_NAME/.local/share/nebula/logs:/root/.local/share/nebula/logs \
  -v YOUR_ENGAGEMENT_FOLDER_ON_HOST_MACHINE:/engagements \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  berylliumsec/nebula:latest
Interacting with the models
To interact with the models, begin your input with a "!" or use the AI/Terminal button to switch between modes. For example: ! write a python script to scan the ports of a remote system. The "!" is not needed if you use the context button.
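A prompt like the one above would typically yield something along these lines: a minimal TCP connect scanner using only the standard library. This is our sketch of a plausible model answer, not Nebula's actual output, and the target and port range are illustrative; only scan hosts you are authorized to test:

```python
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Example: scan_ports("127.0.0.1", range(1, 1025))
```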
Key Features
- AI-Powered Internet Search via agents: Enhance responses by integrating real-time, internet-sourced context to keep you updated on cybersecurity trends, e.g. "what's in the news on cybersecurity today".
- AI-Assisted Note-Taking: Automatically record and categorize security findings.
- Real-Time AI-Driven Insights: Get immediate suggestions for discovering and exploiting vulnerabilities based on terminal tool outputs.
- Enhanced Tool Integration: Seamlessly import data from external tools for AI-powered note-taking and advice.
- Integrated Screenshot & Editing: Capture and annotate images directly within Nebula for streamlined documentation.
- Manual Note-Taking & Automatic Command Logging: Maintain a detailed log of your actions and findings with both automated and manual note-taking features.
- Status feed: This panel displays your most recent penetration-testing activities; it refreshes every five minutes.
Getting Started
For a comprehensive video guide, visit here and here. Please note that some features are only available in Nebula Pro. You can also access the help screen within Nebula or refer to the Manual.md document.
Roadmap
- Create custom models that are more useful for penetration testing
Troubleshooting
Logs are located at /home/[your_username]/.local/share/nebula/logs. You will most likely find the reason for the error in one of those logs.
Get More Support
- Have questions or need help? Open an Issue on GitHub.
- For comprehensive guides, check out our Video Guide and User Manual.
File details
Details for the file nebula_ai-2.0.0b19.tar.gz.
File metadata
- Download URL: nebula_ai-2.0.0b19.tar.gz
- Upload date:
- Size: 20.7 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f383a93e330c2ff562f2e19704ca7d784c0884552cde5d1c094db9192e588931 |
| MD5 | d88844b601b4ce32d125bc65c49876c8 |
| BLAKE2b-256 | fb5495ef20a8c0cf5aebf594c0f638a56e717b980068420cbf4d88d2924e5f7b |
Provenance
The following attestation bundles were made for nebula_ai-2.0.0b19.tar.gz:
Publisher: publish.yml on berylliumsec/nebula
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: nebula_ai-2.0.0b19.tar.gz
- Subject digest: f383a93e330c2ff562f2e19704ca7d784c0884552cde5d1c094db9192e588931
- Sigstore transparency entry: 1008976638
- Sigstore integration time:
- Permalink: berylliumsec/nebula@7d3a5e99afad2754586a20ae93c6414bbb4707e8
- Branch / Tag: refs/heads/main
- Owner: https://github.com/berylliumsec
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@7d3a5e99afad2754586a20ae93c6414bbb4707e8
- Trigger Event: pull_request
File details
Details for the file nebula_ai-2.0.0b19-py3-none-any.whl.
File metadata
- Download URL: nebula_ai-2.0.0b19-py3-none-any.whl
- Upload date:
- Size: 20.8 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7c3c078fbc1f2745c493fc376905f6560389eb5b96342d2508d6081db934d19b |
| MD5 | 58379795e288aeb91559877da2956679 |
| BLAKE2b-256 | 5d10837d61ea31013f61d424e4cab8ba87dc04a1f28d37f86335371cdf8787f0 |
Provenance
The following attestation bundles were made for nebula_ai-2.0.0b19-py3-none-any.whl:
Publisher: publish.yml on berylliumsec/nebula
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: nebula_ai-2.0.0b19-py3-none-any.whl
- Subject digest: 7c3c078fbc1f2745c493fc376905f6560389eb5b96342d2508d6081db934d19b
- Sigstore transparency entry: 1008976742
- Sigstore integration time:
- Permalink: berylliumsec/nebula@7d3a5e99afad2754586a20ae93c6414bbb4707e8
- Branch / Tag: refs/heads/main
- Owner: https://github.com/berylliumsec
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@7d3a5e99afad2754586a20ae93c6414bbb4707e8
- Trigger Event: pull_request