pytest-elk-reporter


A plugin to send pytest test results to an ELK stack, with extra context data.

Features

  • Report each test result to Elasticsearch as soon as the test finishes
  • Automatically append context data to each test:
    • git information such as branch, last commit, and more
    • all CI environment variables
      • Jenkins
      • Travis CI
      • Circle CI
      • GitHub Actions
    • username, if available
  • Report a test summary to Elasticsearch for each session, with all the context data
  • Ability to append any user data to the context sent to Elasticsearch

Installation

You can install "pytest-elk-reporter" via pip from PyPI:

pip install pytest-elk-reporter

Elasticsearch configuration

The auto_create_index setting needs to be enabled for the indexes the plugin writes to, since the plugin has no code to create indexes itself. This is the Elasticsearch default:

curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
    "persistent": {
        "action.auto_create_index": "true"
    }
}
'
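
If you prefer to apply this setting from Python instead of curl, here is a minimal sketch using the requests library (assuming Elasticsearch is reachable at localhost:9200 with no authentication):

import requests

# Enable automatic index creation cluster-wide; this is equivalent to
# the curl command above.
response = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"action.auto_create_index": "true"}},
)
response.raise_for_status()
print(response.json())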

For more info on this Elasticsearch feature, check their index documentation.

Usage

Run and configure from the command line

pytest --es-address 127.0.0.1:9200
# or if you need user/password to authenticate
pytest --es-address my-elk-server.io:9200 --es-username fruch --es-password 'passwordsarenicetohave'

Configure from code (ideally in conftest.py)

from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        # TODO: get credentials in a more secure fashion programmatically,
        # maybe AWS Secrets Manager or the like...
        # or put them in plain text in the code... what could ever go wrong?
        plugin.es_address = 'my-elk-server.io:9200'
        plugin.es_user = 'fruch'
        plugin.es_password = 'passwordsarenicetohave'
        plugin.es_index_name = 'test_data'
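
To address the TODO above, one option is to read the credentials from environment variables instead of hard-coding them. A minimal sketch, assuming variable names ES_ADDRESS, ES_USER, ES_PASSWORD, and ES_INDEX_NAME (set them however your CI system provides secrets):

import os

from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        # The variable names below are an assumption; wire them up to
        # whatever secret store your CI exposes.
        plugin.es_address = os.environ.get("ES_ADDRESS", "127.0.0.1:9200")
        plugin.es_user = os.environ.get("ES_USER", "")
        plugin.es_password = os.environ.get("ES_PASSWORD", "")
        plugin.es_index_name = os.environ.get("ES_INDEX_NAME", "test_data")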

Configure from pytest ini file

# put this in pytest.ini / tox.ini / setup.cfg
[pytest]
es_address = my-elk-server.io:9200
es_user = fruch
es_password = passwordsarenicetohave
es_index_name = test_data

See the pytest docs for more on configuring via .ini files.

Collect context data for the whole session

For example, with this I'll be able to build a dashboard per version:

import pytest

@pytest.fixture(scope="session", autouse=True)
def report_formal_version_to_elk(request):
    """
    Append my own data specific, for example which of the code uner test is used
    """
    # TODO: take it programticly of of the code under test...
    my_data = {"formal_version": "1.0.0-rc2" }

    elk = request.config.pluginmanager.get_plugin("elk-reporter-runtime")
    elk.session_data.update(**my_data)
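
One way to fill in the TODO above is to read the installed version of the code under test with importlib.metadata (Python 3.8+); "my-package" here is a placeholder for your actual distribution name:

from importlib.metadata import version

# "my-package" is a placeholder for the distribution name of the code
# under test; version() returns its installed version string.
my_data = {"formal_version": version("my-package")}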

Collect data for specific tests

import requests

def test_my_service_and_collect_timings(request, elk_reporter):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200

    elk_reporter.append_test_data(request, {"do_something_response_time": response.elapsed.total_seconds()})
    # now a response-time-per-version dashboard is quite easy to build
    # and yeah, it's not exactly a real usable metric, it's just an example...

Or via the record_property built-in fixture (which is normally used to collect data into the junitxml report):

import requests

def test_my_service_and_collect_timings(record_property):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200

    record_property("do_something_response_time", response.elapsed.total_seconds())
    # now a response-time-per-version dashboard is quite easy to build
    # and yeah, it's not exactly a real usable metric, it's just an example...

Split tests based on history

A cool thing you can do, now that you have test history, is to split the tests based on their actual runtime when passing. For integration tests, which might be long (minutes to hours), this is priceless.

In this example we're going to split the run into slices of at most 4 minutes, while any test that doesn't have history information is assumed to take 60 seconds:

# pytest --collect-only --es-splice --es-max-splice-time=4 --es-default-test-time=60
...

0: 0:04:00 - 3 - ['test_history_slices.py::test_should_pass_1', 'test_history_slices.py::test_should_pass_2', 'test_history_slices.py::test_should_pass_3']
1: 0:04:00 - 2 - ['test_history_slices.py::test_with_history_data', 'test_history_slices.py::test_that_failed']

...

# cat include000.txt
test_history_slices.py::test_should_pass_1
test_history_slices.py::test_should_pass_2
test_history_slices.py::test_should_pass_3

# cat include001.txt
test_history_slices.py::test_with_history_data
test_history_slices.py::test_that_failed

### now we can run each slice on its own machine
### on machine1
# pytest $(cat include000.txt)

### on machine2
# pytest $(cat include001.txt)

Contributing

Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.

License

Distributed under the terms of the MIT license, "pytest-elk-reporter" is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Thanks

This pytest plugin was generated with Cookiecutter along with @hackebrot's cookiecutter-pytest-plugin template.
