A utility for Automated BEnchmark Distribution

Project description

Abed is an automated system for benchmarking machine learning algorithms. It was created for experiments that run multiple methods on multiple datasets with multiple parameter settings, and it automatically processes the resulting output files into result tables. Abed was designed for use with the Dutch LISA supercomputer, but should be usable on any Torque compute cluster.

Abed was created to automate the tedious work needed to set up proper benchmarking experiments. It also removes much of the hassle by describing the entire experimental setup in a single configuration file. A core feature of Abed is that it doesn't care which language the tested methods are written in.
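
As a rough sketch, such a configuration file might look like the following. The file name, field names, and command-template syntax here are illustrative assumptions, not necessarily Abed's actual schema:

# abed_conf.py -- hypothetical configuration (file name and field names
# are illustrative assumptions, not necessarily Abed's actual schema)

PROJECT_NAME = "my_benchmark"

# Datasets that every method is run on
DATASETS = ["iris.txt", "wine.txt"]

# Methods under comparison; since Abed is language-agnostic, each
# method is ultimately just a command to execute
METHODS = ["svm", "forest"]

# Parameter grid: every combination is run for each method/dataset pair
PARAMS = {
    "svm": {"C": [0.1, 1.0, 10.0]},
    "forest": {"n_trees": [10, 100]},
}

# Command templates expanded for each unit of work; note that the two
# methods are written in different languages
COMMANDS = {
    "svm": "python svm.py {dataset} {C}",
    "forest": "Rscript forest.R {dataset} {n_trees}",
}

Under this setup, each method/dataset/parameter combination becomes one unit of work, and the result files are collected once the runs have finished.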

Abed can create output tables either as plain-text files or as HTML pages using the excellent DataTables plugin. To support offline operation, the necessary DataTables files are packaged with Abed.

Documentation

Abed's documentation is available online.

Screenshots

TBD.

Notes

The current version of Abed is quite usable. However, it is still considered beta software: it is not yet fully documented, and some robustness improvements are planned. For a similar, more mature project that works with R, see BatchExperiments.

Download files

Source Distribution

abed-0.0.2.tar.gz (1.1 MB)
