In [1]:
from sklearn_benchmarks.reporting.hp_match import HpMatchReporting
from sklearn_benchmarks.utils import default_results_directory
from pathlib import Path
import pandas as pd
# Show full dataframes in the report (no truncated rows, columns, or cell contents)
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)

ONNX Runtime vs. scikit-learn¶

In [2]:
results_dir = default_results_directory()
In [3]:
# Parameters (this cell overrides the default results directory above)
results_dir = "./results/local/20220315T132911/"
In [4]:
results_dir = Path(results_dir)
In [5]:
reporting = HpMatchReporting(other_library="onnx", config="config.yml", log_scale=True, results_dir=results_dir)
reporting.make_report()

We assume here that there is a perfect match between the hyperparameters of both libraries. For a given set of parameters and a given dataset, we compute the speedup as time scikit-learn / time onnx. For instance, a speedup of 2 means that onnx is twice as fast as scikit-learn for that set of parameters and that dataset.
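
As a minimal sketch, the speedup reported in the tables below can be recomputed from the raw duration columns (column names taken from those tables; this helper is illustrative, not the reporting code itself):

import pandas as pd

def add_speedup(df: pd.DataFrame) -> pd.DataFrame:
    # A speedup above 1 means onnx predicted faster than scikit-learn
    # for that combination of hyperparameters and dataset.
    df = df.copy()
    df["speedup"] = df["mean_duration_sklearn"] / df["mean_duration_onnx"]
    return df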

KNeighborsClassifier (brute force) ¶

onnx (1.11.0) vs. scikit-learn (1.0.2)

Speedup barplots ¶

All estimators share the following parameters: algorithm=brute.
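
For context, a self-contained sketch of how such a comparison can be reproduced by hand with skl2onnx and onnxruntime (illustrative only, not the benchmark harness that produced the results below; dataset sizes are kept small for readability):

import time
import numpy as np
import onnxruntime as rt
from skl2onnx import to_onnx
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Fit a brute-force KNN classifier on synthetic data
X, y = make_classification(n_samples=10_000, n_features=100, random_state=0)
X = X.astype(np.float32)
model = KNeighborsClassifier(algorithm="brute", n_neighbors=5).fit(X, y)

# Convert the fitted estimator to ONNX and load it in onnxruntime
onx = to_onnx(model, X[:1])
sess = rt.InferenceSession(onx.SerializeToString(), providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# Time predict on 1000 samples with both backends
X_test = X[:1000]
t0 = time.perf_counter()
model.predict(X_test)
sklearn_duration = time.perf_counter() - t0

t0 = time.perf_counter()
sess.run(None, {input_name: X_test})
onnx_duration = time.perf_counter() - t0

print(f"speedup: {sklearn_duration / onnx_duration:.3f}")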

Raw results ¶

predict

function n_samples_train n_samples n_features mean_duration_sklearn std_duration_sklearn iteration_throughput latency n_jobs n_neighbors accuracy_score_sklearn mean_duration_onnx std_duration_onnx accuracy_score_onnx speedup std_speedup sklearn_profiling onnx_profiling
0 predict 100000 1000 100 2.872 0.287 0.0 0.003 -1 1 0.676 18.028 0.188 0.676 0.159 0.159 Download Download
1 predict 100000 1 100 0.032 0.006 0.0 0.032 -1 1 0.000 0.335 0.014 0.000 0.095 0.095 Download Download
2 predict 100000 1000 100 3.563 0.068 0.0 0.004 -1 5 0.743 18.304 0.054 0.743 0.195 0.195 Download Download
3 predict 100000 1 100 0.030 0.002 0.0 0.030 -1 5 1.000 0.337 0.010 1.000 0.089 0.089 Download Download
4 predict 100000 1000 100 2.626 0.047 0.0 0.003 1 100 0.846 17.820 0.063 0.846 0.147 0.147 Download Download
5 predict 100000 1 100 0.027 0.002 0.0 0.027 1 100 1.000 0.340 0.011 1.000 0.079 0.079 Download Download
6 predict 100000 1000 100 3.373 0.081 0.0 0.003 -1 100 0.846 17.583 0.239 0.846 0.192 0.192 Download Download
7 predict 100000 1 100 0.030 0.006 0.0 0.030 -1 100 1.000 0.350 0.009 1.000 0.084 0.084 Download Download
8 predict 100000 1000 100 2.572 0.048 0.0 0.003 1 5 0.743 17.604 0.086 0.743 0.146 0.146 Download Download
9 predict 100000 1 100 0.024 0.001 0.0 0.024 1 5 1.000 0.326 0.007 1.000 0.075 0.075 Download Download
10 predict 100000 1000 100 1.711 0.030 0.0 0.002 1 1 0.676 17.815 0.020 0.676 0.096 0.096 Download Download
11 predict 100000 1 100 0.027 0.004 0.0 0.027 1 1 0.000 0.348 0.010 0.000 0.078 0.079 Download Download
12 predict 100000 1000 2 1.957 0.046 0.0 0.002 -1 1 0.845 4.260 0.068 0.845 0.459 0.459 Download Download
13 predict 100000 1 2 0.005 0.001 0.0 0.005 -1 1 1.000 0.241 0.008 1.000 0.022 0.022 Download Download
14 predict 100000 1000 2 2.921 0.075 0.0 0.003 -1 5 0.883 4.292 0.066 0.883 0.680 0.681 Download Download
15 predict 100000 1 2 0.008 0.002 0.0 0.008 -1 5 1.000 0.254 0.011 1.000 0.030 0.030 Download Download
16 predict 100000 1000 2 2.327 0.074 0.0 0.002 1 100 0.887 4.161 0.038 0.887 0.559 0.559 Download Download
17 predict 100000 1 2 0.004 0.000 0.0 0.004 1 100 1.000 0.243 0.017 1.000 0.015 0.015 Download Download
18 predict 100000 1000 2 3.012 0.064 0.0 0.003 -1 100 0.887 4.294 0.116 0.887 0.701 0.702 Download Download
19 predict 100000 1 2 0.007 0.002 0.0 0.007 -1 100 1.000 0.248 0.007 1.000 0.028 0.028 Download Download
20 predict 100000 1000 2 2.346 0.067 0.0 0.002 1 5 0.883 4.407 0.159 0.883 0.532 0.533 Download Download
21 predict 100000 1 2 0.003 0.000 0.0 0.003 1 5 1.000 0.262 0.006 1.000 0.013 0.013 Download Download
22 predict 100000 1000 2 1.390 0.014 0.0 0.001 1 1 0.845 4.485 0.075 0.845 0.310 0.310 Download Download
23 predict 100000 1 2 0.002 0.000 0.0 0.002 1 1 1.000 0.260 0.010 1.000 0.009 0.009 Download Download

Profiling traces can be visualized using Perfetto UI.

HistGradientBoostingClassifier ¶

onnx (1.11.0) vs. scikit-learn (1.0.2)

Speedup barplots ¶

All estimators share the following parameters: learning_rate=0.01, n_iter_no_change=10, max_leaf_nodes=100, max_bins=255, min_samples_leaf=100, max_iter=300.
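
For reference, this shared configuration corresponds to a scikit-learn estimator constructed roughly as follows (a sketch; these parameters are integer-valued in scikit-learn):

from sklearn.ensemble import HistGradientBoostingClassifier

est = HistGradientBoostingClassifier(
    learning_rate=0.01,
    n_iter_no_change=10,
    max_leaf_nodes=100,
    max_bins=255,
    min_samples_leaf=100,
    max_iter=300,
)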

Raw results ¶

predict

function n_samples_train n_samples n_features mean_duration_sklearn std_duration_sklearn iteration_throughput latency accuracy_score_sklearn mean_duration_onnx std_duration_onnx accuracy_score_onnx speedup std_speedup sklearn_profiling onnx_profiling
0 predict 100000 1000 100 0.191 0.003 0.004 0.0 0.795 0.601 0.026 0.795 0.318 0.318 Download Download

Profiling traces can be visualized using Perfetto UI.

Benchmark environment information¶

System¶

python 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0]
executable /usr/share/miniconda/envs/sklbench/bin/python
machine Linux-5.11.0-1028-azure-x86_64-with-glibc2.10
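
This information can be reproduced on any machine with the standard library (a minimal sketch):

import sys
import platform

print("python", sys.version)
print("executable", sys.executable)
print("machine", platform.platform())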

Dependencies¶

version
pip 22.0.4
setuptools 60.9.3
sklearn 1.0.2
numpy 1.22.3
scipy 1.8.0
Cython None
pandas 1.4.1
matplotlib 3.5.1
joblib 1.1.0
threadpoolctl 3.1.0
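
This dependency list is the kind of output scikit-learn's own utility provides; to print a similar report locally:

import sklearn

sklearn.show_versions()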

Threadpool¶

user_api internal_api prefix filepath version threading_layer architecture num_threads
0 blas openblas libopenblas /usr/share/miniconda/envs/sklbench/lib/libopenblasp-r0.3.18.so 0.3.18 pthreads Haswell 2
1 openmp openmp libgomp /usr/share/miniconda/envs/sklbench/lib/libgomp.so.1.0.0 None NaN NaN 2
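
The threadpool table above can be reproduced with threadpoolctl, which is presumably how it was collected (a minimal sketch):

from threadpoolctl import threadpool_info
import pandas as pd

# One entry per native threadpool (BLAS, OpenMP, ...) loaded in the process
print(pd.DataFrame(threadpool_info()))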

CPU count¶

cpu_count 2
physical_cpu_count 2
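
A sketch of how such counts can be obtained (assuming joblib, which is listed in the dependencies, is used for the physical-core count):

import os
import joblib

print("cpu_count", os.cpu_count())
# only_physical_cores requires a recent joblib (>= 1.1)
print("physical_cpu_count", joblib.cpu_count(only_physical_cores=True))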