# Performance Testing

## Introduction

These functions provide performance testing support, allowing for the quick comparison of models, experiment design heuristics and quality parameters. QInfer’s performance testing functionality can be applied quickly, without writing boilerplate code each time. For instance:

>>> import qinfer
>>> n_particles = int(1e5)
>>> perf = qinfer.perf_test(
...     qinfer.SimplePrecessionModel(), n_particles,
...     qinfer.UniformDistribution([0, 1]), 200,
...     qinfer.ExpSparseHeuristic
... )


## Function Reference

`qinfer.perf_test(model, n_particles, prior, n_exp, heuristic_class, true_model=None, true_prior=None, true_mps=None, extra_updater_args=None)`

Runs a trial of using SMC to estimate the parameters of a model, given a number of particles, a prior distribution and an experiment design heuristic.

Parameters:

- **model** (`qinfer.Model`) – Model whose parameters are to be estimated.
- **n_particles** (`int`) – Number of SMC particles to use.
- **prior** (`qinfer.Distribution`) – Prior to use in selecting SMC particles.
- **n_exp** (`int`) – Number of experimental data points to draw from the model.
- **heuristic_class** (`qinfer.Heuristic`) – Constructor function for the experiment design heuristic to be used.
- **true_model** (`qinfer.Model`) – Model to be used in generating experimental data. If `None`, assumed to be `model`. Note that if the true and estimation models have different numbers of parameters, the loss will be calculated by aligning the respective model vectors “at the right,” analogously to the convention used by NumPy broadcasting.
- **true_prior** (`qinfer.Distribution`) – Prior to be used in selecting the true model parameters. If `None`, assumed to be `prior`.
- **true_mps** (`numpy.ndarray`) – The true model parameters. If `None`, they will be sampled from `true_prior`. Note that as this function runs exactly one trial, only one model parameter vector may be passed; in particular, this requires that `len(true_mps.shape) == 1`.
- **extra_updater_args** (`dict`) – Extra keyword arguments for the updater, such as resampling and zero-weight policies.

Returns: A record array of performance metrics, indexed by the number of experiments performed. See Performance Results Structure for more details on the type returned by this function.

Return type: `np.ndarray`
`qinfer.perf_test_multiple(n_trials, model, n_particles, prior, n_exp, heuristic_class, true_model=None, true_prior=None, true_mps=None, apply=<class 'qinfer.perf_testing.apply_serial'>, allow_failures=False, extra_updater_args=None, progressbar=None)`

Runs many trials of using SMC to estimate the parameters of a model, given a number of particles, a prior distribution and an experiment design heuristic.

In addition to the parameters accepted by perf_test(), this function takes the following arguments:

Parameters:

- **n_trials** (`int`) – Number of different trials to run.
- **apply** (callable) – Function to call to delegate each trial. See, for example, `apply()`.
- **progressbar** (`qutip.ui.BaseProgressBar`) – QuTiP-style progress bar class used to report how many trials have successfully completed.
- **allow_failures** (`bool`) – If `False`, an exception raised in any trial will propagate out. Otherwise, failed trials are masked out of the returned performance array using NumPy masked arrays.

Returns: A record array of performance metrics, indexed by the trial and the number of experiments performed. See Performance Results Structure for more details on the type returned by this function.

Return type: `np.ndarray`
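The masking behavior with `allow_failures=True` can be sketched with plain NumPy. The loss values below are hypothetical stand-ins for `perf['loss']`; the point is only that masked trials drop out of aggregate statistics:

```python
import numpy as np
import numpy.ma as ma

# Hypothetical loss values for 3 trials of 4 experiments each,
# standing in for perf['loss'] from perf_test_multiple.
loss = np.array([
    [0.8, 0.4, 0.2, 0.10],
    [0.9, 0.5, 0.3, 0.20],   # suppose this trial raised an exception
    [0.7, 0.3, 0.1, 0.05],
])

masked_loss = ma.masked_array(loss, mask=False)
masked_loss[1, :] = ma.masked       # failed trial masked out entirely

# Reductions over trials ignore the masked row:
mean_loss = masked_loss.mean(axis=0)
print(mean_loss[0])                 # mean of 0.8 and 0.7 only
```

Because the mask is carried with the array, downstream plotting and averaging code works unchanged whether or not any trials failed.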

## Performance Results Structure

Performance results, as collected by `perf_test()`, are returned as a record array with several fields, each describing a different metric collected by QInfer about the performance. In addition to these fields, each field in `model.expparams_dtype` is added as a field to the performance results structure to record which measurements were performed.

For a single performance trial, the shape of the performance results array is `(n_exp, )`, such that `perf[idx_exp]` returns metrics describing the performance immediately following collection of the datum `idx_exp`. Some fields are not scalar-valued, in which case `perf[field]` has shape `(n_exp, ) + field_shape`.

On the other hand, when multiple trials are collected by `perf_test_multiple`, the results are returned as an array with the same fields, but with an additional index over trials, for a shape of `(n_trials, n_exp)`.
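These indexing conventions can be sketched with a mock record array. The field names match those documented in this section, but the sizes (and the zero-filled values) are arbitrary placeholders:

```python
import numpy as np

n_trials, n_exp, n_modelparams = 3, 4, 2

# Mock performance record array with a subset of the real fields;
# perf_test_multiple returns an array shaped (n_trials, n_exp) like this.
dtype = [('loss', float), ('elapsed_time', float),
         ('est', float, (n_modelparams,))]
perf = np.zeros((n_trials, n_exp), dtype=dtype)

# Scalar fields index as (n_trials, n_exp); non-scalar fields
# gain their field_shape as trailing axes.
assert perf['loss'].shape == (n_trials, n_exp)
assert perf['est'].shape == (n_trials, n_exp, n_modelparams)

# E.g., the loss after the final experiment of each trial:
final_loss_per_trial = perf['loss'][:, -1]
```

A single trial from `perf_test()` would instead have shape `(n_exp, )`, so `perf[idx_exp]` selects the metrics after datum `idx_exp` directly.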

| Field | Type | Shape | Description |
| --- | --- | --- | --- |
| `elapsed_time` | `float` | scalar | Time (in seconds) elapsed during the SMC update for this experiment. Includes resampling, but excludes experiment design, generation of “true” data and calculation of performance metrics. |
| `loss` | `float` | scalar | Decision-theoretic loss incurred by the estimate after updating with this experiment, given by the quadratic loss $\operatorname{Tr}(Q (\hat{\vec{x}} - \vec{x}) (\hat{\vec{x}} - \vec{x})^{\mathrm{T}})$. If the true and estimation models have different numbers of parameters, the loss will only be evaluated for those parameters that are in common (aligning the two vectors at the right). |
| `resample_count` | `int` | scalar | Number of times that resampling was performed on the SMC updater. |
| `outcome` | `int` | scalar | Outcome of the experiment that was performed. |
| `true` | `float` | `(true_model.n_modelparams, )` | Vector of model parameters used to simulate data. For time-dependent models, this changes with each experiment as per `true_model.update_timestep`. |
| `est` | `float` | `(model.n_modelparams, )` | Mean vector of model parameters over the current posterior. |