DirectViewParallelizedModel(serial_model, direct_view, purge_client=False, serial_threshold=None)
This model assumes that it has ownership over the DirectView, such that no other processes will send tasks during the lifetime of the Model.
If you are having trouble pickling your model, consider switching to
direct_view.use_dill(). This mode gives more support for closures.
- serial_model (qinfer.Model) – Model to be parallelized. This model will be distributed to the engines in the direct view, so it must support pickling.
- direct_view (ipyparallel.DirectView) – Direct view onto the engines that will be used to parallelize evaluation of the model’s likelihood function.
- purge_client (bool) – If True, then this model will purge results and metadata from the IPython client whenever the model cache is cleared. This is useful for addressing memory leaks caused by very large numbers of calls to likelihood. By default, this is disabled, since enabling this option can cause data loss if the client is being sent other tasks during the operation of this model.
- serial_threshold (int) – Sets the number of model vectors below which the serial model is to be preferred. By default, this is set to 10 * n_engines, where n_engines is the number of engines exposed by the direct view.

n_engines
The number of engines seen by the direct view owned by this parallelized model.
Return type: int
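The serial-vs-parallel dispatch controlled by serial_threshold can be sketched as follows. Here use_serial is a hypothetical helper, not part of qinfer's API; it only illustrates the documented default of 10 * n_engines.

```python
def use_serial(n_models, n_engines, serial_threshold=None):
    # Documented default: prefer the serial model when there are fewer
    # than 10 * n_engines model vectors to evaluate.
    threshold = 10 * n_engines if serial_threshold is None else serial_threshold
    return n_models < threshold

# With 4 engines and the default threshold (40):
print(use_serial(30, n_engines=4))   # → True: too few model vectors to parallelize
print(use_serial(200, n_engines=4))  # → False: dispatch to the engines
```

Below the threshold, the overhead of scattering work to the engines outweighs the benefit, so the serial model is called directly.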
clear_cache()
Clears any cache associated with the serial model and the engines seen by the direct view.
likelihood(outcomes, modelparams, expparams)
Returns the likelihood for the underlying (serial) model, distributing the model parameter array across the engines controlled by this parallelized model. Returns what the serial model would return; see the serial model's likelihood.
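The scatter/gather pattern behind this can be illustrated without a running cluster. In the sketch below, serial_likelihood is an invented toy (not qinfer's API) returning an array of shape (n_outcomes, n_models, n_experiments), and a plain loop stands in for the engines; the point is that chunking modelparams and concatenating the results along the model axis reproduces the serial answer.

```python
import numpy as np

def serial_likelihood(outcomes, modelparams, expparams):
    # Toy likelihood, purely illustrative: Pr(outcome=1) = modelparam * expparam.
    pr1 = modelparams[np.newaxis, :, np.newaxis] * expparams[np.newaxis, np.newaxis, :]
    return np.where(outcomes[:, np.newaxis, np.newaxis] == 1, pr1, 1 - pr1)

outcomes = np.array([0, 1])
modelparams = np.linspace(0.1, 0.9, 8)
expparams = np.array([0.5, 1.0])

# "Parallel" evaluation: split modelparams into chunks, evaluate each chunk
# (here in a loop, standing in for the engines), then stitch the partial
# results back together along the model-parameter axis (axis 1).
chunks = np.array_split(modelparams, 4)
parts = [serial_likelihood(outcomes, chunk, expparams) for chunk in chunks]
L = np.concatenate(parts, axis=1)

assert np.allclose(L, serial_likelihood(outcomes, modelparams, expparams))
```

Because the likelihood of each model vector is independent of the others, splitting along the model axis and concatenating is exact, not an approximation.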
simulate_experiment(modelparams, expparams, repeat=1, split_by_modelparams=True)
Simulates the underlying (serial) model using the parallel engines. Returns what the serial model would return; see the serial model's simulate_experiment.
Parameters: split_by_modelparams (bool) – If True, splits modelparams into n_engines chunks and distributes them across the engines. If False, splits up expparams instead.
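A minimal sketch of the chunking itself, assuming numpy.array_split semantics for dividing an array among engines (shapes are hypothetical):

```python
import numpy as np

# Hypothetical sizes: 10 model vectors of dimension 2, 3 experiments, 4 engines.
modelparams = np.arange(10 * 2).reshape(10, 2)
expparams = np.arange(3)
n_engines = 4

# split_by_modelparams=True: chunk along the model-parameter axis.
chunks = np.array_split(modelparams, n_engines, axis=0)
print([c.shape[0] for c in chunks])  # → [3, 3, 2, 2]
```

np.array_split tolerates sizes that do not divide evenly, so the first engines simply receive one extra model vector each. Splitting by modelparams is the better default when there are many more model vectors than experiments, which is the typical case in particle-filter updates.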