matplotlib.testing#

Helper functions for testing.

matplotlib.testing.ipython_in_subprocess(requested_backend_or_gui_framework, all_expected_backends)[source]#
matplotlib.testing.is_ci_environment()[source]#
matplotlib.testing.set_font_settings_for_testing()[source]#
matplotlib.testing.set_reproducibility_for_testing()[source]#
matplotlib.testing.setup()[source]#
matplotlib.testing.subprocess_run_for_testing(command, env=None, timeout=60, stdout=None, stderr=None, check=False, text=True, capture_output=False)[source]#

Create and run a subprocess.

Thin wrapper around subprocess.run, intended for testing. Will mark fork() failures on Cygwin as expected failures: not a success, but not indicating a problem with the code either.

Parameters:
command : list of str
env : dict[str, str]
timeout : float
stdout, stderr
check : bool
text : bool

    Also called universal_newlines in subprocess. This name was chosen because the main effect is returning bytes (False) vs. str (True), though it also tries to normalize newlines across platforms.

capture_output : bool

    Set stdout and stderr to subprocess.PIPE.

Returns:
proc : subprocess.CompletedProcess

Raises:
pytest.xfail

    If platform is Cygwin and subprocess reports a fork() failure.

See also

subprocess.run
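A minimal sketch of the behavior described above, assuming the wrapper catches the BlockingIOError that a failed fork() raises and treats it as an expected failure on Cygwin (the real helper calls pytest.xfail there; a plain exception stands in for it here). The function name run_for_testing_sketch is illustrative, not part of the API.

```python
import subprocess
import sys


def run_for_testing_sketch(command, env=None, timeout=60, check=False,
                           text=True, capture_output=False):
    """Sketch of a thin subprocess.run wrapper for tests (assumed behavior)."""
    try:
        return subprocess.run(
            command, env=env, timeout=timeout, check=check,
            text=text, capture_output=capture_output)
    except BlockingIOError:
        if sys.platform == "cygwin":
            # The real helper would call pytest.xfail("Fork failure") here,
            # marking the test as an expected failure rather than an error.
            raise RuntimeError("fork() failure on Cygwin: expected failure")
        raise


# Example: run a trivial Python subprocess and inspect its output.
proc = run_for_testing_sketch(
    [sys.executable, "-c", "print('hello from subprocess')"],
    capture_output=True)
print(proc.stdout.strip())  # → hello from subprocess
```

Because check defaults to False, a nonzero exit status does not raise; tests that want an exception on failure pass check=True, exactly as with subprocess.run.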
matplotlib.testing.subprocess_run_helper(func, *args, timeout, extra_env=None)[source]#

Run a function in a sub-process.

Parameters:
func : function

    The function to be run. It must be in a module that is importable.

*args : str

    Any additional command line arguments to be passed in the first argument to subprocess.run.

extra_env : dict[str, str]

    Any additional environment variables to be set for the subprocess.
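The pattern above can be sketched as follows: the child process imports the function's module by name and calls the function. This is an assumed simplification (the function name run_helper_sketch is hypothetical, and for illustration the child prints the function's return value, which the real helper does not do).

```python
import os
import subprocess
import sys


def run_helper_sketch(func, *args, timeout=60, extra_env=None):
    """Sketch of running a function in a subprocess (assumed behavior).

    The function's module must be importable by name; the child
    process imports it, calls the function, and prints its result.
    """
    env = {**os.environ, **(extra_env or {})}
    code = (f"import {func.__module__} as _m; "
            f"print(_m.{func.__name__}())")
    return subprocess.run(
        [sys.executable, "-c", code, *args],
        env=env, timeout=timeout, check=True,
        text=True, capture_output=True)


# Example: call platform.python_version in a fresh interpreter.
import platform
proc = run_helper_sketch(platform.python_version)
print(proc.stdout.strip())  # same version string as the parent interpreter
```

Any *args land after the -c payload in sys.argv of the child, so the target function can read them via sys.argv if it wants command line arguments.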

matplotlib.testing.compare#

Utilities for comparing image results.

matplotlib.testing.compare.calculate_rms(expected_image, actual_image)[source]#

Calculate the per-pixel errors, then compute the root mean square error.
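The computation described above can be sketched with NumPy, assuming both inputs are arrays of the same shape (the name calculate_rms_sketch is illustrative, not the library function itself):

```python
import numpy as np


def calculate_rms_sketch(expected_image, actual_image):
    """Sketch of a per-pixel RMS error, as described for calculate_rms.

    Assumes both inputs are numpy arrays of identical shape.
    """
    # Per-pixel errors, then the root mean square over all pixels/channels.
    diff = expected_image.astype(float) - actual_image.astype(float)
    return np.sqrt(np.mean(diff ** 2))


# Example: every pixel differs by exactly 1, so the RMS error is 1.
expected = np.zeros((4, 4))
actual = np.ones((4, 4))
print(calculate_rms_sketch(expected, actual))  # → 1.0
```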

matplotlib.testing.compare.comparable_formats()[source]#

Return the list of file formats that compare_images can compare on this system.