ZTWHHH committed
Commit d083066 · verified · 1 parent: 1ad6175

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the complete change set.
Files changed (50)
  1. .gitattributes +4 -0
  2. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/backends/matplotlibbackend/__pycache__/__init__.cpython-310.pyc +0 -0
  3. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/backends/matplotlibbackend/__pycache__/matplotlib.cpython-310.pyc +0 -0
  4. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/__init__.py +0 -0
  5. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_experimental_lambdify.py +77 -0
  6. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_plot.py +1343 -0
  7. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_region_and.png +3 -0
  8. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_region_not.png +3 -0
  9. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_region_xor.png +3 -0
  10. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_series.py +1771 -0
  11. evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_utils.py +110 -0
  12. evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/diophantine/__pycache__/diophantine.cpython-310.pyc +3 -0
  13. evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/ode/tests/__pycache__/test_systems.cpython-310.pyc +3 -0
  14. evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/tests/__pycache__/test_solvers.cpython-310.pyc +3 -0
  15. evalkit_tf437/lib/python3.10/site-packages/pydantic_core/_pydantic_core.cpython-310-x86_64-linux-gnu.so +3 -0
  16. evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/__init__.py +0 -0
  17. evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/__pycache__/__init__.cpython-310.pyc +0 -0
  18. evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/__pycache__/tempita.cpython-310.pyc +0 -0
  19. evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/__pycache__/version.cpython-310.pyc +0 -0
  20. evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/tempita.py +60 -0
  21. evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/version.py +16 -0
  22. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__init__.py +46 -0
  23. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/__init__.cpython-310.pyc +0 -0
  24. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/_elliptic_envelope.cpython-310.pyc +0 -0
  25. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/_empirical_covariance.cpython-310.pyc +0 -0
  26. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/_graph_lasso.cpython-310.pyc +0 -0
  27. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/_robust_covariance.cpython-310.pyc +0 -0
  28. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_elliptic_envelope.py +266 -0
  29. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_empirical_covariance.py +367 -0
  30. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_graph_lasso.py +1140 -0
  31. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_robust_covariance.py +870 -0
  32. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_shrunk_covariance.py +820 -0
  33. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__init__.py +0 -0
  34. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/__init__.cpython-310.pyc +0 -0
  35. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/test_covariance.cpython-310.pyc +0 -0
  36. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/test_elliptic_envelope.cpython-310.pyc +0 -0
  37. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/test_graphical_lasso.cpython-310.pyc +0 -0
  38. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/test_robust_covariance.cpython-310.pyc +0 -0
  39. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/test_covariance.py +374 -0
  40. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/test_elliptic_envelope.py +52 -0
  41. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/test_graphical_lasso.py +318 -0
  42. evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/test_robust_covariance.py +168 -0
  43. evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_base.cpython-310.pyc +0 -0
  44. evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_dict_learning.cpython-310.pyc +0 -0
  45. evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_factor_analysis.cpython-310.pyc +0 -0
  46. evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_fastica.cpython-310.pyc +0 -0
  47. evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_kernel_pca.cpython-310.pyc +0 -0
  48. evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_lda.cpython-310.pyc +0 -0
  49. evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_nmf.cpython-310.pyc +0 -0
  50. evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_pca.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -1634,3 +1634,7 @@ evalkit_tf437/lib/python3.10/site-packages/pyarrow/libarrow_substrait.so.1800 filter=lfs diff=lfs merge=lfs -text
 falcon/bin/python filter=lfs diff=lfs merge=lfs -text
 evalkit_internvl/lib/python3.10/site-packages/sympy/utilities/tests/__pycache__/test_wester.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
 evalkit_internvl/lib/python3.10/site-packages/sympy/tensor/__pycache__/tensor.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/diophantine/__pycache__/diophantine.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/tests/__pycache__/test_solvers.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/ode/tests/__pycache__/test_systems.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
+evalkit_tf437/lib/python3.10/site-packages/pydantic_core/_pydantic_core.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/backends/matplotlibbackend/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (353 Bytes).
 
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/backends/matplotlibbackend/__pycache__/matplotlib.cpython-310.pyc ADDED
Binary file (8.52 kB).
 
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/__init__.py ADDED
Empty file added.
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_experimental_lambdify.py ADDED
@@ -0,0 +1,77 @@
+from sympy.core.symbol import symbols, Symbol
+from sympy.functions import Max
+from sympy.plotting.experimental_lambdify import experimental_lambdify
+from sympy.plotting.intervalmath.interval_arithmetic import \
+    interval, intervalMembership
+
+
+# Tests for exception handling in experimental_lambdify
+def test_experimental_lambify():
+    x = Symbol('x')
+    f = experimental_lambdify([x], Max(x, 5))
+    # XXX should f be tested? If f(2) is attempted, an
+    # error is raised because a complex produced during wrapping of the arg
+    # is being compared with an int.
+    assert Max(2, 5) == 5
+    assert Max(5, 7) == 7
+
+    x = Symbol('x-3')
+    f = experimental_lambdify([x], x + 1)
+    assert f(1) == 2
+
+
+def test_composite_boolean_region():
+    x, y = symbols('x y')
+
+    r1 = (x - 1)**2 + y**2 < 2
+    r2 = (x + 1)**2 + y**2 < 2
+
+    f = experimental_lambdify((x, y), r1 & r2)
+    a = (interval(-0.1, 0.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(True, True)
+    a = (interval(-1.1, -0.9), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(0.9, 1.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(-0.1, 0.1), interval(1.9, 2.1))
+    assert f(*a) == intervalMembership(False, True)
+
+    f = experimental_lambdify((x, y), r1 | r2)
+    a = (interval(-0.1, 0.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(True, True)
+    a = (interval(-1.1, -0.9), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(True, True)
+    a = (interval(0.9, 1.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(True, True)
+    a = (interval(-0.1, 0.1), interval(1.9, 2.1))
+    assert f(*a) == intervalMembership(False, True)
+
+    f = experimental_lambdify((x, y), r1 & ~r2)
+    a = (interval(-0.1, 0.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(-1.1, -0.9), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(0.9, 1.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(True, True)
+    a = (interval(-0.1, 0.1), interval(1.9, 2.1))
+    assert f(*a) == intervalMembership(False, True)
+
+    f = experimental_lambdify((x, y), ~r1 & r2)
+    a = (interval(-0.1, 0.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(-1.1, -0.9), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(True, True)
+    a = (interval(0.9, 1.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(-0.1, 0.1), interval(1.9, 2.1))
+    assert f(*a) == intervalMembership(False, True)
+
+    f = experimental_lambdify((x, y), ~r1 & ~r2)
+    a = (interval(-0.1, 0.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(-1.1, -0.9), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(0.9, 1.1), interval(-0.1, 0.1))
+    assert f(*a) == intervalMembership(False, True)
+    a = (interval(-0.1, 0.1), interval(1.9, 2.1))
+    assert f(*a) == intervalMembership(True, True)
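
For orientation, a minimal standalone sketch mirroring test_composite_boolean_region above (assuming the same SymPy tree this file belongs to): experimental_lambdify compiles a boolean region into a callable that, on interval arguments, returns an intervalMembership pair of (condition holds, evaluation is defined on the whole box).

    from sympy.core.symbol import symbols
    from sympy.plotting.experimental_lambdify import experimental_lambdify
    from sympy.plotting.intervalmath.interval_arithmetic import interval

    x, y = symbols('x y')
    # Intersection of two open disks of radius sqrt(2), centered at (1, 0) and (-1, 0).
    region = ((x - 1)**2 + y**2 < 2) & ((x + 1)**2 + y**2 < 2)
    f = experimental_lambdify((x, y), region)

    # A box around the origin lies in both disks.
    print(f(interval(-0.1, 0.1), interval(-0.1, 0.1)))  # intervalMembership(True, True)
    # A box around (1, 0) lies in only one disk.
    print(f(interval(0.9, 1.1), interval(-0.1, 0.1)))   # intervalMembership(False, True)
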
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_plot.py ADDED
@@ -0,0 +1,1343 @@
1
+ import os
2
+ from tempfile import TemporaryDirectory
3
+ import pytest
4
+ from sympy.concrete.summations import Sum
5
+ from sympy.core.numbers import (I, oo, pi)
6
+ from sympy.core.relational import Ne
7
+ from sympy.core.symbol import Symbol, symbols
8
+ from sympy.functions.elementary.exponential import (LambertW, exp, exp_polar, log)
9
+ from sympy.functions.elementary.miscellaneous import (real_root, sqrt)
10
+ from sympy.functions.elementary.piecewise import Piecewise
11
+ from sympy.functions.elementary.trigonometric import (cos, sin)
12
+ from sympy.functions.elementary.miscellaneous import Min
13
+ from sympy.functions.special.hyper import meijerg
14
+ from sympy.integrals.integrals import Integral
15
+ from sympy.logic.boolalg import And
16
+ from sympy.core.singleton import S
17
+ from sympy.core.sympify import sympify
18
+ from sympy.external import import_module
19
+ from sympy.plotting.plot import (
20
+ Plot, plot, plot_parametric, plot3d_parametric_line, plot3d,
21
+ plot3d_parametric_surface)
22
+ from sympy.plotting.plot import (
23
+ unset_show, plot_contour, PlotGrid, MatplotlibBackend, TextBackend)
24
+ from sympy.plotting.series import (
25
+ LineOver1DRangeSeries, Parametric2DLineSeries, Parametric3DLineSeries,
26
+ ParametricSurfaceSeries, SurfaceOver2DRangeSeries)
27
+ from sympy.testing.pytest import skip, warns, raises, warns_deprecated_sympy
28
+ from sympy.utilities import lambdify as lambdify_
29
+ from sympy.utilities.exceptions import ignore_warnings
30
+
31
+ unset_show()
32
+
33
+
34
+ matplotlib = import_module(
35
+ 'matplotlib', min_module_version='1.1.0', catch=(RuntimeError,))
36
+
37
+
38
+ class DummyBackendNotOk(Plot):
39
+ """ Used to verify if users can create their own backends.
40
+ This backend is meant to raise NotImplementedError for methods `show`,
41
+ `save`, `close`.
42
+ """
43
+ def __new__(cls, *args, **kwargs):
44
+ return object.__new__(cls)
45
+
46
+
47
+ class DummyBackendOk(Plot):
48
+ """ Used to verify if users can create their own backends.
49
+ This backend is meant to pass all tests.
50
+ """
51
+ def __new__(cls, *args, **kwargs):
52
+ return object.__new__(cls)
53
+
54
+ def show(self):
55
+ pass
56
+
57
+ def save(self):
58
+ pass
59
+
60
+ def close(self):
61
+ pass
62
+
63
+ def test_basic_plotting_backend():
64
+ x = Symbol('x')
65
+ plot(x, (x, 0, 3), backend='text')
66
+ plot(x**2 + 1, (x, 0, 3), backend='text')
67
+
68
+ @pytest.mark.parametrize("adaptive", [True, False])
69
+ def test_plot_and_save_1(adaptive):
70
+ if not matplotlib:
71
+ skip("Matplotlib not the default backend")
72
+
73
+ x = Symbol('x')
74
+ y = Symbol('y')
75
+
76
+ with TemporaryDirectory(prefix='sympy_') as tmpdir:
77
+ ###
78
+ # Examples from the 'introduction' notebook
79
+ ###
80
+ p = plot(x, legend=True, label='f1', adaptive=adaptive, n=10)
81
+ p = plot(x*sin(x), x*cos(x), label='f2', adaptive=adaptive, n=10)
82
+ p.extend(p)
83
+ p[0].line_color = lambda a: a
84
+ p[1].line_color = 'b'
85
+ p.title = 'Big title'
86
+ p.xlabel = 'the x axis'
87
+ p[1].label = 'straight line'
88
+ p.legend = True
89
+ p.aspect_ratio = (1, 1)
90
+ p.xlim = (-15, 20)
91
+ filename = 'test_basic_options_and_colors.png'
92
+ p.save(os.path.join(tmpdir, filename))
93
+ p._backend.close()
94
+
95
+ p.extend(plot(x + 1, adaptive=adaptive, n=10))
96
+ p.append(plot(x + 3, x**2, adaptive=adaptive, n=10)[1])
97
+ filename = 'test_plot_extend_append.png'
98
+ p.save(os.path.join(tmpdir, filename))
99
+
100
+ p[2] = plot(x**2, (x, -2, 3), adaptive=adaptive, n=10)
101
+ filename = 'test_plot_setitem.png'
102
+ p.save(os.path.join(tmpdir, filename))
103
+ p._backend.close()
104
+
105
+ p = plot(sin(x), (x, -2*pi, 4*pi), adaptive=adaptive, n=10)
106
+ filename = 'test_line_explicit.png'
107
+ p.save(os.path.join(tmpdir, filename))
108
+ p._backend.close()
109
+
110
+ p = plot(sin(x), adaptive=adaptive, n=10)
111
+ filename = 'test_line_default_range.png'
112
+ p.save(os.path.join(tmpdir, filename))
113
+ p._backend.close()
114
+
115
+ p = plot((x**2, (x, -5, 5)), (x**3, (x, -3, 3)), adaptive=adaptive, n=10)
116
+ filename = 'test_line_multiple_range.png'
117
+ p.save(os.path.join(tmpdir, filename))
118
+ p._backend.close()
119
+
120
+ raises(ValueError, lambda: plot(x, y))
121
+
122
+ #Piecewise plots
123
+ p = plot(Piecewise((1, x > 0), (0, True)), (x, -1, 1), adaptive=adaptive, n=10)
124
+ filename = 'test_plot_piecewise.png'
125
+ p.save(os.path.join(tmpdir, filename))
126
+ p._backend.close()
127
+
128
+ p = plot(Piecewise((x, x < 1), (x**2, True)), (x, -3, 3), adaptive=adaptive, n=10)
129
+ filename = 'test_plot_piecewise_2.png'
130
+ p.save(os.path.join(tmpdir, filename))
131
+ p._backend.close()
132
+
133
+ # test issue 7471
134
+ p1 = plot(x, adaptive=adaptive, n=10)
135
+ p2 = plot(3, adaptive=adaptive, n=10)
136
+ p1.extend(p2)
137
+ filename = 'test_horizontal_line.png'
138
+ p.save(os.path.join(tmpdir, filename))
139
+ p._backend.close()
140
+
141
+ # test issue 10925
142
+ f = Piecewise((-1, x < -1), (x, And(-1 <= x, x < 0)), \
143
+ (x**2, And(0 <= x, x < 1)), (x**3, x >= 1))
144
+ p = plot(f, (x, -3, 3), adaptive=adaptive, n=10)
145
+ filename = 'test_plot_piecewise_3.png'
146
+ p.save(os.path.join(tmpdir, filename))
147
+ p._backend.close()
148
+
149
+
150
+ @pytest.mark.parametrize("adaptive", [True, False])
151
+ def test_plot_and_save_2(adaptive):
152
+ if not matplotlib:
153
+ skip("Matplotlib not the default backend")
154
+
155
+ x = Symbol('x')
156
+ y = Symbol('y')
157
+ z = Symbol('z')
158
+
159
+ with TemporaryDirectory(prefix='sympy_') as tmpdir:
160
+ #parametric 2d plots.
161
+ #Single plot with default range.
162
+ p = plot_parametric(sin(x), cos(x), adaptive=adaptive, n=10)
163
+ filename = 'test_parametric.png'
164
+ p.save(os.path.join(tmpdir, filename))
165
+ p._backend.close()
166
+
167
+ #Single plot with range.
168
+ p = plot_parametric(
169
+ sin(x), cos(x), (x, -5, 5), legend=True, label='parametric_plot',
170
+ adaptive=adaptive, n=10)
171
+ filename = 'test_parametric_range.png'
172
+ p.save(os.path.join(tmpdir, filename))
173
+ p._backend.close()
174
+
175
+ #Multiple plots with same range.
176
+ p = plot_parametric((sin(x), cos(x)), (x, sin(x)),
177
+ adaptive=adaptive, n=10)
178
+ filename = 'test_parametric_multiple.png'
179
+ p.save(os.path.join(tmpdir, filename))
180
+ p._backend.close()
181
+
182
+ #Multiple plots with different ranges.
183
+ p = plot_parametric(
184
+ (sin(x), cos(x), (x, -3, 3)), (x, sin(x), (x, -5, 5)),
185
+ adaptive=adaptive, n=10)
186
+ filename = 'test_parametric_multiple_ranges.png'
187
+ p.save(os.path.join(tmpdir, filename))
188
+ p._backend.close()
189
+
190
+ #depth of recursion specified.
191
+ p = plot_parametric(x, sin(x), depth=13,
192
+ adaptive=adaptive, n=10)
193
+ filename = 'test_recursion_depth.png'
194
+ p.save(os.path.join(tmpdir, filename))
195
+ p._backend.close()
196
+
197
+ #No adaptive sampling.
198
+ p = plot_parametric(cos(x), sin(x), adaptive=False, n=500)
199
+ filename = 'test_adaptive.png'
200
+ p.save(os.path.join(tmpdir, filename))
201
+ p._backend.close()
202
+
203
+ #3d parametric plots
204
+ p = plot3d_parametric_line(
205
+ sin(x), cos(x), x, legend=True, label='3d_parametric_plot',
206
+ adaptive=adaptive, n=10)
207
+ filename = 'test_3d_line.png'
208
+ p.save(os.path.join(tmpdir, filename))
209
+ p._backend.close()
210
+
211
+ p = plot3d_parametric_line(
212
+ (sin(x), cos(x), x, (x, -5, 5)), (cos(x), sin(x), x, (x, -3, 3)),
213
+ adaptive=adaptive, n=10)
214
+ filename = 'test_3d_line_multiple.png'
215
+ p.save(os.path.join(tmpdir, filename))
216
+ p._backend.close()
217
+
218
+ p = plot3d_parametric_line(sin(x), cos(x), x, n=30,
219
+ adaptive=adaptive)
220
+ filename = 'test_3d_line_points.png'
221
+ p.save(os.path.join(tmpdir, filename))
222
+ p._backend.close()
223
+
224
+ # 3d surface single plot.
225
+ p = plot3d(x * y, adaptive=adaptive, n=10)
226
+ filename = 'test_surface.png'
227
+ p.save(os.path.join(tmpdir, filename))
228
+ p._backend.close()
229
+
230
+ # Multiple 3D plots with same range.
231
+ p = plot3d(-x * y, x * y, (x, -5, 5), adaptive=adaptive, n=10)
232
+ filename = 'test_surface_multiple.png'
233
+ p.save(os.path.join(tmpdir, filename))
234
+ p._backend.close()
235
+
236
+ # Multiple 3D plots with different ranges.
237
+ p = plot3d(
238
+ (x * y, (x, -3, 3), (y, -3, 3)), (-x * y, (x, -3, 3), (y, -3, 3)),
239
+ adaptive=adaptive, n=10)
240
+ filename = 'test_surface_multiple_ranges.png'
241
+ p.save(os.path.join(tmpdir, filename))
242
+ p._backend.close()
243
+
244
+ # Single Parametric 3D plot
245
+ p = plot3d_parametric_surface(sin(x + y), cos(x - y), x - y,
246
+ adaptive=adaptive, n=10)
247
+ filename = 'test_parametric_surface.png'
248
+ p.save(os.path.join(tmpdir, filename))
249
+ p._backend.close()
250
+
251
+ # Multiple Parametric 3D plots.
252
+ p = plot3d_parametric_surface(
253
+ (x*sin(z), x*cos(z), z, (x, -5, 5), (z, -5, 5)),
254
+ (sin(x + y), cos(x - y), x - y, (x, -5, 5), (y, -5, 5)),
255
+ adaptive=adaptive, n=10)
256
+ filename = 'test_parametric_surface.png'
257
+ p.save(os.path.join(tmpdir, filename))
258
+ p._backend.close()
259
+
260
+ # Single Contour plot.
261
+ p = plot_contour(sin(x)*sin(y), (x, -5, 5), (y, -5, 5),
262
+ adaptive=adaptive, n=10)
263
+ filename = 'test_contour_plot.png'
264
+ p.save(os.path.join(tmpdir, filename))
265
+ p._backend.close()
266
+
267
+ # Multiple Contour plots with same range.
268
+ p = plot_contour(x**2 + y**2, x**3 + y**3, (x, -5, 5), (y, -5, 5),
269
+ adaptive=adaptive, n=10)
270
+ filename = 'test_contour_plot.png'
271
+ p.save(os.path.join(tmpdir, filename))
272
+ p._backend.close()
273
+
274
+ # Multiple Contour plots with different range.
275
+ p = plot_contour(
276
+ (x**2 + y**2, (x, -5, 5), (y, -5, 5)),
277
+ (x**3 + y**3, (x, -3, 3), (y, -3, 3)),
278
+ adaptive=adaptive, n=10)
279
+ filename = 'test_contour_plot.png'
280
+ p.save(os.path.join(tmpdir, filename))
281
+ p._backend.close()
282
+
283
+
284
+ @pytest.mark.parametrize("adaptive", [True, False])
285
+ def test_plot_and_save_3(adaptive):
286
+ if not matplotlib:
287
+ skip("Matplotlib not the default backend")
288
+
289
+ x = Symbol('x')
290
+ y = Symbol('y')
291
+ z = Symbol('z')
292
+
293
+ with TemporaryDirectory(prefix='sympy_') as tmpdir:
294
+ ###
295
+ # Examples from the 'colors' notebook
296
+ ###
297
+
298
+ p = plot(sin(x), adaptive=adaptive, n=10)
299
+ p[0].line_color = lambda a: a
300
+ filename = 'test_colors_line_arity1.png'
301
+ p.save(os.path.join(tmpdir, filename))
302
+
303
+ p[0].line_color = lambda a, b: b
304
+ filename = 'test_colors_line_arity2.png'
305
+ p.save(os.path.join(tmpdir, filename))
306
+ p._backend.close()
307
+
308
+ p = plot(x*sin(x), x*cos(x), (x, 0, 10), adaptive=adaptive, n=10)
309
+ p[0].line_color = lambda a: a
310
+ filename = 'test_colors_param_line_arity1.png'
311
+ p.save(os.path.join(tmpdir, filename))
312
+
313
+ p[0].line_color = lambda a, b: a
314
+ filename = 'test_colors_param_line_arity1.png'
315
+ p.save(os.path.join(tmpdir, filename))
316
+
317
+ p[0].line_color = lambda a, b: b
318
+ filename = 'test_colors_param_line_arity2b.png'
319
+ p.save(os.path.join(tmpdir, filename))
320
+ p._backend.close()
321
+
322
+ p = plot3d_parametric_line(
323
+ sin(x) + 0.1*sin(x)*cos(7*x),
324
+ cos(x) + 0.1*cos(x)*cos(7*x),
325
+ 0.1*sin(7*x),
326
+ (x, 0, 2*pi), adaptive=adaptive, n=10)
327
+ p[0].line_color = lambdify_(x, sin(4*x))
328
+ filename = 'test_colors_3d_line_arity1.png'
329
+ p.save(os.path.join(tmpdir, filename))
330
+ p[0].line_color = lambda a, b: b
331
+ filename = 'test_colors_3d_line_arity2.png'
332
+ p.save(os.path.join(tmpdir, filename))
333
+ p[0].line_color = lambda a, b, c: c
334
+ filename = 'test_colors_3d_line_arity3.png'
335
+ p.save(os.path.join(tmpdir, filename))
336
+ p._backend.close()
337
+
338
+ p = plot3d(sin(x)*y, (x, 0, 6*pi), (y, -5, 5), adaptive=adaptive, n=10)
339
+ p[0].surface_color = lambda a: a
340
+ filename = 'test_colors_surface_arity1.png'
341
+ p.save(os.path.join(tmpdir, filename))
342
+ p[0].surface_color = lambda a, b: b
343
+ filename = 'test_colors_surface_arity2.png'
344
+ p.save(os.path.join(tmpdir, filename))
345
+ p[0].surface_color = lambda a, b, c: c
346
+ filename = 'test_colors_surface_arity3a.png'
347
+ p.save(os.path.join(tmpdir, filename))
348
+ p[0].surface_color = lambdify_((x, y, z), sqrt((x - 3*pi)**2 + y**2))
349
+ filename = 'test_colors_surface_arity3b.png'
350
+ p.save(os.path.join(tmpdir, filename))
351
+ p._backend.close()
352
+
353
+ p = plot3d_parametric_surface(x * cos(4 * y), x * sin(4 * y), y,
354
+ (x, -1, 1), (y, -1, 1), adaptive=adaptive, n=10)
355
+ p[0].surface_color = lambda a: a
356
+ filename = 'test_colors_param_surf_arity1.png'
357
+ p.save(os.path.join(tmpdir, filename))
358
+ p[0].surface_color = lambda a, b: a*b
359
+ filename = 'test_colors_param_surf_arity2.png'
360
+ p.save(os.path.join(tmpdir, filename))
361
+ p[0].surface_color = lambdify_((x, y, z), sqrt(x**2 + y**2 + z**2))
362
+ filename = 'test_colors_param_surf_arity3.png'
363
+ p.save(os.path.join(tmpdir, filename))
364
+ p._backend.close()
365
+
366
+
367
+ @pytest.mark.parametrize("adaptive", [True])
368
+ def test_plot_and_save_4(adaptive):
369
+ if not matplotlib:
370
+ skip("Matplotlib not the default backend")
371
+
372
+ x = Symbol('x')
373
+ y = Symbol('y')
374
+
375
+ ###
376
+ # Examples from the 'advanced' notebook
377
+ ###
378
+
379
+ with TemporaryDirectory(prefix='sympy_') as tmpdir:
380
+ i = Integral(log((sin(x)**2 + 1)*sqrt(x**2 + 1)), (x, 0, y))
381
+ p = plot(i, (y, 1, 5), adaptive=adaptive, n=10, force_real_eval=True)
382
+ filename = 'test_advanced_integral.png'
383
+ p.save(os.path.join(tmpdir, filename))
384
+ p._backend.close()
385
+
386
+
387
+ @pytest.mark.parametrize("adaptive", [True, False])
388
+ def test_plot_and_save_5(adaptive):
389
+ if not matplotlib:
390
+ skip("Matplotlib not the default backend")
391
+
392
+ x = Symbol('x')
393
+ y = Symbol('y')
394
+
395
+ with TemporaryDirectory(prefix='sympy_') as tmpdir:
396
+ s = Sum(1/x**y, (x, 1, oo))
397
+ p = plot(s, (y, 2, 10), adaptive=adaptive, n=10)
398
+ filename = 'test_advanced_inf_sum.png'
399
+ p.save(os.path.join(tmpdir, filename))
400
+ p._backend.close()
401
+
402
+ p = plot(Sum(1/x, (x, 1, y)), (y, 2, 10), show=False,
403
+ adaptive=adaptive, n=10)
404
+ p[0].only_integers = True
405
+ p[0].steps = True
406
+ filename = 'test_advanced_fin_sum.png'
407
+
408
+ # XXX: This should be fixed in experimental_lambdify or by using
409
+ # ordinary lambdify so that it doesn't warn. The error results from
410
+ # passing an array of values as the integration limit.
411
+ #
412
+ # UserWarning: The evaluation of the expression is problematic. We are
413
+ # trying a failback method that may still work. Please report this as a
414
+ # bug.
415
+ with ignore_warnings(UserWarning):
416
+ p.save(os.path.join(tmpdir, filename))
417
+
418
+ p._backend.close()
419
+
420
+
421
+ @pytest.mark.parametrize("adaptive", [True, False])
422
+ def test_plot_and_save_6(adaptive):
423
+ if not matplotlib:
424
+ skip("Matplotlib not the default backend")
425
+
426
+ x = Symbol('x')
427
+
428
+ with TemporaryDirectory(prefix='sympy_') as tmpdir:
429
+ filename = 'test.png'
430
+ ###
431
+ # Test expressions that can not be translated to np and generate complex
432
+ # results.
433
+ ###
434
+ p = plot(sin(x) + I*cos(x))
435
+ p.save(os.path.join(tmpdir, filename))
436
+
437
+ with ignore_warnings(RuntimeWarning):
438
+ p = plot(sqrt(sqrt(-x)))
439
+ p.save(os.path.join(tmpdir, filename))
440
+
441
+ p = plot(LambertW(x))
442
+ p.save(os.path.join(tmpdir, filename))
443
+ p = plot(sqrt(LambertW(x)))
444
+ p.save(os.path.join(tmpdir, filename))
445
+
446
+ #Characteristic function of a StudentT distribution with nu=10
447
+ x1 = 5 * x**2 * exp_polar(-I*pi)/2
448
+ m1 = meijerg(((1 / 2,), ()), ((5, 0, 1 / 2), ()), x1)
449
+ x2 = 5*x**2 * exp_polar(I*pi)/2
450
+ m2 = meijerg(((1/2,), ()), ((5, 0, 1/2), ()), x2)
451
+ expr = (m1 + m2) / (48 * pi)
452
+ with warns(
453
+ UserWarning,
454
+ match="The evaluation with NumPy/SciPy failed",
455
+ test_stacklevel=False,
456
+ ):
457
+ p = plot(expr, (x, 1e-6, 1e-2), adaptive=adaptive, n=10)
458
+ p.save(os.path.join(tmpdir, filename))
459
+
460
+
461
+ @pytest.mark.parametrize("adaptive", [True, False])
462
+ def test_plotgrid_and_save(adaptive):
463
+ if not matplotlib:
464
+ skip("Matplotlib not the default backend")
465
+
466
+ x = Symbol('x')
467
+ y = Symbol('y')
468
+
469
+ with TemporaryDirectory(prefix='sympy_') as tmpdir:
470
+ p1 = plot(x, adaptive=adaptive, n=10)
471
+ p2 = plot_parametric((sin(x), cos(x)), (x, sin(x)), show=False,
472
+ adaptive=adaptive, n=10)
473
+ p3 = plot_parametric(
474
+ cos(x), sin(x), adaptive=adaptive, n=10, show=False)
475
+ p4 = plot3d_parametric_line(sin(x), cos(x), x, show=False,
476
+ adaptive=adaptive, n=10)
477
+ # symmetric grid
478
+ p = PlotGrid(2, 2, p1, p2, p3, p4)
479
+ filename = 'test_grid1.png'
480
+ p.save(os.path.join(tmpdir, filename))
481
+ p._backend.close()
482
+
483
+ # grid size greater than the number of subplots
484
+ p = PlotGrid(3, 4, p1, p2, p3, p4)
485
+ filename = 'test_grid2.png'
486
+ p.save(os.path.join(tmpdir, filename))
487
+ p._backend.close()
488
+
489
+ p5 = plot(cos(x),(x, -pi, pi), show=False, adaptive=adaptive, n=10)
490
+ p5[0].line_color = lambda a: a
491
+ p6 = plot(Piecewise((1, x > 0), (0, True)), (x, -1, 1), show=False,
492
+ adaptive=adaptive, n=10)
493
+ p7 = plot_contour(
494
+ (x**2 + y**2, (x, -5, 5), (y, -5, 5)),
495
+ (x**3 + y**3, (x, -3, 3), (y, -3, 3)), show=False,
496
+ adaptive=adaptive, n=10)
497
+ # unsymmetric grid (subplots in one line)
498
+ p = PlotGrid(1, 3, p5, p6, p7)
499
+ filename = 'test_grid3.png'
500
+ p.save(os.path.join(tmpdir, filename))
501
+ p._backend.close()
502
+
503
+
504
+ @pytest.mark.parametrize("adaptive", [True, False])
505
+ def test_append_issue_7140(adaptive):
506
+ if not matplotlib:
507
+ skip("Matplotlib not the default backend")
508
+
509
+ x = Symbol('x')
510
+ p1 = plot(x, adaptive=adaptive, n=10)
511
+ p2 = plot(x**2, adaptive=adaptive, n=10)
512
+ plot(x + 2, adaptive=adaptive, n=10)
513
+
514
+ # append a series
515
+ p2.append(p1[0])
516
+ assert len(p2._series) == 2
517
+
518
+ with raises(TypeError):
519
+ p1.append(p2)
520
+
521
+ with raises(TypeError):
522
+ p1.append(p2._series)
523
+
524
+
525
+ @pytest.mark.parametrize("adaptive", [True, False])
526
+ def test_issue_15265(adaptive):
527
+ if not matplotlib:
528
+ skip("Matplotlib not the default backend")
529
+
530
+ x = Symbol('x')
531
+ eqn = sin(x)
532
+
533
+ p = plot(eqn, xlim=(-S.Pi, S.Pi), ylim=(-1, 1), adaptive=adaptive, n=10)
534
+ p._backend.close()
535
+
536
+ p = plot(eqn, xlim=(-1, 1), ylim=(-S.Pi, S.Pi), adaptive=adaptive, n=10)
537
+ p._backend.close()
538
+
539
+ p = plot(eqn, xlim=(-1, 1), adaptive=adaptive, n=10,
540
+ ylim=(sympify('-3.14'), sympify('3.14')))
541
+ p._backend.close()
542
+
543
+ p = plot(eqn, adaptive=adaptive, n=10,
544
+ xlim=(sympify('-3.14'), sympify('3.14')), ylim=(-1, 1))
545
+ p._backend.close()
546
+
547
+ raises(ValueError,
548
+ lambda: plot(eqn, adaptive=adaptive, n=10,
549
+ xlim=(-S.ImaginaryUnit, 1), ylim=(-1, 1)))
550
+
551
+ raises(ValueError,
552
+ lambda: plot(eqn, adaptive=adaptive, n=10,
553
+ xlim=(-1, 1), ylim=(-1, S.ImaginaryUnit)))
554
+
555
+ raises(ValueError,
556
+ lambda: plot(eqn, adaptive=adaptive, n=10,
557
+ xlim=(S.NegativeInfinity, 1), ylim=(-1, 1)))
558
+
559
+ raises(ValueError,
560
+ lambda: plot(eqn, adaptive=adaptive, n=10,
561
+ xlim=(-1, 1), ylim=(-1, S.Infinity)))
562
+
563
+
564
+ def test_empty_Plot():
565
+ if not matplotlib:
566
+ skip("Matplotlib not the default backend")
567
+
568
+ # No exception showing an empty plot
569
+ plot()
570
+ # Plot is only a base class: doesn't implement any logic for showing
571
+ # images
572
+ p = Plot()
573
+ raises(NotImplementedError, lambda: p.show())
574
+
575
+
576
+ @pytest.mark.parametrize("adaptive", [True, False])
577
+ def test_issue_17405(adaptive):
578
+ if not matplotlib:
579
+ skip("Matplotlib not the default backend")
580
+
581
+ x = Symbol('x')
582
+ f = x**0.3 - 10*x**3 + x**2
583
+ p = plot(f, (x, -10, 10), adaptive=adaptive, n=30, show=False)
584
+ # Random number of segments, probably more than 100, but we want to see
585
+ # that there are segments generated, as opposed to when the bug was present
586
+
587
+ # RuntimeWarning: invalid value encountered in double_scalars
588
+ with ignore_warnings(RuntimeWarning):
589
+ assert len(p[0].get_data()[0]) >= 30
590
+
591
+
592
+ @pytest.mark.parametrize("adaptive", [True, False])
593
+ def test_logplot_PR_16796(adaptive):
594
+ if not matplotlib:
595
+ skip("Matplotlib not the default backend")
596
+
597
+ x = Symbol('x')
598
+ p = plot(x, (x, .001, 100), adaptive=adaptive, n=30,
599
+ xscale='log', show=False)
600
+ # Random number of segments, probably more than 100, but we want to see
601
+ # that there are segments generated, as opposed to when the bug was present
602
+ assert len(p[0].get_data()[0]) >= 30
603
+ assert p[0].end == 100.0
604
+ assert p[0].start == .001
605
+
606
+
607
+ @pytest.mark.parametrize("adaptive", [True, False])
608
+ def test_issue_16572(adaptive):
609
+ if not matplotlib:
610
+ skip("Matplotlib not the default backend")
611
+
612
+ x = Symbol('x')
613
+ p = plot(LambertW(x), show=False, adaptive=adaptive, n=30)
614
+ # Random number of segments, probably more than 50, but we want to see
615
+ # that there are segments generated, as opposed to when the bug was present
616
+ assert len(p[0].get_data()[0]) >= 30
617
+
618
+
619
+ @pytest.mark.parametrize("adaptive", [True, False])
620
+ def test_issue_11865(adaptive):
621
+ if not matplotlib:
622
+ skip("Matplotlib not the default backend")
623
+
624
+ k = Symbol('k', integer=True)
625
+ f = Piecewise((-I*exp(I*pi*k)/k + I*exp(-I*pi*k)/k, Ne(k, 0)), (2*pi, True))
626
+ p = plot(f, show=False, adaptive=adaptive, n=30)
627
+ # Random number of segments, probably more than 100, but we want to see
628
+ # that there are segments generated, as opposed to when the bug was present
629
+ # and that there are no exceptions.
630
+ assert len(p[0].get_data()[0]) >= 30
631
+
632
+
633
+ def test_issue_11461():
634
+ if not matplotlib:
635
+ skip("Matplotlib not the default backend")
636
+
637
+ x = Symbol('x')
638
+ p = plot(real_root((log(x/(x-2))), 3), show=False, adaptive=True)
639
+ with warns(
640
+ RuntimeWarning,
641
+ match="invalid value encountered in",
642
+ test_stacklevel=False,
643
+ ):
644
+ # Random number of segments, probably more than 100, but we want to see
645
+ # that there are segments generated, as opposed to when the bug was present
646
+ # and that there are no exceptions.
647
+ assert len(p[0].get_data()[0]) >= 30
648
+
649
+
650
+ @pytest.mark.parametrize("adaptive", [True, False])
651
+ def test_issue_11764(adaptive):
652
+ if not matplotlib:
653
+ skip("Matplotlib not the default backend")
654
+
655
+ x = Symbol('x')
656
+ p = plot_parametric(cos(x), sin(x), (x, 0, 2 * pi),
657
+ aspect_ratio=(1,1), show=False, adaptive=adaptive, n=30)
658
+ assert p.aspect_ratio == (1, 1)
659
+ # Random number of segments, probably more than 100, but we want to see
660
+ # that there are segments generated, as opposed to when the bug was present
661
+ assert len(p[0].get_data()[0]) >= 30
662
+
663
+
664
+ @pytest.mark.parametrize("adaptive", [True, False])
665
+ def test_issue_13516(adaptive):
666
+ if not matplotlib:
667
+ skip("Matplotlib not the default backend")
668
+
669
+ x = Symbol('x')
670
+
671
+ pm = plot(sin(x), backend="matplotlib", show=False, adaptive=adaptive, n=30)
672
+ assert pm.backend == MatplotlibBackend
673
+ assert len(pm[0].get_data()[0]) >= 30
674
+
675
+ pt = plot(sin(x), backend="text", show=False, adaptive=adaptive, n=30)
676
+ assert pt.backend == TextBackend
677
+ assert len(pt[0].get_data()[0]) >= 30
678
+
679
+ pd = plot(sin(x), backend="default", show=False, adaptive=adaptive, n=30)
680
+ assert pd.backend == MatplotlibBackend
681
+ assert len(pd[0].get_data()[0]) >= 30
682
+
683
+ p = plot(sin(x), show=False, adaptive=adaptive, n=30)
684
+ assert p.backend == MatplotlibBackend
685
+ assert len(p[0].get_data()[0]) >= 30
686
+
687
+
688
+ @pytest.mark.parametrize("adaptive", [True, False])
689
+ def test_plot_limits(adaptive):
690
+ if not matplotlib:
691
+ skip("Matplotlib not the default backend")
692
+
693
+ x = Symbol('x')
694
+ p = plot(x, x**2, (x, -10, 10), adaptive=adaptive, n=10)
695
+ backend = p._backend
696
+
697
+ xmin, xmax = backend.ax.get_xlim()
698
+ assert abs(xmin + 10) < 2
699
+ assert abs(xmax - 10) < 2
700
+ ymin, ymax = backend.ax.get_ylim()
701
+ assert abs(ymin + 10) < 10
702
+ assert abs(ymax - 100) < 10
703
+
704
+
705
+ @pytest.mark.parametrize("adaptive", [True, False])
706
+ def test_plot3d_parametric_line_limits(adaptive):
707
+ if not matplotlib:
708
+ skip("Matplotlib not the default backend")
709
+
710
+ x = Symbol('x')
711
+
712
+ v1 = (2*cos(x), 2*sin(x), 2*x, (x, -5, 5))
713
+ v2 = (sin(x), cos(x), x, (x, -5, 5))
714
+ p = plot3d_parametric_line(v1, v2, adaptive=adaptive, n=60)
715
+ backend = p._backend
716
+
717
+ xmin, xmax = backend.ax.get_xlim()
718
+ assert abs(xmin + 2) < 1e-2
719
+ assert abs(xmax - 2) < 1e-2
720
+ ymin, ymax = backend.ax.get_ylim()
721
+ assert abs(ymin + 2) < 1e-2
722
+ assert abs(ymax - 2) < 1e-2
723
+ zmin, zmax = backend.ax.get_zlim()
724
+ assert abs(zmin + 10) < 1e-2
725
+ assert abs(zmax - 10) < 1e-2
726
+
727
+ p = plot3d_parametric_line(v2, v1, adaptive=adaptive, n=60)
728
+ backend = p._backend
729
+
730
+ xmin, xmax = backend.ax.get_xlim()
731
+ assert abs(xmin + 2) < 1e-2
732
+ assert abs(xmax - 2) < 1e-2
733
+ ymin, ymax = backend.ax.get_ylim()
734
+ assert abs(ymin + 2) < 1e-2
735
+ assert abs(ymax - 2) < 1e-2
736
+ zmin, zmax = backend.ax.get_zlim()
737
+ assert abs(zmin + 10) < 1e-2
738
+ assert abs(zmax - 10) < 1e-2
739
+
740
+
741
+ @pytest.mark.parametrize("adaptive", [True, False])
742
+ def test_plot_size(adaptive):
743
+ if not matplotlib:
744
+ skip("Matplotlib not the default backend")
745
+
746
+ x = Symbol('x')
747
+
748
+ p1 = plot(sin(x), backend="matplotlib", size=(8, 4),
749
+ adaptive=adaptive, n=10)
750
+ s1 = p1._backend.fig.get_size_inches()
751
+ assert (s1[0] == 8) and (s1[1] == 4)
752
+ p2 = plot(sin(x), backend="matplotlib", size=(5, 10),
753
+ adaptive=adaptive, n=10)
754
+ s2 = p2._backend.fig.get_size_inches()
755
+ assert (s2[0] == 5) and (s2[1] == 10)
756
+ p3 = PlotGrid(2, 1, p1, p2, size=(6, 2),
757
+ adaptive=adaptive, n=10)
758
+ s3 = p3._backend.fig.get_size_inches()
759
+ assert (s3[0] == 6) and (s3[1] == 2)
760
+
761
+ with raises(ValueError):
762
+ plot(sin(x), backend="matplotlib", size=(-1, 3))
763
+
764
+
765
+ def test_issue_20113():
766
+ if not matplotlib:
767
+ skip("Matplotlib not the default backend")
768
+
769
+ x = Symbol('x')
770
+
771
+ # verify the capability to use custom backends
772
+ plot(sin(x), backend=Plot, show=False)
773
+ p2 = plot(sin(x), backend=MatplotlibBackend, show=False)
774
+ assert p2.backend == MatplotlibBackend
775
+ assert len(p2[0].get_data()[0]) >= 30
776
+ p3 = plot(sin(x), backend=DummyBackendOk, show=False)
777
+ assert p3.backend == DummyBackendOk
778
+ assert len(p3[0].get_data()[0]) >= 30
779
+
780
+ # test for an improper coded backend
781
+ p4 = plot(sin(x), backend=DummyBackendNotOk, show=False)
782
+ assert p4.backend == DummyBackendNotOk
783
+ assert len(p4[0].get_data()[0]) >= 30
784
+ with raises(NotImplementedError):
785
+ p4.show()
786
+ with raises(NotImplementedError):
787
+ p4.save("test/path")
788
+ with raises(NotImplementedError):
789
+ p4._backend.close()
790
+
791
+
792
+ def test_custom_coloring():
793
+ x = Symbol('x')
794
+ y = Symbol('y')
795
+ plot(cos(x), line_color=lambda a: a)
796
+ plot(cos(x), line_color=1)
797
+ plot(cos(x), line_color="r")
798
+ plot_parametric(cos(x), sin(x), line_color=lambda a: a)
799
+ plot_parametric(cos(x), sin(x), line_color=1)
800
+ plot_parametric(cos(x), sin(x), line_color="r")
801
+ plot3d_parametric_line(cos(x), sin(x), x, line_color=lambda a: a)
802
+ plot3d_parametric_line(cos(x), sin(x), x, line_color=1)
803
+ plot3d_parametric_line(cos(x), sin(x), x, line_color="r")
804
+ plot3d_parametric_surface(cos(x + y), sin(x - y), x - y,
805
+ (x, -5, 5), (y, -5, 5),
806
+ surface_color=lambda a, b: a**2 + b**2)
807
+ plot3d_parametric_surface(cos(x + y), sin(x - y), x - y,
808
+ (x, -5, 5), (y, -5, 5),
809
+ surface_color=1)
810
+ plot3d_parametric_surface(cos(x + y), sin(x - y), x - y,
811
+ (x, -5, 5), (y, -5, 5),
812
+ surface_color="r")
813
+ plot3d(x*y, (x, -5, 5), (y, -5, 5),
814
+ surface_color=lambda a, b: a**2 + b**2)
815
+ plot3d(x*y, (x, -5, 5), (y, -5, 5), surface_color=1)
816
+ plot3d(x*y, (x, -5, 5), (y, -5, 5), surface_color="r")
817
+
818
+
819
+ @pytest.mark.parametrize("adaptive", [True, False])
820
+ def test_deprecated_get_segments(adaptive):
821
+ if not matplotlib:
822
+ skip("Matplotlib not the default backend")
823
+
824
+ x = Symbol('x')
825
+ f = sin(x)
826
+ p = plot(f, (x, -10, 10), show=False, adaptive=adaptive, n=10)
827
+ with warns_deprecated_sympy():
828
+ p[0].get_segments()
829
+
830
+
831
+ @pytest.mark.parametrize("adaptive", [True, False])
832
+ def test_generic_data_series(adaptive):
833
+ # verify that no errors are raised when generic data series are used
834
+ if not matplotlib:
835
+ skip("Matplotlib not the default backend")
836
+
837
+ x = Symbol("x")
838
+ p = plot(x,
839
+ markers=[{"args":[[0, 1], [0, 1]], "marker": "*", "linestyle": "none"}],
840
+ annotations=[{"text": "test", "xy": (0, 0)}],
841
+ fill={"x": [0, 1, 2, 3], "y1": [0, 1, 2, 3]},
842
+ rectangles=[{"xy": (0, 0), "width": 5, "height": 1}],
843
+ adaptive=adaptive, n=10)
844
+ assert len(p._backend.ax.collections) == 1
845
+ assert len(p._backend.ax.patches) == 1
846
+ assert len(p._backend.ax.lines) == 2
847
+ assert len(p._backend.ax.texts) == 1
848
+
849
+
850
+ def test_deprecated_markers_annotations_rectangles_fill():
851
+ if not matplotlib:
852
+ skip("Matplotlib not the default backend")
853
+
854
+ x = Symbol('x')
855
+ p = plot(sin(x), (x, -10, 10), show=False)
856
+ with warns_deprecated_sympy():
857
+ p.markers = [{"args":[[0, 1], [0, 1]], "marker": "*", "linestyle": "none"}]
858
+ assert len(p._series) == 2
859
+ with warns_deprecated_sympy():
860
+ p.annotations = [{"text": "test", "xy": (0, 0)}]
861
+ assert len(p._series) == 3
862
+ with warns_deprecated_sympy():
863
+ p.fill = {"x": [0, 1, 2, 3], "y1": [0, 1, 2, 3]}
864
+ assert len(p._series) == 4
865
+ with warns_deprecated_sympy():
866
+ p.rectangles = [{"xy": (0, 0), "width": 5, "height": 1}]
867
+ assert len(p._series) == 5
868
+
869
+
870
+ def test_back_compatibility():
871
+ if not matplotlib:
872
+ skip("Matplotlib not the default backend")
873
+
874
+ x = Symbol('x')
875
+ y = Symbol('y')
876
+ p = plot(sin(x), adaptive=False, n=5)
877
+ assert len(p[0].get_points()) == 2
878
+ assert len(p[0].get_data()) == 2
879
+ p = plot_parametric(cos(x), sin(x), (x, 0, 2), adaptive=False, n=5)
880
+ assert len(p[0].get_points()) == 2
881
+ assert len(p[0].get_data()) == 3
882
+ p = plot3d_parametric_line(cos(x), sin(x), x, (x, 0, 2),
883
+ adaptive=False, n=5)
884
+ assert len(p[0].get_points()) == 3
885
+ assert len(p[0].get_data()) == 4
886
+ p = plot3d(cos(x**2 + y**2), (x, -pi, pi), (y, -pi, pi), n=5)
887
+ assert len(p[0].get_meshes()) == 3
888
+ assert len(p[0].get_data()) == 3
889
+ p = plot_contour(cos(x**2 + y**2), (x, -pi, pi), (y, -pi, pi), n=5)
890
+ assert len(p[0].get_meshes()) == 3
891
+ assert len(p[0].get_data()) == 3
892
+ p = plot3d_parametric_surface(x * cos(y), x * sin(y), x * cos(4 * y) / 2,
893
+ (x, 0, pi), (y, 0, 2*pi), n=5)
894
+ assert len(p[0].get_meshes()) == 3
895
+ assert len(p[0].get_data()) == 5
896
+
897
+
898
+ def test_plot_arguments():
899
+ ### Test arguments for plot()
900
+ if not matplotlib:
901
+ skip("Matplotlib not the default backend")
902
+
903
+ x, y = symbols("x, y")
904
+
905
+ # single expressions
906
+ p = plot(x + 1)
907
+ assert isinstance(p[0], LineOver1DRangeSeries)
908
+ assert p[0].expr == x + 1
909
+ assert p[0].ranges == [(x, -10, 10)]
910
+ assert p[0].get_label(False) == "x + 1"
911
+ assert p[0].rendering_kw == {}
912
+
913
+ # single expressions custom label
914
+ p = plot(x + 1, "label")
915
+ assert isinstance(p[0], LineOver1DRangeSeries)
916
+ assert p[0].expr == x + 1
917
+ assert p[0].ranges == [(x, -10, 10)]
918
+ assert p[0].get_label(False) == "label"
919
+ assert p[0].rendering_kw == {}
920
+
921
+ # single expressions with range
922
+ p = plot(x + 1, (x, -2, 2))
923
+ assert p[0].ranges == [(x, -2, 2)]
924
+
925
+ # single expressions with range, label and rendering-kw dictionary
926
+ p = plot(x + 1, (x, -2, 2), "test", {"color": "r"})
927
+ assert p[0].get_label(False) == "test"
928
+ assert p[0].rendering_kw == {"color": "r"}
929
+
930
+ # multiple expressions
931
+ p = plot(x + 1, x**2)
932
+ assert isinstance(p[0], LineOver1DRangeSeries)
933
+ assert p[0].expr == x + 1
934
+ assert p[0].ranges == [(x, -10, 10)]
935
+ assert p[0].get_label(False) == "x + 1"
936
+ assert p[0].rendering_kw == {}
937
+ assert isinstance(p[1], LineOver1DRangeSeries)
938
+ assert p[1].expr == x**2
939
+ assert p[1].ranges == [(x, -10, 10)]
940
+ assert p[1].get_label(False) == "x**2"
941
+ assert p[1].rendering_kw == {}
942
+
943
+ # multiple expressions over the same range
944
+ p = plot(x + 1, x**2, (x, 0, 5))
945
+ assert p[0].ranges == [(x, 0, 5)]
946
+ assert p[1].ranges == [(x, 0, 5)]
947
+
948
+ # multiple expressions over the same range with the same rendering kws
949
+ p = plot(x + 1, x**2, (x, 0, 5), {"color": "r"})
950
+ assert p[0].ranges == [(x, 0, 5)]
951
+ assert p[1].ranges == [(x, 0, 5)]
952
+ assert p[0].rendering_kw == {"color": "r"}
953
+ assert p[1].rendering_kw == {"color": "r"}
954
+
955
+ # multiple expressions with different ranges, labels and rendering kws
956
+ p = plot(
957
+ (x + 1, (x, 0, 5)),
958
+ (x**2, (x, -2, 2), "test", {"color": "r"}))
959
+ assert isinstance(p[0], LineOver1DRangeSeries)
960
+ assert p[0].expr == x + 1
961
+ assert p[0].ranges == [(x, 0, 5)]
962
+ assert p[0].get_label(False) == "x + 1"
963
+ assert p[0].rendering_kw == {}
964
+ assert isinstance(p[1], LineOver1DRangeSeries)
965
+ assert p[1].expr == x**2
966
+ assert p[1].ranges == [(x, -2, 2)]
967
+ assert p[1].get_label(False) == "test"
968
+ assert p[1].rendering_kw == {"color": "r"}
969
+
970
+ # single argument: lambda function
971
+ f = lambda t: t
972
+ p = plot(lambda t: t)
973
+ assert isinstance(p[0], LineOver1DRangeSeries)
974
+ assert callable(p[0].expr)
975
+ assert p[0].ranges[0][1:] == (-10, 10)
976
+ assert p[0].get_label(False) == ""
977
+ assert p[0].rendering_kw == {}
978
+
979
+ # single argument: lambda function + custom range and label
980
+ p = plot(f, ("t", -5, 6), "test")
981
+ assert p[0].ranges[0][1:] == (-5, 6)
982
+ assert p[0].get_label(False) == "test"
983
+
984
+
985
+ def test_plot_parametric_arguments():
986
+ ### Test arguments for plot_parametric()
987
+ if not matplotlib:
988
+ skip("Matplotlib not the default backend")
989
+
990
+ x, y = symbols("x, y")
991
+
992
+ # single parametric expression
993
+ p = plot_parametric(x + 1, x)
994
+ assert isinstance(p[0], Parametric2DLineSeries)
995
+ assert p[0].expr == (x + 1, x)
996
+ assert p[0].ranges == [(x, -10, 10)]
997
+ assert p[0].get_label(False) == "x"
998
+ assert p[0].rendering_kw == {}
999
+
1000
+ # single parametric expression with custom range, label and rendering kws
1001
+ p = plot_parametric(x + 1, x, (x, -2, 2), "test",
1002
+ {"cmap": "Reds"})
1003
+ assert p[0].expr == (x + 1, x)
1004
+ assert p[0].ranges == [(x, -2, 2)]
1005
+ assert p[0].get_label(False) == "test"
1006
+ assert p[0].rendering_kw == {"cmap": "Reds"}
1007
+
1008
+ p = plot_parametric((x + 1, x), (x, -2, 2), "test")
1009
+ assert p[0].expr == (x + 1, x)
1010
+ assert p[0].ranges == [(x, -2, 2)]
1011
+ assert p[0].get_label(False) == "test"
1012
+ assert p[0].rendering_kw == {}
1013
+
1014
+ # multiple parametric expressions same symbol
1015
+ p = plot_parametric((x + 1, x), (x ** 2, x + 1))
1016
+ assert p[0].expr == (x + 1, x)
1017
+ assert p[0].ranges == [(x, -10, 10)]
1018
+ assert p[0].get_label(False) == "x"
1019
+ assert p[0].rendering_kw == {}
1020
+ assert p[1].expr == (x ** 2, x + 1)
1021
+ assert p[1].ranges == [(x, -10, 10)]
1022
+ assert p[1].get_label(False) == "x"
1023
+ assert p[1].rendering_kw == {}
1024
+
1025
+ # multiple parametric expressions different symbols
1026
+ p = plot_parametric((x + 1, x), (y ** 2, y + 1, "test"))
1027
+ assert p[0].expr == (x + 1, x)
1028
+ assert p[0].ranges == [(x, -10, 10)]
1029
+ assert p[0].get_label(False) == "x"
1030
+ assert p[0].rendering_kw == {}
1031
+ assert p[1].expr == (y ** 2, y + 1)
1032
+ assert p[1].ranges == [(y, -10, 10)]
1033
+ assert p[1].get_label(False) == "test"
1034
+ assert p[1].rendering_kw == {}
1035
+
1036
+ # multiple parametric expressions same range
1037
+ p = plot_parametric((x + 1, x), (x ** 2, x + 1), (x, -2, 2))
1038
+ assert p[0].expr == (x + 1, x)
1039
+ assert p[0].ranges == [(x, -2, 2)]
1040
+ assert p[0].get_label(False) == "x"
1041
+ assert p[0].rendering_kw == {}
1042
+ assert p[1].expr == (x ** 2, x + 1)
1043
+ assert p[1].ranges == [(x, -2, 2)]
1044
+ assert p[1].get_label(False) == "x"
1045
+ assert p[1].rendering_kw == {}
1046
+
1047
+ # multiple parametric expressions, custom ranges and labels
1048
+ p = plot_parametric(
1049
+ (x + 1, x, (x, -2, 2), "test1"),
1050
+ (x ** 2, x + 1, (x, -3, 3), "test2", {"cmap": "Reds"}))
1051
+ assert p[0].expr == (x + 1, x)
1052
+ assert p[0].ranges == [(x, -2, 2)]
1053
+ assert p[0].get_label(False) == "test1"
1054
+ assert p[0].rendering_kw == {}
1055
+ assert p[1].expr == (x ** 2, x + 1)
1056
+ assert p[1].ranges == [(x, -3, 3)]
1057
+ assert p[1].get_label(False) == "test2"
1058
+ assert p[1].rendering_kw == {"cmap": "Reds"}
1059
+
1060
+ # single argument: lambda function
1061
+ fx = lambda t: t
1062
+ fy = lambda t: 2 * t
1063
+ p = plot_parametric(fx, fy)
1064
+ assert all(callable(t) for t in p[0].expr)
1065
+ assert p[0].ranges[0][1:] == (-10, 10)
1066
+ assert "Dummy" in p[0].get_label(False)
1067
+ assert p[0].rendering_kw == {}
1068
+
1069
+ # single argument: lambda function + custom range + label
1070
+ p = plot_parametric(fx, fy, ("t", 0, 2), "test")
1071
+ assert all(callable(t) for t in p[0].expr)
1072
+ assert p[0].ranges[0][1:] == (0, 2)
1073
+ assert p[0].get_label(False) == "test"
1074
+ assert p[0].rendering_kw == {}
1075
+
1076
+
1077
+ def test_plot3d_parametric_line_arguments():
1078
+ ### Test arguments for plot3d_parametric_line()
1079
+ if not matplotlib:
1080
+ skip("Matplotlib not the default backend")
1081
+
1082
+ x, y = symbols("x, y")
1083
+
1084
+ # single parametric expression
1085
+ p = plot3d_parametric_line(x + 1, x, sin(x))
1086
+ assert isinstance(p[0], Parametric3DLineSeries)
1087
+ assert p[0].expr == (x + 1, x, sin(x))
1088
+ assert p[0].ranges == [(x, -10, 10)]
1089
+ assert p[0].get_label(False) == "x"
1090
+ assert p[0].rendering_kw == {}
1091
+
1092
+ # single parametric expression with custom range, label and rendering kws
1093
+ p = plot3d_parametric_line(x + 1, x, sin(x), (x, -2, 2),
1094
+ "test", {"cmap": "Reds"})
1095
+ assert isinstance(p[0], Parametric3DLineSeries)
1096
+ assert p[0].expr == (x + 1, x, sin(x))
1097
+ assert p[0].ranges == [(x, -2, 2)]
1098
+ assert p[0].get_label(False) == "test"
1099
+ assert p[0].rendering_kw == {"cmap": "Reds"}
1100
+
1101
+ p = plot3d_parametric_line((x + 1, x, sin(x)), (x, -2, 2), "test")
1102
+ assert p[0].expr == (x + 1, x, sin(x))
1103
+ assert p[0].ranges == [(x, -2, 2)]
1104
+ assert p[0].get_label(False) == "test"
1105
+ assert p[0].rendering_kw == {}
1106
+
1107
+ # multiple parametric expression same symbol
1108
+ p = plot3d_parametric_line(
1109
+ (x + 1, x, sin(x)), (x ** 2, 1, cos(x), {"cmap": "Reds"}))
1110
+ assert p[0].expr == (x + 1, x, sin(x))
1111
+ assert p[0].ranges == [(x, -10, 10)]
1112
+ assert p[0].get_label(False) == "x"
1113
+ assert p[0].rendering_kw == {}
1114
+ assert p[1].expr == (x ** 2, 1, cos(x))
1115
+ assert p[1].ranges == [(x, -10, 10)]
1116
+ assert p[1].get_label(False) == "x"
1117
+ assert p[1].rendering_kw == {"cmap": "Reds"}
1118
+
1119
+ # multiple parametric expression different symbols
1120
+ p = plot3d_parametric_line((x + 1, x, sin(x)), (y ** 2, 1, cos(y)))
1121
+ assert p[0].expr == (x + 1, x, sin(x))
1122
+ assert p[0].ranges == [(x, -10, 10)]
1123
+ assert p[0].get_label(False) == "x"
1124
+ assert p[0].rendering_kw == {}
1125
+ assert p[1].expr == (y ** 2, 1, cos(y))
1126
+ assert p[1].ranges == [(y, -10, 10)]
1127
+ assert p[1].get_label(False) == "y"
1128
+ assert p[1].rendering_kw == {}
1129
+
1130
+ # multiple parametric expression, custom ranges and labels
1131
+ p = plot3d_parametric_line(
1132
+ (x + 1, x, sin(x)),
1133
+ (x ** 2, 1, cos(x), (x, -2, 2), "test", {"cmap": "Reds"}))
1134
+ assert p[0].expr == (x + 1, x, sin(x))
1135
+ assert p[0].ranges == [(x, -10, 10)]
1136
+ assert p[0].get_label(False) == "x"
1137
+ assert p[0].rendering_kw == {}
1138
+ assert p[1].expr == (x ** 2, 1, cos(x))
1139
+ assert p[1].ranges == [(x, -2, 2)]
1140
+ assert p[1].get_label(False) == "test"
1141
+ assert p[1].rendering_kw == {"cmap": "Reds"}
1142
+
1143
+ # single argument: lambda function
1144
+ fx = lambda t: t
1145
+ fy = lambda t: 2 * t
1146
+ fz = lambda t: 3 * t
1147
+ p = plot3d_parametric_line(fx, fy, fz)
1148
+ assert all(callable(t) for t in p[0].expr)
1149
+ assert p[0].ranges[0][1:] == (-10, 10)
1150
+ assert "Dummy" in p[0].get_label(False)
1151
+ assert p[0].rendering_kw == {}
1152
+
1153
+ # single argument: lambda function + custom range + label
1154
+ p = plot3d_parametric_line(fx, fy, fz, ("t", 0, 2), "test")
1155
+ assert all(callable(t) for t in p[0].expr)
1156
+ assert p[0].ranges[0][1:] == (0, 2)
1157
+ assert p[0].get_label(False) == "test"
1158
+ assert p[0].rendering_kw == {}
1159
+
1160
+
+ def test_plot3d_plot_contour_arguments():
+    ### Test arguments for plot3d() and plot_contour()
+    if not matplotlib:
+        skip("Matplotlib not the default backend")
+
+    x, y = symbols("x, y")
+
+    # single expression
+    p = plot3d(x + y)
+    assert isinstance(p[0], SurfaceOver2DRangeSeries)
+    assert p[0].expr == x + y
+    assert p[0].ranges[0] in [(x, -10, 10), (y, -10, 10)]
+    assert p[0].ranges[1] in [(x, -10, 10), (y, -10, 10)]
+    assert p[0].get_label(False) == "x + y"
+    assert p[0].rendering_kw == {}
+
+    # single expression, custom range, label and rendering kws
+    p = plot3d(x + y, (x, -2, 2), "test", {"cmap": "Reds"})
+    assert isinstance(p[0], SurfaceOver2DRangeSeries)
+    assert p[0].expr == x + y
+    assert p[0].ranges[0] == (x, -2, 2)
+    assert p[0].ranges[1] == (y, -10, 10)
+    assert p[0].get_label(False) == "test"
+    assert p[0].rendering_kw == {"cmap": "Reds"}
+
+    p = plot3d(x + y, (x, -2, 2), (y, -4, 4), "test")
+    assert p[0].ranges[0] == (x, -2, 2)
+    assert p[0].ranges[1] == (y, -4, 4)
+
+    # multiple expressions
+    p = plot3d(x + y, x * y)
+    assert p[0].expr == x + y
+    assert p[0].ranges[0] in [(x, -10, 10), (y, -10, 10)]
+    assert p[0].ranges[1] in [(x, -10, 10), (y, -10, 10)]
+    assert p[0].get_label(False) == "x + y"
+    assert p[0].rendering_kw == {}
+    assert p[1].expr == x * y
+    assert p[1].ranges[0] in [(x, -10, 10), (y, -10, 10)]
+    assert p[1].ranges[1] in [(x, -10, 10), (y, -10, 10)]
+    assert p[1].get_label(False) == "x*y"
+    assert p[1].rendering_kw == {}
+
+    # multiple expressions, same custom ranges
+    p = plot3d(x + y, x * y, (x, -2, 2), (y, -4, 4))
+    assert p[0].expr == x + y
+    assert p[0].ranges[0] == (x, -2, 2)
+    assert p[0].ranges[1] == (y, -4, 4)
+    assert p[0].get_label(False) == "x + y"
+    assert p[0].rendering_kw == {}
+    assert p[1].expr == x * y
+    assert p[1].ranges[0] == (x, -2, 2)
+    assert p[1].ranges[1] == (y, -4, 4)
+    assert p[1].get_label(False) == "x*y"
+    assert p[1].rendering_kw == {}
+
+    # multiple expressions, custom ranges, labels and rendering kws
+    p = plot3d(
+        (x + y, (x, -2, 2), (y, -4, 4)),
+        (x * y, (x, -3, 3), (y, -6, 6), "test", {"cmap": "Reds"}))
+    assert p[0].expr == x + y
+    assert p[0].ranges[0] == (x, -2, 2)
+    assert p[0].ranges[1] == (y, -4, 4)
+    assert p[0].get_label(False) == "x + y"
+    assert p[0].rendering_kw == {}
+    assert p[1].expr == x * y
+    assert p[1].ranges[0] == (x, -3, 3)
+    assert p[1].ranges[1] == (y, -6, 6)
+    assert p[1].get_label(False) == "test"
+    assert p[1].rendering_kw == {"cmap": "Reds"}
+
+    # single expression: lambda function
+    f = lambda x, y: x + y
+    p = plot3d(f)
+    assert callable(p[0].expr)
+    assert p[0].ranges[0][1:] == (-10, 10)
+    assert p[0].ranges[1][1:] == (-10, 10)
+    assert p[0].get_label(False) == ""
+    assert p[0].rendering_kw == {}
+
+    # single expression: lambda function + custom ranges + label
+    p = plot3d(f, ("a", -5, 3), ("b", -2, 1), "test")
+    assert callable(p[0].expr)
+    assert p[0].ranges[0][1:] == (-5, 3)
+    assert p[0].ranges[1][1:] == (-2, 1)
+    assert p[0].get_label(False) == "test"
+    assert p[0].rendering_kw == {}
+
+    # test issue 25818
+    # single expression, custom range, min/max functions
+    p = plot3d(Min(x, y), (x, 0, 10), (y, 0, 10))
+    assert isinstance(p[0], SurfaceOver2DRangeSeries)
+    assert p[0].expr == Min(x, y)
+    assert p[0].ranges[0] == (x, 0, 10)
+    assert p[0].ranges[1] == (y, 0, 10)
+    assert p[0].get_label(False) == "Min(x, y)"
+    assert p[0].rendering_kw == {}
+
+
+ def test_plot3d_parametric_surface_arguments():
+    ### Test arguments for plot3d_parametric_surface()
+    if not matplotlib:
+        skip("Matplotlib not the default backend")
+
+    x, y = symbols("x, y")
+
+    # single parametric expression
+    p = plot3d_parametric_surface(x + y, cos(x + y), sin(x + y))
+    assert isinstance(p[0], ParametricSurfaceSeries)
+    assert p[0].expr == (x + y, cos(x + y), sin(x + y))
+    assert p[0].ranges[0] in [(x, -10, 10), (y, -10, 10)]
+    assert p[0].ranges[1] in [(x, -10, 10), (y, -10, 10)]
+    assert p[0].get_label(False) == "(x + y, cos(x + y), sin(x + y))"
+    assert p[0].rendering_kw == {}
+
+    # single parametric expression, custom ranges, labels and rendering kws
+    p = plot3d_parametric_surface(x + y, cos(x + y), sin(x + y),
+        (x, -2, 2), (y, -4, 4), "test", {"cmap": "Reds"})
+    assert isinstance(p[0], ParametricSurfaceSeries)
+    assert p[0].expr == (x + y, cos(x + y), sin(x + y))
+    assert p[0].ranges[0] == (x, -2, 2)
+    assert p[0].ranges[1] == (y, -4, 4)
+    assert p[0].get_label(False) == "test"
+    assert p[0].rendering_kw == {"cmap": "Reds"}
+
+    # multiple parametric expressions
+    p = plot3d_parametric_surface(
+        (x + y, cos(x + y), sin(x + y)),
+        (x - y, cos(x - y), sin(x - y), "test"))
+    assert p[0].expr == (x + y, cos(x + y), sin(x + y))
+    assert p[0].ranges[0] in [(x, -10, 10), (y, -10, 10)]
+    assert p[0].ranges[1] in [(x, -10, 10), (y, -10, 10)]
+    assert p[0].get_label(False) == "(x + y, cos(x + y), sin(x + y))"
+    assert p[0].rendering_kw == {}
+    assert p[1].expr == (x - y, cos(x - y), sin(x - y))
+    assert p[1].ranges[0] in [(x, -10, 10), (y, -10, 10)]
+    assert p[1].ranges[1] in [(x, -10, 10), (y, -10, 10)]
+    assert p[1].get_label(False) == "test"
+    assert p[1].rendering_kw == {}
+
+    # multiple parametric expressions, custom ranges and labels
+    p = plot3d_parametric_surface(
+        (x + y, cos(x + y), sin(x + y), (x, -2, 2), "test"),
+        (x - y, cos(x - y), sin(x - y), (x, -3, 3), (y, -4, 4),
+            "test2", {"cmap": "Reds"}))
+    assert p[0].expr == (x + y, cos(x + y), sin(x + y))
+    assert p[0].ranges[0] == (x, -2, 2)
+    assert p[0].ranges[1] == (y, -10, 10)
+    assert p[0].get_label(False) == "test"
+    assert p[0].rendering_kw == {}
+    assert p[1].expr == (x - y, cos(x - y), sin(x - y))
+    assert p[1].ranges[0] == (x, -3, 3)
+    assert p[1].ranges[1] == (y, -4, 4)
+    assert p[1].get_label(False) == "test2"
+    assert p[1].rendering_kw == {"cmap": "Reds"}
+
+    # lambda functions instead of symbolic expressions for a single 3D
+    # parametric surface
+    p = plot3d_parametric_surface(
+        lambda u, v: u, lambda u, v: v, lambda u, v: u + v,
+        ("u", 0, 2), ("v", -3, 4))
+    assert all(callable(t) for t in p[0].expr)
+    assert p[0].ranges[0][1:] == (0, 2)
+    assert p[0].ranges[1][1:] == (-3, 4)
+    assert p[0].get_label(False) == ""
+    assert p[0].rendering_kw == {}
+
+    # lambda functions instead of symbolic expressions for multiple 3D
+    # parametric surfaces
+    p = plot3d_parametric_surface(
+        (lambda u, v: u, lambda u, v: v, lambda u, v: u + v,
+            ("u", 0, 2), ("v", -3, 4)),
+        (lambda u, v: v, lambda u, v: u, lambda u, v: u - v,
+            ("u", -2, 3), ("v", -4, 5), "test"))
+    assert all(callable(t) for t in p[0].expr)
+    assert p[0].ranges[0][1:] == (0, 2)
+    assert p[0].ranges[1][1:] == (-3, 4)
+    assert p[0].get_label(False) == ""
+    assert p[0].rendering_kw == {}
+    assert all(callable(t) for t in p[1].expr)
+    assert p[1].ranges[0][1:] == (-2, 3)
+    assert p[1].ranges[1][1:] == (-4, 5)
+    assert p[1].get_label(False) == "test"
+    assert p[1].rendering_kw == {}
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_region_and.png ADDED

Git LFS Details

  • SHA256: 115d0b9b81ed40f93fe9e216b4f6384cf71093e3bbb64a5d648b8b9858c645a0
  • Pointer size: 129 Bytes
  • Size of remote file: 6.86 kB
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_region_not.png ADDED

Git LFS Details

  • SHA256: dceffdfe73d6d78f453142c4713e51a88dbe9361f79c710b6df2400edd9c3bc9
  • Pointer size: 129 Bytes
  • Size of remote file: 7.94 kB
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_region_xor.png ADDED

Git LFS Details

  • SHA256: 92e71558103d03df0ea5c47876277968b5d4ca8ab8cf43b80b73cce9d962052c
  • Pointer size: 130 Bytes
  • Size of remote file: 10 kB
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_series.py ADDED
@@ -0,0 +1,1771 @@
+ from sympy import (
+    latex, exp, symbols, I, pi, sin, cos, tan, log, sqrt,
+    re, im, arg, frac, Sum, S, Abs, lambdify,
+    Function, dsolve, Eq, floor, Tuple
+ )
+ from sympy.external import import_module
+ from sympy.plotting.series import (
+    LineOver1DRangeSeries, Parametric2DLineSeries, Parametric3DLineSeries,
+    SurfaceOver2DRangeSeries, ContourSeries, ParametricSurfaceSeries,
+    ImplicitSeries, _set_discretization_points, List2DSeries
+ )
+ from sympy.testing.pytest import raises, warns, XFAIL, skip, ignore_warnings
+
+ np = import_module('numpy')
+
+
+ def test_adaptive():
+    # verify that adaptive-related keywords produce the expected results
+    if not np:
+        skip("numpy not installed.")
+
+    x, y = symbols("x, y")
+
+    s1 = LineOver1DRangeSeries(sin(x), (x, -10, 10), "", adaptive=True,
+        depth=2)
+    x1, _ = s1.get_data()
+    s2 = LineOver1DRangeSeries(sin(x), (x, -10, 10), "", adaptive=True,
+        depth=5)
+    x2, _ = s2.get_data()
+    s3 = LineOver1DRangeSeries(sin(x), (x, -10, 10), "", adaptive=True)
+    x3, _ = s3.get_data()
+    assert len(x1) < len(x2) < len(x3)
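+    # ``depth`` bounds the recursion of the adaptive sampling algorithm:
+    # deeper recursion can only subdivide the intervals further, so the
+    # number of points grows with depth; the default allows deeper
+    # recursion than either value used above, hence the strict ordering.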
+
+    s1 = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
+        adaptive=True, depth=2)
+    x1, _, _ = s1.get_data()
+    s2 = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
+        adaptive=True, depth=5)
+    x2, _, _ = s2.get_data()
+    s3 = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
+        adaptive=True)
+    x3, _, _ = s3.get_data()
+    assert len(x1) < len(x2) < len(x3)
+
+
+ def test_detect_poles():
+    if not np:
+        skip("numpy not installed.")
+
+    x, u = symbols("x, u")
+
+    s1 = LineOver1DRangeSeries(tan(x), (x, -pi, pi),
+        adaptive=False, n=1000, detect_poles=False)
+    xx1, yy1 = s1.get_data()
+    s2 = LineOver1DRangeSeries(tan(x), (x, -pi, pi),
+        adaptive=False, n=1000, detect_poles=True, eps=0.01)
+    xx2, yy2 = s2.get_data()
+    # eps is too small: doesn't detect any poles
+    s3 = LineOver1DRangeSeries(tan(x), (x, -pi, pi),
+        adaptive=False, n=1000, detect_poles=True, eps=1e-06)
+    xx3, yy3 = s3.get_data()
+    s4 = LineOver1DRangeSeries(tan(x), (x, -pi, pi),
+        adaptive=False, n=1000, detect_poles="symbolic")
+    xx4, yy4 = s4.get_data()
+
+    assert np.allclose(xx1, xx2) and np.allclose(xx1, xx3) and np.allclose(xx1, xx4)
+    assert not np.any(np.isnan(yy1))
+    assert not np.any(np.isnan(yy3))
+    assert np.any(np.isnan(yy2))
+    assert np.any(np.isnan(yy4))
+    assert len(s2.poles_locations) == len(s3.poles_locations) == 0
+    assert len(s4.poles_locations) == 2
+    assert np.allclose(np.abs(s4.poles_locations), np.pi / 2)
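+    # detect_poles=True only replaces steep jumps with NaN (the gradient
+    # threshold is controlled by ``eps``), while detect_poles="symbolic"
+    # additionally locates the poles, here at x = -pi/2 and x = pi/2.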
+
+    with warns(
+        UserWarning,
+        match="NumPy is unable to evaluate with complex numbers some of",
+        test_stacklevel=False,
+    ):
+        s1 = LineOver1DRangeSeries(frac(x), (x, -10, 10),
+            adaptive=False, n=1000, detect_poles=False)
+        s2 = LineOver1DRangeSeries(frac(x), (x, -10, 10),
+            adaptive=False, n=1000, detect_poles=True, eps=0.05)
+        s3 = LineOver1DRangeSeries(frac(x), (x, -10, 10),
+            adaptive=False, n=1000, detect_poles="symbolic")
+        xx1, yy1 = s1.get_data()
+        xx2, yy2 = s2.get_data()
+        xx3, yy3 = s3.get_data()
+        assert np.allclose(xx1, xx2) and np.allclose(xx1, xx3)
+        assert not np.any(np.isnan(yy1))
+        assert np.any(np.isnan(yy2)) and np.any(np.isnan(yy3))
+        assert not np.allclose(yy1, yy2, equal_nan=True)
+        # The poles below are actually step discontinuities.
+        assert len(s3.poles_locations) == 21
+
+    s1 = LineOver1DRangeSeries(tan(u * x), (x, -pi, pi), params={u: 1},
+        adaptive=False, n=1000, detect_poles=False)
+    xx1, yy1 = s1.get_data()
+    s2 = LineOver1DRangeSeries(tan(u * x), (x, -pi, pi), params={u: 1},
+        adaptive=False, n=1000, detect_poles=True, eps=0.01)
+    xx2, yy2 = s2.get_data()
+    # eps is too small: doesn't detect any poles
+    s3 = LineOver1DRangeSeries(tan(u * x), (x, -pi, pi), params={u: 1},
+        adaptive=False, n=1000, detect_poles=True, eps=1e-06)
+    xx3, yy3 = s3.get_data()
+    s4 = LineOver1DRangeSeries(tan(u * x), (x, -pi, pi), params={u: 1},
+        adaptive=False, n=1000, detect_poles="symbolic")
+    xx4, yy4 = s4.get_data()
+
+    assert np.allclose(xx1, xx2) and np.allclose(xx1, xx3) and np.allclose(xx1, xx4)
+    assert not np.any(np.isnan(yy1))
+    assert not np.any(np.isnan(yy3))
+    assert np.any(np.isnan(yy2))
+    assert np.any(np.isnan(yy4))
+    assert len(s2.poles_locations) == len(s3.poles_locations) == 0
+    assert len(s4.poles_locations) == 2
+    assert np.allclose(np.abs(s4.poles_locations), np.pi / 2)
+
+    with warns(
+        UserWarning,
+        match="NumPy is unable to evaluate with complex numbers some of",
+        test_stacklevel=False,
+    ):
+        u, v = symbols("u, v", real=True)
+        n = S(1) / 3
+        f = (u + I * v)**n
+        r, i = re(f), im(f)
+        s1 = Parametric2DLineSeries(r.subs(u, -2), i.subs(u, -2), (v, -2, 2),
+            adaptive=False, n=1000, detect_poles=False)
+        s2 = Parametric2DLineSeries(r.subs(u, -2), i.subs(u, -2), (v, -2, 2),
+            adaptive=False, n=1000, detect_poles=True)
+        with ignore_warnings(RuntimeWarning):
+            xx1, yy1, pp1 = s1.get_data()
+            assert not np.isnan(yy1).any()
+            xx2, yy2, pp2 = s2.get_data()
+            assert np.isnan(yy2).any()
+
+    with warns(
+        UserWarning,
+        match="NumPy is unable to evaluate with complex numbers some of",
+        test_stacklevel=False,
+    ):
+        f = (x * u + x * I * v)**n
+        r, i = re(f), im(f)
+        s1 = Parametric2DLineSeries(r.subs(u, -2), i.subs(u, -2),
+            (v, -2, 2), params={x: 1},
+            adaptive=False, n1=1000, detect_poles=False)
+        s2 = Parametric2DLineSeries(r.subs(u, -2), i.subs(u, -2),
+            (v, -2, 2), params={x: 1},
+            adaptive=False, n1=1000, detect_poles=True)
+        with ignore_warnings(RuntimeWarning):
+            xx1, yy1, pp1 = s1.get_data()
+            assert not np.isnan(yy1).any()
+            xx2, yy2, pp2 = s2.get_data()
+            assert np.isnan(yy2).any()
+
+
+ def test_number_discretization_points():
+    # verify that the different ways to set the number of discretization
+    # points are consistent with each other.
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, z = symbols("x:z")
+
+    for pt in [LineOver1DRangeSeries, Parametric2DLineSeries,
+            Parametric3DLineSeries]:
+        kw1 = _set_discretization_points({"n": 10}, pt)
+        kw2 = _set_discretization_points({"n": [10, 20, 30]}, pt)
+        kw3 = _set_discretization_points({"n1": 10}, pt)
+        assert all(("n1" in kw) and kw["n1"] == 10 for kw in [kw1, kw2, kw3])
+
+    for pt in [SurfaceOver2DRangeSeries, ContourSeries, ParametricSurfaceSeries,
+            ImplicitSeries]:
+        kw1 = _set_discretization_points({"n": 10}, pt)
+        kw2 = _set_discretization_points({"n": [10, 20, 30]}, pt)
+        kw3 = _set_discretization_points({"n1": 10, "n2": 20}, pt)
+        assert kw1["n1"] == kw1["n2"] == 10
+        assert all((kw["n1"] == 10) and (kw["n2"] == 20) for kw in [kw2, kw3])
+
+    # verify that line-related series can deal with the number of
+    # discretization points being given as a large float
+    LineOver1DRangeSeries(cos(x), (x, -5, 5), adaptive=False, n=1e04).get_data()
+
+
+ def test_list2dseries():
+    if not np:
+        skip("numpy not installed.")
+
+    xx = np.linspace(-3, 3, 10)
+    yy1 = np.cos(xx)
+    yy2 = np.linspace(-3, 3, 20)
+
+    # same number of elements: everything is fine
+    s = List2DSeries(xx, yy1)
+    assert not s.is_parametric
+    # different number of elements: error
+    raises(ValueError, lambda: List2DSeries(xx, yy2))
+
+    # no color func: returns only x, y components and s is not parametric
+    s = List2DSeries(xx, yy1)
+    xxs, yys = s.get_data()
+    assert np.allclose(xx, xxs)
+    assert np.allclose(yy1, yys)
+    assert not s.is_parametric
+
+
+ def test_interactive_vs_noninteractive():
+    # verify that if a *Series class receives a `params` dictionary, it sets
+    # is_interactive=True
+    x, y, z, u, v = symbols("x, y, z, u, v")
+
+    s = LineOver1DRangeSeries(cos(x), (x, -5, 5))
+    assert not s.is_interactive
+    s = LineOver1DRangeSeries(u * cos(x), (x, -5, 5), params={u: 1})
+    assert s.is_interactive
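+    # ``params`` maps parameter symbols to their current values; its mere
+    # presence is what flags a series as interactive.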
+
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, -5, 5))
+    assert not s.is_interactive
+    s = Parametric2DLineSeries(u * cos(x), u * sin(x), (x, -5, 5),
+        params={u: 1})
+    assert s.is_interactive
+
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, -5, 5))
+    assert not s.is_interactive
+    s = Parametric3DLineSeries(u * cos(x), u * sin(x), x, (x, -5, 5),
+        params={u: 1})
+    assert s.is_interactive
+
+    s = SurfaceOver2DRangeSeries(cos(x * y), (x, -5, 5), (y, -5, 5))
+    assert not s.is_interactive
+    s = SurfaceOver2DRangeSeries(u * cos(x * y), (x, -5, 5), (y, -5, 5),
+        params={u: 1})
+    assert s.is_interactive
+
+    s = ContourSeries(cos(x * y), (x, -5, 5), (y, -5, 5))
+    assert not s.is_interactive
+    s = ContourSeries(u * cos(x * y), (x, -5, 5), (y, -5, 5),
+        params={u: 1})
+    assert s.is_interactive
+
+    s = ParametricSurfaceSeries(u * cos(v), v * sin(u), u + v,
+        (u, -5, 5), (v, -5, 5))
+    assert not s.is_interactive
+    s = ParametricSurfaceSeries(u * cos(v * x), v * sin(u), u + v,
+        (u, -5, 5), (v, -5, 5), params={x: 1})
+    assert s.is_interactive
+
+
+ def test_lin_log_scale():
+    # Verify that data series create the correct spacing in the data.
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, z = symbols("x, y, z")
+
+    s = LineOver1DRangeSeries(x, (x, 1, 10), adaptive=False, n=50,
+        xscale="linear")
+    xx, _ = s.get_data()
+    assert np.isclose(xx[1] - xx[0], xx[-1] - xx[-2])
+
+    s = LineOver1DRangeSeries(x, (x, 1, 10), adaptive=False, n=50,
+        xscale="log")
+    xx, _ = s.get_data()
+    assert not np.isclose(xx[1] - xx[0], xx[-1] - xx[-2])
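+    # With xscale="log" the points are uniformly spaced in log-space, i.e.
+    # geometrically spaced, so consecutive differences differ along the range.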
+
+    s = Parametric2DLineSeries(
+        cos(x), sin(x), (x, pi / 2, 1.5 * pi), adaptive=False, n=50,
+        xscale="linear")
+    _, _, param = s.get_data()
+    assert np.isclose(param[1] - param[0], param[-1] - param[-2])
+
+    s = Parametric2DLineSeries(
+        cos(x), sin(x), (x, pi / 2, 1.5 * pi), adaptive=False, n=50,
+        xscale="log")
+    _, _, param = s.get_data()
+    assert not np.isclose(param[1] - param[0], param[-1] - param[-2])
+
+    s = Parametric3DLineSeries(
+        cos(x), sin(x), x, (x, pi / 2, 1.5 * pi), adaptive=False, n=50,
+        xscale="linear")
+    _, _, _, param = s.get_data()
+    assert np.isclose(param[1] - param[0], param[-1] - param[-2])
+
+    s = Parametric3DLineSeries(
+        cos(x), sin(x), x, (x, pi / 2, 1.5 * pi), adaptive=False, n=50,
+        xscale="log")
+    _, _, _, param = s.get_data()
+    assert not np.isclose(param[1] - param[0], param[-1] - param[-2])
+
+    s = SurfaceOver2DRangeSeries(
+        cos(x ** 2 + y ** 2), (x, 1, 5), (y, 1, 5), n=10,
+        xscale="linear", yscale="linear")
+    xx, yy, _ = s.get_data()
+    assert np.isclose(xx[0, 1] - xx[0, 0], xx[0, -1] - xx[0, -2])
+    assert np.isclose(yy[1, 0] - yy[0, 0], yy[-1, 0] - yy[-2, 0])
+
+    s = SurfaceOver2DRangeSeries(
+        cos(x ** 2 + y ** 2), (x, 1, 5), (y, 1, 5), n=10,
+        xscale="log", yscale="log")
+    xx, yy, _ = s.get_data()
+    assert not np.isclose(xx[0, 1] - xx[0, 0], xx[0, -1] - xx[0, -2])
+    assert not np.isclose(yy[1, 0] - yy[0, 0], yy[-1, 0] - yy[-2, 0])
+
+    s = ImplicitSeries(
+        cos(x ** 2 + y ** 2) > 0, (x, 1, 5), (y, 1, 5),
+        n1=10, n2=10, xscale="linear", yscale="linear", adaptive=False)
+    xx, yy, _, _ = s.get_data()
+    assert np.isclose(xx[0, 1] - xx[0, 0], xx[0, -1] - xx[0, -2])
+    assert np.isclose(yy[1, 0] - yy[0, 0], yy[-1, 0] - yy[-2, 0])
+
+    s = ImplicitSeries(
+        cos(x ** 2 + y ** 2) > 0, (x, 1, 5), (y, 1, 5),
+        n=10, xscale="log", yscale="log", adaptive=False)
+    xx, yy, _, _ = s.get_data()
+    assert not np.isclose(xx[0, 1] - xx[0, 0], xx[0, -1] - xx[0, -2])
+    assert not np.isclose(yy[1, 0] - yy[0, 0], yy[-1, 0] - yy[-2, 0])
+
+
+ def test_rendering_kw():
+    # verify that each series exposes the `rendering_kw` attribute
+    if not np:
+        skip("numpy not installed.")
+
+    u, v, x, y, z = symbols("u, v, x:z")
+
+    s = List2DSeries([1, 2, 3], [4, 5, 6])
+    assert isinstance(s.rendering_kw, dict)
+
+    s = LineOver1DRangeSeries(1, (x, -5, 5))
+    assert isinstance(s.rendering_kw, dict)
+
+    s = Parametric2DLineSeries(sin(x), cos(x), (x, 0, pi))
+    assert isinstance(s.rendering_kw, dict)
+
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 2 * pi))
+    assert isinstance(s.rendering_kw, dict)
+
+    s = SurfaceOver2DRangeSeries(x + y, (x, -2, 2), (y, -3, 3))
+    assert isinstance(s.rendering_kw, dict)
+
+    s = ContourSeries(x + y, (x, -2, 2), (y, -3, 3))
+    assert isinstance(s.rendering_kw, dict)
+
+    s = ParametricSurfaceSeries(1, x, y, (x, 0, 1), (y, 0, 1))
+    assert isinstance(s.rendering_kw, dict)
+
+
+ def test_data_shape():
+    # Verify that the series produces the correct data shape when the input
+    # expression is a number.
+    if not np:
+        skip("numpy not installed.")
+
+    u, x, y, z = symbols("u, x:z")
+
+    # scalar expression: it should return a numpy ones array
+    s = LineOver1DRangeSeries(1, (x, -5, 5))
+    xx, yy = s.get_data()
+    assert len(xx) == len(yy)
+    assert np.all(yy == 1)
+
+    s = LineOver1DRangeSeries(1, (x, -5, 5), adaptive=False, n=10)
+    xx, yy = s.get_data()
+    assert len(xx) == len(yy) == 10
+    assert np.all(yy == 1)
+
+    s = Parametric2DLineSeries(sin(x), 1, (x, 0, pi))
+    xx, yy, param = s.get_data()
+    assert (len(xx) == len(yy)) and (len(xx) == len(param))
+    assert np.all(yy == 1)
+
+    s = Parametric2DLineSeries(1, sin(x), (x, 0, pi))
+    xx, yy, param = s.get_data()
+    assert (len(xx) == len(yy)) and (len(xx) == len(param))
+    assert np.all(xx == 1)
+
+    s = Parametric2DLineSeries(sin(x), 1, (x, 0, pi), adaptive=False)
+    xx, yy, param = s.get_data()
+    assert (len(xx) == len(yy)) and (len(xx) == len(param))
+    assert np.all(yy == 1)
+
+    s = Parametric2DLineSeries(1, sin(x), (x, 0, pi), adaptive=False)
+    xx, yy, param = s.get_data()
+    assert (len(xx) == len(yy)) and (len(xx) == len(param))
+    assert np.all(xx == 1)
+
+    s = Parametric3DLineSeries(cos(x), sin(x), 1, (x, 0, 2 * pi))
+    xx, yy, zz, param = s.get_data()
+    assert (len(xx) == len(yy)) and (len(xx) == len(zz)) and (len(xx) == len(param))
+    assert np.all(zz == 1)
+
+    s = Parametric3DLineSeries(cos(x), 1, x, (x, 0, 2 * pi))
+    xx, yy, zz, param = s.get_data()
+    assert (len(xx) == len(yy)) and (len(xx) == len(zz)) and (len(xx) == len(param))
+    assert np.all(yy == 1)
+
+    s = Parametric3DLineSeries(1, sin(x), x, (x, 0, 2 * pi))
+    xx, yy, zz, param = s.get_data()
+    assert (len(xx) == len(yy)) and (len(xx) == len(zz)) and (len(xx) == len(param))
+    assert np.all(xx == 1)
+
+    s = SurfaceOver2DRangeSeries(1, (x, -2, 2), (y, -3, 3))
+    xx, yy, zz = s.get_data()
+    assert (xx.shape == yy.shape) and (xx.shape == zz.shape)
+    assert np.all(zz == 1)
+
+    s = ParametricSurfaceSeries(1, x, y, (x, 0, 1), (y, 0, 1))
+    xx, yy, zz, uu, vv = s.get_data()
+    assert xx.shape == yy.shape == zz.shape == uu.shape == vv.shape
+    assert np.all(xx == 1)
+
+    s = ParametricSurfaceSeries(1, 1, y, (x, 0, 1), (y, 0, 1))
+    xx, yy, zz, uu, vv = s.get_data()
+    assert xx.shape == yy.shape == zz.shape == uu.shape == vv.shape
+    assert np.all(yy == 1)
+
+    s = ParametricSurfaceSeries(x, 1, 1, (x, 0, 1), (y, 0, 1))
+    xx, yy, zz, uu, vv = s.get_data()
+    assert xx.shape == yy.shape == zz.shape == uu.shape == vv.shape
+    assert np.all(zz == 1)
+
+
+ def test_only_integers():
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, u, v = symbols("x, y, u, v")
+
+    s = LineOver1DRangeSeries(sin(x), (x, -5.5, 4.5), "",
+        adaptive=False, only_integers=True)
+    xx, _ = s.get_data()
+    assert len(xx) == 10
+    assert xx[0] == -5 and xx[-1] == 4
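+    # only_integers=True snaps the discretization to the integers inside
+    # the range: (-5.5, 4.5) yields the 10 points -5, -4, ..., 4.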
+
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2 * pi), "",
+        adaptive=False, only_integers=True)
+    _, _, p = s.get_data()
+    assert len(p) == 7
+    assert p[0] == 0 and p[-1] == 6
+
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 2 * pi), "",
+        adaptive=False, only_integers=True)
+    _, _, _, p = s.get_data()
+    assert len(p) == 7
+    assert p[0] == 0 and p[-1] == 6
+
+    s = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -5.5, 5.5),
+        (y, -3.5, 3.5), "",
+        adaptive=False, only_integers=True)
+    xx, yy, _ = s.get_data()
+    assert xx.shape == yy.shape == (7, 11)
+    assert np.allclose(xx[:, 0] - (-5) * np.ones(7), 0)
+    assert np.allclose(xx[0, :] - np.linspace(-5, 5, 11), 0)
+    assert np.allclose(yy[:, 0] - np.linspace(-3, 3, 7), 0)
+    assert np.allclose(yy[0, :] - (-3) * np.ones(11), 0)
+
+    r = 2 + sin(7 * u + 5 * v)
+    expr = (
+        r * cos(u) * sin(v),
+        r * sin(u) * sin(v),
+        r * cos(v)
+    )
+    s = ParametricSurfaceSeries(*expr, (u, 0, 2 * pi), (v, 0, pi), "",
+        adaptive=False, only_integers=True)
+    xx, yy, zz, uu, vv = s.get_data()
+    assert xx.shape == yy.shape == zz.shape == uu.shape == vv.shape == (4, 7)
+
+    # only_integers also works with scalar expressions
+    s = LineOver1DRangeSeries(1, (x, -5.5, 4.5), "",
+        adaptive=False, only_integers=True)
+    xx, _ = s.get_data()
+    assert len(xx) == 10
+    assert xx[0] == -5 and xx[-1] == 4
+
+    s = Parametric2DLineSeries(cos(x), 1, (x, 0, 2 * pi), "",
+        adaptive=False, only_integers=True)
+    _, _, p = s.get_data()
+    assert len(p) == 7
+    assert p[0] == 0 and p[-1] == 6
+
+    s = SurfaceOver2DRangeSeries(1, (x, -5.5, 5.5), (y, -3.5, 3.5), "",
+        adaptive=False, only_integers=True)
+    xx, yy, _ = s.get_data()
+    assert xx.shape == yy.shape == (7, 11)
+    assert np.allclose(xx[:, 0] - (-5) * np.ones(7), 0)
+    assert np.allclose(xx[0, :] - np.linspace(-5, 5, 11), 0)
+    assert np.allclose(yy[:, 0] - np.linspace(-3, 3, 7), 0)
+    assert np.allclose(yy[0, :] - (-3) * np.ones(11), 0)
+
+    r = 2 + sin(7 * u + 5 * v)
+    expr = (
+        r * cos(u) * sin(v),
+        1,
+        r * cos(v)
+    )
+    s = ParametricSurfaceSeries(*expr, (u, 0, 2 * pi), (v, 0, pi), "",
+        adaptive=False, only_integers=True)
+    xx, yy, zz, uu, vv = s.get_data()
+    assert xx.shape == yy.shape == zz.shape == uu.shape == vv.shape == (4, 7)
+
+
+ def test_is_point_is_filled():
+    # verify that `is_point` and `is_filled` are attributes and that they
+    # receive the correct values
+    if not np:
+        skip("numpy not installed.")
+
+    x, u = symbols("x, u")
+
+    s = LineOver1DRangeSeries(cos(x), (x, -5, 5), "",
+        is_point=False, is_filled=True)
+    assert (not s.is_point) and s.is_filled
+    s = LineOver1DRangeSeries(cos(x), (x, -5, 5), "",
+        is_point=True, is_filled=False)
+    assert s.is_point and (not s.is_filled)
+
+    s = List2DSeries([0, 1, 2], [3, 4, 5],
+        is_point=False, is_filled=True)
+    assert (not s.is_point) and s.is_filled
+    s = List2DSeries([0, 1, 2], [3, 4, 5],
+        is_point=True, is_filled=False)
+    assert s.is_point and (not s.is_filled)
+
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, -5, 5),
+        is_point=False, is_filled=True)
+    assert (not s.is_point) and s.is_filled
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, -5, 5),
+        is_point=True, is_filled=False)
+    assert s.is_point and (not s.is_filled)
+
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, -5, 5),
+        is_point=False, is_filled=True)
+    assert (not s.is_point) and s.is_filled
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, -5, 5),
+        is_point=True, is_filled=False)
+    assert s.is_point and (not s.is_filled)
+
+
+ def test_is_filled_2d():
+    # verify that the is_filled attribute is exposed by the following series
+    x, y = symbols("x, y")
+
+    expr = cos(x**2 + y**2)
+    ranges = (x, -2, 2), (y, -2, 2)
+
+    s = ContourSeries(expr, *ranges)
+    assert s.is_filled
+    s = ContourSeries(expr, *ranges, is_filled=True)
+    assert s.is_filled
+    s = ContourSeries(expr, *ranges, is_filled=False)
+    assert not s.is_filled
+
+
+ def test_steps():
+    if not np:
+        skip("numpy not installed.")
+
+    x, u = symbols("x, u")
+
+    def do_test(s1, s2):
+        if (not s1.is_parametric) and s1.is_2Dline:
+            xx1, _ = s1.get_data()
+            xx2, _ = s2.get_data()
+        elif s1.is_parametric and s1.is_2Dline:
+            xx1, _, _ = s1.get_data()
+            xx2, _, _ = s2.get_data()
+        elif (not s1.is_parametric) and s1.is_3Dline:
+            xx1, _, _ = s1.get_data()
+            xx2, _, _ = s2.get_data()
+        else:
+            xx1, _, _, _ = s1.get_data()
+            xx2, _, _, _ = s2.get_data()
+        assert len(xx1) != len(xx2)
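+    # steps=True post-processes the coordinates into a staircase profile,
+    # which changes the number of returned points, hence the inequality
+    # asserted by do_test.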
+
+    s1 = LineOver1DRangeSeries(cos(x), (x, -5, 5), "",
+        adaptive=False, n=40, steps=False)
+    s2 = LineOver1DRangeSeries(cos(x), (x, -5, 5), "",
+        adaptive=False, n=40, steps=True)
+    do_test(s1, s2)
+
+    s1 = List2DSeries([0, 1, 2], [3, 4, 5], steps=False)
+    s2 = List2DSeries([0, 1, 2], [3, 4, 5], steps=True)
+    do_test(s1, s2)
+
+    s1 = Parametric2DLineSeries(cos(x), sin(x), (x, -5, 5),
+        adaptive=False, n=40, steps=False)
+    s2 = Parametric2DLineSeries(cos(x), sin(x), (x, -5, 5),
+        adaptive=False, n=40, steps=True)
+    do_test(s1, s2)
+
+    s1 = Parametric3DLineSeries(cos(x), sin(x), x, (x, -5, 5),
+        adaptive=False, n=40, steps=False)
+    s2 = Parametric3DLineSeries(cos(x), sin(x), x, (x, -5, 5),
+        adaptive=False, n=40, steps=True)
+    do_test(s1, s2)
+
+
+ def test_interactive_data():
+    # verify that interactive series produce the same numerical data as
+    # their corresponding non-interactive series.
+    if not np:
+        skip("numpy not installed.")
+
+    u, x, y, z = symbols("u, x:z")
+
+    def do_test(data1, data2):
+        assert len(data1) == len(data2)
+        for d1, d2 in zip(data1, data2):
+            assert np.allclose(d1, d2)
+
+    s1 = LineOver1DRangeSeries(u * cos(x), (x, -5, 5), params={u: 1}, n=50)
+    s2 = LineOver1DRangeSeries(cos(x), (x, -5, 5), adaptive=False, n=50)
+    do_test(s1.get_data(), s2.get_data())
+
+    s1 = Parametric2DLineSeries(
+        u * cos(x), u * sin(x), (x, -5, 5), params={u: 1}, n=50)
+    s2 = Parametric2DLineSeries(cos(x), sin(x), (x, -5, 5),
+        adaptive=False, n=50)
+    do_test(s1.get_data(), s2.get_data())
+
+    s1 = Parametric3DLineSeries(
+        u * cos(x), u * sin(x), u * x, (x, -5, 5),
+        params={u: 1}, n=50)
+    s2 = Parametric3DLineSeries(cos(x), sin(x), x, (x, -5, 5),
+        adaptive=False, n=50)
+    do_test(s1.get_data(), s2.get_data())
+
+    s1 = SurfaceOver2DRangeSeries(
+        u * cos(x ** 2 + y ** 2), (x, -3, 3), (y, -3, 3),
+        params={u: 1}, n1=50, n2=50)
+    s2 = SurfaceOver2DRangeSeries(
+        cos(x ** 2 + y ** 2), (x, -3, 3), (y, -3, 3),
+        adaptive=False, n1=50, n2=50)
+    do_test(s1.get_data(), s2.get_data())
+
+    s1 = ParametricSurfaceSeries(
+        u * cos(x + y), sin(x + y), x - y, (x, -3, 3), (y, -3, 3),
+        params={u: 1}, n1=50, n2=50)
+    s2 = ParametricSurfaceSeries(
+        cos(x + y), sin(x + y), x - y, (x, -3, 3), (y, -3, 3),
+        adaptive=False, n1=50, n2=50)
+    do_test(s1.get_data(), s2.get_data())
+
+    # real part of a complex function evaluated over a real line with numpy
+    expr = re((z ** 2 + 1) / (z ** 2 - 1))
+    s1 = LineOver1DRangeSeries(u * expr, (z, -3, 3), adaptive=False, n=50,
+        modules=None, params={u: 1})
+    s2 = LineOver1DRangeSeries(expr, (z, -3, 3), adaptive=False, n=50,
+        modules=None)
+    do_test(s1.get_data(), s2.get_data())
+
+    # real part of a complex function evaluated over a real line with mpmath
+    expr = re((z ** 2 + 1) / (z ** 2 - 1))
+    s1 = LineOver1DRangeSeries(u * expr, (z, -3, 3), n=50, modules="mpmath",
+        params={u: 1})
+    s2 = LineOver1DRangeSeries(expr, (z, -3, 3),
+        adaptive=False, n=50, modules="mpmath")
+    do_test(s1.get_data(), s2.get_data())
+
+
+ def test_list2dseries_interactive():
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, u = symbols("x, y, u")
+
+    s = List2DSeries([1, 2, 3], [1, 2, 3])
+    assert not s.is_interactive
+
+    # symbolic expressions as coordinates, but no ``params``
+    raises(ValueError, lambda: List2DSeries([cos(x)], [sin(x)]))
+
+    # too few parameters
+    raises(ValueError,
+        lambda: List2DSeries([cos(x), y], [sin(x), 2], params={u: 1}))
+
+    s = List2DSeries([cos(x)], [sin(x)], params={x: 1})
+    assert s.is_interactive
+
+    s = List2DSeries([x, 2, 3, 4], [4, 3, 2, x], params={x: 3})
+    xx, yy = s.get_data()
+    assert np.allclose(xx, [3, 2, 3, 4])
+    assert np.allclose(yy, [4, 3, 2, 3])
+    assert not s.is_parametric
+
+    # numeric lists + params present -> the series is interactive and the
+    # lists are converted to Tuple.
+    s = List2DSeries([1, 2, 3], [1, 2, 3], params={x: 1})
+    assert s.is_interactive
+    assert isinstance(s.list_x, Tuple)
+    assert isinstance(s.list_y, Tuple)
+
+
+ def test_mpmath():
+    # test that the argument of complex functions evaluated with mpmath
+    # might differ from the one computed with NumPy (different behaviour
+    # at branch cuts)
+    if not np:
+        skip("numpy not installed.")
+
+    z, u = symbols("z, u")
+
+    s1 = LineOver1DRangeSeries(im(sqrt(-z)), (z, 1e-03, 5),
+        adaptive=True, modules=None, force_real_eval=True)
+    s2 = LineOver1DRangeSeries(im(sqrt(-z)), (z, 1e-03, 5),
+        adaptive=True, modules="mpmath", force_real_eval=True)
+    xx1, yy1 = s1.get_data()
+    xx2, yy2 = s2.get_data()
+    assert np.all(yy1 < 0)
+    assert np.all(yy2 > 0)
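+    # For z > 0, -z lies on sqrt's branch cut along the negative real
+    # axis: NumPy and mpmath land on opposite sides of the cut, hence
+    # imaginary parts of opposite sign.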
+
+    s1 = LineOver1DRangeSeries(im(sqrt(-z)), (z, -5, 5),
+        adaptive=False, n=20, modules=None, force_real_eval=True)
+    s2 = LineOver1DRangeSeries(im(sqrt(-z)), (z, -5, 5),
+        adaptive=False, n=20, modules="mpmath", force_real_eval=True)
+    xx1, yy1 = s1.get_data()
+    xx2, yy2 = s2.get_data()
+    assert np.allclose(xx1, xx2)
+    assert not np.allclose(yy1, yy2)
+
+
+ def test_str():
+    u, x, y, z = symbols("u, x:z")
+
+    s = LineOver1DRangeSeries(cos(x), (x, -4, 3))
+    assert str(s) == "cartesian line: cos(x) for x over (-4.0, 3.0)"
+
+    d = {"return": "real"}
+    s = LineOver1DRangeSeries(cos(x), (x, -4, 3), **d)
+    assert str(s) == "cartesian line: re(cos(x)) for x over (-4.0, 3.0)"
+
+    d = {"return": "imag"}
+    s = LineOver1DRangeSeries(cos(x), (x, -4, 3), **d)
+    assert str(s) == "cartesian line: im(cos(x)) for x over (-4.0, 3.0)"
+
+    d = {"return": "abs"}
+    s = LineOver1DRangeSeries(cos(x), (x, -4, 3), **d)
+    assert str(s) == "cartesian line: abs(cos(x)) for x over (-4.0, 3.0)"
+
+    d = {"return": "arg"}
+    s = LineOver1DRangeSeries(cos(x), (x, -4, 3), **d)
+    assert str(s) == "cartesian line: arg(cos(x)) for x over (-4.0, 3.0)"
+
+    s = LineOver1DRangeSeries(cos(u * x), (x, -4, 3), params={u: 1})
+    assert str(s) == "interactive cartesian line: cos(u*x) for x over (-4.0, 3.0) and parameters (u,)"
+
+    s = LineOver1DRangeSeries(cos(u * x), (x, -u, 3*y), params={u: 1, y: 1})
+    assert str(s) == "interactive cartesian line: cos(u*x) for x over (-u, 3*y) and parameters (u, y)"
+
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, -4, 3))
+    assert str(s) == "parametric cartesian line: (cos(x), sin(x)) for x over (-4.0, 3.0)"
+
+    s = Parametric2DLineSeries(cos(u * x), sin(x), (x, -4, 3), params={u: 1})
+    assert str(s) == "interactive parametric cartesian line: (cos(u*x), sin(x)) for x over (-4.0, 3.0) and parameters (u,)"
+
+    s = Parametric2DLineSeries(cos(u * x), sin(x), (x, -u, 3*y), params={u: 1, y: 1})
+    assert str(s) == "interactive parametric cartesian line: (cos(u*x), sin(x)) for x over (-u, 3*y) and parameters (u, y)"
+
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, -4, 3))
+    assert str(s) == "3D parametric cartesian line: (cos(x), sin(x), x) for x over (-4.0, 3.0)"
+
+    s = Parametric3DLineSeries(cos(u*x), sin(x), x, (x, -4, 3), params={u: 1})
+    assert str(s) == "interactive 3D parametric cartesian line: (cos(u*x), sin(x), x) for x over (-4.0, 3.0) and parameters (u,)"
+
+    s = Parametric3DLineSeries(cos(u*x), sin(x), x, (x, -u, 3*y), params={u: 1, y: 1})
+    assert str(s) == "interactive 3D parametric cartesian line: (cos(u*x), sin(x), x) for x over (-u, 3*y) and parameters (u, y)"
+
+    s = SurfaceOver2DRangeSeries(cos(x * y), (x, -4, 3), (y, -2, 5))
+    assert str(s) == "cartesian surface: cos(x*y) for x over (-4.0, 3.0) and y over (-2.0, 5.0)"
+
+    s = SurfaceOver2DRangeSeries(cos(u * x * y), (x, -4, 3), (y, -2, 5), params={u: 1})
+    assert str(s) == "interactive cartesian surface: cos(u*x*y) for x over (-4.0, 3.0) and y over (-2.0, 5.0) and parameters (u,)"
+
+    s = SurfaceOver2DRangeSeries(cos(u * x * y), (x, -4*u, 3), (y, -2, 5*u), params={u: 1})
+    assert str(s) == "interactive cartesian surface: cos(u*x*y) for x over (-4*u, 3.0) and y over (-2.0, 5*u) and parameters (u,)"
+
+    s = ContourSeries(cos(x * y), (x, -4, 3), (y, -2, 5))
+    assert str(s) == "contour: cos(x*y) for x over (-4.0, 3.0) and y over (-2.0, 5.0)"
+
+    s = ContourSeries(cos(u * x * y), (x, -4, 3), (y, -2, 5), params={u: 1})
+    assert str(s) == "interactive contour: cos(u*x*y) for x over (-4.0, 3.0) and y over (-2.0, 5.0) and parameters (u,)"
+
+    s = ParametricSurfaceSeries(cos(x * y), sin(x * y), x * y,
+        (x, -4, 3), (y, -2, 5))
+    assert str(s) == "parametric cartesian surface: (cos(x*y), sin(x*y), x*y) for x over (-4.0, 3.0) and y over (-2.0, 5.0)"
+
+    s = ParametricSurfaceSeries(cos(u * x * y), sin(x * y), x * y,
+        (x, -4, 3), (y, -2, 5), params={u: 1})
+    assert str(s) == "interactive parametric cartesian surface: (cos(u*x*y), sin(x*y), x*y) for x over (-4.0, 3.0) and y over (-2.0, 5.0) and parameters (u,)"
+
+    s = ImplicitSeries(x < y, (x, -5, 4), (y, -3, 2))
+    assert str(s) == "Implicit expression: x < y for x over (-5.0, 4.0) and y over (-3.0, 2.0)"
+
+
+ def test_use_cm():
+    # verify that the `use_cm` attribute is implemented.
+    if not np:
+        skip("numpy not installed.")
+
+    u, x, y, z = symbols("u, x:z")
+
+    s = List2DSeries([1, 2, 3, 4], [5, 6, 7, 8], use_cm=True)
+    assert s.use_cm
+    s = List2DSeries([1, 2, 3, 4], [5, 6, 7, 8], use_cm=False)
+    assert not s.use_cm
+
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, -4, 3), use_cm=True)
+    assert s.use_cm
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, -4, 3), use_cm=False)
+    assert not s.use_cm
+
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, -4, 3),
+        use_cm=True)
+    assert s.use_cm
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, -4, 3),
+        use_cm=False)
+    assert not s.use_cm
+
+    s = SurfaceOver2DRangeSeries(cos(x * y), (x, -4, 3), (y, -2, 5),
+        use_cm=True)
+    assert s.use_cm
+    s = SurfaceOver2DRangeSeries(cos(x * y), (x, -4, 3), (y, -2, 5),
+        use_cm=False)
+    assert not s.use_cm
+
+    s = ParametricSurfaceSeries(cos(x * y), sin(x * y), x * y,
+        (x, -4, 3), (y, -2, 5), use_cm=True)
+    assert s.use_cm
+    s = ParametricSurfaceSeries(cos(x * y), sin(x * y), x * y,
+        (x, -4, 3), (y, -2, 5), use_cm=False)
+    assert not s.use_cm
+
+
+ def test_surface_use_cm():
+    # verify that SurfaceOver2DRangeSeries and ParametricSurfaceSeries get
+    # the same value for use_cm
+
+    x, y, u, v = symbols("x, y, u, v")
+
+    # they read the same value from default settings
+    s1 = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -2, 2), (y, -2, 2))
+    s2 = ParametricSurfaceSeries(u * cos(v), u * sin(v), u,
+        (u, 0, 1), (v, 0, 2*pi))
+    assert s1.use_cm == s2.use_cm
+
+    # they get the same value
+    s1 = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -2, 2), (y, -2, 2),
+        use_cm=False)
+    s2 = ParametricSurfaceSeries(u * cos(v), u * sin(v), u,
+        (u, 0, 1), (v, 0, 2*pi), use_cm=False)
+    assert s1.use_cm == s2.use_cm
+
+    # they get the same value
+    s1 = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -2, 2), (y, -2, 2),
+        use_cm=True)
+    s2 = ParametricSurfaceSeries(u * cos(v), u * sin(v), u,
+        (u, 0, 1), (v, 0, 2*pi), use_cm=True)
+    assert s1.use_cm == s2.use_cm
+
+
+ def test_sums():
+    # test that data series are able to deal with sums
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, u = symbols("x, y, u")
+
+    def do_test(data1, data2):
+        assert len(data1) == len(data2)
+        for d1, d2 in zip(data1, data2):
+            assert np.allclose(d1, d2)
+
+    s = LineOver1DRangeSeries(Sum(1 / x ** y, (x, 1, 1000)), (y, 2, 10),
+        adaptive=False, only_integers=True)
+    xx, yy = s.get_data()
+
+    s1 = LineOver1DRangeSeries(Sum(1 / x, (x, 1, y)), (y, 2, 10),
+        adaptive=False, only_integers=True)
+    xx1, yy1 = s1.get_data()
+
+    s2 = LineOver1DRangeSeries(Sum(u / x, (x, 1, y)), (y, 2, 10),
+        params={u: 1}, only_integers=True)
+    xx2, yy2 = s2.get_data()
+    xx1 = xx1.astype(float)
+    xx2 = xx2.astype(float)
+    do_test([xx1, yy1], [xx2, yy2])
+
+    s = LineOver1DRangeSeries(Sum(1 / x, (x, 1, y)), (y, 2, 10),
+        adaptive=True)
+    with warns(
+        UserWarning,
+        match="The evaluation with NumPy/SciPy failed",
+        test_stacklevel=False,
+    ):
+        raises(TypeError, lambda: s.get_data())
+
+
+ def test_apply_transforms():
+    # verify that transformation functions get applied to the output
+    # of data series
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, z, u, v = symbols("x:z, u, v")
+
+    s1 = LineOver1DRangeSeries(cos(x), (x, -2*pi, 2*pi), adaptive=False, n=10)
+    s2 = LineOver1DRangeSeries(cos(x), (x, -2*pi, 2*pi), adaptive=False, n=10,
+        tx=np.rad2deg)
+    s3 = LineOver1DRangeSeries(cos(x), (x, -2*pi, 2*pi), adaptive=False, n=10,
+        ty=np.rad2deg)
+    s4 = LineOver1DRangeSeries(cos(x), (x, -2*pi, 2*pi), adaptive=False, n=10,
+        tx=np.rad2deg, ty=np.rad2deg)
+
+    x1, y1 = s1.get_data()
+    x2, y2 = s2.get_data()
+    x3, y3 = s3.get_data()
+    x4, y4 = s4.get_data()
+    assert np.isclose(x1[0], -2*np.pi) and np.isclose(x1[-1], 2*np.pi)
+    assert (y1.min() < -0.9) and (y1.max() > 0.9)
+    assert np.isclose(x2[0], -360) and np.isclose(x2[-1], 360)
+    assert (y2.min() < -0.9) and (y2.max() > 0.9)
+    assert np.isclose(x3[0], -2*np.pi) and np.isclose(x3[-1], 2*np.pi)
+    assert (y3.min() < -52) and (y3.max() > 52)
+    assert np.isclose(x4[0], -360) and np.isclose(x4[-1], 360)
+    assert (y4.min() < -52) and (y4.max() > 52)
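+    # tx/ty are applied elementwise to the x and y outputs: rad2deg maps
+    # +/-2*pi to +/-360, and cos values spanning [-1, 1] to roughly
+    # +/-57.3 degrees, which the bounds above check loosely.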
+
+    xx = np.linspace(-2*np.pi, 2*np.pi, 10)
+    yy = np.cos(xx)
+    s1 = List2DSeries(xx, yy)
+    s2 = List2DSeries(xx, yy, tx=np.rad2deg, ty=np.rad2deg)
+    x1, y1 = s1.get_data()
+    x2, y2 = s2.get_data()
+    assert np.isclose(x1[0], -2*np.pi) and np.isclose(x1[-1], 2*np.pi)
+    assert (y1.min() < -0.9) and (y1.max() > 0.9)
+    assert np.isclose(x2[0], -360) and np.isclose(x2[-1], 360)
+    assert (y2.min() < -52) and (y2.max() > 52)
+
+    s1 = Parametric2DLineSeries(
+        sin(x), cos(x), (x, -pi, pi), adaptive=False, n=10)
+    s2 = Parametric2DLineSeries(
+        sin(x), cos(x), (x, -pi, pi), adaptive=False, n=10,
+        tx=np.rad2deg, ty=np.rad2deg, tp=np.rad2deg)
+    x1, y1, a1 = s1.get_data()
+    x2, y2, a2 = s2.get_data()
+    assert np.allclose(x1, np.deg2rad(x2))
+    assert np.allclose(y1, np.deg2rad(y2))
+    assert np.allclose(a1, np.deg2rad(a2))
+
+    s1 = Parametric3DLineSeries(
+        sin(x), cos(x), x, (x, -pi, pi), adaptive=False, n=10)
+    s2 = Parametric3DLineSeries(
+        sin(x), cos(x), x, (x, -pi, pi), adaptive=False, n=10, tp=np.rad2deg)
+    x1, y1, z1, a1 = s1.get_data()
+    x2, y2, z2, a2 = s2.get_data()
+    assert np.allclose(x1, x2)
+    assert np.allclose(y1, y2)
+    assert np.allclose(z1, z2)
+    assert np.allclose(a1, np.deg2rad(a2))
+
+    s1 = SurfaceOver2DRangeSeries(
+        cos(x**2 + y**2), (x, -2*pi, 2*pi), (y, -2*pi, 2*pi),
+        adaptive=False, n1=10, n2=10)
+    s2 = SurfaceOver2DRangeSeries(
+        cos(x**2 + y**2), (x, -2*pi, 2*pi), (y, -2*pi, 2*pi),
+        adaptive=False, n1=10, n2=10,
+        tx=np.rad2deg, ty=lambda x: 2*x, tz=lambda x: 3*x)
+    x1, y1, z1 = s1.get_data()
+    x2, y2, z2 = s2.get_data()
+    assert np.allclose(x1, np.deg2rad(x2))
+    assert np.allclose(y1, y2 / 2)
+    assert np.allclose(z1, z2 / 3)
+
+    s1 = ParametricSurfaceSeries(
+        u + v, u - v, u * v, (u, 0, 2*pi), (v, 0, pi),
+        adaptive=False, n1=10, n2=10)
+    s2 = ParametricSurfaceSeries(
+        u + v, u - v, u * v, (u, 0, 2*pi), (v, 0, pi),
+        adaptive=False, n1=10, n2=10,
+        tx=np.rad2deg, ty=lambda x: 2*x, tz=lambda x: 3*x)
+    x1, y1, z1, u1, v1 = s1.get_data()
+    x2, y2, z2, u2, v2 = s2.get_data()
+    assert np.allclose(x1, np.deg2rad(x2))
+    assert np.allclose(y1, y2 / 2)
+    assert np.allclose(z1, z2 / 3)
+    assert np.allclose(u1, u2)
+    assert np.allclose(v1, v2)
+
+
+ def test_series_labels():
+    # verify that series return the correct label, depending on the plot
+    # type and input arguments. If the user sets a custom label on a data
+    # series, it should be returned unmodified.
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, z, u, v = symbols("x, y, z, u, v")
+    wrapper = "$%s$"
+
+    expr = cos(x)
+    s1 = LineOver1DRangeSeries(expr, (x, -2, 2), None)
+    s2 = LineOver1DRangeSeries(expr, (x, -2, 2), "test")
+    assert s1.get_label(False) == str(expr)
+    assert s1.get_label(True) == wrapper % latex(expr)
+    assert s2.get_label(False) == "test"
+    assert s2.get_label(True) == "test"
+
+    s1 = List2DSeries([0, 1, 2, 3], [0, 1, 2, 3], "test")
+    assert s1.get_label(False) == "test"
+    assert s1.get_label(True) == "test"
+
+    expr = (cos(x), sin(x))
+    s1 = Parametric2DLineSeries(*expr, (x, -2, 2), None, use_cm=True)
+    s2 = Parametric2DLineSeries(*expr, (x, -2, 2), "test", use_cm=True)
+    s3 = Parametric2DLineSeries(*expr, (x, -2, 2), None, use_cm=False)
+    s4 = Parametric2DLineSeries(*expr, (x, -2, 2), "test", use_cm=False)
+    assert s1.get_label(False) == "x"
+    assert s1.get_label(True) == wrapper % "x"
+    assert s2.get_label(False) == "test"
+    assert s2.get_label(True) == "test"
+    assert s3.get_label(False) == str(expr)
+    assert s3.get_label(True) == wrapper % latex(expr)
+    assert s4.get_label(False) == "test"
+    assert s4.get_label(True) == "test"
+
+    expr = (cos(x), sin(x), x)
+    s1 = Parametric3DLineSeries(*expr, (x, -2, 2), None, use_cm=True)
+    s2 = Parametric3DLineSeries(*expr, (x, -2, 2), "test", use_cm=True)
+    s3 = Parametric3DLineSeries(*expr, (x, -2, 2), None, use_cm=False)
+    s4 = Parametric3DLineSeries(*expr, (x, -2, 2), "test", use_cm=False)
+    assert s1.get_label(False) == "x"
+    assert s1.get_label(True) == wrapper % "x"
+    assert s2.get_label(False) == "test"
+    assert s2.get_label(True) == "test"
+    assert s3.get_label(False) == str(expr)
+    assert s3.get_label(True) == wrapper % latex(expr)
+    assert s4.get_label(False) == "test"
+    assert s4.get_label(True) == "test"
+
+    expr = cos(x**2 + y**2)
+    s1 = SurfaceOver2DRangeSeries(expr, (x, -2, 2), (y, -2, 2), None)
+    s2 = SurfaceOver2DRangeSeries(expr, (x, -2, 2), (y, -2, 2), "test")
+    assert s1.get_label(False) == str(expr)
+    assert s1.get_label(True) == wrapper % latex(expr)
+    assert s2.get_label(False) == "test"
+    assert s2.get_label(True) == "test"
+
+    expr = (cos(x - y), sin(x + y), x - y)
+    s1 = ParametricSurfaceSeries(*expr, (x, -2, 2), (y, -2, 2), None)
+    s2 = ParametricSurfaceSeries(*expr, (x, -2, 2), (y, -2, 2), "test")
+    assert s1.get_label(False) == str(expr)
+    assert s1.get_label(True) == wrapper % latex(expr)
+    assert s2.get_label(False) == "test"
+    assert s2.get_label(True) == "test"
+
+    expr = Eq(cos(x - y), 0)
+    s1 = ImplicitSeries(expr, (x, -10, 10), (y, -10, 10), None)
+    s2 = ImplicitSeries(expr, (x, -10, 10), (y, -10, 10), "test")
+    assert s1.get_label(False) == str(expr)
+    assert s1.get_label(True) == wrapper % latex(expr)
+    assert s2.get_label(False) == "test"
+    assert s2.get_label(True) == "test"
+
+
+ def test_is_polar_2d_parametric():
+    # verify that Parametric2DLineSeries is able to apply polar
+    # discretization, which is used when polar_plot is executed with
+    # polar_axis=True
+    if not np:
+        skip("numpy not installed.")
+
+    t, u = symbols("t u")
+
+    # NOTE: a sufficiently large n must be provided, or else the tests
+    # are going to fail
+    # No colormap
+    f = sin(4 * t)
+    s1 = Parametric2DLineSeries(f * cos(t), f * sin(t), (t, 0, 2*pi),
+        adaptive=False, n=10, is_polar=False, use_cm=False)
+    x1, y1, p1 = s1.get_data()
+    s2 = Parametric2DLineSeries(f * cos(t), f * sin(t), (t, 0, 2*pi),
+        adaptive=False, n=10, is_polar=True, use_cm=False)
+    th, r, p2 = s2.get_data()
+    assert (not np.allclose(x1, th)) and (not np.allclose(y1, r))
+    assert np.allclose(p1, p2)
+
+    # With colormap
+    s3 = Parametric2DLineSeries(f * cos(t), f * sin(t), (t, 0, 2*pi),
+        adaptive=False, n=10, is_polar=False, color_func=lambda t: 2*t)
+    x3, y3, p3 = s3.get_data()
+    s4 = Parametric2DLineSeries(f * cos(t), f * sin(t), (t, 0, 2*pi),
+        adaptive=False, n=10, is_polar=True, color_func=lambda t: 2*t)
+    th4, r4, p4 = s4.get_data()
+    assert np.allclose(p3, p4) and (not np.allclose(p1, p3))
+    assert np.allclose(x3, x1) and np.allclose(y3, y1)
+    assert np.allclose(th4, th) and np.allclose(r4, r)
+
+
+ def test_is_polar_3d():
+    # verify that SurfaceOver2DRangeSeries is able to apply
+    # polar discretization
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, t = symbols("x, y, t")
+    expr = (x**2 - 1)**2
+    s1 = SurfaceOver2DRangeSeries(expr, (x, 0, 1.5), (y, 0, 2 * pi),
+        n=10, adaptive=False, is_polar=False)
+    s2 = SurfaceOver2DRangeSeries(expr, (x, 0, 1.5), (y, 0, 2 * pi),
+        n=10, adaptive=False, is_polar=True)
+    x1, y1, z1 = s1.get_data()
+    x2, y2, z2 = s2.get_data()
+    x22, y22 = x1 * np.cos(y1), x1 * np.sin(y1)
+    assert np.allclose(x2, x22)
+    assert np.allclose(y2, y22)
+
+
+ def test_color_func():
+    # verify that eval_color_func produces the expected results in order
+    # to maintain backward compatibility with the old sympy.plotting module
+    if not np:
+        skip("numpy not installed.")
+
+    x, y, z, u, v = symbols("x, y, z, u, v")
+
+    # color func: get_data returns x, y, color and the series is parametric
+    xx = np.linspace(-3, 3, 10)
+    yy1 = np.cos(xx)
+    s = List2DSeries(xx, yy1, color_func=lambda x, y: 2 * x, use_cm=True)
+    xxs, yys, col = s.get_data()
+    assert np.allclose(xx, xxs)
+    assert np.allclose(yy1, yys)
+    assert np.allclose(2 * xx, col)
+    assert s.is_parametric
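+    # The arity of color_func selects its inputs: for lines, one argument
+    # receives the parameter, two arguments the coordinates, three the
+    # coordinates plus the parameter, as the cases below demonstrate.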
+
+    s = List2DSeries(xx, yy1, color_func=lambda x, y: 2 * x, use_cm=False)
+    assert len(s.get_data()) == 2
+    assert not s.is_parametric
+
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
+        adaptive=False, n=10, color_func=lambda t: t)
+    xx, yy, col = s.get_data()
+    assert (not np.allclose(xx, col)) and (not np.allclose(yy, col))
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
+        adaptive=False, n=10, color_func=lambda x, y: x * y)
+    xx, yy, col = s.get_data()
+    assert np.allclose(col, xx * yy)
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
+        adaptive=False, n=10, color_func=lambda x, y, t: x * y * t)
+    xx, yy, col = s.get_data()
+    assert np.allclose(col, xx * yy * np.linspace(0, 2*np.pi, 10))
+
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 2*pi),
+        adaptive=False, n=10, color_func=lambda t: t)
+    xx, yy, zz, col = s.get_data()
+    assert (not np.allclose(xx, col)) and (not np.allclose(yy, col))
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 2*pi),
+        adaptive=False, n=10, color_func=lambda x, y, z: x * y * z)
+    xx, yy, zz, col = s.get_data()
+    assert np.allclose(col, xx * yy * zz)
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 2*pi),
+        adaptive=False, n=10, color_func=lambda x, y, z, t: x * y * z * t)
+    xx, yy, zz, col = s.get_data()
+    assert np.allclose(col, xx * yy * zz * np.linspace(0, 2*np.pi, 10))
+
+    s = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -2, 2), (y, -2, 2),
+        adaptive=False, n1=10, n2=10, color_func=lambda x: x)
+    xx, yy, zz = s.get_data()
+    col = s.eval_color_func(xx, yy, zz)
+    assert np.allclose(xx, col)
+    s = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -2, 2), (y, -2, 2),
+        adaptive=False, n1=10, n2=10, color_func=lambda x, y: x * y)
+    xx, yy, zz = s.get_data()
+    col = s.eval_color_func(xx, yy, zz)
+    assert np.allclose(xx * yy, col)
+    s = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -2, 2), (y, -2, 2),
+        adaptive=False, n1=10, n2=10, color_func=lambda x, y, z: x * y * z)
+    xx, yy, zz = s.get_data()
+    col = s.eval_color_func(xx, yy, zz)
+    assert np.allclose(xx * yy * zz, col)
+
+    s = ParametricSurfaceSeries(1, x, y, (x, 0, 1), (y, 0, 1), adaptive=False,
+        n1=10, n2=10, color_func=lambda u: u)
+    xx, yy, zz, uu, vv = s.get_data()
+    col = s.eval_color_func(xx, yy, zz, uu, vv)
+    assert np.allclose(uu, col)
+    s = ParametricSurfaceSeries(1, x, y, (x, 0, 1), (y, 0, 1), adaptive=False,
+        n1=10, n2=10, color_func=lambda u, v: u * v)
+    xx, yy, zz, uu, vv = s.get_data()
+    col = s.eval_color_func(xx, yy, zz, uu, vv)
+    assert np.allclose(uu * vv, col)
+    s = ParametricSurfaceSeries(1, x, y, (x, 0, 1), (y, 0, 1), adaptive=False,
+        n1=10, n2=10, color_func=lambda x, y, z: x * y * z)
+    xx, yy, zz, uu, vv = s.get_data()
+    col = s.eval_color_func(xx, yy, zz, uu, vv)
+    assert np.allclose(xx * yy * zz, col)
+    s = ParametricSurfaceSeries(1, x, y, (x, 0, 1), (y, 0, 1), adaptive=False,
+        n1=10, n2=10, color_func=lambda x, y, z, u, v: x * y * z * u * v)
+    xx, yy, zz, uu, vv = s.get_data()
+    col = s.eval_color_func(xx, yy, zz, uu, vv)
+    assert np.allclose(xx * yy * zz * uu * vv, col)
+
+    # Interactive Series
+    s = List2DSeries([0, 1, 2, x], [x, 2, 3, 4],
+        color_func=lambda x, y: 2 * x, params={x: 1}, use_cm=True)
+    xx, yy, col = s.get_data()
+    assert np.allclose(xx, [0, 1, 2, 1])
+    assert np.allclose(yy, [1, 2, 3, 4])
+    assert np.allclose(2 * xx, col)
+    assert s.is_parametric and s.use_cm
+
+    s = List2DSeries([0, 1, 2, x], [x, 2, 3, 4],
+        color_func=lambda x, y: 2 * x, params={x: 1}, use_cm=False)
+    assert len(s.get_data()) == 2
+    assert not s.is_parametric
+
+
+ def test_color_func_scalar_val():
+    # verify that eval_color_func returns a numpy array even when color_func
+    # evaluates to a scalar value
+    if not np:
+        skip("numpy not installed.")
+
+    x, y = symbols("x, y")
+
+    s = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
+        adaptive=False, n=10, color_func=lambda t: 1)
+    xx, yy, col = s.get_data()
+    assert np.allclose(col, np.ones(xx.shape))
+
+    s = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 2*pi),
+        adaptive=False, n=10, color_func=lambda t: 1)
1232
+ xx, yy, zz, col = s.get_data()
1233
+ assert np.allclose(col, np.ones(xx.shape))
1234
+
1235
+ s = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -2, 2), (y, -2, 2),
1236
+ adaptive=False, n1=10, n2=10, color_func=lambda x: 1)
1237
+ xx, yy, zz = s.get_data()
1238
+ assert np.allclose(s.eval_color_func(xx), np.ones(xx.shape))
1239
+
1240
+ s = ParametricSurfaceSeries(1, x, y, (x, 0, 1), (y, 0, 1), adaptive=False,
1241
+ n1=10, n2=10, color_func=lambda u: 1)
1242
+ xx, yy, zz, uu, vv = s.get_data()
1243
+ col = s.eval_color_func(xx, yy, zz, uu, vv)
1244
+ assert np.allclose(col, np.ones(xx.shape))
1245
+
1246
+
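The color_func tests above rely on the series choosing arguments based on the arity of color_func. A minimal sketch of that dispatch for a 2D parametric line (hypothetical helper, not sympy's eval_color_func):

    import inspect
    import numpy as np

    def apply_color_func(color_func, x, y, param):
        # pick arguments according to how many parameters color_func accepts
        nargs = len(inspect.signature(color_func).parameters)
        if nargs == 1:
            color = color_func(param)           # f(t): parameter only
        elif nargs == 2:
            color = color_func(x, y)            # f(x, y): coordinates
        else:
            color = color_func(x, y, param)     # f(x, y, t)
        # broadcast scalar results to an array, as test_color_func_scalar_val expects
        return np.broadcast_to(color, np.shape(x)).astype(float)

    t = np.linspace(0, 2 * np.pi, 10)
    x, y = np.cos(t), np.sin(t)
    assert np.allclose(apply_color_func(lambda x, y: x * y, x, y, t), x * y)
    assert np.allclose(apply_color_func(lambda s: 1, x, y, t), 1)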
1247
+ def test_color_func_expression():
1248
+ # verify that color_func is able to deal with instances of Expr: they will
1249
+ # be lambdified with the same signature used for the main expression.
1250
+ if not np:
1251
+ skip("numpy not installed.")
1252
+
1253
+ x, y = symbols("x, y")
1254
+
1255
+ s1 = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
1256
+ color_func=sin(x), adaptive=False, n=10, use_cm=True)
1257
+ s2 = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
1258
+ color_func=lambda x: np.cos(x), adaptive=False, n=10, use_cm=True)
1259
+ # the following statement should not raise errors
1260
+ d1 = s1.get_data()
1261
+ assert callable(s1.color_func)
1262
+ d2 = s2.get_data()
1263
+ assert not np.allclose(d1[-1], d2[-1])
1264
+
1265
+ s = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -pi, pi), (y, -pi, pi),
1266
+ color_func=sin(x**2 + y**2), adaptive=False, n1=5, n2=5)
1267
+ # the following statement should not raise errors
1268
+ s.get_data()
1269
+ assert callable(s.color_func)
1270
+
1271
+ xx = [1, 2, 3, 4, 5]
1272
+ yy = [1, 2, 3, 4, 5]
1273
+ raises(TypeError,
1274
+ lambda : List2DSeries(xx, yy, use_cm=True, color_func=sin(x)))
1275
+
1276
+
1277
+ def test_line_surface_color():
1278
+ # verify backward compatibility with the old sympy.plotting module:
1279
+ # setting line_color or surface_color to a callable sets
1280
+ # the color_func attribute.
1281
+
1282
+ x, y, z = symbols("x, y, z")
1283
+
1284
+ s = LineOver1DRangeSeries(sin(x), (x, -5, 5), adaptive=False, n=10,
1285
+ line_color=lambda x: x)
1286
+ assert (s.line_color is None) and callable(s.color_func)
1287
+
1288
+ s = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 2*pi),
1289
+ adaptive=False, n=10, line_color=lambda t: t)
1290
+ assert (s.line_color is None) and callable(s.color_func)
1291
+
1292
+ s = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -2, 2), (y, -2, 2),
1293
+ n1=10, n2=10, surface_color=lambda x: x)
1294
+ assert (s.surface_color is None) and callable(s.color_func)
1295
+
1296
+
1297
+ def test_complex_adaptive_false():
1298
+ # verify that series with adaptive=False are evaluated over discretized
1299
+ # ranges of complex type.
1300
+ if not np:
1301
+ skip("numpy not installed.")
1302
+
1303
+ x, y, u = symbols("x y u")
1304
+
1305
+ def do_test(data1, data2):
1306
+ assert len(data1) == len(data2)
1307
+ for d1, d2 in zip(data1, data2):
1308
+ assert np.allclose(d1, d2)
1309
+
1310
+ expr1 = sqrt(x) * exp(-x**2)
1311
+ expr2 = sqrt(u * x) * exp(-x**2)
1312
+ s1 = LineOver1DRangeSeries(im(expr1), (x, -5, 5), adaptive=False, n=10)
1313
+ s2 = LineOver1DRangeSeries(im(expr2), (x, -5, 5),
1314
+ adaptive=False, n=10, params={u: 1})
1315
+ data1 = s1.get_data()
1316
+ data2 = s2.get_data()
1317
+
1318
+ do_test(data1, data2)
1319
+ assert (not np.allclose(data1[1], 0)) and (not np.allclose(data2[1], 0))
1320
+
1321
+ s1 = Parametric2DLineSeries(re(expr1), im(expr1), (x, -pi, pi),
1322
+ adaptive=False, n=10)
1323
+ s2 = Parametric2DLineSeries(re(expr2), im(expr2), (x, -pi, pi),
1324
+ adaptive=False, n=10, params={u: 1})
1325
+ data1 = s1.get_data()
1326
+ data2 = s2.get_data()
1327
+ do_test(data1, data2)
1328
+ assert (not np.allclose(data1[1], 0)) and (not np.allclose(data2[1], 0))
1329
+
1330
+ s1 = SurfaceOver2DRangeSeries(im(expr1), (x, -5, 5), (y, -10, 10),
1331
+ adaptive=False, n1=30, n2=3)
1332
+ s2 = SurfaceOver2DRangeSeries(im(expr2), (x, -5, 5), (y, -10, 10),
1333
+ adaptive=False, n1=30, n2=3, params={u: 1})
1334
+ data1 = s1.get_data()
1335
+ data2 = s2.get_data()
1336
+ do_test(data1, data2)
1337
+ assert (not np.allclose(data1[1], 0)) and (not np.allclose(data2[1], 0))
1338
+
1339
+
1340
+ def test_expr_is_lambda_function():
1341
+ # verify that when a numpy function is provided, the series will be able
1342
+ # to evaluate it. Also, the label should be empty in order to prevent some
1343
+ # backends from crashing.
1344
+ if not np:
1345
+ skip("numpy not installed.")
1346
+
1347
+ f = lambda x: np.cos(x)
1348
+ s1 = LineOver1DRangeSeries(f, ("x", -5, 5), adaptive=True, depth=3)
1349
+ s1.get_data()
1350
+ s2 = LineOver1DRangeSeries(f, ("x", -5, 5), adaptive=False, n=10)
1351
+ s2.get_data()
1352
+ assert s1.label == s2.label == ""
1353
+
1354
+ fx = lambda x: np.cos(x)
1355
+ fy = lambda x: np.sin(x)
1356
+ s1 = Parametric2DLineSeries(fx, fy, ("x", 0, 2*pi),
1357
+ adaptive=True, adaptive_goal=0.1)
1358
+ s1.get_data()
1359
+ s2 = Parametric2DLineSeries(fx, fy, ("x", 0, 2*pi),
1360
+ adaptive=False, n=10)
1361
+ s2.get_data()
1362
+ assert s1.label == s2.label == ""
1363
+
1364
+ fz = lambda x: x
1365
+ s1 = Parametric3DLineSeries(fx, fy, fz, ("x", 0, 2*pi),
1366
+ adaptive=True, adaptive_goal=0.1)
1367
+ s1.get_data()
1368
+ s2 = Parametric3DLineSeries(fx, fy, fz, ("x", 0, 2*pi),
1369
+ adaptive=False, n=10)
1370
+ s2.get_data()
1371
+ assert s1.label == s2.label == ""
1372
+
1373
+ f = lambda x, y: np.cos(x**2 + y**2)
1374
+ s1 = SurfaceOver2DRangeSeries(f, ("a", -2, 2), ("b", -3, 3),
1375
+ adaptive=False, n1=10, n2=10)
1376
+ s1.get_data()
1377
+ s2 = ContourSeries(f, ("a", -2, 2), ("b", -3, 3),
1378
+ adaptive=False, n1=10, n2=10)
1379
+ s2.get_data()
1380
+ assert s1.label == s2.label == ""
1381
+
1382
+ fx = lambda u, v: np.cos(u + v)
1383
+ fy = lambda u, v: np.sin(u - v)
1384
+ fz = lambda u, v: u * v
1385
+ s1 = ParametricSurfaceSeries(fx, fy, fz, ("u", 0, pi), ("v", 0, 2*pi),
1386
+ adaptive=False, n1=10, n2=10)
1387
+ s1.get_data()
1388
+ assert s1.label == ""
1389
+
1390
+ raises(TypeError, lambda: List2DSeries(lambda t: t, lambda t: t))
1391
+ raises(TypeError, lambda : ImplicitSeries(lambda t: np.sin(t),
1392
+ ("x", -5, 5), ("y", -6, 6)))
1393
+
1394
+
1395
+ def test_show_in_legend_lines():
1396
+ # verify that line series correctly set the show_in_legend attribute
1397
+ x, u = symbols("x, u")
1398
+
1399
+ s = LineOver1DRangeSeries(cos(x), (x, -2, 2), "test", show_in_legend=True)
1400
+ assert s.show_in_legend
1401
+ s = LineOver1DRangeSeries(cos(x), (x, -2, 2), "test", show_in_legend=False)
1402
+ assert not s.show_in_legend
1403
+
1404
+ s = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 1), "test",
1405
+ show_in_legend=True)
1406
+ assert s.show_in_legend
1407
+ s = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 1), "test",
1408
+ show_in_legend=False)
1409
+ assert not s.show_in_legend
1410
+
1411
+ s = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 1), "test",
1412
+ show_in_legend=True)
1413
+ assert s.show_in_legend
1414
+ s = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 1), "test",
1415
+ show_in_legend=False)
1416
+ assert not s.show_in_legend
1417
+
1418
+
1419
+ @XFAIL
1420
+ def test_particular_case_1_with_adaptive_true():
1421
+ # Verify that symbolic expressions and numerical lambda functions are
1422
+ # evaluated with the same algorithm.
1423
+ if not np:
1424
+ skip("numpy not installed.")
1425
+
1426
+ # NOTE: xfail because sympy's adaptive algorithm is not deterministic
1427
+
1428
+ def do_test(a, b):
1429
+ with warns(
1430
+ RuntimeWarning,
1431
+ match="invalid value encountered in scalar power",
1432
+ test_stacklevel=False,
1433
+ ):
1434
+ d1 = a.get_data()
1435
+ d2 = b.get_data()
1436
+ for t, v in zip(d1, d2):
1437
+ assert np.allclose(t, v)
1438
+
1439
+ n = symbols("n")
1440
+ a = S(2) / 3
1441
+ epsilon = 0.01
1442
+ xn = (n**3 + n**2)**(S(1)/3) - (n**3 - n**2)**(S(1)/3)
1443
+ expr = Abs(xn - a) - epsilon
1444
+ math_func = lambdify([n], expr)
1445
+ s1 = LineOver1DRangeSeries(expr, (n, -10, 10), "",
1446
+ adaptive=True, depth=3)
1447
+ s2 = LineOver1DRangeSeries(math_func, ("n", -10, 10), "",
1448
+ adaptive=True, depth=3)
1449
+ do_test(s1, s2)
1450
+
1451
+
1452
+ def test_particular_case_1_with_adaptive_false():
1453
+ # Verify that symbolic expressions and numerical lambda functions are
1454
+ # evaluated with the same algorithm. In particular, uniform evaluation
1455
+ # is going to use np.vectorize, which correctly evaluates the following
1456
+ # mathematical function.
1457
+ if not np:
1458
+ skip("numpy not installed.")
1459
+
1460
+ def do_test(a, b):
1461
+ d1 = a.get_data()
1462
+ d2 = b.get_data()
1463
+ for t, v in zip(d1, d2):
1464
+ assert np.allclose(t, v)
1465
+
1466
+ n = symbols("n")
1467
+ a = S(2) / 3
1468
+ epsilon = 0.01
1469
+ xn = (n**3 + n**2)**(S(1)/3) - (n**3 - n**2)**(S(1)/3)
1470
+ expr = Abs(xn - a) - epsilon
1471
+ math_func = lambdify([n], expr)
1472
+
1473
+ s3 = LineOver1DRangeSeries(expr, (n, -10, 10), "",
1474
+ adaptive=False, n=10)
1475
+ s4 = LineOver1DRangeSeries(math_func, ("n", -10, 10), "",
1476
+ adaptive=False, n=10)
1477
+ do_test(s3, s4)
1478
+
1479
+
1480
+ def test_complex_params_number_eval():
1481
+ # The main expression contains terms like sqrt(xi - 1), with
1482
+ # parameter (0 <= xi <= 1).
1483
+ # There shouldn't be any NaN values in the output.
1484
+ if not np:
1485
+ skip("numpy not installed.")
1486
+
1487
+ xi, wn, x0, v0, t = symbols("xi, omega_n, x0, v0, t")
1488
+ x = Function("x")(t)
1489
+ eq = x.diff(t, 2) + 2 * xi * wn * x.diff(t) + wn**2 * x
1490
+ sol = dsolve(eq, x, ics={x.subs(t, 0): x0, x.diff(t).subs(t, 0): v0})
1491
+ params = {
1492
+ wn: 0.5,
1493
+ xi: 0.25,
1494
+ x0: 0.45,
1495
+ v0: 0.0
1496
+ }
1497
+ s = LineOver1DRangeSeries(sol.rhs, (t, 0, 100), adaptive=False, n=5,
1498
+ params=params)
1499
+ x, y = s.get_data()
1500
+ assert not np.isnan(x).any()
1501
+ assert not np.isnan(y).any()
1502
+
1503
+
1504
+ # Fourier Series of a sawtooth wave
1505
+ # The main expression contains a Sum with a symbolic upper range.
1506
+ # The lambdified code looks like:
1507
+ #       sum(blablabla for n in range(1, m+1))
1508
+ # But range requires integer numbers, whereas, per the above example, the series
1509
+ # casts parameters to complex. Verify that the series is able to detect
1510
+ # upper bounds in summations and cast them to int in order to get a successful
1511
+ # evaluation
1512
+ x, T, n, m = symbols("x, T, n, m")
1513
+ fs = S(1) / 2 - (1 / pi) * Sum(sin(2 * n * pi * x / T) / n, (n, 1, m))
1514
+ params = {
1515
+ T: 4.5,
1516
+ m: 5
1517
+ }
1518
+ s = LineOver1DRangeSeries(fs, (x, 0, 10), adaptive=False, n=5,
1519
+ params=params)
1520
+ x, y = s.get_data()
1521
+ assert not np.isnan(x).any()
1522
+ assert not np.isnan(y).any()
1523
+
1524
+
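A standard-library-only sketch of the failure mode described in the comment above: range() rejects a complex upper bound, so the evaluator has to cast it back to int first.

    m = complex(5)                     # parameter cast to complex by the series
    try:
        total = sum(1 / n for n in range(1, m + 1))
    except TypeError:                  # 'complex' object cannot be interpreted as an integer
        total = sum(1 / n for n in range(1, int(m.real) + 1))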
1525
+ def test_complex_range_line_plot_1():
1526
+ # verify that univariate functions are evaluated with a complex
1527
+ # data range (with zero imaginary part). There shouldn't be any
1528
+ # NaN values in the output.
1529
+ if not np:
1530
+ skip("numpy not installed.")
1531
+
1532
+ x, u = symbols("x, u")
1533
+ expr1 = im(sqrt(x) * exp(-x**2))
1534
+ expr2 = im(sqrt(u * x) * exp(-x**2))
1535
+ s1 = LineOver1DRangeSeries(expr1, (x, -10, 10), adaptive=True,
1536
+ adaptive_goal=0.1)
1537
+ s2 = LineOver1DRangeSeries(expr1, (x, -10, 10), adaptive=False, n=30)
1538
+ s3 = LineOver1DRangeSeries(expr2, (x, -10, 10), adaptive=False, n=30,
1539
+ params={u: 1})
1540
+
1541
+ with ignore_warnings(RuntimeWarning):
1542
+ data1 = s1.get_data()
1543
+ data2 = s2.get_data()
1544
+ data3 = s3.get_data()
1545
+
1546
+ assert not np.isnan(data1[1]).any()
1547
+ assert not np.isnan(data2[1]).any()
1548
+ assert not np.isnan(data3[1]).any()
1549
+ assert np.allclose(data2[0], data3[0]) and np.allclose(data2[1], data3[1])
1550
+
1551
+
1552
+ @XFAIL
1553
+ def test_complex_range_line_plot_2():
1554
+ # verify that univariate functions are evaluated with a complex
1555
+ # data range (with non-zero imaginary part). There shouldn't be any
1556
+ # NaN values in the output.
1557
+ if not np:
1558
+ skip("numpy not installed.")
1559
+
1560
+ # NOTE: xfail because sympy's adaptive algorithm is unable to deal with
1561
+ # complex numbers.
1562
+
1563
+ x, u = symbols("x, u")
1564
+
1565
+ # adaptive and uniform meshing should produce the same data.
1566
+ # Because of the adaptive nature, just compare the first and last points
1567
+ # of both series.
1568
+ s1 = LineOver1DRangeSeries(abs(sqrt(x)), (x, -5-2j, 5-2j), adaptive=True)
1569
+ s2 = LineOver1DRangeSeries(abs(sqrt(x)), (x, -5-2j, 5-2j), adaptive=False,
1570
+ n=10)
1571
+ with warns(
1572
+ RuntimeWarning,
1573
+ match="invalid value encountered in sqrt",
1574
+ test_stacklevel=False,
1575
+ ):
1576
+ d1 = s1.get_data()
1577
+ d2 = s2.get_data()
1578
+ xx1 = [d1[0][0], d1[0][-1]]
1579
+ xx2 = [d2[0][0], d2[0][-1]]
1580
+ yy1 = [d1[1][0], d1[1][-1]]
1581
+ yy2 = [d2[1][0], d2[1][-1]]
1582
+ assert np.allclose(xx1, xx2)
1583
+ assert np.allclose(yy1, yy2)
1584
+
1585
+
1586
+ def test_force_real_eval():
1587
+ # verify that force_real_eval=True produces different results when
1588
+ # compared with evaluation over a complex domain.
1589
+ if not np:
1590
+ skip("numpy not installed.")
1591
+
1592
+ x = symbols("x")
1593
+
1594
+ expr = im(sqrt(x) * exp(-x**2))
1595
+ s1 = LineOver1DRangeSeries(expr, (x, -10, 10), adaptive=False, n=10,
1596
+ force_real_eval=False)
1597
+ s2 = LineOver1DRangeSeries(expr, (x, -10, 10), adaptive=False, n=10,
1598
+ force_real_eval=True)
1599
+ d1 = s1.get_data()
1600
+ with ignore_warnings(RuntimeWarning):
1601
+ d2 = s2.get_data()
1602
+ assert not np.allclose(d1[1], 0)
1603
+ assert np.allclose(d2[1], 0)
1604
+
1605
+
1606
+ def test_contour_series_show_clabels():
1607
+ # verify that a contour series has the ability to set the visibility of
1608
+ # contour-line labels
1609
+
1610
+ x, y = symbols("x, y")
1611
+ s = ContourSeries(cos(x*y), (x, -2, 2), (y, -2, 2))
1612
+ assert s.show_clabels
1613
+
1614
+ s = ContourSeries(cos(x*y), (x, -2, 2), (y, -2, 2), clabels=True)
1615
+ assert s.show_clabels
1616
+
1617
+ s = ContourSeries(cos(x*y), (x, -2, 2), (y, -2, 2), clabels=False)
1618
+ assert not s.show_clabels
1619
+
1620
+
1621
+ def test_LineOver1DRangeSeries_complex_range():
1622
+ # verify that LineOver1DRangeSeries can accept a complex range
1623
+ # if the imaginary part of the start and end values are the same
1624
+
1625
+ x = symbols("x")
1626
+
1627
+ LineOver1DRangeSeries(sqrt(x), (x, -10, 10))
1628
+ LineOver1DRangeSeries(sqrt(x), (x, -10-2j, 10-2j))
1629
+ raises(ValueError,
1630
+ lambda : LineOver1DRangeSeries(sqrt(x), (x, -10-2j, 10+2j)))
1631
+
1632
+
1633
+ def test_symbolic_plotting_ranges():
1634
+ # verify that data series can use symbolic plotting ranges
1635
+ if not np:
1636
+ skip("numpy not installed.")
1637
+
1638
+ x, y, z, a, b = symbols("x, y, z, a, b")
1639
+
1640
+ def do_test(s1, s2, new_params):
1641
+ d1 = s1.get_data()
1642
+ d2 = s2.get_data()
1643
+ for u, v in zip(d1, d2):
1644
+ assert np.allclose(u, v)
1645
+ s2.params = new_params
1646
+ d2 = s2.get_data()
1647
+ for u, v in zip(d1, d2):
1648
+ assert not np.allclose(u, v)
1649
+
1650
+ s1 = LineOver1DRangeSeries(sin(x), (x, 0, 1), adaptive=False, n=10)
1651
+ s2 = LineOver1DRangeSeries(sin(x), (x, a, b), params={a: 0, b: 1},
1652
+ adaptive=False, n=10)
1653
+ do_test(s1, s2, {a: 0.5, b: 1.5})
1654
+
1655
+ # missing a parameter
1656
+ raises(ValueError,
1657
+ lambda : LineOver1DRangeSeries(sin(x), (x, a, b), params={a: 1}, n=10))
1658
+
1659
+ s1 = Parametric2DLineSeries(cos(x), sin(x), (x, 0, 1), adaptive=False, n=10)
1660
+ s2 = Parametric2DLineSeries(cos(x), sin(x), (x, a, b), params={a: 0, b: 1},
1661
+ adaptive=False, n=10)
1662
+ do_test(s1, s2, {a: 0.5, b: 1.5})
1663
+
1664
+ # missing a parameter
1665
+ raises(ValueError,
1666
+ lambda : Parametric2DLineSeries(cos(x), sin(x), (x, a, b),
1667
+ params={a: 0}, adaptive=False, n=10))
1668
+
1669
+ s1 = Parametric3DLineSeries(cos(x), sin(x), x, (x, 0, 1),
1670
+ adaptive=False, n=10)
1671
+ s2 = Parametric3DLineSeries(cos(x), sin(x), x, (x, a, b),
1672
+ params={a: 0, b: 1}, adaptive=False, n=10)
1673
+ do_test(s1, s2, {a: 0.5, b: 1.5})
1674
+
1675
+ # missing a parameter
1676
+ raises(ValueError,
1677
+ lambda : Parametric3DLineSeries(cos(x), sin(x), x, (x, a, b),
1678
+ params={a: 0}, adaptive=False, n=10))
1679
+
1680
+ s1 = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -pi, pi), (y, -pi, pi),
1681
+ adaptive=False, n1=5, n2=5)
1682
+ s2 = SurfaceOver2DRangeSeries(cos(x**2 + y**2), (x, -pi * a, pi * a),
1683
+ (y, -pi * b, pi * b), params={a: 1, b: 1},
1684
+ adaptive=False, n1=5, n2=5)
1685
+ do_test(s1, s2, {a: 0.5, b: 1.5})
1686
+
1687
+ # missing a parameter
1688
+ raises(ValueError,
1689
+ lambda : SurfaceOver2DRangeSeries(cos(x**2 + y**2),
1690
+ (x, -pi * a, pi * a), (y, -pi * b, pi * b), params={a: 1},
1691
+ adaptive=False, n1=5, n2=5))
1692
+ # one range's symbol appears in another range's minimum or maximum value
1693
+ raises(ValueError,
1694
+ lambda : SurfaceOver2DRangeSeries(cos(x**2 + y**2),
1695
+ (x, -pi * a + y, pi * a), (y, -pi * b, pi * b), params={a: 1},
1696
+ adaptive=False, n1=5, n2=5))
1697
+
1698
+ s1 = ParametricSurfaceSeries(
1699
+ cos(x - y), sin(x + y), x - y, (x, -2, 2), (y, -2, 2), n1=5, n2=5)
1700
+ s2 = ParametricSurfaceSeries(
1701
+ cos(x - y), sin(x + y), x - y, (x, -2 * a, 2), (y, -2, 2 * b),
1702
+ params={a: 1, b: 1}, n1=5, n2=5)
1703
+ do_test(s1, s2, {a: 0.5, b: 1.5})
1704
+
1705
+ # missing a parameter
1706
+ raises(ValueError,
1707
+ lambda : ParametricSurfaceSeries(
1708
+ cos(x - y), sin(x + y), x - y, (x, -2 * a, 2), (y, -2, 2 * b),
1709
+ params={a: 1}, n1=5, n2=5))
1710
+
1711
+
1712
+ def test_exclude_points():
1713
+ # verify that exclude works as expected
1714
+ if not np:
1715
+ skip("numpy not installed.")
1716
+
1717
+ x = symbols("x")
1718
+
1719
+ expr = (floor(x) + S.Half) / (1 - (x - S.Half)**2)
1720
+
1721
+ with warns(
1722
+ UserWarning,
1723
+ match="NumPy is unable to evaluate with complex numbers some",
1724
+ test_stacklevel=False,
1725
+ ):
1726
+ s = LineOver1DRangeSeries(expr, (x, -3.5, 3.5), adaptive=False, n=100,
1727
+ exclude=list(range(-3, 4)))
1728
+ xx, yy = s.get_data()
1729
+ assert not np.isnan(xx).any()
1730
+ assert np.count_nonzero(np.isnan(yy)) == 7
1731
+ assert len(xx) > 100
1732
+
1733
+ e1 = log(floor(x)) * cos(x)
1734
+ e2 = log(floor(x)) * sin(x)
1735
+ with warns(
1736
+ UserWarning,
1737
+ match="NumPy is unable to evaluate with complex numbers some",
1738
+ test_stacklevel=False,
1739
+ ):
1740
+ s = Parametric2DLineSeries(e1, e2, (x, 1, 12), adaptive=False, n=100,
1741
+ exclude=list(range(1, 13)))
1742
+ xx, yy, pp = s.get_data()
1743
+ assert not np.isnan(pp).any()
1744
+ assert np.count_nonzero(np.isnan(xx)) == 11
1745
+ assert np.count_nonzero(np.isnan(yy)) == 11
1746
+ assert len(xx) > 100
1747
+
1748
+
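A rough sketch of the exclusion mechanism verified above (the real series also inserts extra points around each excluded value, which is why len(xx) > 100): NaN is written at the excluded locations so backends break the line there.

    import numpy as np

    xx = np.linspace(-3.5, 3.5, 71)    # grid containing the integers -3..3
    yy = xx**2
    for p in range(-3, 4):             # excluded x-values
        yy[np.isclose(xx, p)] = np.nan
    assert np.count_nonzero(np.isnan(yy)) == 7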
1749
+ def test_unwrap():
1750
+ # verify that unwrap works as expected
1751
+ if not np:
1752
+ skip("numpy not installed.")
1753
+
1754
+ x, y = symbols("x, y")
1755
+ expr = 1 / (x**3 + 2*x**2 + x)
1756
+ expr = arg(expr.subs(x, I*y*2*pi))
1757
+ s1 = LineOver1DRangeSeries(expr, (y, 1e-05, 1e05), xscale="log",
1758
+ adaptive=False, n=10, unwrap=False)
1759
+ s2 = LineOver1DRangeSeries(expr, (y, 1e-05, 1e05), xscale="log",
1760
+ adaptive=False, n=10, unwrap=True)
1761
+ s3 = LineOver1DRangeSeries(expr, (y, 1e-05, 1e05), xscale="log",
1762
+ adaptive=False, n=10, unwrap={"period": 4})
1763
+ x1, y1 = s1.get_data()
1764
+ x2, y2 = s2.get_data()
1765
+ x3, y3 = s3.get_data()
1766
+ assert np.allclose(x1, x2)
1767
+ # there must not be nan values in the results of these evaluations
1768
+ assert all(not np.isnan(t).any() for t in [y1, y2, y3])
1769
+ assert not np.allclose(y1, y2)
1770
+ assert not np.allclose(y1, y3)
1771
+ assert not np.allclose(y2, y3)
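For reference, a numpy-only sketch of what the unwrap options above do to wrapped phase data (the period keyword needs numpy >= 1.21):

    import numpy as np

    phase = np.angle(np.exp(1j * np.linspace(0, 6 * np.pi, 50)))  # wrapped into (-pi, pi]
    y_default = np.unwrap(phase)            # unwrap=True: remove 2*pi jumps
    y_custom = np.unwrap(phase, period=4)   # unwrap={"period": 4}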
evalkit_internvl/lib/python3.10/site-packages/sympy/plotting/tests/test_utils.py ADDED
@@ -0,0 +1,110 @@
1
+ from pytest import raises
2
+ from sympy import (
3
+ symbols, Expr, Tuple, Integer, cos, solveset, FiniteSet, ImageSet)
4
+ from sympy.plotting.utils import (
5
+ _create_ranges, _plot_sympify, extract_solution)
6
+ from sympy.physics.mechanics import ReferenceFrame, Vector as MechVector
7
+ from sympy.vector import CoordSys3D, Vector
8
+
9
+
10
+ def test_plot_sympify():
11
+ x, y = symbols("x, y")
12
+
13
+ # argument is already sympified
14
+ args = x + y
15
+ r = _plot_sympify(args)
16
+ assert r == args
17
+
18
+ # one argument needs to be sympified
19
+ args = (x + y, 1)
20
+ r = _plot_sympify(args)
21
+ assert isinstance(r, (list, tuple, Tuple)) and len(r) == 2
22
+ assert isinstance(r[0], Expr)
23
+ assert isinstance(r[1], Integer)
24
+
25
+ # string and dict should not be sympified
26
+ args = (x + y, (x, 0, 1), "str", 1, {1: 1, 2: 2.0})
27
+ r = _plot_sympify(args)
28
+ assert isinstance(r, (list, tuple, Tuple)) and len(r) == 5
29
+ assert isinstance(r[0], Expr)
30
+ assert isinstance(r[1], Tuple)
31
+ assert isinstance(r[2], str)
32
+ assert isinstance(r[3], Integer)
33
+ assert isinstance(r[4], dict) and isinstance(r[4][1], int) and isinstance(r[4][2], float)
34
+
35
+ # nested arguments containing strings
36
+ args = ((x + y, (y, 0, 1), "a"), (x + 1, (x, 0, 1), "$f_{1}$"))
37
+ r = _plot_sympify(args)
38
+ assert isinstance(r, (list, tuple, Tuple)) and len(r) == 2
39
+ assert isinstance(r[0], Tuple)
40
+ assert isinstance(r[0][1], Tuple)
41
+ assert isinstance(r[0][1][1], Integer)
42
+ assert isinstance(r[0][2], str)
43
+ assert isinstance(r[1], Tuple)
44
+ assert isinstance(r[1][1], Tuple)
45
+ assert isinstance(r[1][1][1], Integer)
46
+ assert isinstance(r[1][2], str)
47
+
48
+ # vectors from the sympy.physics.vector module are not sympified
49
+ # vectors from sympy.vector are sympified
50
+ # in both cases, no error should be raised
51
+ R = ReferenceFrame("R")
52
+ v1 = 2 * R.x + R.y
53
+ C = CoordSys3D("C")
54
+ v2 = 2 * C.i + C.j
55
+ args = (v1, v2)
56
+ r = _plot_sympify(args)
57
+ assert isinstance(r, (list, tuple, Tuple)) and len(r) == 2
58
+ assert isinstance(v1, MechVector)
59
+ assert isinstance(v2, Vector)
60
+
61
+
62
+ def test_create_ranges():
63
+ x, y = symbols("x, y")
64
+
65
+ # the user doesn't provide any range -> return a default range
66
+ r = _create_ranges({x}, [], 1)
67
+ assert isinstance(r, (list, tuple, Tuple)) and len(r) == 1
68
+ assert isinstance(r[0], (Tuple, tuple))
69
+ assert r[0] == (x, -10, 10)
70
+
71
+ r = _create_ranges({x, y}, [], 2)
72
+ assert isinstance(r, (list, tuple, Tuple)) and len(r) == 2
73
+ assert isinstance(r[0], (Tuple, tuple))
74
+ assert isinstance(r[1], (Tuple, tuple))
75
+ assert r[0] in [(x, -10, 10), (y, -10, 10)]
76
+ assert r[1] in [(y, -10, 10), (x, -10, 10)]
77
+ assert r[0] != r[1]
78
+
79
+ # not enough ranges provided by the user -> create default ranges
80
+ r = _create_ranges(
81
+ {x, y},
82
+ [
83
+ (x, 0, 1),
84
+ ],
85
+ 2,
86
+ )
87
+ assert isinstance(r, (list, tuple, Tuple)) and len(r) == 2
88
+ assert isinstance(r[0], (Tuple, tuple))
89
+ assert isinstance(r[1], (Tuple, tuple))
90
+ assert r[0] in [(x, 0, 1), (y, -10, 10)]
91
+ assert r[1] in [(y, -10, 10), (x, 0, 1)]
92
+ assert r[0] != r[1]
93
+
94
+ # too many free symbols
95
+ raises(ValueError, lambda: _create_ranges({x, y}, [], 1))
96
+ raises(ValueError, lambda: _create_ranges({x, y}, [(x, 0, 5), (y, 0, 1)], 1))
97
+
98
+
99
+ def test_extract_solution():
100
+ x = symbols("x")
101
+
102
+ sol = solveset(cos(10 * x))
103
+ assert sol.has(ImageSet)
104
+ res = extract_solution(sol)
105
+ assert len(res) == 20
106
+ assert isinstance(res, FiniteSet)
107
+
108
+ res = extract_solution(sol, 20)
109
+ assert len(res) == 40
110
+ assert isinstance(res, FiniteSet)
evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/diophantine/__pycache__/diophantine.cpython-310.pyc ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:310b467920b7c47537a5b4cc7be2d95d9bc3e1debd0eb0eec15d1a086c54bff2
3
+ size 106624
evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/ode/tests/__pycache__/test_systems.cpython-310.pyc ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:012aa5a4a636162aecfdaf1098e2074ea9e7136f1d75471f8ef7a0ca7df6e9e8
3
+ size 112044
evalkit_internvl/lib/python3.10/site-packages/sympy/solvers/tests/__pycache__/test_solvers.cpython-310.pyc ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cbf9853e6395e88fe228f678aef9abe73d4c381a412697e313bdfce599db29b7
3
+ size 108394
evalkit_tf437/lib/python3.10/site-packages/pydantic_core/_pydantic_core.cpython-310-x86_64-linux-gnu.so ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f812dd0f40ab6a517a9a5a66ad96c6c0e56f1c9ad7568ad5ba4c9d29a129e42a
3
+ size 4985256
evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/__init__.py ADDED
File without changes
evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (179 Bytes).
 
evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/__pycache__/tempita.cpython-310.pyc ADDED
Binary file (1.62 kB).
 
evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/__pycache__/version.cpython-310.pyc ADDED
Binary file (659 Bytes).
 
evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/tempita.py ADDED
@@ -0,0 +1,60 @@
1
+ # Authors: The scikit-learn developers
2
+ # SPDX-License-Identifier: BSD-3-Clause
3
+
4
+ import argparse
5
+ import os
6
+
7
+ from Cython import Tempita as tempita
8
+
9
+ # XXX: If this import ever fails (does it really?), vendor either
10
+ # cython.tempita or numpy/npy_tempita.
11
+
12
+
13
+ def process_tempita(fromfile, outfile=None):
14
+ """Process tempita templated file and write out the result.
15
+
16
+ The template file is expected to end in `.c.tp` or `.pyx.tp`:
17
+ E.g. processing `template.c.tp` generates `template.c`.
18
+
19
+ """
20
+ with open(fromfile, "r", encoding="utf-8") as f:
21
+ template_content = f.read()
22
+
23
+ template = tempita.Template(template_content)
24
+ content = template.substitute()
25
+
26
+ with open(outfile, "w", encoding="utf-8") as f:
27
+ f.write(content)
28
+
29
+
30
+ def main():
31
+ parser = argparse.ArgumentParser()
32
+ parser.add_argument("infile", type=str, help="Path to the input file")
33
+ parser.add_argument("-o", "--outdir", type=str, help="Path to the output directory")
34
+ parser.add_argument(
35
+ "-i",
36
+ "--ignore",
37
+ type=str,
38
+ help=(
39
+ "An ignored input - may be useful to add a "
40
+ "dependency between custom targets"
41
+ ),
42
+ )
43
+ args = parser.parse_args()
44
+
45
+ if not args.infile.endswith(".tp"):
46
+ raise ValueError(f"Unexpected extension: {args.infile}")
47
+
48
+ if not args.outdir:
49
+ raise ValueError("Missing `--outdir` argument to tempita.py")
50
+
51
+ outdir_abs = os.path.join(os.getcwd(), args.outdir)
52
+ outfile = os.path.join(
53
+ outdir_abs, os.path.splitext(os.path.split(args.infile)[1])[0]
54
+ )
55
+
56
+ process_tempita(args.infile, outfile)
57
+
58
+
59
+ if __name__ == "__main__":
60
+ main()
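A hypothetical invocation of the script above (the template path is illustrative only):

    # python tempita.py some_module.pyx.tp -o build/
    # renders build/some_module.pyx from the Tempita template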
evalkit_tf437/lib/python3.10/site-packages/sklearn/_build_utils/version.py ADDED
@@ -0,0 +1,16 @@
1
+ #!/usr/bin/env python3
2
+ """Extract version number from __init__.py"""
3
+
4
+ # Authors: The scikit-learn developers
5
+ # SPDX-License-Identifier: BSD-3-Clause
6
+
7
+ import os
8
+
9
+ sklearn_init = os.path.join(os.path.dirname(__file__), "../__init__.py")
10
+
11
+ data = open(sklearn_init).readlines()
12
+ version_line = next(line for line in data if line.startswith("__version__"))
13
+
14
+ version = version_line.strip().split(" = ")[1].replace('"', "").replace("'", "")
15
+
16
+ print(version)
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__init__.py ADDED
@@ -0,0 +1,46 @@
1
+ """Methods and algorithms to robustly estimate covariance.
2
+
3
+ They estimate the covariance of features at given sets of points, as well as the
4
+ precision matrix defined as the inverse of the covariance. Covariance estimation is
5
+ closely related to the theory of Gaussian graphical models.
6
+ """
7
+
8
+ # Authors: The scikit-learn developers
9
+ # SPDX-License-Identifier: BSD-3-Clause
10
+
11
+ from ._elliptic_envelope import EllipticEnvelope
12
+ from ._empirical_covariance import (
13
+ EmpiricalCovariance,
14
+ empirical_covariance,
15
+ log_likelihood,
16
+ )
17
+ from ._graph_lasso import GraphicalLasso, GraphicalLassoCV, graphical_lasso
18
+ from ._robust_covariance import MinCovDet, fast_mcd
19
+ from ._shrunk_covariance import (
20
+ OAS,
21
+ LedoitWolf,
22
+ ShrunkCovariance,
23
+ ledoit_wolf,
24
+ ledoit_wolf_shrinkage,
25
+ oas,
26
+ shrunk_covariance,
27
+ )
28
+
29
+ __all__ = [
30
+ "EllipticEnvelope",
31
+ "EmpiricalCovariance",
32
+ "GraphicalLasso",
33
+ "GraphicalLassoCV",
34
+ "LedoitWolf",
35
+ "MinCovDet",
36
+ "OAS",
37
+ "ShrunkCovariance",
38
+ "empirical_covariance",
39
+ "fast_mcd",
40
+ "graphical_lasso",
41
+ "ledoit_wolf",
42
+ "ledoit_wolf_shrinkage",
43
+ "log_likelihood",
44
+ "oas",
45
+ "shrunk_covariance",
46
+ ]
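A minimal usage sketch of two of the estimators re-exported above (standard scikit-learn API, synthetic data):

    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, LedoitWolf

    rng = np.random.RandomState(0)
    X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=500)
    emp = EmpiricalCovariance().fit(X)   # maximum-likelihood estimate
    lw = LedoitWolf().fit(X)             # shrunk, better-conditioned estimate
    print(emp.covariance_)
    print(lw.covariance_)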
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (1.13 kB).
 
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/_elliptic_envelope.cpython-310.pyc ADDED
Binary file (9.55 kB).
 
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/_empirical_covariance.cpython-310.pyc ADDED
Binary file (11.6 kB).
 
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/_graph_lasso.cpython-310.pyc ADDED
Binary file (31.5 kB).
 
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/__pycache__/_robust_covariance.cpython-310.pyc ADDED
Binary file (24.3 kB).
 
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_elliptic_envelope.py ADDED
@@ -0,0 +1,266 @@
1
+ # Authors: The scikit-learn developers
2
+ # SPDX-License-Identifier: BSD-3-Clause
3
+
4
+ from numbers import Real
5
+
6
+ import numpy as np
7
+
8
+ from ..base import OutlierMixin, _fit_context
9
+ from ..metrics import accuracy_score
10
+ from ..utils._param_validation import Interval
11
+ from ..utils.validation import check_is_fitted
12
+ from ._robust_covariance import MinCovDet
13
+
14
+
15
+ class EllipticEnvelope(OutlierMixin, MinCovDet):
16
+ """An object for detecting outliers in a Gaussian distributed dataset.
17
+
18
+ Read more in the :ref:`User Guide <outlier_detection>`.
19
+
20
+ Parameters
21
+ ----------
22
+ store_precision : bool, default=True
23
+ Specify if the estimated precision is stored.
24
+
25
+ assume_centered : bool, default=False
26
+ If True, the support of robust location and covariance estimates
27
+ is computed, and a covariance estimate is recomputed from it,
28
+ without centering the data.
29
+ Useful to work with data whose mean is significantly equal to
30
+ zero but is not exactly zero.
31
+ If False, the robust location and covariance are directly computed
32
+ with the FastMCD algorithm without additional treatment.
33
+
34
+ support_fraction : float, default=None
35
+ The proportion of points to be included in the support of the raw
36
+ MCD estimate. If None, the minimum value of support_fraction will
37
+ be used within the algorithm: `(n_samples + n_features + 1) / 2 * n_samples`.
38
+ Range is (0, 1).
39
+
40
+ contamination : float, default=0.1
41
+ The amount of contamination of the data set, i.e. the proportion
42
+ of outliers in the data set. Range is (0, 0.5].
43
+
44
+ random_state : int, RandomState instance or None, default=None
45
+ Determines the pseudo random number generator for shuffling
46
+ the data. Pass an int for reproducible results across multiple function
47
+ calls. See :term:`Glossary <random_state>`.
48
+
49
+ Attributes
50
+ ----------
51
+ location_ : ndarray of shape (n_features,)
52
+ Estimated robust location.
53
+
54
+ covariance_ : ndarray of shape (n_features, n_features)
55
+ Estimated robust covariance matrix.
56
+
57
+ precision_ : ndarray of shape (n_features, n_features)
58
+ Estimated pseudo inverse matrix.
59
+ (stored only if store_precision is True)
60
+
61
+ support_ : ndarray of shape (n_samples,)
62
+ A mask of the observations that have been used to compute the
63
+ robust estimates of location and shape.
64
+
65
+ offset_ : float
66
+ Offset used to define the decision function from the raw scores.
67
+ We have the relation: ``decision_function = score_samples - offset_``.
68
+ The offset depends on the contamination parameter and is defined in
69
+ such a way we obtain the expected number of outliers (samples with
70
+ decision function < 0) in training.
71
+
72
+ .. versionadded:: 0.20
73
+
74
+ raw_location_ : ndarray of shape (n_features,)
75
+ The raw robust estimated location before correction and re-weighting.
76
+
77
+ raw_covariance_ : ndarray of shape (n_features, n_features)
78
+ The raw robust estimated covariance before correction and re-weighting.
79
+
80
+ raw_support_ : ndarray of shape (n_samples,)
81
+ A mask of the observations that have been used to compute
82
+ the raw robust estimates of location and shape, before correction
83
+ and re-weighting.
84
+
85
+ dist_ : ndarray of shape (n_samples,)
86
+ Mahalanobis distances of the training set (on which :meth:`fit` is
87
+ called) observations.
88
+
89
+ n_features_in_ : int
90
+ Number of features seen during :term:`fit`.
91
+
92
+ .. versionadded:: 0.24
93
+
94
+ feature_names_in_ : ndarray of shape (`n_features_in_`,)
95
+ Names of features seen during :term:`fit`. Defined only when `X`
96
+ has feature names that are all strings.
97
+
98
+ .. versionadded:: 1.0
99
+
100
+ See Also
101
+ --------
102
+ EmpiricalCovariance : Maximum likelihood covariance estimator.
103
+ GraphicalLasso : Sparse inverse covariance estimation
104
+ with an l1-penalized estimator.
105
+ LedoitWolf : LedoitWolf Estimator.
106
+ MinCovDet : Minimum Covariance Determinant
107
+ (robust estimator of covariance).
108
+ OAS : Oracle Approximating Shrinkage Estimator.
109
+ ShrunkCovariance : Covariance estimator with shrinkage.
110
+
111
+ Notes
112
+ -----
113
+ Outlier detection from covariance estimation may break or not
114
+ perform well in high-dimensional settings. In particular, one will
115
+ always take care to work with ``n_samples > n_features ** 2``.
116
+
117
+ References
118
+ ----------
119
+ .. [1] Rousseeuw, P.J., Van Driessen, K. "A fast algorithm for the
120
+ minimum covariance determinant estimator" Technometrics 41(3), 212
121
+ (1999)
122
+
123
+ Examples
124
+ --------
125
+ >>> import numpy as np
126
+ >>> from sklearn.covariance import EllipticEnvelope
127
+ >>> true_cov = np.array([[.8, .3],
128
+ ... [.3, .4]])
129
+ >>> X = np.random.RandomState(0).multivariate_normal(mean=[0, 0],
130
+ ... cov=true_cov,
131
+ ... size=500)
132
+ >>> cov = EllipticEnvelope(random_state=0).fit(X)
133
+ >>> # predict returns 1 for an inlier and -1 for an outlier
134
+ >>> cov.predict([[0, 0],
135
+ ... [3, 3]])
136
+ array([ 1, -1])
137
+ >>> cov.covariance_
138
+ array([[0.7411..., 0.2535...],
139
+ [0.2535..., 0.3053...]])
140
+ >>> cov.location_
141
+ array([0.0813... , 0.0427...])
142
+ """
143
+
144
+ _parameter_constraints: dict = {
145
+ **MinCovDet._parameter_constraints,
146
+ "contamination": [Interval(Real, 0, 0.5, closed="right")],
147
+ }
148
+
149
+ def __init__(
150
+ self,
151
+ *,
152
+ store_precision=True,
153
+ assume_centered=False,
154
+ support_fraction=None,
155
+ contamination=0.1,
156
+ random_state=None,
157
+ ):
158
+ super().__init__(
159
+ store_precision=store_precision,
160
+ assume_centered=assume_centered,
161
+ support_fraction=support_fraction,
162
+ random_state=random_state,
163
+ )
164
+ self.contamination = contamination
165
+
166
+ @_fit_context(prefer_skip_nested_validation=True)
167
+ def fit(self, X, y=None):
168
+ """Fit the EllipticEnvelope model.
169
+
170
+ Parameters
171
+ ----------
172
+ X : array-like of shape (n_samples, n_features)
173
+ Training data.
174
+
175
+ y : Ignored
176
+ Not used, present for API consistency by convention.
177
+
178
+ Returns
179
+ -------
180
+ self : object
181
+ Returns the instance itself.
182
+ """
183
+ super().fit(X)
184
+ self.offset_ = np.percentile(-self.dist_, 100.0 * self.contamination)
185
+ return self
186
+
187
+ def decision_function(self, X):
188
+ """Compute the decision function of the given observations.
189
+
190
+ Parameters
191
+ ----------
192
+ X : array-like of shape (n_samples, n_features)
193
+ The data matrix.
194
+
195
+ Returns
196
+ -------
197
+ decision : ndarray of shape (n_samples,)
198
+ Decision function of the samples.
199
+ It is equal to the shifted Mahalanobis distances.
200
+ The threshold for being an outlier is 0, which ensures a
201
+ compatibility with other outlier detection algorithms.
202
+ """
203
+ check_is_fitted(self)
204
+ negative_mahal_dist = self.score_samples(X)
205
+ return negative_mahal_dist - self.offset_
206
+
207
+ def score_samples(self, X):
208
+ """Compute the negative Mahalanobis distances.
209
+
210
+ Parameters
211
+ ----------
212
+ X : array-like of shape (n_samples, n_features)
213
+ The data matrix.
214
+
215
+ Returns
216
+ -------
217
+ negative_mahal_distances : array-like of shape (n_samples,)
218
+ Opposite of the Mahalanobis distances.
219
+ """
220
+ check_is_fitted(self)
221
+ return -self.mahalanobis(X)
222
+
223
+ def predict(self, X):
224
+ """
225
+ Predict labels (1 inlier, -1 outlier) of X according to fitted model.
226
+
227
+ Parameters
228
+ ----------
229
+ X : array-like of shape (n_samples, n_features)
230
+ The data matrix.
231
+
232
+ Returns
233
+ -------
234
+ is_inlier : ndarray of shape (n_samples,)
235
+ Returns -1 for anomalies/outliers and +1 for inliers.
236
+ """
237
+ values = self.decision_function(X)
238
+ is_inlier = np.full(values.shape[0], -1, dtype=int)
239
+ is_inlier[values >= 0] = 1
240
+
241
+ return is_inlier
242
+
243
+ def score(self, X, y, sample_weight=None):
244
+ """Return the mean accuracy on the given test data and labels.
245
+
246
+ In multi-label classification, this is the subset accuracy
247
+ which is a harsh metric since you require for each sample that
248
+ each label set be correctly predicted.
249
+
250
+ Parameters
251
+ ----------
252
+ X : array-like of shape (n_samples, n_features)
253
+ Test samples.
254
+
255
+ y : array-like of shape (n_samples,) or (n_samples, n_outputs)
256
+ True labels for X.
257
+
258
+ sample_weight : array-like of shape (n_samples,), default=None
259
+ Sample weights.
260
+
261
+ Returns
262
+ -------
263
+ score : float
264
+ Mean accuracy of self.predict(X) w.r.t. y.
265
+ """
266
+ return accuracy_score(y, self.predict(X), sample_weight=sample_weight)
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_empirical_covariance.py ADDED
@@ -0,0 +1,367 @@
1
+ """
2
+ Maximum likelihood covariance estimator.
3
+
4
+ """
5
+
6
+ # Authors: The scikit-learn developers
7
+ # SPDX-License-Identifier: BSD-3-Clause
8
+
9
+ # avoid division truncation
10
+ import warnings
11
+
12
+ import numpy as np
13
+ from scipy import linalg
14
+
15
+ from sklearn.utils import metadata_routing
16
+
17
+ from .. import config_context
18
+ from ..base import BaseEstimator, _fit_context
19
+ from ..metrics.pairwise import pairwise_distances
20
+ from ..utils import check_array
21
+ from ..utils._param_validation import validate_params
22
+ from ..utils.extmath import fast_logdet
23
+ from ..utils.validation import validate_data
24
+
25
+
26
+ @validate_params(
27
+ {
28
+ "emp_cov": [np.ndarray],
29
+ "precision": [np.ndarray],
30
+ },
31
+ prefer_skip_nested_validation=True,
32
+ )
33
+ def log_likelihood(emp_cov, precision):
34
+ """Compute the sample mean of the log_likelihood under a covariance model.
35
+
36
+ Computes the empirical expected log-likelihood, allowing for universal
37
+ comparison (beyond this software package), and accounts for normalization
38
+ terms and scaling.
39
+
40
+ Parameters
41
+ ----------
42
+ emp_cov : ndarray of shape (n_features, n_features)
43
+ Maximum Likelihood Estimator of covariance.
44
+
45
+ precision : ndarray of shape (n_features, n_features)
46
+ The precision matrix of the covariance model to be tested.
47
+
48
+ Returns
49
+ -------
50
+ log_likelihood_ : float
51
+ Sample mean of the log-likelihood.
52
+ """
53
+ p = precision.shape[0]
54
+ log_likelihood_ = -np.sum(emp_cov * precision) + fast_logdet(precision)
55
+ log_likelihood_ -= p * np.log(2 * np.pi)
56
+ log_likelihood_ /= 2.0
57
+ return log_likelihood_
58
+
59
+
60
+ @validate_params(
61
+ {
62
+ "X": ["array-like"],
63
+ "assume_centered": ["boolean"],
64
+ },
65
+ prefer_skip_nested_validation=True,
66
+ )
67
+ def empirical_covariance(X, *, assume_centered=False):
68
+ """Compute the Maximum likelihood covariance estimator.
69
+
70
+ Parameters
71
+ ----------
72
+ X : ndarray of shape (n_samples, n_features)
73
+ Data from which to compute the covariance estimate.
74
+
75
+ assume_centered : bool, default=False
76
+ If `True`, data will not be centered before computation.
77
+ Useful when working with data whose mean is almost, but not exactly
78
+ zero.
79
+ If `False`, data will be centered before computation.
80
+
81
+ Returns
82
+ -------
83
+ covariance : ndarray of shape (n_features, n_features)
84
+ Empirical covariance (Maximum Likelihood Estimator).
85
+
86
+ Examples
87
+ --------
88
+ >>> from sklearn.covariance import empirical_covariance
89
+ >>> X = [[1,1,1],[1,1,1],[1,1,1],
90
+ ... [0,0,0],[0,0,0],[0,0,0]]
91
+ >>> empirical_covariance(X)
92
+ array([[0.25, 0.25, 0.25],
93
+ [0.25, 0.25, 0.25],
94
+ [0.25, 0.25, 0.25]])
95
+ """
96
+ X = check_array(X, ensure_2d=False, ensure_all_finite=False)
97
+
98
+ if X.ndim == 1:
99
+ X = np.reshape(X, (1, -1))
100
+
101
+ if X.shape[0] == 1:
102
+ warnings.warn(
103
+ "Only one sample available. You may want to reshape your data array"
104
+ )
105
+
106
+ if assume_centered:
107
+ covariance = np.dot(X.T, X) / X.shape[0]
108
+ else:
109
+ covariance = np.cov(X.T, bias=1)
110
+
111
+ if covariance.ndim == 0:
112
+ covariance = np.array([[covariance]])
113
+ return covariance
114
+
115
+
116
+ class EmpiricalCovariance(BaseEstimator):
117
+ """Maximum likelihood covariance estimator.
118
+
119
+ Read more in the :ref:`User Guide <covariance>`.
120
+
121
+ Parameters
122
+ ----------
123
+ store_precision : bool, default=True
124
+ Specifies if the estimated precision is stored.
125
+
126
+ assume_centered : bool, default=False
127
+ If True, data are not centered before computation.
128
+ Useful when working with data whose mean is almost, but not exactly
129
+ zero.
130
+ If False (default), data are centered before computation.
131
+
132
+ Attributes
133
+ ----------
134
+ location_ : ndarray of shape (n_features,)
135
+ Estimated location, i.e. the estimated mean.
136
+
137
+ covariance_ : ndarray of shape (n_features, n_features)
138
+ Estimated covariance matrix
139
+
140
+ precision_ : ndarray of shape (n_features, n_features)
141
+ Estimated pseudo-inverse matrix.
142
+ (stored only if store_precision is True)
143
+
144
+ n_features_in_ : int
145
+ Number of features seen during :term:`fit`.
146
+
147
+ .. versionadded:: 0.24
148
+
149
+ feature_names_in_ : ndarray of shape (`n_features_in_`,)
150
+ Names of features seen during :term:`fit`. Defined only when `X`
151
+ has feature names that are all strings.
152
+
153
+ .. versionadded:: 1.0
154
+
155
+ See Also
156
+ --------
157
+ EllipticEnvelope : An object for detecting outliers in
158
+ a Gaussian distributed dataset.
159
+ GraphicalLasso : Sparse inverse covariance estimation
160
+ with an l1-penalized estimator.
161
+ LedoitWolf : LedoitWolf Estimator.
162
+ MinCovDet : Minimum Covariance Determinant
163
+ (robust estimator of covariance).
164
+ OAS : Oracle Approximating Shrinkage Estimator.
165
+ ShrunkCovariance : Covariance estimator with shrinkage.
166
+
167
+ Examples
168
+ --------
169
+ >>> import numpy as np
170
+ >>> from sklearn.covariance import EmpiricalCovariance
171
+ >>> from sklearn.datasets import make_gaussian_quantiles
172
+ >>> real_cov = np.array([[.8, .3],
173
+ ... [.3, .4]])
174
+ >>> rng = np.random.RandomState(0)
175
+ >>> X = rng.multivariate_normal(mean=[0, 0],
176
+ ... cov=real_cov,
177
+ ... size=500)
178
+ >>> cov = EmpiricalCovariance().fit(X)
179
+ >>> cov.covariance_
180
+ array([[0.7569..., 0.2818...],
181
+ [0.2818..., 0.3928...]])
182
+ >>> cov.location_
183
+ array([0.0622..., 0.0193...])
184
+ """
185
+
186
+ # X_test should have been called X
187
+ __metadata_request__score = {"X_test": metadata_routing.UNUSED}
188
+
189
+ _parameter_constraints: dict = {
190
+ "store_precision": ["boolean"],
191
+ "assume_centered": ["boolean"],
192
+ }
193
+
194
+ def __init__(self, *, store_precision=True, assume_centered=False):
195
+ self.store_precision = store_precision
196
+ self.assume_centered = assume_centered
197
+
198
+ def _set_covariance(self, covariance):
199
+ """Saves the covariance and precision estimates
200
+
201
+ Storage is done accordingly to `self.store_precision`.
202
+ Precision stored only if invertible.
203
+
204
+ Parameters
205
+ ----------
206
+ covariance : array-like of shape (n_features, n_features)
207
+ Estimated covariance matrix to be stored, and from which precision
208
+ is computed.
209
+ """
210
+ covariance = check_array(covariance)
211
+ # set covariance
212
+ self.covariance_ = covariance
213
+ # set precision
214
+ if self.store_precision:
215
+ self.precision_ = linalg.pinvh(covariance, check_finite=False)
216
+ else:
217
+ self.precision_ = None
218
+
219
+ def get_precision(self):
220
+ """Getter for the precision matrix.
221
+
222
+ Returns
223
+ -------
224
+ precision_ : array-like of shape (n_features, n_features)
225
+ The precision matrix associated to the current covariance object.
226
+ """
227
+ if self.store_precision:
228
+ precision = self.precision_
229
+ else:
230
+ precision = linalg.pinvh(self.covariance_, check_finite=False)
231
+ return precision
232
+
233
+ @_fit_context(prefer_skip_nested_validation=True)
234
+ def fit(self, X, y=None):
235
+ """Fit the maximum likelihood covariance estimator to X.
236
+
237
+ Parameters
238
+ ----------
239
+ X : array-like of shape (n_samples, n_features)
240
+ Training data, where `n_samples` is the number of samples and
241
+ `n_features` is the number of features.
242
+
243
+ y : Ignored
244
+ Not used, present for API consistency by convention.
245
+
246
+ Returns
247
+ -------
248
+ self : object
249
+ Returns the instance itself.
250
+ """
251
+ X = validate_data(self, X)
252
+ if self.assume_centered:
253
+ self.location_ = np.zeros(X.shape[1])
254
+ else:
255
+ self.location_ = X.mean(0)
256
+ covariance = empirical_covariance(X, assume_centered=self.assume_centered)
257
+ self._set_covariance(covariance)
258
+
259
+ return self
260
+
261
+ def score(self, X_test, y=None):
262
+ """Compute the log-likelihood of `X_test` under the estimated Gaussian model.
263
+
264
+ The Gaussian model is defined by its mean and covariance matrix which are
265
+ represented respectively by `self.location_` and `self.covariance_`.
266
+
267
+ Parameters
268
+ ----------
269
+ X_test : array-like of shape (n_samples, n_features)
270
+ Test data of which we compute the likelihood, where `n_samples` is
271
+ the number of samples and `n_features` is the number of features.
272
+ `X_test` is assumed to be drawn from the same distribution than
273
+ the data used in fit (including centering).
274
+
275
+ y : Ignored
276
+ Not used, present for API consistency by convention.
277
+
278
+ Returns
279
+ -------
280
+ res : float
281
+ The log-likelihood of `X_test` with `self.location_` and `self.covariance_`
282
+ as estimators of the Gaussian model mean and covariance matrix respectively.
283
+ """
284
+ X_test = validate_data(self, X_test, reset=False)
285
+ # compute empirical covariance of the test set
286
+ test_cov = empirical_covariance(X_test - self.location_, assume_centered=True)
287
+ # compute log likelihood
288
+ res = log_likelihood(test_cov, self.get_precision())
289
+
290
+ return res
291
+
292
+ def error_norm(self, comp_cov, norm="frobenius", scaling=True, squared=True):
293
+ """Compute the Mean Squared Error between two covariance estimators.
294
+
295
+ Parameters
296
+ ----------
297
+ comp_cov : array-like of shape (n_features, n_features)
298
+ The covariance to compare with.
299
+
300
+ norm : {"frobenius", "spectral"}, default="frobenius"
301
+ The type of norm used to compute the error. Available error types:
302
+ - 'frobenius' (default): sqrt(tr(A^t.A))
303
+ - 'spectral': sqrt(max(eigenvalues(A^t.A))
304
+ where A is the error ``(comp_cov - self.covariance_)``.
305
+
306
+ scaling : bool, default=True
307
+ If True (default), the squared error norm is divided by n_features.
308
+ If False, the squared error norm is not rescaled.
309
+
310
+ squared : bool, default=True
311
+ Whether to compute the squared error norm or the error norm.
312
+ If True (default), the squared error norm is returned.
313
+ If False, the error norm is returned.
314
+
315
+ Returns
316
+ -------
317
+ result : float
318
+ The Mean Squared Error (in the sense of the Frobenius norm) between
319
+ `self` and `comp_cov` covariance estimators.
320
+ """
321
+ # compute the error
322
+ error = comp_cov - self.covariance_
323
+ # compute the error norm
324
+ if norm == "frobenius":
325
+ squared_norm = np.sum(error**2)
326
+ elif norm == "spectral":
327
+ squared_norm = np.amax(linalg.svdvals(np.dot(error.T, error)))
328
+ else:
329
+ raise NotImplementedError(
330
+ "Only spectral and frobenius norms are implemented"
331
+ )
332
+ # optionally scale the error norm
333
+ if scaling:
334
+ squared_norm = squared_norm / error.shape[0]
335
+ # finally get either the squared norm or the norm
336
+ if squared:
337
+ result = squared_norm
338
+ else:
339
+ result = np.sqrt(squared_norm)
340
+
341
+ return result
342
+
343
+ def mahalanobis(self, X):
344
+ """Compute the squared Mahalanobis distances of given observations.
345
+
346
+ Parameters
347
+ ----------
348
+ X : array-like of shape (n_samples, n_features)
349
+ The observations, the Mahalanobis distances of the which we
350
+ compute. Observations are assumed to be drawn from the same
351
+ distribution than the data used in fit.
352
+
353
+ Returns
354
+ -------
355
+ dist : ndarray of shape (n_samples,)
356
+ Squared Mahalanobis distances of the observations.
357
+ """
358
+ X = validate_data(self, X, reset=False)
359
+
360
+ precision = self.get_precision()
361
+ with config_context(assume_finite=True):
362
+ # compute mahalanobis distances
363
+ dist = pairwise_distances(
364
+ X, self.location_[np.newaxis, :], metric="mahalanobis", VI=precision
365
+ )
366
+
367
+ return np.reshape(dist, (len(X),)) ** 2
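A short check of the squared Mahalanobis distance returned above against its textbook definition d^2(x) = (x - mu)^T * precision * (x - mu):

    import numpy as np
    from sklearn.covariance import EmpiricalCovariance

    rng = np.random.RandomState(0)
    X = rng.randn(200, 2)
    cov = EmpiricalCovariance().fit(X)
    x0 = X[:1]
    diff = (x0 - cov.location_).ravel()
    assert np.isclose(cov.mahalanobis(x0)[0], diff @ cov.get_precision() @ diff)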
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_graph_lasso.py ADDED
@@ -0,0 +1,1140 @@
1
+ """GraphicalLasso: sparse inverse covariance estimation with an l1-penalized
2
+ estimator.
3
+ """
4
+
5
+ # Authors: The scikit-learn developers
6
+ # SPDX-License-Identifier: BSD-3-Clause
7
+
8
+ import operator
9
+ import sys
10
+ import time
11
+ import warnings
12
+ from numbers import Integral, Real
13
+
14
+ import numpy as np
15
+ from scipy import linalg
16
+
17
+ from ..base import _fit_context
18
+ from ..exceptions import ConvergenceWarning
19
+
20
+ # mypy error: Module 'sklearn.linear_model' has no attribute '_cd_fast'
21
+ from ..linear_model import _cd_fast as cd_fast # type: ignore
22
+ from ..linear_model import lars_path_gram
23
+ from ..model_selection import check_cv, cross_val_score
24
+ from ..utils import Bunch
25
+ from ..utils._param_validation import Interval, StrOptions, validate_params
26
+ from ..utils.metadata_routing import (
27
+ MetadataRouter,
28
+ MethodMapping,
29
+ _raise_for_params,
30
+ _routing_enabled,
31
+ process_routing,
32
+ )
33
+ from ..utils.parallel import Parallel, delayed
34
+ from ..utils.validation import (
35
+ _is_arraylike_not_scalar,
36
+ check_random_state,
37
+ check_scalar,
38
+ validate_data,
39
+ )
40
+ from . import EmpiricalCovariance, empirical_covariance, log_likelihood
41
+
42
+
43
+ # Helper functions to compute the objective and dual objective functions
44
+ # of the l1-penalized estimator
45
+ def _objective(mle, precision_, alpha):
46
+ """Evaluation of the graphical-lasso objective function
47
+
48
+ The objective function is made of a shifted, scaled version of the
49
+ normalized log-likelihood (i.e. its empirical mean over the samples) and a
50
+ penalisation term to promote sparsity.
51
+ """
52
+ p = precision_.shape[0]
53
+ cost = -2.0 * log_likelihood(mle, precision_) + p * np.log(2 * np.pi)
54
+ cost += alpha * (np.abs(precision_).sum() - np.abs(np.diag(precision_)).sum())
55
+ return cost
56
+
57
+
58
+ def _dual_gap(emp_cov, precision_, alpha):
59
+ """Expression of the dual gap convergence criterion
60
+
61
+ The specific definition is given in Duchi "Projected Subgradient Methods
62
+ for Learning Sparse Gaussians".
63
+ """
64
+ gap = np.sum(emp_cov * precision_)
65
+ gap -= precision_.shape[0]
66
+ gap += alpha * (np.abs(precision_).sum() - np.abs(np.diag(precision_)).sum())
67
+ return gap
68
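# A rough numerical check of the two criteria above: after `graphical_lasso`
# converges, re-evaluating the dual gap expression on the returned precision
# should give a value below the default tol=1e-4 (illustrative data only):
import numpy as np
from sklearn.covariance import empirical_covariance, graphical_lasso

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
emp_cov = empirical_covariance(X)
alpha = 0.2
_, prec = graphical_lasso(emp_cov, alpha=alpha)
gap = np.sum(emp_cov * prec) - prec.shape[0]
gap += alpha * (np.abs(prec).sum() - np.abs(np.diag(prec)).sum())
print(abs(gap))  # expected to be below tol at convergence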
+
69
+
70
+ # The g-lasso algorithm
71
+ def _graphical_lasso(
72
+ emp_cov,
73
+ alpha,
74
+ *,
75
+ cov_init=None,
76
+ mode="cd",
77
+ tol=1e-4,
78
+ enet_tol=1e-4,
79
+ max_iter=100,
80
+ verbose=False,
81
+ eps=np.finfo(np.float64).eps,
82
+ ):
83
+ _, n_features = emp_cov.shape
84
+ if alpha == 0:
85
+ # Early return without regularization
86
+ precision_ = linalg.inv(emp_cov)
87
+ cost = -2.0 * log_likelihood(emp_cov, precision_)
88
+ cost += n_features * np.log(2 * np.pi)
89
+ d_gap = np.sum(emp_cov * precision_) - n_features
90
+ return emp_cov, precision_, (cost, d_gap), 0
91
+
92
+ if cov_init is None:
93
+ covariance_ = emp_cov.copy()
94
+ else:
95
+ covariance_ = cov_init.copy()
96
+ # As a trivial regularization (Tikhonov like), we scale down the
97
+ # off-diagonal coefficients of our starting point: This is needed, as
98
+ # in the cross-validation the cov_init can easily be
99
+ # ill-conditioned, and the CV loop blows up. Besides, this takes a
101
+ # conservative standpoint on the initial conditions, and it tends to
101
+ # make the convergence go faster.
102
+ covariance_ *= 0.95
103
+ diagonal = emp_cov.flat[:: n_features + 1]
104
+ covariance_.flat[:: n_features + 1] = diagonal
105
+ precision_ = linalg.pinvh(covariance_)
106
+
107
+ indices = np.arange(n_features)
108
+ i = 0 # initialize the counter to be robust to `max_iter=0`
109
+ costs = list()
110
+ # The different l1 regression solvers have different numerical errors
111
+ if mode == "cd":
112
+ errors = dict(over="raise", invalid="ignore")
113
+ else:
114
+ errors = dict(invalid="raise")
115
+ try:
116
+ # be robust to the max_iter=0 edge case, see:
117
+ # https://github.com/scikit-learn/scikit-learn/issues/4134
118
+ d_gap = np.inf
119
+ # set a sub_covariance buffer
120
+ sub_covariance = np.copy(covariance_[1:, 1:], order="C")
121
+ for i in range(max_iter):
122
+ for idx in range(n_features):
123
+ # To keep the contiguous matrix `sub_covariance` equal to
124
+ # covariance_[indices != idx].T[indices != idx]
125
+ # we only need to update 1 column and 1 line when idx changes
126
+ if idx > 0:
127
+ di = idx - 1
128
+ sub_covariance[di] = covariance_[di][indices != idx]
129
+ sub_covariance[:, di] = covariance_[:, di][indices != idx]
130
+ else:
131
+ sub_covariance[:] = covariance_[1:, 1:]
132
+ row = emp_cov[idx, indices != idx]
133
+ with np.errstate(**errors):
134
+ if mode == "cd":
135
+ # Use coordinate descent
136
+ coefs = -(
137
+ precision_[indices != idx, idx]
138
+ / (precision_[idx, idx] + 1000 * eps)
139
+ )
140
+ coefs, _, _, _ = cd_fast.enet_coordinate_descent_gram(
141
+ coefs,
142
+ alpha,
143
+ 0,
144
+ sub_covariance,
145
+ row,
146
+ row,
147
+ max_iter,
148
+ enet_tol,
149
+ check_random_state(None),
150
+ False,
151
+ )
152
+ else: # mode == "lars"
153
+ _, _, coefs = lars_path_gram(
154
+ Xy=row,
155
+ Gram=sub_covariance,
156
+ n_samples=row.size,
157
+ alpha_min=alpha / (n_features - 1),
158
+ copy_Gram=True,
159
+ eps=eps,
160
+ method="lars",
161
+ return_path=False,
162
+ )
163
+ # Update the precision matrix
164
+ precision_[idx, idx] = 1.0 / (
165
+ covariance_[idx, idx]
166
+ - np.dot(covariance_[indices != idx, idx], coefs)
167
+ )
168
+ precision_[indices != idx, idx] = -precision_[idx, idx] * coefs
169
+ precision_[idx, indices != idx] = -precision_[idx, idx] * coefs
170
+ coefs = np.dot(sub_covariance, coefs)
171
+ covariance_[idx, indices != idx] = coefs
172
+ covariance_[indices != idx, idx] = coefs
173
+ if not np.isfinite(precision_.sum()):
174
+ raise FloatingPointError(
175
+ "The system is too ill-conditioned for this solver"
176
+ )
177
+ d_gap = _dual_gap(emp_cov, precision_, alpha)
178
+ cost = _objective(emp_cov, precision_, alpha)
179
+ if verbose:
180
+ print(
181
+ "[graphical_lasso] Iteration % 3i, cost % 3.2e, dual gap %.3e"
182
+ % (i, cost, d_gap)
183
+ )
184
+ costs.append((cost, d_gap))
185
+ if np.abs(d_gap) < tol:
186
+ break
187
+ if not np.isfinite(cost) and i > 0:
188
+ raise FloatingPointError(
189
+ "Non SPD result: the system is too ill-conditioned for this solver"
190
+ )
191
+ else:
192
+ warnings.warn(
193
+ "graphical_lasso: did not converge after %i iteration: dual gap: %.3e"
194
+ % (max_iter, d_gap),
195
+ ConvergenceWarning,
196
+ )
197
+ except FloatingPointError as e:
198
+ e.args = (e.args[0] + ". The system is too ill-conditioned for this solver",)
199
+ raise e
200
+
201
+ return covariance_, precision_, costs, i + 1
202
+
203
+
204
+ def alpha_max(emp_cov):
205
+ """Find the maximum alpha for which there are some non-zeros off-diagonal.
206
+
207
+ Parameters
208
+ ----------
209
+ emp_cov : ndarray of shape (n_features, n_features)
210
+ The sample covariance matrix.
211
+
212
+ Notes
213
+ -----
214
+ This results from the bound for all the Lasso problems that are solved
215
+ in GraphicalLasso: each time, the row of cov corresponds to Xy. As the
216
+ bound for alpha is given by `max(abs(Xy))`, the result follows.
217
+ """
218
+ A = np.copy(emp_cov)
219
+ A.flat[:: A.shape[0] + 1] = 0
220
+ return np.max(np.abs(A))
221
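# A sketch of the property documented above: for any alpha at or above
# alpha_max(emp_cov), every per-column Lasso solution is zero, so the
# estimated precision is diagonal. Note alpha_max lives in the private
# module sklearn.covariance._graph_lasso, not the public namespace:
import numpy as np
from sklearn.covariance import empirical_covariance, graphical_lasso
from sklearn.covariance._graph_lasso import alpha_max

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
emp_cov = empirical_covariance(X)
_, prec = graphical_lasso(emp_cov, alpha=1.01 * alpha_max(emp_cov))
print(np.allclose(prec - np.diag(np.diag(prec)), 0.0))  # expected: True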
+
222
+
223
+ @validate_params(
224
+ {
225
+ "emp_cov": ["array-like"],
226
+ "return_costs": ["boolean"],
227
+ "return_n_iter": ["boolean"],
228
+ },
229
+ prefer_skip_nested_validation=False,
230
+ )
231
+ def graphical_lasso(
232
+ emp_cov,
233
+ alpha,
234
+ *,
235
+ mode="cd",
236
+ tol=1e-4,
237
+ enet_tol=1e-4,
238
+ max_iter=100,
239
+ verbose=False,
240
+ return_costs=False,
241
+ eps=np.finfo(np.float64).eps,
242
+ return_n_iter=False,
243
+ ):
244
+ """L1-penalized covariance estimator.
245
+
246
+ Read more in the :ref:`User Guide <sparse_inverse_covariance>`.
247
+
248
+ .. versionchanged:: v0.20
249
+ graph_lasso has been renamed to graphical_lasso
250
+
251
+ Parameters
252
+ ----------
253
+ emp_cov : array-like of shape (n_features, n_features)
254
+ Empirical covariance from which to compute the covariance estimate.
255
+
256
+ alpha : float
257
+ The regularization parameter: the higher alpha, the more
258
+ regularization, the sparser the inverse covariance.
259
+ Range is (0, inf].
260
+
261
+ mode : {'cd', 'lars'}, default='cd'
262
+ The Lasso solver to use: coordinate descent or LARS. Use LARS for
263
+ very sparse underlying graphs, where p > n. Elsewhere prefer cd
264
+ which is more numerically stable.
265
+
266
+ tol : float, default=1e-4
267
+ The tolerance to declare convergence: if the dual gap goes below
268
+ this value, iterations are stopped. Range is (0, inf].
269
+
270
+ enet_tol : float, default=1e-4
271
+ The tolerance for the elastic net solver used to calculate the descent
272
+ direction. This parameter controls the accuracy of the search direction
273
+ for a given column update, not of the overall parameter estimate. Only
274
+ used for mode='cd'. Range is (0, inf].
275
+
276
+ max_iter : int, default=100
277
+ The maximum number of iterations.
278
+
279
+ verbose : bool, default=False
280
+ If verbose is True, the objective function and dual gap are
281
+ printed at each iteration.
282
+
283
+ return_costs : bool, default=False
284
+ If return_costs is True, the objective function and dual gap
285
+ at each iteration are returned.
286
+
287
+ eps : float, default=eps
288
+ The machine-precision regularization in the computation of the
289
+ Cholesky diagonal factors. Increase this for very ill-conditioned
290
+ systems. Default is `np.finfo(np.float64).eps`.
291
+
292
+ return_n_iter : bool, default=False
293
+ Whether or not to return the number of iterations.
294
+
295
+ Returns
296
+ -------
297
+ covariance : ndarray of shape (n_features, n_features)
298
+ The estimated covariance matrix.
299
+
300
+ precision : ndarray of shape (n_features, n_features)
301
+ The estimated (sparse) precision matrix.
302
+
303
+ costs : list of (objective, dual_gap) pairs
304
+ The list of values of the objective function and the dual gap at
305
+ each iteration. Returned only if return_costs is True.
306
+
307
+ n_iter : int
308
+ Number of iterations. Returned only if `return_n_iter` is set to True.
309
+
310
+ See Also
311
+ --------
312
+ GraphicalLasso : Sparse inverse covariance estimation
313
+ with an l1-penalized estimator.
314
+ GraphicalLassoCV : Sparse inverse covariance with
315
+ cross-validated choice of the l1 penalty.
316
+
317
+ Notes
318
+ -----
319
+ The algorithm employed to solve this problem is the GLasso algorithm,
320
+ from the Friedman 2008 Biostatistics paper. It is the same algorithm
321
+ as in the R `glasso` package.
322
+
323
+ One possible difference with the `glasso` R package is that the
324
+ diagonal coefficients are not penalized.
325
+
326
+ Examples
327
+ --------
328
+ >>> import numpy as np
329
+ >>> from sklearn.datasets import make_sparse_spd_matrix
330
+ >>> from sklearn.covariance import empirical_covariance, graphical_lasso
331
+ >>> true_cov = make_sparse_spd_matrix(n_dim=3,random_state=42)
332
+ >>> rng = np.random.RandomState(42)
333
+ >>> X = rng.multivariate_normal(mean=np.zeros(3), cov=true_cov, size=3)
334
+ >>> emp_cov = empirical_covariance(X, assume_centered=True)
335
+ >>> emp_cov, _ = graphical_lasso(emp_cov, alpha=0.05)
336
+ >>> emp_cov
337
+ array([[ 1.68..., 0.21..., -0.20...],
338
+ [ 0.21..., 0.22..., -0.08...],
339
+ [-0.20..., -0.08..., 0.23...]])
340
+ """
341
+ model = GraphicalLasso(
342
+ alpha=alpha,
343
+ mode=mode,
344
+ covariance="precomputed",
345
+ tol=tol,
346
+ enet_tol=enet_tol,
347
+ max_iter=max_iter,
348
+ verbose=verbose,
349
+ eps=eps,
350
+ assume_centered=True,
351
+ ).fit(emp_cov)
352
+
353
+ output = [model.covariance_, model.precision_]
354
+ if return_costs:
355
+ output.append(model.costs_)
356
+ if return_n_iter:
357
+ output.append(model.n_iter_)
358
+ return tuple(output)
359
+
360
+
361
+ class BaseGraphicalLasso(EmpiricalCovariance):
362
+ _parameter_constraints: dict = {
363
+ **EmpiricalCovariance._parameter_constraints,
364
+ "tol": [Interval(Real, 0, None, closed="right")],
365
+ "enet_tol": [Interval(Real, 0, None, closed="right")],
366
+ "max_iter": [Interval(Integral, 0, None, closed="left")],
367
+ "mode": [StrOptions({"cd", "lars"})],
368
+ "verbose": ["verbose"],
369
+ "eps": [Interval(Real, 0, None, closed="both")],
370
+ }
371
+ _parameter_constraints.pop("store_precision")
372
+
373
+ def __init__(
374
+ self,
375
+ tol=1e-4,
376
+ enet_tol=1e-4,
377
+ max_iter=100,
378
+ mode="cd",
379
+ verbose=False,
380
+ eps=np.finfo(np.float64).eps,
381
+ assume_centered=False,
382
+ ):
383
+ super().__init__(assume_centered=assume_centered)
384
+ self.tol = tol
385
+ self.enet_tol = enet_tol
386
+ self.max_iter = max_iter
387
+ self.mode = mode
388
+ self.verbose = verbose
389
+ self.eps = eps
390
+
391
+
392
+ class GraphicalLasso(BaseGraphicalLasso):
393
+ """Sparse inverse covariance estimation with an l1-penalized estimator.
394
+
395
+ For a usage example see
396
+ :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py`.
397
+
398
+ Read more in the :ref:`User Guide <sparse_inverse_covariance>`.
399
+
400
+ .. versionchanged:: v0.20
401
+ GraphLasso has been renamed to GraphicalLasso
402
+
403
+ Parameters
404
+ ----------
405
+ alpha : float, default=0.01
406
+ The regularization parameter: the higher alpha, the more
407
+ regularization, the sparser the inverse covariance.
408
+ Range is (0, inf].
409
+
410
+ mode : {'cd', 'lars'}, default='cd'
411
+ The Lasso solver to use: coordinate descent or LARS. Use LARS for
412
+ very sparse underlying graphs, where p > n. Elsewhere prefer cd
413
+ which is more numerically stable.
414
+
415
+ covariance : "precomputed", default=None
416
+ If covariance is "precomputed", the input data in `fit` is assumed
417
+ to be the covariance matrix. If `None`, the empirical covariance
418
+ is estimated from the data `X`.
419
+
420
+ .. versionadded:: 1.3
421
+
422
+ tol : float, default=1e-4
423
+ The tolerance to declare convergence: if the dual gap goes below
424
+ this value, iterations are stopped. Range is (0, inf].
425
+
426
+ enet_tol : float, default=1e-4
427
+ The tolerance for the elastic net solver used to calculate the descent
428
+ direction. This parameter controls the accuracy of the search direction
429
+ for a given column update, not of the overall parameter estimate. Only
430
+ used for mode='cd'. Range is (0, inf].
431
+
432
+ max_iter : int, default=100
433
+ The maximum number of iterations.
434
+
435
+ verbose : bool, default=False
436
+ If verbose is True, the objective function and dual gap are
437
+ printed at each iteration.
438
+
439
+ eps : float, default=eps
440
+ The machine-precision regularization in the computation of the
441
+ Cholesky diagonal factors. Increase this for very ill-conditioned
442
+ systems. Default is `np.finfo(np.float64).eps`.
443
+
444
+ .. versionadded:: 1.3
445
+
446
+ assume_centered : bool, default=False
447
+ If True, data are not centered before computation.
448
+ Useful when working with data whose mean is almost, but not exactly
449
+ zero.
450
+ If False, data are centered before computation.
451
+
452
+ Attributes
453
+ ----------
454
+ location_ : ndarray of shape (n_features,)
455
+ Estimated location, i.e. the estimated mean.
456
+
457
+ covariance_ : ndarray of shape (n_features, n_features)
458
+ Estimated covariance matrix
459
+
460
+ precision_ : ndarray of shape (n_features, n_features)
461
+ Estimated pseudo inverse matrix.
462
+
463
+ n_iter_ : int
464
+ Number of iterations run.
465
+
466
+ costs_ : list of (objective, dual_gap) pairs
467
+ The list of values of the objective function and the dual gap at
468
+ each iteration, stored once `fit` has been called.
469
+
470
+ .. versionadded:: 1.3
471
+
472
+ n_features_in_ : int
473
+ Number of features seen during :term:`fit`.
474
+
475
+ .. versionadded:: 0.24
476
+
477
+ feature_names_in_ : ndarray of shape (`n_features_in_`,)
478
+ Names of features seen during :term:`fit`. Defined only when `X`
479
+ has feature names that are all strings.
480
+
481
+ .. versionadded:: 1.0
482
+
483
+ See Also
484
+ --------
485
+ graphical_lasso : L1-penalized covariance estimator.
486
+ GraphicalLassoCV : Sparse inverse covariance with
487
+ cross-validated choice of the l1 penalty.
488
+
489
+ Examples
490
+ --------
491
+ >>> import numpy as np
492
+ >>> from sklearn.covariance import GraphicalLasso
493
+ >>> true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
494
+ ... [0.0, 0.4, 0.0, 0.0],
495
+ ... [0.2, 0.0, 0.3, 0.1],
496
+ ... [0.0, 0.0, 0.1, 0.7]])
497
+ >>> np.random.seed(0)
498
+ >>> X = np.random.multivariate_normal(mean=[0, 0, 0, 0],
499
+ ... cov=true_cov,
500
+ ... size=200)
501
+ >>> cov = GraphicalLasso().fit(X)
502
+ >>> np.around(cov.covariance_, decimals=3)
503
+ array([[0.816, 0.049, 0.218, 0.019],
504
+ [0.049, 0.364, 0.017, 0.034],
505
+ [0.218, 0.017, 0.322, 0.093],
506
+ [0.019, 0.034, 0.093, 0.69 ]])
507
+ >>> np.around(cov.location_, decimals=3)
508
+ array([0.073, 0.04 , 0.038, 0.143])
509
+ """
510
+
511
+ _parameter_constraints: dict = {
512
+ **BaseGraphicalLasso._parameter_constraints,
513
+ "alpha": [Interval(Real, 0, None, closed="both")],
514
+ "covariance": [StrOptions({"precomputed"}), None],
515
+ }
516
+
517
+ def __init__(
518
+ self,
519
+ alpha=0.01,
520
+ *,
521
+ mode="cd",
522
+ covariance=None,
523
+ tol=1e-4,
524
+ enet_tol=1e-4,
525
+ max_iter=100,
526
+ verbose=False,
527
+ eps=np.finfo(np.float64).eps,
528
+ assume_centered=False,
529
+ ):
530
+ super().__init__(
531
+ tol=tol,
532
+ enet_tol=enet_tol,
533
+ max_iter=max_iter,
534
+ mode=mode,
535
+ verbose=verbose,
536
+ eps=eps,
537
+ assume_centered=assume_centered,
538
+ )
539
+ self.alpha = alpha
540
+ self.covariance = covariance
541
+
542
+ @_fit_context(prefer_skip_nested_validation=True)
543
+ def fit(self, X, y=None):
544
+ """Fit the GraphicalLasso model to X.
545
+
546
+ Parameters
547
+ ----------
548
+ X : array-like of shape (n_samples, n_features)
549
+ Data from which to compute the covariance estimate.
550
+
551
+ y : Ignored
552
+ Not used, present for API consistency by convention.
553
+
554
+ Returns
555
+ -------
556
+ self : object
557
+ Returns the instance itself.
558
+ """
559
+ # Covariance does not make sense for a single feature
560
+ X = validate_data(self, X, ensure_min_features=2, ensure_min_samples=2)
561
+
562
+ if self.covariance == "precomputed":
563
+ emp_cov = X.copy()
564
+ self.location_ = np.zeros(X.shape[1])
565
+ else:
566
+ emp_cov = empirical_covariance(X, assume_centered=self.assume_centered)
567
+ if self.assume_centered:
568
+ self.location_ = np.zeros(X.shape[1])
569
+ else:
570
+ self.location_ = X.mean(0)
571
+
572
+ self.covariance_, self.precision_, self.costs_, self.n_iter_ = _graphical_lasso(
573
+ emp_cov,
574
+ alpha=self.alpha,
575
+ cov_init=None,
576
+ mode=self.mode,
577
+ tol=self.tol,
578
+ enet_tol=self.enet_tol,
579
+ max_iter=self.max_iter,
580
+ verbose=self.verbose,
581
+ eps=self.eps,
582
+ )
583
+ return self
584
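# A minimal sketch of the covariance="precomputed" path through `fit`
# (added in 1.3): the input is treated as a covariance matrix rather than
# raw samples, and `location_` is fixed at zero (illustrative data only):
import numpy as np
from sklearn.covariance import GraphicalLasso, empirical_covariance

rng = np.random.RandomState(0)
X = rng.randn(200, 4)
emp_cov = empirical_covariance(X, assume_centered=True)
model = GraphicalLasso(alpha=0.05, covariance="precomputed").fit(emp_cov)
print(model.location_)         # zeros: only the covariance was supplied
print(model.precision_.shape)  # (4, 4) sparse inverse covariance estimate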
+
585
+
586
+ # Cross-validation with GraphicalLasso
587
+ def graphical_lasso_path(
588
+ X,
589
+ alphas,
590
+ cov_init=None,
591
+ X_test=None,
592
+ mode="cd",
593
+ tol=1e-4,
594
+ enet_tol=1e-4,
595
+ max_iter=100,
596
+ verbose=False,
597
+ eps=np.finfo(np.float64).eps,
598
+ ):
599
+ """l1-penalized covariance estimator along a path of decreasing alphas
600
+
601
+ Read more in the :ref:`User Guide <sparse_inverse_covariance>`.
602
+
603
+ Parameters
604
+ ----------
605
+ X : ndarray of shape (n_samples, n_features)
606
+ Data from which to compute the covariance estimate.
607
+
608
+ alphas : array-like of shape (n_alphas,)
609
+ The list of regularization parameters, decreasing order.
610
+
611
+ cov_init : array of shape (n_features, n_features), default=None
612
+ The initial guess for the covariance.
613
+
614
+ X_test : array of shape (n_test_samples, n_features), default=None
615
+ Optional test matrix to measure generalisation error.
616
+
617
+ mode : {'cd', 'lars'}, default='cd'
618
+ The Lasso solver to use: coordinate descent or LARS. Use LARS for
619
+ very sparse underlying graphs, where p > n. Elsewhere prefer cd
620
+ which is more numerically stable.
621
+
622
+ tol : float, default=1e-4
623
+ The tolerance to declare convergence: if the dual gap goes below
624
+ this value, iterations are stopped. The tolerance must be a positive
625
+ number.
626
+
627
+ enet_tol : float, default=1e-4
628
+ The tolerance for the elastic net solver used to calculate the descent
629
+ direction. This parameter controls the accuracy of the search direction
630
+ for a given column update, not of the overall parameter estimate. Only
631
+ used for mode='cd'. The tolerance must be a positive number.
632
+
633
+ max_iter : int, default=100
634
+ The maximum number of iterations. This parameter should be a strictly
635
+ positive integer.
636
+
637
+ verbose : int or bool, default=False
638
+ The higher the verbosity flag, the more information is printed
639
+ during the fitting.
640
+
641
+ eps : float, default=eps
642
+ The machine-precision regularization in the computation of the
643
+ Cholesky diagonal factors. Increase this for very ill-conditioned
644
+ systems. Default is `np.finfo(np.float64).eps`.
645
+
646
+ .. versionadded:: 1.3
647
+
648
+ Returns
649
+ -------
650
+ covariances_ : list of shape (n_alphas,) of ndarray of shape \
651
+ (n_features, n_features)
652
+ The estimated covariance matrices.
653
+
654
+ precisions_ : list of shape (n_alphas,) of ndarray of shape \
655
+ (n_features, n_features)
656
+ The estimated (sparse) precision matrices.
657
+
658
+ scores_ : list of shape (n_alphas,), dtype=float
659
+ The generalisation error (log-likelihood) on the test data.
660
+ Returned only if test data is passed.
661
+ """
662
+ inner_verbose = max(0, verbose - 1)
663
+ emp_cov = empirical_covariance(X)
664
+ if cov_init is None:
665
+ covariance_ = emp_cov.copy()
666
+ else:
667
+ covariance_ = cov_init
668
+ covariances_ = list()
669
+ precisions_ = list()
670
+ scores_ = list()
671
+ if X_test is not None:
672
+ test_emp_cov = empirical_covariance(X_test)
673
+
674
+ for alpha in alphas:
675
+ try:
676
+ # Capture the errors, and move on
677
+ covariance_, precision_, _, _ = _graphical_lasso(
678
+ emp_cov,
679
+ alpha=alpha,
680
+ cov_init=covariance_,
681
+ mode=mode,
682
+ tol=tol,
683
+ enet_tol=enet_tol,
684
+ max_iter=max_iter,
685
+ verbose=inner_verbose,
686
+ eps=eps,
687
+ )
688
+ covariances_.append(covariance_)
689
+ precisions_.append(precision_)
690
+ if X_test is not None:
691
+ this_score = log_likelihood(test_emp_cov, precision_)
692
+ except FloatingPointError:
693
+ this_score = -np.inf
694
+ covariances_.append(np.nan)
695
+ precisions_.append(np.nan)
696
+ if X_test is not None:
697
+ if not np.isfinite(this_score):
698
+ this_score = -np.inf
699
+ scores_.append(this_score)
700
+ if verbose == 1:
701
+ sys.stderr.write(".")
702
+ elif verbose > 1:
703
+ if X_test is not None:
704
+ print(
705
+ "[graphical_lasso_path] alpha: %.2e, score: %.2e"
706
+ % (alpha, this_score)
707
+ )
708
+ else:
709
+ print("[graphical_lasso_path] alpha: %.2e" % alpha)
710
+ if X_test is not None:
711
+ return covariances_, precisions_, scores_
712
+ return covariances_, precisions_
713
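# A usage sketch of `graphical_lasso_path`. It is a module-level helper in
# sklearn.covariance._graph_lasso and is not re-exported publicly, so the
# import below reaches into the private module; data and grid are
# illustrative choices:
import numpy as np
from sklearn.covariance._graph_lasso import graphical_lasso_path

rng = np.random.RandomState(0)
X, X_test = rng.randn(150, 4), rng.randn(50, 4)
alphas = np.logspace(0, -2, 5)  # decreasing order, as the docstring requires
covs, precs, scores = graphical_lasso_path(X, alphas, X_test=X_test)
for a, s in zip(alphas, scores):
    print(f"alpha={a:.3f}  test log-likelihood={s:.3f}")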
+
714
+
715
+ class GraphicalLassoCV(BaseGraphicalLasso):
716
+ """Sparse inverse covariance w/ cross-validated choice of the l1 penalty.
717
+
718
+ See glossary entry for :term:`cross-validation estimator`.
719
+
720
+ Read more in the :ref:`User Guide <sparse_inverse_covariance>`.
721
+
722
+ .. versionchanged:: v0.20
723
+ GraphLassoCV has been renamed to GraphicalLassoCV
724
+
725
+ Parameters
726
+ ----------
727
+ alphas : int or array-like of shape (n_alphas,), dtype=float, default=4
728
+ If an integer is given, it fixes the number of points on the
729
+ grids of alpha to be used. If a list is given, it gives the
730
+ grid to be used. See the notes in the class docstring for
731
+ more details. Range is [1, inf) for an integer.
732
+ Range is (0, inf] for an array-like of floats.
733
+
734
+ n_refinements : int, default=4
735
+ The number of times the grid is refined. Not used if explicit
736
+ values of alphas are passed. Range is [1, inf).
737
+
738
+ cv : int, cross-validation generator or iterable, default=None
739
+ Determines the cross-validation splitting strategy.
740
+ Possible inputs for cv are:
741
+
742
+ - None, to use the default 5-fold cross-validation,
743
+ - integer, to specify the number of folds.
744
+ - :term:`CV splitter`,
745
+ - An iterable yielding (train, test) splits as arrays of indices.
746
+
747
+ For integer/None inputs :class:`~sklearn.model_selection.KFold` is used.
748
+
749
+ Refer :ref:`User Guide <cross_validation>` for the various
750
+ cross-validation strategies that can be used here.
751
+
752
+ .. versionchanged:: 0.20
753
+ ``cv`` default value if None changed from 3-fold to 5-fold.
754
+
755
+ tol : float, default=1e-4
756
+ The tolerance to declare convergence: if the dual gap goes below
757
+ this value, iterations are stopped. Range is (0, inf].
758
+
759
+ enet_tol : float, default=1e-4
760
+ The tolerance for the elastic net solver used to calculate the descent
761
+ direction. This parameter controls the accuracy of the search direction
762
+ for a given column update, not of the overall parameter estimate. Only
763
+ used for mode='cd'. Range is (0, inf].
764
+
765
+ max_iter : int, default=100
766
+ Maximum number of iterations.
767
+
768
+ mode : {'cd', 'lars'}, default='cd'
769
+ The Lasso solver to use: coordinate descent or LARS. Use LARS for
770
+ very sparse underlying graphs, where number of features is greater
771
+ than number of samples. Elsewhere prefer cd which is more numerically
772
+ stable.
773
+
774
+ n_jobs : int, default=None
775
+ Number of jobs to run in parallel.
776
+ ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
777
+ ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
778
+ for more details.
779
+
780
+ .. versionchanged:: v0.20
781
+ `n_jobs` default changed from 1 to None
782
+
783
+ verbose : bool, default=False
784
+ If verbose is True, the objective function and duality gap are
785
+ printed at each iteration.
786
+
787
+ eps : float, default=eps
788
+ The machine-precision regularization in the computation of the
789
+ Cholesky diagonal factors. Increase this for very ill-conditioned
790
+ systems. Default is `np.finfo(np.float64).eps`.
791
+
792
+ .. versionadded:: 1.3
793
+
794
+ assume_centered : bool, default=False
795
+ If True, data are not centered before computation.
796
+ Useful when working with data whose mean is almost, but not exactly
797
+ zero.
798
+ If False, data are centered before computation.
799
+
800
+ Attributes
801
+ ----------
802
+ location_ : ndarray of shape (n_features,)
803
+ Estimated location, i.e. the estimated mean.
804
+
805
+ covariance_ : ndarray of shape (n_features, n_features)
806
+ Estimated covariance matrix.
807
+
808
+ precision_ : ndarray of shape (n_features, n_features)
809
+ Estimated precision matrix (inverse covariance).
810
+
811
+ costs_ : list of (objective, dual_gap) pairs
812
+ The list of values of the objective function and the dual gap at
813
+ each iteration, stored once `fit` has been called.
814
+
815
+ .. versionadded:: 1.3
816
+
817
+ alpha_ : float
818
+ Penalization parameter selected.
819
+
820
+ cv_results_ : dict of ndarrays
821
+ A dict with keys:
822
+
823
+ alphas : ndarray of shape (n_alphas,)
824
+ All penalization parameters explored.
825
+
826
+ split(k)_test_score : ndarray of shape (n_alphas,)
827
+ Log-likelihood score on left-out data across (k)th fold.
828
+
829
+ .. versionadded:: 1.0
830
+
831
+ mean_test_score : ndarray of shape (n_alphas,)
832
+ Mean of scores over the folds.
833
+
834
+ .. versionadded:: 1.0
835
+
836
+ std_test_score : ndarray of shape (n_alphas,)
837
+ Standard deviation of scores over the folds.
838
+
839
+ .. versionadded:: 1.0
840
+
841
+ n_iter_ : int
842
+ Number of iterations run for the optimal alpha.
843
+
844
+ n_features_in_ : int
845
+ Number of features seen during :term:`fit`.
846
+
847
+ .. versionadded:: 0.24
848
+
849
+ feature_names_in_ : ndarray of shape (`n_features_in_`,)
850
+ Names of features seen during :term:`fit`. Defined only when `X`
851
+ has feature names that are all strings.
852
+
853
+ .. versionadded:: 1.0
854
+
855
+ See Also
856
+ --------
857
+ graphical_lasso : L1-penalized covariance estimator.
858
+ GraphicalLasso : Sparse inverse covariance estimation
859
+ with an l1-penalized estimator.
860
+
861
+ Notes
862
+ -----
863
+ The search for the optimal penalization parameter (`alpha`) is done on an
864
+ iteratively refined grid: first the cross-validated scores on a grid are
865
+ computed, then a new refined grid is centered around the maximum, and so
866
+ on.
867
+
868
+ One of the challenges faced here is that the solvers can
869
+ fail to converge to a well-conditioned estimate. The corresponding
870
+ values of `alpha` then come out as missing values, but the optimum may
871
+ be close to these missing values.
872
+
873
+ In `fit`, once the best parameter `alpha` is found through
874
+ cross-validation, the model is fit again using the entire training set.
875
+
876
+ Examples
877
+ --------
878
+ >>> import numpy as np
879
+ >>> from sklearn.covariance import GraphicalLassoCV
880
+ >>> true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
881
+ ... [0.0, 0.4, 0.0, 0.0],
882
+ ... [0.2, 0.0, 0.3, 0.1],
883
+ ... [0.0, 0.0, 0.1, 0.7]])
884
+ >>> np.random.seed(0)
885
+ >>> X = np.random.multivariate_normal(mean=[0, 0, 0, 0],
886
+ ... cov=true_cov,
887
+ ... size=200)
888
+ >>> cov = GraphicalLassoCV().fit(X)
889
+ >>> np.around(cov.covariance_, decimals=3)
890
+ array([[0.816, 0.051, 0.22 , 0.017],
891
+ [0.051, 0.364, 0.018, 0.036],
892
+ [0.22 , 0.018, 0.322, 0.094],
893
+ [0.017, 0.036, 0.094, 0.69 ]])
894
+ >>> np.around(cov.location_, decimals=3)
895
+ array([0.073, 0.04 , 0.038, 0.143])
896
+ """
897
+
898
+ _parameter_constraints: dict = {
899
+ **BaseGraphicalLasso._parameter_constraints,
900
+ "alphas": [Interval(Integral, 0, None, closed="left"), "array-like"],
901
+ "n_refinements": [Interval(Integral, 1, None, closed="left")],
902
+ "cv": ["cv_object"],
903
+ "n_jobs": [Integral, None],
904
+ }
905
+
906
+ def __init__(
907
+ self,
908
+ *,
909
+ alphas=4,
910
+ n_refinements=4,
911
+ cv=None,
912
+ tol=1e-4,
913
+ enet_tol=1e-4,
914
+ max_iter=100,
915
+ mode="cd",
916
+ n_jobs=None,
917
+ verbose=False,
918
+ eps=np.finfo(np.float64).eps,
919
+ assume_centered=False,
920
+ ):
921
+ super().__init__(
922
+ tol=tol,
923
+ enet_tol=enet_tol,
924
+ max_iter=max_iter,
925
+ mode=mode,
926
+ verbose=verbose,
927
+ eps=eps,
928
+ assume_centered=assume_centered,
929
+ )
930
+ self.alphas = alphas
931
+ self.n_refinements = n_refinements
932
+ self.cv = cv
933
+ self.n_jobs = n_jobs
934
+
935
+ @_fit_context(prefer_skip_nested_validation=True)
936
+ def fit(self, X, y=None, **params):
937
+ """Fit the GraphicalLasso covariance model to X.
938
+
939
+ Parameters
940
+ ----------
941
+ X : array-like of shape (n_samples, n_features)
942
+ Data from which to compute the covariance estimate.
943
+
944
+ y : Ignored
945
+ Not used, present for API consistency by convention.
946
+
947
+ **params : dict, default=None
948
+ Parameters to be passed to the CV splitter and the
949
+ cross_val_score function.
950
+
951
+ .. versionadded:: 1.5
952
+ Only available if `enable_metadata_routing=True`,
953
+ which can be set by using
954
+ ``sklearn.set_config(enable_metadata_routing=True)``.
955
+ See :ref:`Metadata Routing User Guide <metadata_routing>` for
956
+ more details.
957
+
958
+ Returns
959
+ -------
960
+ self : object
961
+ Returns the instance itself.
962
+ """
963
+ # Covariance does not make sense for a single feature
964
+ _raise_for_params(params, self, "fit")
965
+
966
+ X = validate_data(self, X, ensure_min_features=2)
967
+ if self.assume_centered:
968
+ self.location_ = np.zeros(X.shape[1])
969
+ else:
970
+ self.location_ = X.mean(0)
971
+ emp_cov = empirical_covariance(X, assume_centered=self.assume_centered)
972
+
973
+ cv = check_cv(self.cv, y, classifier=False)
974
+
975
+ # List of (alpha, scores, covs)
976
+ path = list()
977
+ n_alphas = self.alphas
978
+ inner_verbose = max(0, self.verbose - 1)
979
+
980
+ if _is_arraylike_not_scalar(n_alphas):
981
+ for alpha in self.alphas:
982
+ check_scalar(
983
+ alpha,
984
+ "alpha",
985
+ Real,
986
+ min_val=0,
987
+ max_val=np.inf,
988
+ include_boundaries="right",
989
+ )
990
+ alphas = self.alphas
991
+ n_refinements = 1
992
+ else:
993
+ n_refinements = self.n_refinements
994
+ alpha_1 = alpha_max(emp_cov)
995
+ alpha_0 = 1e-2 * alpha_1
996
+ alphas = np.logspace(np.log10(alpha_0), np.log10(alpha_1), n_alphas)[::-1]
997
+
998
+ if _routing_enabled():
999
+ routed_params = process_routing(self, "fit", **params)
1000
+ else:
1001
+ routed_params = Bunch(splitter=Bunch(split={}))
1002
+
1003
+ t0 = time.time()
1004
+ for i in range(n_refinements):
1005
+ with warnings.catch_warnings():
1006
+ # No need to see the convergence warnings on this grid:
1007
+ # they will always be points that will not converge
1008
+ # during the cross-validation
1009
+ warnings.simplefilter("ignore", ConvergenceWarning)
1010
+ # Compute the cross-validated loss on the current grid
1011
+
1012
+ # NOTE: Warm-restarting graphical_lasso_path has been tried,
1013
+ # and this did not yield any gain
1014
+ # (same execution time with or without).
1015
+ this_path = Parallel(n_jobs=self.n_jobs, verbose=self.verbose)(
1016
+ delayed(graphical_lasso_path)(
1017
+ X[train],
1018
+ alphas=alphas,
1019
+ X_test=X[test],
1020
+ mode=self.mode,
1021
+ tol=self.tol,
1022
+ enet_tol=self.enet_tol,
1023
+ max_iter=int(0.1 * self.max_iter),
1024
+ verbose=inner_verbose,
1025
+ eps=self.eps,
1026
+ )
1027
+ for train, test in cv.split(X, y, **routed_params.splitter.split)
1028
+ )
1029
+
1030
+ # Little dance to transform the list into what we need
1031
+ covs, _, scores = zip(*this_path)
1032
+ covs = zip(*covs)
1033
+ scores = zip(*scores)
1034
+ path.extend(zip(alphas, scores, covs))
1035
+ path = sorted(path, key=operator.itemgetter(0), reverse=True)
1036
+
1037
+ # Find the maximum (avoid using built in 'max' function to
1038
+ # have a fully-reproducible selection of the smallest alpha
1039
+ # in case of equality)
1040
+ best_score = -np.inf
1041
+ last_finite_idx = 0
1042
+ for index, (alpha, scores, _) in enumerate(path):
1043
+ this_score = np.mean(scores)
1044
+ if this_score >= 0.1 / np.finfo(np.float64).eps:
1045
+ this_score = np.nan
1046
+ if np.isfinite(this_score):
1047
+ last_finite_idx = index
1048
+ if this_score >= best_score:
1049
+ best_score = this_score
1050
+ best_index = index
1051
+
1052
+ # Refine the grid
1053
+ if best_index == 0:
1054
+ # We do not need to go back: we have chosen
1055
+ # the highest value of alpha for which there are
1056
+ # non-zero coefficients
1057
+ alpha_1 = path[0][0]
1058
+ alpha_0 = path[1][0]
1059
+ elif best_index == last_finite_idx and not best_index == len(path) - 1:
1060
+ # We have non-converged models on the upper bound of the
1061
+ # grid, we need to refine the grid there
1062
+ alpha_1 = path[best_index][0]
1063
+ alpha_0 = path[best_index + 1][0]
1064
+ elif best_index == len(path) - 1:
1065
+ alpha_1 = path[best_index][0]
1066
+ alpha_0 = 0.01 * path[best_index][0]
1067
+ else:
1068
+ alpha_1 = path[best_index - 1][0]
1069
+ alpha_0 = path[best_index + 1][0]
1070
+
1071
+ if not _is_arraylike_not_scalar(n_alphas):
1072
+ alphas = np.logspace(np.log10(alpha_1), np.log10(alpha_0), n_alphas + 2)
1073
+ alphas = alphas[1:-1]
1074
+
1075
+ if self.verbose and n_refinements > 1:
1076
+ print(
1077
+ "[GraphicalLassoCV] Done refinement % 2i out of %i: % 3is"
1078
+ % (i + 1, n_refinements, time.time() - t0)
1079
+ )
1080
+
1081
+ path = list(zip(*path))
1082
+ grid_scores = list(path[1])
1083
+ alphas = list(path[0])
1084
+ # Finally, compute the score with alpha = 0
1085
+ alphas.append(0)
1086
+ grid_scores.append(
1087
+ cross_val_score(
1088
+ EmpiricalCovariance(),
1089
+ X,
1090
+ cv=cv,
1091
+ n_jobs=self.n_jobs,
1092
+ verbose=inner_verbose,
1093
+ params=params,
1094
+ )
1095
+ )
1096
+ grid_scores = np.array(grid_scores)
1097
+
1098
+ self.cv_results_ = {"alphas": np.array(alphas)}
1099
+
1100
+ for i in range(grid_scores.shape[1]):
1101
+ self.cv_results_[f"split{i}_test_score"] = grid_scores[:, i]
1102
+
1103
+ self.cv_results_["mean_test_score"] = np.mean(grid_scores, axis=1)
1104
+ self.cv_results_["std_test_score"] = np.std(grid_scores, axis=1)
1105
+
1106
+ best_alpha = alphas[best_index]
1107
+ self.alpha_ = best_alpha
1108
+
1109
+ # Finally fit the model with the selected alpha
1110
+ self.covariance_, self.precision_, self.costs_, self.n_iter_ = _graphical_lasso(
1111
+ emp_cov,
1112
+ alpha=best_alpha,
1113
+ mode=self.mode,
1114
+ tol=self.tol,
1115
+ enet_tol=self.enet_tol,
1116
+ max_iter=self.max_iter,
1117
+ verbose=inner_verbose,
1118
+ eps=self.eps,
1119
+ )
1120
+ return self
1121
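# A sketch of inspecting the fitted CV object: `alpha_` holds the selected
# penalty and `cv_results_` the refined grid with per-fold scores. The
# parameters below are kept small for speed and are purely illustrative:
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.RandomState(0)
X = rng.randn(120, 4)
cv_model = GraphicalLassoCV(alphas=4, n_refinements=2, cv=3).fit(X)
print(cv_model.alpha_)                              # selected penalty
print(cv_model.cv_results_["alphas"])               # grid explored (ends with 0)
print(cv_model.cv_results_["mean_test_score"][:3])  # mean CV log-likelihood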
+
1122
+ def get_metadata_routing(self):
1123
+ """Get metadata routing of this object.
1124
+
1125
+ Please check :ref:`User Guide <metadata_routing>` on how the routing
1126
+ mechanism works.
1127
+
1128
+ .. versionadded:: 1.5
1129
+
1130
+ Returns
1131
+ -------
1132
+ routing : MetadataRouter
1133
+ A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
1134
+ routing information.
1135
+ """
1136
+ router = MetadataRouter(owner=self.__class__.__name__).add(
1137
+ splitter=check_cv(self.cv),
1138
+ method_mapping=MethodMapping().add(callee="split", caller="fit"),
1139
+ )
1140
+ return router
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_robust_covariance.py ADDED
@@ -0,0 +1,870 @@
1
+ """
2
+ Robust location and covariance estimators.
3
+
4
+ Here are implemented estimators that are resistant to outliers.
5
+
6
+ """
7
+
8
+ # Authors: The scikit-learn developers
9
+ # SPDX-License-Identifier: BSD-3-Clause
10
+
11
+ import warnings
12
+ from numbers import Integral, Real
13
+
14
+ import numpy as np
15
+ from scipy import linalg
16
+ from scipy.stats import chi2
17
+
18
+ from ..base import _fit_context
19
+ from ..utils import check_array, check_random_state
20
+ from ..utils._param_validation import Interval
21
+ from ..utils.extmath import fast_logdet
22
+ from ..utils.validation import validate_data
23
+ from ._empirical_covariance import EmpiricalCovariance, empirical_covariance
24
+
25
+
26
+ # Minimum Covariance Determinant
27
+ # Implementation of an algorithm by Rousseeuw & Van Driessen described in
28
+ # (A Fast Algorithm for the Minimum Covariance Determinant Estimator,
29
+ # 1999, American Statistical Association and the American Society
30
+ # for Quality, TECHNOMETRICS)
31
+ # XXX Is this really a public function? It's not listed in the docs or
32
+ # exported by sklearn.covariance. Deprecate?
33
+ def c_step(
34
+ X,
35
+ n_support,
36
+ remaining_iterations=30,
37
+ initial_estimates=None,
38
+ verbose=False,
39
+ cov_computation_method=empirical_covariance,
40
+ random_state=None,
41
+ ):
42
+ """C_step procedure described in [Rouseeuw1984]_ aiming at computing MCD.
43
+
44
+ Parameters
45
+ ----------
46
+ X : array-like of shape (n_samples, n_features)
47
+ Data set in which we look for the n_support observations whose
48
+ scatter matrix has minimum determinant.
49
+
50
+ n_support : int
51
+ Number of observations to compute the robust estimates of location
52
+ and covariance from. This parameter must be greater than
53
+ `n_samples / 2`.
54
+
55
+ remaining_iterations : int, default=30
56
+ Number of iterations to perform.
57
+ According to [Rouseeuw1999]_, two iterations are sufficient to get
58
+ close to the minimum, and we never need more than 30 to reach
59
+ convergence.
60
+
61
+ initial_estimates : tuple of shape (2,), default=None
62
+ Initial estimates of location and shape from which to run the c_step
63
+ procedure:
64
+ - initial_estimates[0]: an initial location estimate
65
+ - initial_estimates[1]: an initial covariance estimate
66
+
67
+ verbose : bool, default=False
68
+ Verbose mode.
69
+
70
+ cov_computation_method : callable, \
71
+ default=:func:`sklearn.covariance.empirical_covariance`
72
+ The function which will be used to compute the covariance.
73
+ Must return array of shape (n_features, n_features).
74
+
75
+ random_state : int, RandomState instance or None, default=None
76
+ Determines the pseudo random number generator for shuffling the data.
77
+ Pass an int for reproducible results across multiple function calls.
78
+ See :term:`Glossary <random_state>`.
79
+
80
+ Returns
81
+ -------
82
+ location : ndarray of shape (n_features,)
83
+ Robust location estimates.
84
+
85
+ covariance : ndarray of shape (n_features, n_features)
86
+ Robust covariance estimates.
87
+
+ det : float
+ Log-determinant of the robust covariance estimate.
+
88
+ support : ndarray of shape (n_samples,)
89
+ A mask for the `n_support` observations whose scatter matrix has
90
+ minimum determinant.
+
+ dist : ndarray of shape (n_samples,)
+ Squared Mahalanobis distances of all the observations to `location`.
91
+
92
+ References
93
+ ----------
94
+ .. [Rouseeuw1999] A Fast Algorithm for the Minimum Covariance Determinant
95
+ Estimator, 1999, American Statistical Association and the American
96
+ Society for Quality, TECHNOMETRICS
97
+ """
98
+ X = np.asarray(X)
99
+ random_state = check_random_state(random_state)
100
+ return _c_step(
101
+ X,
102
+ n_support,
103
+ remaining_iterations=remaining_iterations,
104
+ initial_estimates=initial_estimates,
105
+ verbose=verbose,
106
+ cov_computation_method=cov_computation_method,
107
+ random_state=random_state,
108
+ )
109
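# A usage sketch of `c_step`. Note that it forwards all five values
# returned by `_c_step` (including the log-determinant and the distances),
# one more pair than its Returns section originally listed; it lives in
# the private module sklearn.covariance._robust_covariance. Data below is
# an illustrative choice:
import numpy as np
from sklearn.covariance._robust_covariance import c_step

rng = np.random.RandomState(0)
X = np.r_[rng.randn(95, 2), 5.0 + rng.randn(5, 2)]  # 5 shifted outliers
location, covariance, det, support, dist = c_step(X, 60, random_state=0)
print(location)       # close to zero despite the planted outliers
print(support.sum())  # 60 observations retained in the support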
+
110
+
111
+ def _c_step(
112
+ X,
113
+ n_support,
114
+ random_state,
115
+ remaining_iterations=30,
116
+ initial_estimates=None,
117
+ verbose=False,
118
+ cov_computation_method=empirical_covariance,
119
+ ):
120
+ n_samples, n_features = X.shape
121
+ dist = np.inf
122
+
123
+ # Initialisation
124
+ if initial_estimates is None:
125
+ # compute initial robust estimates from a random subset
126
+ support_indices = random_state.permutation(n_samples)[:n_support]
127
+ else:
128
+ # get initial robust estimates from the function parameters
129
+ location = initial_estimates[0]
130
+ covariance = initial_estimates[1]
131
+ # run a special iteration for that case (to get an initial support_indices)
132
+ precision = linalg.pinvh(covariance)
133
+ X_centered = X - location
134
+ dist = (np.dot(X_centered, precision) * X_centered).sum(1)
135
+ # compute new estimates
136
+ support_indices = np.argpartition(dist, n_support - 1)[:n_support]
137
+
138
+ X_support = X[support_indices]
139
+ location = X_support.mean(0)
140
+ covariance = cov_computation_method(X_support)
141
+
142
+ # Iterative procedure for Minimum Covariance Determinant computation
143
+ det = fast_logdet(covariance)
144
+ # If the data already has singular covariance, calculate the precision,
145
+ # as the loop below will not be entered.
146
+ if np.isinf(det):
147
+ precision = linalg.pinvh(covariance)
148
+
149
+ previous_det = np.inf
150
+ while det < previous_det and remaining_iterations > 0 and not np.isinf(det):
151
+ # save old estimates values
152
+ previous_location = location
153
+ previous_covariance = covariance
154
+ previous_det = det
155
+ previous_support_indices = support_indices
156
+ # compute a new support_indices from the full data set mahalanobis distances
157
+ precision = linalg.pinvh(covariance)
158
+ X_centered = X - location
159
+ dist = (np.dot(X_centered, precision) * X_centered).sum(axis=1)
160
+ # compute new estimates
161
+ support_indices = np.argpartition(dist, n_support - 1)[:n_support]
162
+ X_support = X[support_indices]
163
+ location = X_support.mean(axis=0)
164
+ covariance = cov_computation_method(X_support)
165
+ det = fast_logdet(covariance)
166
+ # update remaining iterations for early stopping
167
+ remaining_iterations -= 1
168
+
169
+ previous_dist = dist
170
+ dist = (np.dot(X - location, precision) * (X - location)).sum(axis=1)
171
+ # Check if best fit already found (det => 0, logdet => -inf)
172
+ if np.isinf(det):
173
+ results = location, covariance, det, support_indices, dist
174
+ # Check convergence
175
+ if np.allclose(det, previous_det):
176
+ # c_step procedure converged
177
+ if verbose:
178
+ print(
179
+ "Optimal couple (location, covariance) found before"
180
+ " ending iterations (%d left)" % (remaining_iterations)
181
+ )
182
+ results = location, covariance, det, support_indices, dist
183
+ elif det > previous_det:
184
+ # determinant has increased (should not happen)
185
+ warnings.warn(
186
+ "Determinant has increased; this should not happen: "
187
+ "log(det) > log(previous_det) (%.15f > %.15f). "
188
+ "You may want to try with a higher value of "
189
+ "support_fraction (current value: %.3f)."
190
+ % (det, previous_det, n_support / n_samples),
191
+ RuntimeWarning,
192
+ )
193
+ results = (
194
+ previous_location,
195
+ previous_covariance,
196
+ previous_det,
197
+ previous_support_indices,
198
+ previous_dist,
199
+ )
200
+
201
+ # Check early stopping
202
+ if remaining_iterations == 0:
203
+ if verbose:
204
+ print("Maximum number of iterations reached")
205
+ results = location, covariance, det, support_indices, dist
206
+
207
+ location, covariance, det, support_indices, dist = results
208
+ # Convert from list of indices to boolean mask.
209
+ support = np.bincount(support_indices, minlength=n_samples).astype(bool)
210
+ return location, covariance, det, support, dist
211
+
212
+
213
+ def select_candidates(
214
+ X,
215
+ n_support,
216
+ n_trials,
217
+ select=1,
218
+ n_iter=30,
219
+ verbose=False,
220
+ cov_computation_method=empirical_covariance,
221
+ random_state=None,
222
+ ):
223
+ """Finds the best pure subset of observations to compute MCD from it.
224
+
225
+ The purpose of this function is to find the best sets of n_support
226
+ observations with respect to a minimization of their covariance
227
+ matrix determinant. Equivalently, it removes n_samples-n_support
228
+ observations to construct what we call a pure data set (i.e. not
229
+ containing outliers). The list of the observations of the pure
230
+ data set is referred to as the `support`.
231
+
232
+ Starting from a random support, the pure data set is found by the
233
+ c_step procedure introduced by Rousseeuw and Van Driessen in
234
+ [RV]_.
235
+
236
+ Parameters
237
+ ----------
238
+ X : array-like of shape (n_samples, n_features)
239
+ Data (sub)set in which we look for the n_support purest observations.
240
+
241
+ n_support : int
242
+ The number of samples the pure data set must contain.
243
+ This parameter must be in the range `[(n + p + 1)/2] < n_support < n`.
244
+
245
+ n_trials : int or tuple of shape (2,)
246
+ Number of different initial sets of observations from which to
247
+ run the algorithm. This parameter should be a strictly positive
248
+ integer.
249
+ Instead of giving a number of trials to perform, one can provide a
250
+ list of initial estimates that will be used to iteratively run
251
+ c_step procedures. In this case:
252
+ - n_trials[0]: array-like, shape (n_trials, n_features)
253
+ is the list of `n_trials` initial location estimates
254
+ - n_trials[1]: array-like, shape (n_trials, n_features, n_features)
255
+ is the list of `n_trials` initial covariances estimates
256
+
257
+ select : int, default=1
258
+ Number of best candidates results to return. This parameter must be
259
+ a strictly positive integer.
260
+
261
+ n_iter : int, default=30
262
+ Maximum number of iterations for the c_step procedure.
263
+ (2 is enough to be close to the final solution. "Never" exceeds 20).
264
+ This parameter must be a strictly positive integer.
265
+
266
+ verbose : bool, default=False
267
+ Control the output verbosity.
268
+
269
+ cov_computation_method : callable, \
270
+ default=:func:`sklearn.covariance.empirical_covariance`
271
+ The function which will be used to compute the covariance.
272
+ Must return an array of shape (n_features, n_features).
273
+
274
+ random_state : int, RandomState instance or None, default=None
275
+ Determines the pseudo random number generator for shuffling the data.
276
+ Pass an int for reproducible results across multiple function calls.
277
+ See :term:`Glossary <random_state>`.
278
+
279
+ See Also
280
+ --------
281
+ c_step
282
+
283
+ Returns
284
+ -------
285
+ best_locations : ndarray of shape (select, n_features)
286
+ The `select` location estimates computed from the `select` best
287
+ supports found in the data set (`X`).
288
+
289
+ best_covariances : ndarray of shape (select, n_features, n_features)
290
+ The `select` covariance estimates computed from the `select`
291
+ best supports found in the data set (`X`).
292
+
293
+ best_supports : ndarray of shape (select, n_samples)
294
+ The `select` best supports found in the data set (`X`).
+
+ best_ds : ndarray of shape (select, n_samples)
+ The squared Mahalanobis distances associated with the `select` best
+ supports.
295
+
296
+ References
297
+ ----------
298
+ .. [RV] A Fast Algorithm for the Minimum Covariance Determinant
299
+ Estimator, 1999, American Statistical Association and the American
300
+ Society for Quality, TECHNOMETRICS
301
+ """
302
+ random_state = check_random_state(random_state)
303
+
304
+ if isinstance(n_trials, Integral):
305
+ run_from_estimates = False
306
+ elif isinstance(n_trials, tuple):
307
+ run_from_estimates = True
308
+ estimates_list = n_trials
309
+ n_trials = estimates_list[0].shape[0]
310
+ else:
311
+ raise TypeError(
312
+ "Invalid 'n_trials' parameter, expected tuple or integer, got %s (%s)"
313
+ % (n_trials, type(n_trials))
314
+ )
315
+
316
+ # compute `n_trials` location and shape estimates candidates in the subset
317
+ all_estimates = []
318
+ if not run_from_estimates:
319
+ # perform `n_trials` computations from random initial supports
320
+ for j in range(n_trials):
321
+ all_estimates.append(
322
+ _c_step(
323
+ X,
324
+ n_support,
325
+ remaining_iterations=n_iter,
326
+ verbose=verbose,
327
+ cov_computation_method=cov_computation_method,
328
+ random_state=random_state,
329
+ )
330
+ )
331
+ else:
332
+ # perform computations from every given initial estimates
333
+ for j in range(n_trials):
334
+ initial_estimates = (estimates_list[0][j], estimates_list[1][j])
335
+ all_estimates.append(
336
+ _c_step(
337
+ X,
338
+ n_support,
339
+ remaining_iterations=n_iter,
340
+ initial_estimates=initial_estimates,
341
+ verbose=verbose,
342
+ cov_computation_method=cov_computation_method,
343
+ random_state=random_state,
344
+ )
345
+ )
346
+ all_locs_sub, all_covs_sub, all_dets_sub, all_supports_sub, all_ds_sub = zip(
347
+ *all_estimates
348
+ )
349
+ # find the `n_best` best results among the `n_trials` ones
350
+ index_best = np.argsort(all_dets_sub)[:select]
351
+ best_locations = np.asarray(all_locs_sub)[index_best]
352
+ best_covariances = np.asarray(all_covs_sub)[index_best]
353
+ best_supports = np.asarray(all_supports_sub)[index_best]
354
+ best_ds = np.asarray(all_ds_sub)[index_best]
355
+
356
+ return best_locations, best_covariances, best_supports, best_ds
357
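# A usage sketch of `select_candidates`: run several random c_step trials
# and keep the lowest-determinant results; FastMCD calls this on subsets
# and then on the pooled data. Private-module import again; the sample
# data and trial counts are illustrative:
import numpy as np
from sklearn.covariance._robust_covariance import select_candidates

rng = np.random.RandomState(0)
X = np.r_[rng.randn(95, 2), 5.0 + rng.randn(5, 2)]
locs, covs, supports, dists = select_candidates(
    X, n_support=60, n_trials=10, select=2, random_state=0
)
print(locs.shape)  # (2, 2): the two best location estimates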
+
358
+
359
+ def fast_mcd(
360
+ X,
361
+ support_fraction=None,
362
+ cov_computation_method=empirical_covariance,
363
+ random_state=None,
364
+ ):
365
+ """Estimate the Minimum Covariance Determinant matrix.
366
+
367
+ Read more in the :ref:`User Guide <robust_covariance>`.
368
+
369
+ Parameters
370
+ ----------
371
+ X : array-like of shape (n_samples, n_features)
372
+ The data matrix, with p features and n samples.
373
+
374
+ support_fraction : float, default=None
375
+ The proportion of points to be included in the support of the raw
376
+ MCD estimate. Default is `None`, which implies that the minimum
377
+ value of `support_fraction` will be used within the algorithm:
378
+ `(n_samples + n_features + 1) / (2 * n_samples)`. This parameter must be
379
+ in the range (0, 1).
380
+
381
+ cov_computation_method : callable, \
382
+ default=:func:`sklearn.covariance.empirical_covariance`
383
+ The function which will be used to compute the covariance.
384
+ Must return an array of shape (n_features, n_features).
385
+
386
+ random_state : int, RandomState instance or None, default=None
387
+ Determines the pseudo random number generator for shuffling the data.
388
+ Pass an int for reproducible results across multiple function calls.
389
+ See :term:`Glossary <random_state>`.
390
+
391
+ Returns
392
+ -------
393
+ location : ndarray of shape (n_features,)
394
+ Robust location of the data.
395
+
396
+ covariance : ndarray of shape (n_features, n_features)
397
+ Robust covariance of the features.
398
+
399
+ support : ndarray of shape (n_samples,), dtype=bool
400
+ A mask of the observations that have been used to compute
401
+ the robust location and covariance estimates of the data set.
402
+
403
+ Notes
404
+ -----
405
+ The FastMCD algorithm has been introduced by Rousseuw and Van Driessen
406
+ in "A Fast Algorithm for the Minimum Covariance Determinant Estimator,
407
+ 1999, American Statistical Association and the American Society
408
+ for Quality, TECHNOMETRICS".
409
+ The principle is to compute robust estimates and random subsets before
410
+ pooling them into a larger subsets, and finally into the full data set.
411
+ Depending on the size of the initial sample, we have one, two or three
412
+ such computation levels.
413
+
414
+ Note that only raw estimates are returned. If one is interested in
415
+ the correction and reweighting steps described in [RouseeuwVan]_,
416
+ see the MinCovDet object.
417
+
418
+ References
419
+ ----------
420
+
421
+ .. [RouseeuwVan] A Fast Algorithm for the Minimum Covariance
422
+ Determinant Estimator, 1999, American Statistical Association
423
+ and the American Society for Quality, TECHNOMETRICS
424
+
425
+ .. [Butler1993] R. W. Butler, P. L. Davies and M. Jhun,
426
+ Asymptotics For The Minimum Covariance Determinant Estimator,
427
+ The Annals of Statistics, 1993, Vol. 21, No. 3, 1385-1400
428
+ """
429
+    random_state = check_random_state(random_state)
+
+    X = check_array(X, ensure_min_samples=2, estimator="fast_mcd")
+    n_samples, n_features = X.shape
+
+    # minimum breakdown value
+    if support_fraction is None:
+        n_support = int(np.ceil(0.5 * (n_samples + n_features + 1)))
+    else:
+        n_support = int(support_fraction * n_samples)
+
+    # 1-dimensional case quick computation
+    # (Rousseeuw, P. J. and Leroy, A. M. (2005) References, in Robust
+    # Regression and Outlier Detection, John Wiley & Sons, chapter 4)
+    if n_features == 1:
+        if n_support < n_samples:
+            # find the sample's shortest halves
+            X_sorted = np.sort(np.ravel(X))
+            diff = X_sorted[n_support:] - X_sorted[: (n_samples - n_support)]
+            halves_start = np.where(diff == np.min(diff))[0]
+            # take the middle points' mean to get the robust location estimate
+            location = (
+                0.5
+                * (X_sorted[n_support + halves_start] + X_sorted[halves_start]).mean()
+            )
+            support = np.zeros(n_samples, dtype=bool)
+            X_centered = X - location
+            support[np.argsort(np.abs(X_centered), 0)[:n_support]] = True
+            covariance = np.asarray([[np.var(X[support])]])
+            location = np.array([location])
+            # get precision matrix in an optimized way
+            precision = linalg.pinvh(covariance)
+            dist = (np.dot(X_centered, precision) * (X_centered)).sum(axis=1)
+        else:
+            support = np.ones(n_samples, dtype=bool)
+            covariance = np.asarray([[np.var(X)]])
+            location = np.asarray([[np.mean(X)]]).ravel()
+            X_centered = X - location
+            # get precision matrix in an optimized way
+            precision = linalg.pinvh(covariance)
+            dist = (np.dot(X_centered, precision) * (X_centered)).sum(axis=1)
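+    # Worked illustration of the shortest-half search above (toy numbers,
+    # assumed for exposition): with X_sorted = [0, 1, 2, 3, 100] and
+    # n_support = 3, diff = [3 - 0, 100 - 1] = [3, 99], so the tightest
+    # window starts at index 0 and location = 0.5 * (3 + 0) = 1.5; the
+    # outlier at 100 never enters the support.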
+    # Starting FastMCD algorithm for p-dimensional case
+    if (n_samples > 500) and (n_features > 1):
+        # 1. Find candidate supports on subsets
+        # a. split the set into subsets of size ~ 300
+        n_subsets = n_samples // 300
+        n_samples_subsets = n_samples // n_subsets
+        samples_shuffle = random_state.permutation(n_samples)
+        h_subset = int(np.ceil(n_samples_subsets * (n_support / float(n_samples))))
+        # b. perform a total of 500 trials
+        n_trials_tot = 500
+        # c. select the 10 best (location, covariance) pairs for each subset
+        n_best_sub = 10
+        n_trials = max(10, n_trials_tot // n_subsets)
+        n_best_tot = n_subsets * n_best_sub
+        all_best_locations = np.zeros((n_best_tot, n_features))
+        try:
+            all_best_covariances = np.zeros((n_best_tot, n_features, n_features))
+        except MemoryError:
+            # The above is too big. Let's try with something much smaller
+            # (and less optimal)
+            n_best_tot = 10
+            all_best_covariances = np.zeros((n_best_tot, n_features, n_features))
+            n_best_sub = 2
+        for i in range(n_subsets):
+            low_bound = i * n_samples_subsets
+            high_bound = low_bound + n_samples_subsets
+            current_subset = X[samples_shuffle[low_bound:high_bound]]
+            best_locations_sub, best_covariances_sub, _, _ = select_candidates(
+                current_subset,
+                h_subset,
+                n_trials,
+                select=n_best_sub,
+                n_iter=2,
+                cov_computation_method=cov_computation_method,
+                random_state=random_state,
+            )
+            subset_slice = np.arange(i * n_best_sub, (i + 1) * n_best_sub)
+            all_best_locations[subset_slice] = best_locations_sub
+            all_best_covariances[subset_slice] = best_covariances_sub
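+        # Worked example of the subsampling arithmetic above (an
+        # illustration, not extra logic): with n_samples = 1000,
+        # n_subsets = 1000 // 300 = 3, n_samples_subsets = 333,
+        # n_trials = max(10, 500 // 3) = 166 trials per subset, and
+        # 3 * 10 = 30 candidate (location, covariance) pairs are kept.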
+        # 2. Pool the candidate supports into a merged set
+        # (possibly the full dataset)
+        n_samples_merged = min(1500, n_samples)
+        h_merged = int(np.ceil(n_samples_merged * (n_support / float(n_samples))))
+        if n_samples > 1500:
+            n_best_merged = 10
+        else:
+            n_best_merged = 1
+        # find the best couples (location, covariance) on the merged set
+        selection = random_state.permutation(n_samples)[:n_samples_merged]
+        locations_merged, covariances_merged, supports_merged, d = select_candidates(
+            X[selection],
+            h_merged,
+            n_trials=(all_best_locations, all_best_covariances),
+            select=n_best_merged,
+            cov_computation_method=cov_computation_method,
+            random_state=random_state,
+        )
+        # 3. Finally get the overall best (location, covariance) couple
+        if n_samples < 1500:
+            # directly get the best couple (location, covariance)
+            location = locations_merged[0]
+            covariance = covariances_merged[0]
+            support = np.zeros(n_samples, dtype=bool)
+            dist = np.zeros(n_samples)
+            support[selection] = supports_merged[0]
+            dist[selection] = d[0]
+        else:
+            # select the best couple on the full dataset
+            locations_full, covariances_full, supports_full, d = select_candidates(
+                X,
+                n_support,
+                n_trials=(locations_merged, covariances_merged),
+                select=1,
+                cov_computation_method=cov_computation_method,
+                random_state=random_state,
+            )
+            location = locations_full[0]
+            covariance = covariances_full[0]
+            support = supports_full[0]
+            dist = d[0]
+    elif n_features > 1:
+        # 1. Find the 10 best couples (location, covariance)
+        # using only two c-step iterations each
+        n_trials = 30
+        n_best = 10
+        locations_best, covariances_best, _, _ = select_candidates(
+            X,
+            n_support,
+            n_trials=n_trials,
+            select=n_best,
+            n_iter=2,
+            cov_computation_method=cov_computation_method,
+            random_state=random_state,
+        )
+        # 2. Select the best couple on the full dataset amongst the 10
+        locations_full, covariances_full, supports_full, d = select_candidates(
+            X,
+            n_support,
+            n_trials=(locations_best, covariances_best),
+            select=1,
+            cov_computation_method=cov_computation_method,
+            random_state=random_state,
+        )
+        location = locations_full[0]
+        covariance = covariances_full[0]
+        support = supports_full[0]
+        dist = d[0]
+
+    return location, covariance, support, dist
+
+
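+# A minimal usage sketch for `fast_mcd` (the toy data is an assumption;
+# see the MinCovDet docstring below for a doctest-checked example):
+#
+#     rng = np.random.RandomState(0)
+#     X = rng.multivariate_normal([0, 0], [[0.8, 0.3], [0.3, 0.4]], size=500)
+#     location, covariance, support, dist = fast_mcd(X, random_state=0)
+#     # location.shape == (2,), covariance.shape == (2, 2),
+#     # support.sum() == ceil(0.5 * (500 + 2 + 1)) == 252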
+class MinCovDet(EmpiricalCovariance):
+    """Minimum Covariance Determinant (MCD): robust estimator of covariance.
+
+    The Minimum Covariance Determinant covariance estimator is to be applied
+    on Gaussian-distributed data, but could still be relevant on data
+    drawn from a unimodal, symmetric distribution. It is not meant to be used
+    with multi-modal data (the algorithm used to fit a MinCovDet object is
+    likely to fail in such a case).
+    One should consider projection pursuit methods to deal with multi-modal
+    datasets.
+
+    Read more in the :ref:`User Guide <robust_covariance>`.
+
+    Parameters
+    ----------
+    store_precision : bool, default=True
+        Specify if the estimated precision is stored.
+
+    assume_centered : bool, default=False
+        If True, the support of the robust location and the covariance
+        estimates is computed, and a covariance estimate is recomputed from
+        it, without centering the data.
+        Useful to work with data whose mean is close to, but not exactly,
+        zero.
+        If False, the robust location and covariance are directly computed
+        with the FastMCD algorithm without additional treatment.
+
+    support_fraction : float, default=None
+        The proportion of points to be included in the support of the raw
+        MCD estimate. Default is None, which implies that the minimum
+        value of support_fraction will be used within the algorithm:
+        `(n_samples + n_features + 1) / (2 * n_samples)`. The parameter must
+        be in the range (0, 1].
+
+    random_state : int, RandomState instance or None, default=None
+        Determines the pseudo random number generator for shuffling the data.
+        Pass an int for reproducible results across multiple function calls.
+        See :term:`Glossary <random_state>`.
+
+    Attributes
+    ----------
+    raw_location_ : ndarray of shape (n_features,)
+        The raw robust estimated location before correction and re-weighting.
+
+    raw_covariance_ : ndarray of shape (n_features, n_features)
+        The raw robust estimated covariance before correction and re-weighting.
+
+    raw_support_ : ndarray of shape (n_samples,)
+        A mask of the observations that have been used to compute
+        the raw robust estimates of location and shape, before correction
+        and re-weighting.
+
+    location_ : ndarray of shape (n_features,)
+        Estimated robust location.
+
+    covariance_ : ndarray of shape (n_features, n_features)
+        Estimated robust covariance matrix.
+
+    precision_ : ndarray of shape (n_features, n_features)
+        Estimated pseudo inverse matrix.
+        (stored only if store_precision is True)
+
+    support_ : ndarray of shape (n_samples,)
+        A mask of the observations that have been used to compute
+        the robust estimates of location and shape.
+
+    dist_ : ndarray of shape (n_samples,)
+        Mahalanobis distances of the observations in the training set
+        (on which :meth:`fit` is called).
+
+    n_features_in_ : int
+        Number of features seen during :term:`fit`.
+
+        .. versionadded:: 0.24
+
+    feature_names_in_ : ndarray of shape (`n_features_in_`,)
+        Names of features seen during :term:`fit`. Defined only when `X`
+        has feature names that are all strings.
+
+        .. versionadded:: 1.0
+
+    See Also
+    --------
+    EllipticEnvelope : An object for detecting outliers in
+        a Gaussian distributed dataset.
+    EmpiricalCovariance : Maximum likelihood covariance estimator.
+    GraphicalLasso : Sparse inverse covariance estimation
+        with an l1-penalized estimator.
+    GraphicalLassoCV : Sparse inverse covariance with cross-validated
+        choice of the l1 penalty.
+    LedoitWolf : LedoitWolf Estimator.
+    OAS : Oracle Approximating Shrinkage Estimator.
+    ShrunkCovariance : Covariance estimator with shrinkage.
+
+    References
+    ----------
+
+    .. [Rouseeuw1984] P. J. Rousseeuw. Least median of squares regression.
+        J. Am Stat Ass, 79:871, 1984.
+    .. [Rousseeuw] A Fast Algorithm for the Minimum Covariance Determinant
+        Estimator, 1999, American Statistical Association and the American
+        Society for Quality, TECHNOMETRICS
+    .. [ButlerDavies] R. W. Butler, P. L. Davies and M. Jhun,
+        Asymptotics For The Minimum Covariance Determinant Estimator,
+        The Annals of Statistics, 1993, Vol. 21, No. 3, 1385-1400
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.covariance import MinCovDet
+    >>> real_cov = np.array([[.8, .3],
+    ...                      [.3, .4]])
+    >>> rng = np.random.RandomState(0)
+    >>> X = rng.multivariate_normal(mean=[0, 0],
+    ...                             cov=real_cov,
+    ...                             size=500)
+    >>> cov = MinCovDet(random_state=0).fit(X)
+    >>> cov.covariance_
+    array([[0.7411..., 0.2535...],
+           [0.2535..., 0.3053...]])
+    >>> cov.location_
+    array([0.0813... , 0.0427...])
+    """
+
706
+ _parameter_constraints: dict = {
707
+ **EmpiricalCovariance._parameter_constraints,
708
+ "support_fraction": [Interval(Real, 0, 1, closed="right"), None],
709
+ "random_state": ["random_state"],
710
+ }
711
+ _nonrobust_covariance = staticmethod(empirical_covariance)
712
+
713
+ def __init__(
714
+ self,
715
+ *,
716
+ store_precision=True,
717
+ assume_centered=False,
718
+ support_fraction=None,
719
+ random_state=None,
720
+ ):
721
+ self.store_precision = store_precision
722
+ self.assume_centered = assume_centered
723
+ self.support_fraction = support_fraction
724
+ self.random_state = random_state
725
+
726
+ @_fit_context(prefer_skip_nested_validation=True)
727
+ def fit(self, X, y=None):
728
+ """Fit a Minimum Covariance Determinant with the FastMCD algorithm.
729
+
730
+ Parameters
731
+ ----------
732
+ X : array-like of shape (n_samples, n_features)
733
+ Training data, where `n_samples` is the number of samples
734
+ and `n_features` is the number of features.
735
+
736
+ y : Ignored
737
+ Not used, present for API consistency by convention.
738
+
739
+ Returns
740
+ -------
741
+ self : object
742
+ Returns the instance itself.
743
+ """
744
+ X = validate_data(self, X, ensure_min_samples=2, estimator="MinCovDet")
745
+ random_state = check_random_state(self.random_state)
746
+ n_samples, n_features = X.shape
747
+ # check that the empirical covariance is full rank
748
+ if (linalg.svdvals(np.dot(X.T, X)) > 1e-8).sum() != n_features:
749
+ warnings.warn(
750
+ "The covariance matrix associated to your dataset is not full rank"
751
+ )
752
+ # compute and store raw estimates
753
+ raw_location, raw_covariance, raw_support, raw_dist = fast_mcd(
754
+ X,
755
+ support_fraction=self.support_fraction,
756
+ cov_computation_method=self._nonrobust_covariance,
757
+ random_state=random_state,
758
+ )
759
+ if self.assume_centered:
760
+ raw_location = np.zeros(n_features)
761
+ raw_covariance = self._nonrobust_covariance(
762
+ X[raw_support], assume_centered=True
763
+ )
764
+ # get precision matrix in an optimized way
765
+ precision = linalg.pinvh(raw_covariance)
766
+ raw_dist = np.sum(np.dot(X, precision) * X, 1)
767
+ self.raw_location_ = raw_location
768
+ self.raw_covariance_ = raw_covariance
769
+ self.raw_support_ = raw_support
770
+ self.location_ = raw_location
771
+ self.support_ = raw_support
772
+ self.dist_ = raw_dist
773
+ # obtain consistency at normal models
774
+ self.correct_covariance(X)
775
+ # re-weight estimator
776
+ self.reweight_covariance(X)
777
+
778
+ return self
779
+
780
+    def correct_covariance(self, data):
+        """Apply a correction to raw Minimum Covariance Determinant estimates.
+
+        Correction using the empirical correction factor suggested
+        by Rousseeuw and Van Driessen in [RVD]_.
+
+        Parameters
+        ----------
+        data : array-like of shape (n_samples, n_features)
+            The data matrix, with p features and n samples.
+            The data set must be the one which was used to compute
+            the raw estimates.
+
+        Returns
+        -------
+        covariance_corrected : ndarray of shape (n_features, n_features)
+            Corrected robust covariance estimate.
+
+        References
+        ----------
+
+        .. [RVD] A Fast Algorithm for the Minimum Covariance
+            Determinant Estimator, 1999, American Statistical Association
+            and the American Society for Quality, TECHNOMETRICS
+        """
+
+        # Check that the covariance of the support data is not equal to 0.
+        # Otherwise self.dist_ = 0 and thus correction = 0.
+        n_samples = len(self.dist_)
+        n_support = np.sum(self.support_)
+        if n_support < n_samples and np.allclose(self.raw_covariance_, 0):
+            raise ValueError(
+                "The covariance matrix of the support data "
+                "is equal to 0, try to increase support_fraction"
+            )
+        correction = np.median(self.dist_) / chi2(data.shape[1]).isf(0.5)
+        covariance_corrected = self.raw_covariance_ * correction
+        self.dist_ /= correction
+        return covariance_corrected
+
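+    # Sketch of the consistency step above (values verifiable with
+    # scipy.stats): the raw covariance is rescaled by
+    # median(self.dist_) / chi2(n_features).isf(0.5), the denominator being
+    # the chi-squared median; e.g. for n_features=2 it is 2 * ln(2) ~ 1.386.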
+    def reweight_covariance(self, data):
+        """Re-weight raw Minimum Covariance Determinant estimates.
+
+        Re-weight observations using Rousseeuw's method (equivalent to
+        deleting outlying observations from the data set before
+        computing location and covariance estimates) described
+        in [RVDriessen]_.
+
+        Parameters
+        ----------
+        data : array-like of shape (n_samples, n_features)
+            The data matrix, with p features and n samples.
+            The data set must be the one which was used to compute
+            the raw estimates.
+
+        Returns
+        -------
+        location_reweighted : ndarray of shape (n_features,)
+            Re-weighted robust location estimate.
+
+        covariance_reweighted : ndarray of shape (n_features, n_features)
+            Re-weighted robust covariance estimate.
+
+        support_reweighted : ndarray of shape (n_samples,), dtype=bool
+            A mask of the observations that have been used to compute
+            the re-weighted robust location and covariance estimates.
+
+        References
+        ----------
+
+        .. [RVDriessen] A Fast Algorithm for the Minimum Covariance
+            Determinant Estimator, 1999, American Statistical Association
+            and the American Society for Quality, TECHNOMETRICS
+        """
+        n_samples, n_features = data.shape
+        mask = self.dist_ < chi2(n_features).isf(0.025)
+        if self.assume_centered:
+            location_reweighted = np.zeros(n_features)
+        else:
+            location_reweighted = data[mask].mean(0)
+        covariance_reweighted = self._nonrobust_covariance(
+            data[mask], assume_centered=self.assume_centered
+        )
+        support_reweighted = np.zeros(n_samples, dtype=bool)
+        support_reweighted[mask] = True
+        self._set_covariance(covariance_reweighted)
+        self.location_ = location_reweighted
+        self.support_ = support_reweighted
+        X_centered = data - self.location_
+        self.dist_ = np.sum(np.dot(X_centered, self.get_precision()) * X_centered, 1)
+        return location_reweighted, covariance_reweighted, support_reweighted
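+    # Note on the cutoff above: observations whose squared Mahalanobis
+    # distance exceeds chi2(n_features).isf(0.025) (the 97.5% quantile,
+    # ~7.38 for n_features=2) are excluded before re-estimating.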
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/_shrunk_covariance.py ADDED
@@ -0,0 +1,820 @@
+"""
+Covariance estimators using shrinkage.
+
+Shrinkage corresponds to regularising `cov` using a convex combination:
+shrunk_cov = (1-shrinkage)*cov + shrinkage*structured_estimate.
+
+"""
+
+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
+import warnings
+from numbers import Integral, Real
+
+import numpy as np
+
+from ..base import _fit_context
+from ..utils import check_array
+from ..utils._param_validation import Interval, validate_params
+from ..utils.validation import validate_data
+from . import EmpiricalCovariance, empirical_covariance
+
+
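+# Tiny worked form of the convex combination above (illustrative numbers):
+# with cov = [[1., 0.], [0., 3.]], mu = trace(cov) / 2 = 2.0 and
+# shrinkage = 0.5, the shrunk estimate is
+# 0.5 * cov + 0.5 * 2.0 * np.eye(2) = [[1.5, 0.], [0., 2.5]].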
+def _ledoit_wolf(X, *, assume_centered, block_size):
+    """Estimate the shrunk Ledoit-Wolf covariance matrix."""
+    # for only one feature, the result is the same whatever the shrinkage
+    if len(X.shape) == 2 and X.shape[1] == 1:
+        if not assume_centered:
+            X = X - X.mean()
+        return np.atleast_2d((X**2).mean()), 0.0
+    n_features = X.shape[1]
+
+    # get Ledoit-Wolf shrinkage
+    shrinkage = ledoit_wolf_shrinkage(
+        X, assume_centered=assume_centered, block_size=block_size
+    )
+    emp_cov = empirical_covariance(X, assume_centered=assume_centered)
+    mu = np.sum(np.trace(emp_cov)) / n_features
+    shrunk_cov = (1.0 - shrinkage) * emp_cov
+    shrunk_cov.flat[:: n_features + 1] += shrinkage * mu
+
+    return shrunk_cov, shrinkage
+
+
+def _oas(X, *, assume_centered=False):
+    """Estimate covariance with the Oracle Approximating Shrinkage algorithm.
+
+    The formulation is based on [1]_.
+    [1] "Shrinkage algorithms for MMSE covariance estimation.",
+        Chen, Y., Wiesel, A., Eldar, Y. C., & Hero, A. O.
+        IEEE Transactions on Signal Processing, 58(10), 5016-5029, 2010.
+        https://arxiv.org/pdf/0907.4698.pdf
+    """
+    if len(X.shape) == 2 and X.shape[1] == 1:
+        # for only one feature, the result is the same whatever the shrinkage
+        if not assume_centered:
+            X = X - X.mean()
+        return np.atleast_2d((X**2).mean()), 0.0
+
+    n_samples, n_features = X.shape
+
+    emp_cov = empirical_covariance(X, assume_centered=assume_centered)
+
+    # The shrinkage is defined as:
+    #   shrinkage = min(
+    #       (trace(S @ S.T) + trace(S)**2)
+    #       / ((n + 1) * (trace(S @ S.T) - trace(S)**2 / p)),
+    #       1,
+    #   )
+    # where n and p are n_samples and n_features, respectively (cf. Eq. 23 in [1]).
+    # The factor 2 / p is omitted since it does not impact the value of the estimator
+    # for large p.
+
+    # Instead of computing trace(S @ S.T), we can compute the average of the squared
+    # elements of S, which is equal to trace(S @ S.T) / p**2.
+    # See the definition of the Frobenius norm:
+    # https://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm
+    alpha = np.mean(emp_cov**2)
+    mu = np.trace(emp_cov) / n_features
+    mu_squared = mu**2
+
+    # The factor 1 / p**2 will cancel out since it is in both the numerator and
+    # denominator
+    num = alpha + mu_squared
+    den = (n_samples + 1) * (alpha - mu_squared / n_features)
+    shrinkage = 1.0 if den == 0 else min(num / den, 1.0)
+
+    # The shrunk covariance is defined as:
+    # (1 - shrinkage) * S + shrinkage * F (cf. Eq. 4 in [1])
+    # where S is the empirical covariance and F is the shrinkage target defined as
+    # F = trace(S) / n_features * np.identity(n_features) (cf. Eq. 3 in [1])
+    shrunk_cov = (1.0 - shrinkage) * emp_cov
+    shrunk_cov.flat[:: n_features + 1] += shrinkage * mu
+
+    return shrunk_cov, shrinkage
+
+
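+# Numeric sketch of the OAS shrinkage above (assumed toy values): with
+# S = np.eye(2) and n_samples = 9, alpha = mean(S**2) = 0.5,
+# mu = trace(S) / 2 = 1.0, so num = 0.5 + 1.0 = 1.5 and
+# den = (9 + 1) * (0.5 - 1.0 / 2) = 0.0; the `den == 0` guard then yields
+# shrinkage = 1.0 (the identity target already matches S exactly).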
+###############################################################################
+# Public API
+# ShrunkCovariance estimator
+
+
+@validate_params(
+    {
+        "emp_cov": ["array-like"],
+        "shrinkage": [Interval(Real, 0, 1, closed="both")],
+    },
+    prefer_skip_nested_validation=True,
+)
+def shrunk_covariance(emp_cov, shrinkage=0.1):
+    """Calculate covariance matrices shrunk on the diagonal.
+
+    Read more in the :ref:`User Guide <shrunk_covariance>`.
+
+    Parameters
+    ----------
+    emp_cov : array-like of shape (..., n_features, n_features)
+        Covariance matrices to be shrunk, at least 2D ndarray.
+
+    shrinkage : float, default=0.1
+        Coefficient in the convex combination used for the computation
+        of the shrunk estimate. Range is [0, 1].
+
+    Returns
+    -------
+    shrunk_cov : ndarray of shape (..., n_features, n_features)
+        Shrunk covariance matrices.
+
+    Notes
+    -----
+    The regularized (shrunk) covariance is given by::
+
+        (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)
+
+    where `mu = trace(cov) / n_features`.
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.covariance import empirical_covariance, shrunk_covariance
+    >>> real_cov = np.array([[.8, .3], [.3, .4]])
+    >>> rng = np.random.RandomState(0)
+    >>> X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)
+    >>> shrunk_covariance(empirical_covariance(X))
+    array([[0.73..., 0.25...],
+           [0.25..., 0.41...]])
+    """
+    emp_cov = check_array(emp_cov, allow_nd=True)
+    n_features = emp_cov.shape[-1]
+
+    shrunk_cov = (1.0 - shrinkage) * emp_cov
+    mu = np.trace(emp_cov, axis1=-2, axis2=-1) / n_features
+    mu = np.expand_dims(mu, axis=tuple(range(mu.ndim, emp_cov.ndim)))
+    shrunk_cov += shrinkage * mu * np.eye(n_features)
+
+    return shrunk_cov
+
+
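+# `shrunk_covariance` also broadcasts over stacks of matrices thanks to the
+# `axis1=-2, axis2=-1` trace and the `expand_dims` above. A minimal sketch
+# (shapes assumed for illustration):
+#
+#     covs = np.repeat(np.eye(3)[np.newaxis, ...], 5, axis=0)  # (5, 3, 3)
+#     shrunk_covariance(covs, shrinkage=0.3).shape  # -> (5, 3, 3)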
+class ShrunkCovariance(EmpiricalCovariance):
+    """Covariance estimator with shrinkage.
+
+    Read more in the :ref:`User Guide <shrunk_covariance>`.
+
+    Parameters
+    ----------
+    store_precision : bool, default=True
+        Specify if the estimated precision is stored.
+
+    assume_centered : bool, default=False
+        If True, data will not be centered before computation.
+        Useful when working with data whose mean is almost, but not exactly,
+        zero.
+        If False, data will be centered before computation.
+
+    shrinkage : float, default=0.1
+        Coefficient in the convex combination used for the computation
+        of the shrunk estimate. Range is [0, 1].
+
+    Attributes
+    ----------
+    covariance_ : ndarray of shape (n_features, n_features)
+        Estimated covariance matrix.
+
+    location_ : ndarray of shape (n_features,)
+        Estimated location, i.e. the estimated mean.
+
+    precision_ : ndarray of shape (n_features, n_features)
+        Estimated pseudo inverse matrix.
+        (stored only if store_precision is True)
+
+    n_features_in_ : int
+        Number of features seen during :term:`fit`.
+
+        .. versionadded:: 0.24
+
+    feature_names_in_ : ndarray of shape (`n_features_in_`,)
+        Names of features seen during :term:`fit`. Defined only when `X`
+        has feature names that are all strings.
+
+        .. versionadded:: 1.0
+
+    See Also
+    --------
+    EllipticEnvelope : An object for detecting outliers in
+        a Gaussian distributed dataset.
+    EmpiricalCovariance : Maximum likelihood covariance estimator.
+    GraphicalLasso : Sparse inverse covariance estimation
+        with an l1-penalized estimator.
+    GraphicalLassoCV : Sparse inverse covariance with cross-validated
+        choice of the l1 penalty.
+    LedoitWolf : LedoitWolf Estimator.
+    MinCovDet : Minimum Covariance Determinant
+        (robust estimator of covariance).
+    OAS : Oracle Approximating Shrinkage Estimator.
+
+    Notes
+    -----
+    The regularized covariance is given by:
+
+        (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)
+
+    where mu = trace(cov) / n_features
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.covariance import ShrunkCovariance
+    >>> real_cov = np.array([[.8, .3],
+    ...                      [.3, .4]])
+    >>> rng = np.random.RandomState(0)
+    >>> X = rng.multivariate_normal(mean=[0, 0],
+    ...                             cov=real_cov,
+    ...                             size=500)
+    >>> cov = ShrunkCovariance().fit(X)
+    >>> cov.covariance_
+    array([[0.7387..., 0.2536...],
+           [0.2536..., 0.4110...]])
+    >>> cov.location_
+    array([0.0622..., 0.0193...])
+    """
+
+    _parameter_constraints: dict = {
+        **EmpiricalCovariance._parameter_constraints,
+        "shrinkage": [Interval(Real, 0, 1, closed="both")],
+    }
+
+    def __init__(self, *, store_precision=True, assume_centered=False, shrinkage=0.1):
+        super().__init__(
+            store_precision=store_precision, assume_centered=assume_centered
+        )
+        self.shrinkage = shrinkage
+
+    @_fit_context(prefer_skip_nested_validation=True)
+    def fit(self, X, y=None):
+        """Fit the shrunk covariance model to X.
+
+        Parameters
+        ----------
+        X : array-like of shape (n_samples, n_features)
+            Training data, where `n_samples` is the number of samples
+            and `n_features` is the number of features.
+
+        y : Ignored
+            Not used, present for API consistency by convention.
+
+        Returns
+        -------
+        self : object
+            Returns the instance itself.
+        """
+        X = validate_data(self, X)
+        # Not calling the parent object to fit, to avoid a potential
+        # matrix inversion when setting the precision
+        if self.assume_centered:
+            self.location_ = np.zeros(X.shape[1])
+        else:
+            self.location_ = X.mean(0)
+        covariance = empirical_covariance(X, assume_centered=self.assume_centered)
+        covariance = shrunk_covariance(covariance, self.shrinkage)
+        self._set_covariance(covariance)
+
+        return self
+
+
+# Ledoit-Wolf estimator
+
+
+@validate_params(
+    {
+        "X": ["array-like"],
+        "assume_centered": ["boolean"],
+        "block_size": [Interval(Integral, 1, None, closed="left")],
+    },
+    prefer_skip_nested_validation=True,
+)
+def ledoit_wolf_shrinkage(X, assume_centered=False, block_size=1000):
+    """Estimate the shrunk Ledoit-Wolf covariance matrix.
+
+    Read more in the :ref:`User Guide <shrunk_covariance>`.
+
+    Parameters
+    ----------
+    X : array-like of shape (n_samples, n_features)
+        Data from which to compute the Ledoit-Wolf shrinkage coefficient.
+
+    assume_centered : bool, default=False
+        If True, data will not be centered before computation.
+        Useful to work with data whose mean is close to, but not exactly,
+        zero.
+        If False, data will be centered before computation.
+
+    block_size : int, default=1000
+        Size of blocks into which the covariance matrix will be split.
+
+    Returns
+    -------
+    shrinkage : float
+        Coefficient in the convex combination used for the computation
+        of the shrunk estimate.
+
+    Notes
+    -----
+    The regularized (shrunk) covariance is:
+
+        (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)
+
+    where mu = trace(cov) / n_features
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.covariance import ledoit_wolf_shrinkage
+    >>> real_cov = np.array([[.4, .2], [.2, .8]])
+    >>> rng = np.random.RandomState(0)
+    >>> X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=50)
+    >>> shrinkage_coefficient = ledoit_wolf_shrinkage(X)
+    >>> shrinkage_coefficient
+    np.float64(0.23...)
+    """
+    X = check_array(X)
+    # for only one feature, the result is the same whatever the shrinkage
+    if len(X.shape) == 2 and X.shape[1] == 1:
+        return 0.0
+    if X.ndim == 1:
+        X = np.reshape(X, (1, -1))
+
+    if X.shape[0] == 1:
+        warnings.warn(
+            "Only one sample available. You may want to reshape your data array"
+        )
+    n_samples, n_features = X.shape
+
+    # optionally center data
+    if not assume_centered:
+        X = X - X.mean(0)
+
+    # A non-blocked version of the computation is present in the tests
+    # in tests/test_covariance.py
+
+    # number of blocks to split the covariance matrix into
+    n_splits = int(n_features / block_size)
+    X2 = X**2
+    emp_cov_trace = np.sum(X2, axis=0) / n_samples
+    mu = np.sum(emp_cov_trace) / n_features
+    beta_ = 0.0  # sum of the coefficients of <X2.T, X2>
+    delta_ = 0.0  # sum of the *squared* coefficients of <X.T, X>
+    # starting block computation
+    for i in range(n_splits):
+        for j in range(n_splits):
+            rows = slice(block_size * i, block_size * (i + 1))
+            cols = slice(block_size * j, block_size * (j + 1))
+            beta_ += np.sum(np.dot(X2.T[rows], X2[:, cols]))
+            delta_ += np.sum(np.dot(X.T[rows], X[:, cols]) ** 2)
+        rows = slice(block_size * i, block_size * (i + 1))
+        beta_ += np.sum(np.dot(X2.T[rows], X2[:, block_size * n_splits :]))
+        delta_ += np.sum(np.dot(X.T[rows], X[:, block_size * n_splits :]) ** 2)
+    for j in range(n_splits):
+        cols = slice(block_size * j, block_size * (j + 1))
+        beta_ += np.sum(np.dot(X2.T[block_size * n_splits :], X2[:, cols]))
+        delta_ += np.sum(np.dot(X.T[block_size * n_splits :], X[:, cols]) ** 2)
+    delta_ += np.sum(
+        np.dot(X.T[block_size * n_splits :], X[:, block_size * n_splits :]) ** 2
+    )
+    delta_ /= n_samples**2
+    beta_ += np.sum(
+        np.dot(X2.T[block_size * n_splits :], X2[:, block_size * n_splits :])
+    )
+    # use delta_ to compute beta
+    beta = 1.0 / (n_features * n_samples) * (beta_ / n_samples - delta_)
+    # delta is the sum of the squared coefficients of (<X.T,X> - mu*Id) / p
+    delta = delta_ - 2.0 * mu * emp_cov_trace.sum() + n_features * mu**2
+    delta /= n_features
+    # get final beta as the min between beta and delta
+    # We do this to prevent shrinking more than "1", which would invert
+    # the value of covariances
+    beta = min(beta, delta)
+    # finally get shrinkage
+    shrinkage = 0 if beta == 0 else beta / delta
+    return shrinkage
+
+
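+# Block-splitting arithmetic for the loops above (illustration only): with
+# n_features = 2500 and block_size = 1000, n_splits = int(2500 / 1000) = 2,
+# so the Frobenius sums are accumulated over a 2 x 2 grid of 1000 x 1000
+# blocks, plus the 500-wide remainder row and column handled by the
+# trailing statements.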
+@validate_params(
+    {"X": ["array-like"]},
+    prefer_skip_nested_validation=False,
+)
+def ledoit_wolf(X, *, assume_centered=False, block_size=1000):
+    """Estimate the shrunk Ledoit-Wolf covariance matrix.
+
+    Read more in the :ref:`User Guide <shrunk_covariance>`.
+
+    Parameters
+    ----------
+    X : array-like of shape (n_samples, n_features)
+        Data from which to compute the covariance estimate.
+
+    assume_centered : bool, default=False
+        If True, data will not be centered before computation.
+        Useful to work with data whose mean is close to, but not exactly,
+        zero.
+        If False, data will be centered before computation.
+
+    block_size : int, default=1000
+        Size of blocks into which the covariance matrix will be split.
+        This is purely a memory optimization and does not affect results.
+
+    Returns
+    -------
+    shrunk_cov : ndarray of shape (n_features, n_features)
+        Shrunk covariance.
+
+    shrinkage : float
+        Coefficient in the convex combination used for the computation
+        of the shrunk estimate.
+
+    Notes
+    -----
+    The regularized (shrunk) covariance is:
+
+        (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)
+
+    where mu = trace(cov) / n_features
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.covariance import empirical_covariance, ledoit_wolf
+    >>> real_cov = np.array([[.4, .2], [.2, .8]])
+    >>> rng = np.random.RandomState(0)
+    >>> X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=50)
+    >>> covariance, shrinkage = ledoit_wolf(X)
+    >>> covariance
+    array([[0.44..., 0.16...],
+           [0.16..., 0.80...]])
+    >>> shrinkage
+    np.float64(0.23...)
+    """
+    estimator = LedoitWolf(
+        assume_centered=assume_centered,
+        block_size=block_size,
+        store_precision=False,
+    ).fit(X)
+
+    return estimator.covariance_, estimator.shrinkage_
+
+
+class LedoitWolf(EmpiricalCovariance):
+    """LedoitWolf Estimator.
+
+    Ledoit-Wolf is a particular form of shrinkage, where the shrinkage
+    coefficient is computed using O. Ledoit and M. Wolf's formula as
+    described in "A Well-Conditioned Estimator for Large-Dimensional
+    Covariance Matrices", Ledoit and Wolf, Journal of Multivariate
+    Analysis, Volume 88, Issue 2, February 2004, pages 365-411.
+
+    Read more in the :ref:`User Guide <shrunk_covariance>`.
+
+    Parameters
+    ----------
+    store_precision : bool, default=True
+        Specify if the estimated precision is stored.
+
+    assume_centered : bool, default=False
+        If True, data will not be centered before computation.
+        Useful when working with data whose mean is almost, but not exactly,
+        zero.
+        If False (default), data will be centered before computation.
+
+    block_size : int, default=1000
+        Size of blocks into which the covariance matrix will be split
+        during its Ledoit-Wolf estimation. This is purely a memory
+        optimization and does not affect results.
+
+    Attributes
+    ----------
+    covariance_ : ndarray of shape (n_features, n_features)
+        Estimated covariance matrix.
+
+    location_ : ndarray of shape (n_features,)
+        Estimated location, i.e. the estimated mean.
+
+    precision_ : ndarray of shape (n_features, n_features)
+        Estimated pseudo inverse matrix.
+        (stored only if store_precision is True)
+
+    shrinkage_ : float
+        Coefficient in the convex combination used for the computation
+        of the shrunk estimate. Range is [0, 1].
+
+    n_features_in_ : int
+        Number of features seen during :term:`fit`.
+
+        .. versionadded:: 0.24
+
+    feature_names_in_ : ndarray of shape (`n_features_in_`,)
+        Names of features seen during :term:`fit`. Defined only when `X`
+        has feature names that are all strings.
+
+        .. versionadded:: 1.0
+
+    See Also
+    --------
+    EllipticEnvelope : An object for detecting outliers in
+        a Gaussian distributed dataset.
+    EmpiricalCovariance : Maximum likelihood covariance estimator.
+    GraphicalLasso : Sparse inverse covariance estimation
+        with an l1-penalized estimator.
+    GraphicalLassoCV : Sparse inverse covariance with cross-validated
+        choice of the l1 penalty.
+    MinCovDet : Minimum Covariance Determinant
+        (robust estimator of covariance).
+    OAS : Oracle Approximating Shrinkage Estimator.
+    ShrunkCovariance : Covariance estimator with shrinkage.
+
+    Notes
+    -----
+    The regularised covariance is:
+
+        (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)
+
+    where mu = trace(cov) / n_features
+    and shrinkage is given by the Ledoit and Wolf formula (see References)
+
+    References
+    ----------
+    "A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices",
+    Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2,
+    February 2004, pages 365-411.
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.covariance import LedoitWolf
+    >>> real_cov = np.array([[.4, .2],
+    ...                      [.2, .8]])
+    >>> np.random.seed(0)
+    >>> X = np.random.multivariate_normal(mean=[0, 0],
+    ...                                   cov=real_cov,
+    ...                                   size=50)
+    >>> cov = LedoitWolf().fit(X)
+    >>> cov.covariance_
+    array([[0.4406..., 0.1616...],
+           [0.1616..., 0.8022...]])
+    >>> cov.location_
+    array([ 0.0595... , -0.0075...])
+
+    See also :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py`
+    for a more detailed example.
+    """
+
+    _parameter_constraints: dict = {
+        **EmpiricalCovariance._parameter_constraints,
+        "block_size": [Interval(Integral, 1, None, closed="left")],
+    }
+
+    def __init__(self, *, store_precision=True, assume_centered=False, block_size=1000):
+        super().__init__(
+            store_precision=store_precision, assume_centered=assume_centered
+        )
+        self.block_size = block_size
+
+    @_fit_context(prefer_skip_nested_validation=True)
+    def fit(self, X, y=None):
+        """Fit the Ledoit-Wolf shrunk covariance model to X.
+
+        Parameters
+        ----------
+        X : array-like of shape (n_samples, n_features)
+            Training data, where `n_samples` is the number of samples
+            and `n_features` is the number of features.
+        y : Ignored
+            Not used, present for API consistency by convention.
+
+        Returns
+        -------
+        self : object
+            Returns the instance itself.
+        """
+        # Not calling the parent object to fit, to avoid computing the
+        # covariance matrix (and potentially the precision)
+        X = validate_data(self, X)
+        if self.assume_centered:
+            self.location_ = np.zeros(X.shape[1])
+        else:
+            self.location_ = X.mean(0)
+        covariance, shrinkage = _ledoit_wolf(
+            X - self.location_, assume_centered=True, block_size=self.block_size
+        )
+        self.shrinkage_ = shrinkage
+        self._set_covariance(covariance)
+
+        return self
+
+
+# OAS estimator
+@validate_params(
+    {"X": ["array-like"]},
+    prefer_skip_nested_validation=False,
+)
+def oas(X, *, assume_centered=False):
+    """Estimate covariance with the Oracle Approximating Shrinkage.
+
+    Read more in the :ref:`User Guide <shrunk_covariance>`.
+
+    Parameters
+    ----------
+    X : array-like of shape (n_samples, n_features)
+        Data from which to compute the covariance estimate.
+
+    assume_centered : bool, default=False
+        If True, data will not be centered before computation.
+        Useful to work with data whose mean is close to, but not exactly,
+        zero.
+        If False, data will be centered before computation.
+
+    Returns
+    -------
+    shrunk_cov : array-like of shape (n_features, n_features)
+        Shrunk covariance.
+
+    shrinkage : float
+        Coefficient in the convex combination used for the computation
+        of the shrunk estimate.
+
+    Notes
+    -----
+    The regularised covariance is:
+
+        (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features),
+
+    where mu = trace(cov) / n_features and shrinkage is given by the OAS formula
+    (see [1]_).
+
+    The shrinkage formulation implemented here differs from Eq. 23 in [1]_. In
+    the original article, formula (23) states that 2/p (p being the number of
+    features) is multiplied by Trace(cov*cov) in both the numerator and
+    denominator, but this operation is omitted because for a large p, the value
+    of 2/p is so small that it doesn't affect the value of the estimator.
+
+    References
+    ----------
+    .. [1] :arxiv:`"Shrinkage algorithms for MMSE covariance estimation.",
+       Chen, Y., Wiesel, A., Eldar, Y. C., & Hero, A. O.
+       IEEE Transactions on Signal Processing, 58(10), 5016-5029, 2010.
+       <0907.4698>`
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.covariance import oas
+    >>> rng = np.random.RandomState(0)
+    >>> real_cov = [[.8, .3], [.3, .4]]
+    >>> X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)
+    >>> shrunk_cov, shrinkage = oas(X)
+    >>> shrunk_cov
+    array([[0.7533..., 0.2763...],
+           [0.2763..., 0.3964...]])
+    >>> shrinkage
+    np.float64(0.0195...)
+    """
+    estimator = OAS(
+        assume_centered=assume_centered,
+    ).fit(X)
+    return estimator.covariance_, estimator.shrinkage_
+
+
+class OAS(EmpiricalCovariance):
+    """Oracle Approximating Shrinkage Estimator.
+
+    Read more in the :ref:`User Guide <shrunk_covariance>`.
+
+    Parameters
+    ----------
+    store_precision : bool, default=True
+        Specify if the estimated precision is stored.
+
+    assume_centered : bool, default=False
+        If True, data will not be centered before computation.
+        Useful when working with data whose mean is almost, but not exactly,
+        zero.
+        If False (default), data will be centered before computation.
+
+    Attributes
+    ----------
+    covariance_ : ndarray of shape (n_features, n_features)
+        Estimated covariance matrix.
+
+    location_ : ndarray of shape (n_features,)
+        Estimated location, i.e. the estimated mean.
+
+    precision_ : ndarray of shape (n_features, n_features)
+        Estimated pseudo inverse matrix.
+        (stored only if store_precision is True)
+
+    shrinkage_ : float
+        Coefficient in the convex combination used for the computation
+        of the shrunk estimate. Range is [0, 1].
+
+    n_features_in_ : int
+        Number of features seen during :term:`fit`.
+
+        .. versionadded:: 0.24
+
+    feature_names_in_ : ndarray of shape (`n_features_in_`,)
+        Names of features seen during :term:`fit`. Defined only when `X`
+        has feature names that are all strings.
+
+        .. versionadded:: 1.0
+
+    See Also
+    --------
+    EllipticEnvelope : An object for detecting outliers in
+        a Gaussian distributed dataset.
+    EmpiricalCovariance : Maximum likelihood covariance estimator.
+    GraphicalLasso : Sparse inverse covariance estimation
+        with an l1-penalized estimator.
+    GraphicalLassoCV : Sparse inverse covariance with cross-validated
+        choice of the l1 penalty.
+    LedoitWolf : LedoitWolf Estimator.
+    MinCovDet : Minimum Covariance Determinant
+        (robust estimator of covariance).
+    ShrunkCovariance : Covariance estimator with shrinkage.
+
+    Notes
+    -----
+    The regularised covariance is:
+
+        (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features),
+
+    where mu = trace(cov) / n_features and shrinkage is given by the OAS formula
+    (see [1]_).
+
+    The shrinkage formulation implemented here differs from Eq. 23 in [1]_. In
+    the original article, formula (23) states that 2/p (p being the number of
+    features) is multiplied by Trace(cov*cov) in both the numerator and
+    denominator, but this operation is omitted because for a large p, the value
+    of 2/p is so small that it doesn't affect the value of the estimator.
+
+    References
+    ----------
+    .. [1] :arxiv:`"Shrinkage algorithms for MMSE covariance estimation.",
+       Chen, Y., Wiesel, A., Eldar, Y. C., & Hero, A. O.
+       IEEE Transactions on Signal Processing, 58(10), 5016-5029, 2010.
+       <0907.4698>`
+
+    Examples
+    --------
+    >>> import numpy as np
+    >>> from sklearn.covariance import OAS
+    >>> real_cov = np.array([[.8, .3],
+    ...                      [.3, .4]])
+    >>> rng = np.random.RandomState(0)
+    >>> X = rng.multivariate_normal(mean=[0, 0],
+    ...                             cov=real_cov,
+    ...                             size=500)
+    >>> oas = OAS().fit(X)
+    >>> oas.covariance_
+    array([[0.7533..., 0.2763...],
+           [0.2763..., 0.3964...]])
+    >>> oas.precision_
+    array([[ 1.7833..., -1.2431...],
+           [-1.2431..., 3.3889...]])
+    >>> oas.shrinkage_
+    np.float64(0.0195...)
+
+    See also :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py`
+    for a more detailed example.
+    """
+
+    @_fit_context(prefer_skip_nested_validation=True)
+    def fit(self, X, y=None):
+        """Fit the Oracle Approximating Shrinkage covariance model to X.
+
+        Parameters
+        ----------
+        X : array-like of shape (n_samples, n_features)
+            Training data, where `n_samples` is the number of samples
+            and `n_features` is the number of features.
+        y : Ignored
+            Not used, present for API consistency by convention.
+
+        Returns
+        -------
+        self : object
+            Returns the instance itself.
+        """
+        X = validate_data(self, X)
+        # Not calling the parent object to fit, to avoid computing the
+        # covariance matrix (and potentially the precision)
+        if self.assume_centered:
+            self.location_ = np.zeros(X.shape[1])
+        else:
+            self.location_ = X.mean(0)
+
+        covariance, shrinkage = _oas(X - self.location_, assume_centered=True)
+        self.shrinkage_ = shrinkage
+        self._set_covariance(covariance)
+
+        return self
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__init__.py ADDED
File without changes
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (183 Bytes).
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/test_covariance.cpython-310.pyc ADDED
Binary file (7.78 kB).
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/test_elliptic_envelope.cpython-310.pyc ADDED
Binary file (1.67 kB).
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/test_graphical_lasso.cpython-310.pyc ADDED
Binary file (8.74 kB).
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/__pycache__/test_robust_covariance.cpython-310.pyc ADDED
Binary file (4.44 kB).
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/test_covariance.py ADDED
@@ -0,0 +1,374 @@
+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
+import numpy as np
+import pytest
+
+from sklearn import datasets
+from sklearn.covariance import (
+    OAS,
+    EmpiricalCovariance,
+    LedoitWolf,
+    ShrunkCovariance,
+    empirical_covariance,
+    ledoit_wolf,
+    ledoit_wolf_shrinkage,
+    oas,
+    shrunk_covariance,
+)
+from sklearn.covariance._shrunk_covariance import _ledoit_wolf
+from sklearn.utils._testing import (
+    assert_allclose,
+    assert_almost_equal,
+    assert_array_almost_equal,
+    assert_array_equal,
+)
+
+from .._shrunk_covariance import _oas
+
+X, _ = datasets.load_diabetes(return_X_y=True)
+X_1d = X[:, 0]
+n_samples, n_features = X.shape
+
+
+def test_covariance():
+    # Tests Covariance module on a simple dataset.
+    # test covariance fit from data
+    cov = EmpiricalCovariance()
+    cov.fit(X)
+    emp_cov = empirical_covariance(X)
+    assert_array_almost_equal(emp_cov, cov.covariance_, 4)
+    assert_almost_equal(cov.error_norm(emp_cov), 0)
+    assert_almost_equal(cov.error_norm(emp_cov, norm="spectral"), 0)
+    assert_almost_equal(cov.error_norm(emp_cov, norm="frobenius"), 0)
+    assert_almost_equal(cov.error_norm(emp_cov, scaling=False), 0)
+    assert_almost_equal(cov.error_norm(emp_cov, squared=False), 0)
+    with pytest.raises(NotImplementedError):
+        cov.error_norm(emp_cov, norm="foo")
+    # Mahalanobis distances computation test
+    mahal_dist = cov.mahalanobis(X)
+    assert np.amin(mahal_dist) > 0
+
+    # test with n_features = 1
+    X_1d = X[:, 0].reshape((-1, 1))
+    cov = EmpiricalCovariance()
+    cov.fit(X_1d)
+    assert_array_almost_equal(empirical_covariance(X_1d), cov.covariance_, 4)
+    assert_almost_equal(cov.error_norm(empirical_covariance(X_1d)), 0)
+    assert_almost_equal(cov.error_norm(empirical_covariance(X_1d), norm="spectral"), 0)
+
+    # test with one sample
+    # Create X with 1 sample and 5 features
+    X_1sample = np.arange(5).reshape(1, 5)
+    cov = EmpiricalCovariance()
+    warn_msg = "Only one sample available. You may want to reshape your data array"
+    with pytest.warns(UserWarning, match=warn_msg):
+        cov.fit(X_1sample)
+
+    assert_array_almost_equal(cov.covariance_, np.zeros(shape=(5, 5), dtype=np.float64))
+
+    # test integer type
+    X_integer = np.asarray([[0, 1], [1, 0]])
+    result = np.asarray([[0.25, -0.25], [-0.25, 0.25]])
+    assert_array_almost_equal(empirical_covariance(X_integer), result)
+
+    # test centered case
+    cov = EmpiricalCovariance(assume_centered=True)
+    cov.fit(X)
+    assert_array_equal(cov.location_, np.zeros(X.shape[1]))
+
+
+@pytest.mark.parametrize("n_matrices", [1, 3])
+def test_shrunk_covariance_func(n_matrices):
+    """Check `shrunk_covariance` function."""
+
+    n_features = 2
+    cov = np.ones((n_features, n_features))
+    cov_target = np.array([[1, 0.5], [0.5, 1]])
+
+    if n_matrices > 1:
+        cov = np.repeat(cov[np.newaxis, ...], n_matrices, axis=0)
+        cov_target = np.repeat(cov_target[np.newaxis, ...], n_matrices, axis=0)
+
+    cov_shrunk = shrunk_covariance(cov, 0.5)
+    assert_allclose(cov_shrunk, cov_target)
+
+
+def test_shrunk_covariance():
+    """Check consistency between `ShrunkCovariance` and `shrunk_covariance`."""
+
+    # Tests ShrunkCovariance module on a simple dataset.
+    # compare shrunk covariance obtained from data and from MLE estimate
+    cov = ShrunkCovariance(shrinkage=0.5)
+    cov.fit(X)
+    assert_array_almost_equal(
+        shrunk_covariance(empirical_covariance(X), shrinkage=0.5), cov.covariance_, 4
+    )
+
+    # same test with shrinkage not provided
+    cov = ShrunkCovariance()
+    cov.fit(X)
+    assert_array_almost_equal(
+        shrunk_covariance(empirical_covariance(X)), cov.covariance_, 4
+    )
+
+    # same test with shrinkage = 0 (<==> empirical_covariance)
+    cov = ShrunkCovariance(shrinkage=0.0)
+    cov.fit(X)
+    assert_array_almost_equal(empirical_covariance(X), cov.covariance_, 4)
+
+    # test with n_features = 1
+    X_1d = X[:, 0].reshape((-1, 1))
+    cov = ShrunkCovariance(shrinkage=0.3)
+    cov.fit(X_1d)
+    assert_array_almost_equal(empirical_covariance(X_1d), cov.covariance_, 4)
+
+    # test shrinkage coeff on a simple data set (without saving precision)
+    cov = ShrunkCovariance(shrinkage=0.5, store_precision=False)
+    cov.fit(X)
+    assert cov.precision_ is None
+
+
+def test_ledoit_wolf():
+    # Tests LedoitWolf module on a simple dataset.
+    # test shrinkage coeff on a simple data set
+    X_centered = X - X.mean(axis=0)
+    lw = LedoitWolf(assume_centered=True)
+    lw.fit(X_centered)
+    shrinkage_ = lw.shrinkage_
+
+    score_ = lw.score(X_centered)
+    assert_almost_equal(
+        ledoit_wolf_shrinkage(X_centered, assume_centered=True), shrinkage_
+    )
+    assert_almost_equal(
+        ledoit_wolf_shrinkage(X_centered, assume_centered=True, block_size=6),
+        shrinkage_,
+    )
+    # compare shrunk covariance obtained from data and from MLE estimate
+    lw_cov_from_mle, lw_shrinkage_from_mle = ledoit_wolf(
+        X_centered, assume_centered=True
+    )
+    assert_array_almost_equal(lw_cov_from_mle, lw.covariance_, 4)
+    assert_almost_equal(lw_shrinkage_from_mle, lw.shrinkage_)
+    # compare estimates given by LW and ShrunkCovariance
+    scov = ShrunkCovariance(shrinkage=lw.shrinkage_, assume_centered=True)
+    scov.fit(X_centered)
+    assert_array_almost_equal(scov.covariance_, lw.covariance_, 4)
+
+    # test with n_features = 1
+    X_1d = X[:, 0].reshape((-1, 1))
+    lw = LedoitWolf(assume_centered=True)
+    lw.fit(X_1d)
+    lw_cov_from_mle, lw_shrinkage_from_mle = ledoit_wolf(X_1d, assume_centered=True)
+    assert_array_almost_equal(lw_cov_from_mle, lw.covariance_, 4)
+    assert_almost_equal(lw_shrinkage_from_mle, lw.shrinkage_)
+    assert_array_almost_equal((X_1d**2).sum() / n_samples, lw.covariance_, 4)
+
+    # test shrinkage coeff on a simple data set (without saving precision)
+    lw = LedoitWolf(store_precision=False, assume_centered=True)
+    lw.fit(X_centered)
+    assert_almost_equal(lw.score(X_centered), score_, 4)
+    assert lw.precision_ is None
+
+    # Same tests without assuming centered data
+    # test shrinkage coeff on a simple data set
+    lw = LedoitWolf()
+    lw.fit(X)
+    assert_almost_equal(lw.shrinkage_, shrinkage_, 4)
+    assert_almost_equal(lw.shrinkage_, ledoit_wolf_shrinkage(X))
+    assert_almost_equal(lw.shrinkage_, ledoit_wolf(X)[1])
+    assert_almost_equal(
+        lw.shrinkage_, _ledoit_wolf(X=X, assume_centered=False, block_size=10000)[1]
+    )
+    assert_almost_equal(lw.score(X), score_, 4)
+    # compare shrunk covariance obtained from data and from MLE estimate
+    lw_cov_from_mle, lw_shrinkage_from_mle = ledoit_wolf(X)
+    assert_array_almost_equal(lw_cov_from_mle, lw.covariance_, 4)
+    assert_almost_equal(lw_shrinkage_from_mle, lw.shrinkage_)
+    # compare estimates given by LW and ShrunkCovariance
+    scov = ShrunkCovariance(shrinkage=lw.shrinkage_)
+    scov.fit(X)
+    assert_array_almost_equal(scov.covariance_, lw.covariance_, 4)
+
+    # test with n_features = 1
+    X_1d = X[:, 0].reshape((-1, 1))
+    lw = LedoitWolf()
+    lw.fit(X_1d)
+    assert_allclose(
+        X_1d.var(ddof=0),
+        _ledoit_wolf(X=X_1d, assume_centered=False, block_size=10000)[0],
+    )
+    lw_cov_from_mle, lw_shrinkage_from_mle = ledoit_wolf(X_1d)
+    assert_array_almost_equal(lw_cov_from_mle, lw.covariance_, 4)
+    assert_almost_equal(lw_shrinkage_from_mle, lw.shrinkage_)
+    assert_array_almost_equal(empirical_covariance(X_1d), lw.covariance_, 4)
+
+    # test with one sample
+    # warning should be raised when using only 1 sample
+    X_1sample = np.arange(5).reshape(1, 5)
+    lw = LedoitWolf()
+
+    warn_msg = "Only one sample available. You may want to reshape your data array"
+    with pytest.warns(UserWarning, match=warn_msg):
+        lw.fit(X_1sample)
+
+    assert_array_almost_equal(lw.covariance_, np.zeros(shape=(5, 5), dtype=np.float64))
+
+    # test shrinkage coeff on a simple data set (without saving precision)
+    lw = LedoitWolf(store_precision=False)
+    lw.fit(X)
+    assert_almost_equal(lw.score(X), score_, 4)
+    assert lw.precision_ is None
+
+
+ def _naive_ledoit_wolf_shrinkage(X):
226
+ # A simple implementation of the formulas from Ledoit & Wolf
227
+
228
+ # The computation below achieves the following computations of the
229
+ # "O. Ledoit and M. Wolf, A Well-Conditioned Estimator for
230
+ # Large-Dimensional Covariance Matrices"
231
+ # beta and delta are given in the beginning of section 3.2
232
+ n_samples, n_features = X.shape
233
+ emp_cov = empirical_covariance(X, assume_centered=False)
234
+ mu = np.trace(emp_cov) / n_features
235
+ delta_ = emp_cov.copy()
236
+ delta_.flat[:: n_features + 1] -= mu
237
+ delta = (delta_**2).sum() / n_features
238
+ X2 = X**2
239
+ beta_ = (
240
+ 1.0
241
+ / (n_features * n_samples)
242
+ * np.sum(np.dot(X2.T, X2) / n_samples - emp_cov**2)
243
+ )
244
+
245
+ beta = min(beta_, delta)
246
+ shrinkage = beta / delta
247
+ return shrinkage
248
+
249
+
250
+ def test_ledoit_wolf_small():
251
+ # Compare our blocked implementation to the naive implementation
252
+ X_small = X[:, :4]
253
+ lw = LedoitWolf()
254
+ lw.fit(X_small)
255
+ shrinkage_ = lw.shrinkage_
256
+
257
+ assert_almost_equal(shrinkage_, _naive_ledoit_wolf_shrinkage(X_small))
258
+
259
+
260
+ def test_ledoit_wolf_large():
261
+ # test that ledoit_wolf doesn't error on data that is wider than block_size
262
+ rng = np.random.RandomState(0)
263
+ # use a number of features that is larger than the block-size
264
+ X = rng.normal(size=(10, 20))
265
+ lw = LedoitWolf(block_size=10).fit(X)
266
+ # check that covariance is about diagonal (random normal noise)
267
+ assert_almost_equal(lw.covariance_, np.eye(20), 0)
268
+ cov = lw.covariance_
269
+
270
+ # check that the result is consistent with not splitting data into blocks.
271
+ lw = LedoitWolf(block_size=25).fit(X)
272
+ assert_almost_equal(lw.covariance_, cov)
273
+
274
+
275
+ @pytest.mark.parametrize(
276
+ "ledoit_wolf_fitting_function", [LedoitWolf().fit, ledoit_wolf_shrinkage]
277
+ )
278
+ def test_ledoit_wolf_empty_array(ledoit_wolf_fitting_function):
279
+ """Check that we validate X and raise proper error with 0-sample array."""
280
+ X_empty = np.zeros((0, 2))
281
+ with pytest.raises(ValueError, match="Found array with 0 sample"):
282
+ ledoit_wolf_fitting_function(X_empty)
283
+
284
+
285
+ def test_oas():
286
+ # Tests OAS module on a simple dataset.
287
+ # test shrinkage coeff on a simple data set
288
+ X_centered = X - X.mean(axis=0)
289
+ oa = OAS(assume_centered=True)
290
+ oa.fit(X_centered)
291
+ shrinkage_ = oa.shrinkage_
292
+ score_ = oa.score(X_centered)
293
+ # compare shrunk covariance obtained from data and from MLE estimate
294
+ oa_cov_from_mle, oa_shrinkage_from_mle = oas(X_centered, assume_centered=True)
295
+ assert_array_almost_equal(oa_cov_from_mle, oa.covariance_, 4)
296
+ assert_almost_equal(oa_shrinkage_from_mle, oa.shrinkage_)
297
+ # compare estimates given by OAS and ShrunkCovariance
298
+ scov = ShrunkCovariance(shrinkage=oa.shrinkage_, assume_centered=True)
299
+ scov.fit(X_centered)
300
+ assert_array_almost_equal(scov.covariance_, oa.covariance_, 4)
301
+
302
+ # test with n_features = 1
303
+ X_1d = X[:, 0:1]
304
+ oa = OAS(assume_centered=True)
305
+ oa.fit(X_1d)
306
+ oa_cov_from_mle, oa_shrinkage_from_mle = oas(X_1d, assume_centered=True)
307
+ assert_array_almost_equal(oa_cov_from_mle, oa.covariance_, 4)
308
+ assert_almost_equal(oa_shrinkage_from_mle, oa.shrinkage_)
309
+ assert_array_almost_equal((X_1d**2).sum() / n_samples, oa.covariance_, 4)
310
+
311
+ # test shrinkage coeff on a simple data set (without saving precision)
312
+ oa = OAS(store_precision=False, assume_centered=True)
313
+ oa.fit(X_centered)
314
+ assert_almost_equal(oa.score(X_centered), score_, 4)
315
+ assert oa.precision_ is None
316
+
317
+ # Same tests without assuming centered data--------------------------------
318
+ # test shrinkage coeff on a simple data set
319
+ oa = OAS()
320
+ oa.fit(X)
321
+ assert_almost_equal(oa.shrinkage_, shrinkage_, 4)
322
+ assert_almost_equal(oa.score(X), score_, 4)
323
+ # compare shrunk covariance obtained from data and from MLE estimate
324
+ oa_cov_from_mle, oa_shrinkage_from_mle = oas(X)
325
+ assert_array_almost_equal(oa_cov_from_mle, oa.covariance_, 4)
326
+ assert_almost_equal(oa_shrinkage_from_mle, oa.shrinkage_)
327
+ # compare estimates given by OAS and ShrunkCovariance
328
+ scov = ShrunkCovariance(shrinkage=oa.shrinkage_)
329
+ scov.fit(X)
330
+ assert_array_almost_equal(scov.covariance_, oa.covariance_, 4)
331
+
332
+ # test with n_features = 1
333
+ X_1d = X[:, 0].reshape((-1, 1))
334
+ oa = OAS()
335
+ oa.fit(X_1d)
336
+ oa_cov_from_mle, oa_shrinkage_from_mle = oas(X_1d)
337
+ assert_array_almost_equal(oa_cov_from_mle, oa.covariance_, 4)
338
+ assert_almost_equal(oa_shrinkage_from_mle, oa.shrinkage_)
339
+ assert_array_almost_equal(empirical_covariance(X_1d), oa.covariance_, 4)
340
+
341
+ # test with one sample
342
+ # warning should be raised when using only 1 sample
343
+ X_1sample = np.arange(5).reshape(1, 5)
344
+ oa = OAS()
345
+ warn_msg = "Only one sample available. You may want to reshape your data array"
346
+ with pytest.warns(UserWarning, match=warn_msg):
347
+ oa.fit(X_1sample)
348
+
349
+ assert_array_almost_equal(oa.covariance_, np.zeros(shape=(5, 5), dtype=np.float64))
350
+
351
+ # test shrinkage coeff on a simple data set (without saving precision)
352
+ oa = OAS(store_precision=False)
353
+ oa.fit(X)
354
+ assert_almost_equal(oa.score(X), score_, 4)
355
+ assert oa.precision_ is None
356
+
357
+ # test function _oas without assuming centered data
358
+ X_1f = X[:, 0:1]
359
+ oa = OAS()
360
+ oa.fit(X_1f)
361
+ # compare shrunk covariance obtained from data and from MLE estimate
362
+ _oa_cov_from_mle, _oa_shrinkage_from_mle = _oas(X_1f)
363
+ assert_array_almost_equal(_oa_cov_from_mle, oa.covariance_, 4)
364
+ assert_almost_equal(_oa_shrinkage_from_mle, oa.shrinkage_)
365
+ assert_array_almost_equal((X_1f**2).sum() / n_samples, oa.covariance_, 4)
366
+
367
+
368
+ def test_EmpiricalCovariance_validates_mahalanobis():
369
+ """Checks that EmpiricalCovariance validates data with mahalanobis."""
370
+ cov = EmpiricalCovariance().fit(X)
371
+
372
+ msg = f"X has 2 features, but \\w+ is expecting {X.shape[1]} features as input"
373
+ with pytest.raises(ValueError, match=msg):
374
+ cov.mahalanobis(X[:, :2])
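For orientation, the `shrunk_covariance` identity exercised throughout the tests above is a convex combination of the empirical covariance with the scaled identity `mu * I`, where `mu = trace(cov) / n_features`. Below is a minimal NumPy sketch of that formula for the 2-D case; the function name `shrunk_covariance_sketch` is ours, for illustration only, not the library implementation.

import numpy as np

def shrunk_covariance_sketch(emp_cov, shrinkage=0.1):
    # (1 - shrinkage) * emp_cov + shrinkage * mu * identity
    n_features = emp_cov.shape[0]
    mu = np.trace(emp_cov) / n_features
    shrunk = (1.0 - shrinkage) * emp_cov
    shrunk.flat[:: n_features + 1] += shrinkage * mu  # touch the diagonal only
    return shrunk

# Reproduces the expectation in test_shrunk_covariance_func:
# shrunk_covariance_sketch(np.ones((2, 2)), 0.5) -> [[1.0, 0.5], [0.5, 1.0]]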
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/test_elliptic_envelope.py ADDED
@@ -0,0 +1,52 @@
+"""
+Testing for Elliptic Envelope algorithm (sklearn.covariance.elliptic_envelope).
+"""
+
+import numpy as np
+import pytest
+
+from sklearn.covariance import EllipticEnvelope
+from sklearn.exceptions import NotFittedError
+from sklearn.utils._testing import (
+    assert_almost_equal,
+    assert_array_almost_equal,
+    assert_array_equal,
+)
+
+
+def test_elliptic_envelope(global_random_seed):
+    rnd = np.random.RandomState(global_random_seed)
+    X = rnd.randn(100, 10)
+    clf = EllipticEnvelope(contamination=0.1)
+    with pytest.raises(NotFittedError):
+        clf.predict(X)
+    with pytest.raises(NotFittedError):
+        clf.decision_function(X)
+    clf.fit(X)
+    y_pred = clf.predict(X)
+    scores = clf.score_samples(X)
+    decisions = clf.decision_function(X)
+
+    assert_array_almost_equal(scores, -clf.mahalanobis(X))
+    assert_array_almost_equal(clf.mahalanobis(X), clf.dist_)
+    assert_almost_equal(
+        clf.score(X, np.ones(100)), (100 - y_pred[y_pred == -1].size) / 100.0
+    )
+    assert sum(y_pred == -1) == sum(decisions < 0)
+
+
+def test_score_samples():
+    X_train = [[1, 1], [1, 2], [2, 1]]
+    clf1 = EllipticEnvelope(contamination=0.2).fit(X_train)
+    clf2 = EllipticEnvelope().fit(X_train)
+    assert_array_equal(
+        clf1.score_samples([[2.0, 2.0]]),
+        clf1.decision_function([[2.0, 2.0]]) + clf1.offset_,
+    )
+    assert_array_equal(
+        clf2.score_samples([[2.0, 2.0]]),
+        clf2.decision_function([[2.0, 2.0]]) + clf2.offset_,
+    )
+    assert_array_equal(
+        clf1.score_samples([[2.0, 2.0]]), clf2.score_samples([[2.0, 2.0]])
+    )
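For context on the API these tests exercise: `EllipticEnvelope` fits a robust covariance estimate and flags points with large Mahalanobis distance as outliers. A small self-contained usage sketch follows; the data and variable names below are illustrative, not taken from the test file.

import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(42)
X = np.vstack([
    rng.randn(95, 2),                         # inlier bulk
    rng.uniform(low=6, high=8, size=(5, 2)),  # a few far-away points
])

clf = EllipticEnvelope(contamination=0.05).fit(X)
labels = clf.predict(X)  # +1 for inliers, -1 for outliers
# As asserted in the test above, negative decision_function values
# mark exactly the points predicted as outliers.
assert (labels == -1).sum() == (clf.decision_function(X) < 0).sum()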
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/test_graphical_lasso.py ADDED
@@ -0,0 +1,318 @@
+"""Test the graphical_lasso module."""
+
+import sys
+from io import StringIO
+
+import numpy as np
+import pytest
+from numpy.testing import assert_allclose
+from scipy import linalg
+
+from sklearn import config_context, datasets
+from sklearn.covariance import (
+    GraphicalLasso,
+    GraphicalLassoCV,
+    empirical_covariance,
+    graphical_lasso,
+)
+from sklearn.datasets import make_sparse_spd_matrix
+from sklearn.model_selection import GroupKFold
+from sklearn.utils import check_random_state
+from sklearn.utils._testing import (
+    _convert_container,
+    assert_array_almost_equal,
+    assert_array_less,
+)
+
+
+def test_graphical_lassos(random_state=1):
+    """Test the graphical lasso solvers.
+
+    This check is unstable for some random seeds, where the covariances found by
+    the "cd" and "lars" solvers differ (4 cases / 100 tries).
+    """
+    # Sample data from a sparse multivariate normal
+    dim = 20
+    n_samples = 100
+    random_state = check_random_state(random_state)
+    prec = make_sparse_spd_matrix(dim, alpha=0.95, random_state=random_state)
+    cov = linalg.inv(prec)
+    X = random_state.multivariate_normal(np.zeros(dim), cov, size=n_samples)
+    emp_cov = empirical_covariance(X)
+
+    for alpha in (0.0, 0.1, 0.25):
+        covs = dict()
+        icovs = dict()
+        for method in ("cd", "lars"):
+            cov_, icov_, costs = graphical_lasso(
+                emp_cov, return_costs=True, alpha=alpha, mode=method
+            )
+            covs[method] = cov_
+            icovs[method] = icov_
+            costs, dual_gap = np.array(costs).T
+            # Check that the costs always decrease (doesn't hold if alpha == 0)
+            if not alpha == 0:
+                # use 1e-12 since the cost can be exactly 0
+                assert_array_less(np.diff(costs), 1e-12)
+        # Check that the 2 approaches give similar results
+        assert_allclose(covs["cd"], covs["lars"], atol=5e-4)
+        assert_allclose(icovs["cd"], icovs["lars"], atol=5e-4)
+
+    # Smoke test the estimator
+    model = GraphicalLasso(alpha=0.25).fit(X)
+    model.score(X)
+    assert_array_almost_equal(model.covariance_, covs["cd"], decimal=4)
+    assert_array_almost_equal(model.covariance_, covs["lars"], decimal=4)
+
+    # For a centered matrix, assume_centered could be chosen True or False
+    # Check that this indeed returns the same result for centered data
+    Z = X - X.mean(0)
+    precs = list()
+    for assume_centered in (False, True):
+        prec_ = GraphicalLasso(assume_centered=assume_centered).fit(Z).precision_
+        precs.append(prec_)
+    assert_array_almost_equal(precs[0], precs[1])
+
+
+def test_graphical_lasso_when_alpha_equals_0():
+    """Test graphical_lasso's early return condition when alpha=0."""
+    X = np.random.randn(100, 10)
+    emp_cov = empirical_covariance(X, assume_centered=True)
+
+    model = GraphicalLasso(alpha=0, covariance="precomputed").fit(emp_cov)
+    assert_allclose(model.precision_, np.linalg.inv(emp_cov))
+
+    _, precision = graphical_lasso(emp_cov, alpha=0)
+    assert_allclose(precision, np.linalg.inv(emp_cov))
+
+
+@pytest.mark.parametrize("mode", ["cd", "lars"])
+def test_graphical_lasso_n_iter(mode):
+    X, _ = datasets.make_classification(n_samples=5_000, n_features=20, random_state=0)
+    emp_cov = empirical_covariance(X)
+
+    _, _, n_iter = graphical_lasso(
+        emp_cov, 0.2, mode=mode, max_iter=2, return_n_iter=True
+    )
+    assert n_iter == 2
+
+
+def test_graphical_lasso_iris():
+    # Hard-coded solution from R glasso package for alpha=1.0
+    # (need to set penalize.diagonal to FALSE)
+    cov_R = np.array(
+        [
+            [0.68112222, 0.0000000, 0.265820, 0.02464314],
+            [0.00000000, 0.1887129, 0.000000, 0.00000000],
+            [0.26582000, 0.0000000, 3.095503, 0.28697200],
+            [0.02464314, 0.0000000, 0.286972, 0.57713289],
+        ]
+    )
+    icov_R = np.array(
+        [
+            [1.5190747, 0.000000, -0.1304475, 0.0000000],
+            [0.0000000, 5.299055, 0.0000000, 0.0000000],
+            [-0.1304475, 0.000000, 0.3498624, -0.1683946],
+            [0.0000000, 0.000000, -0.1683946, 1.8164353],
+        ]
+    )
+    X = datasets.load_iris().data
+    emp_cov = empirical_covariance(X)
+    for method in ("cd", "lars"):
+        cov, icov = graphical_lasso(emp_cov, alpha=1.0, return_costs=False, mode=method)
+        assert_array_almost_equal(cov, cov_R)
+        assert_array_almost_equal(icov, icov_R)
+
+
+def test_graph_lasso_2D():
+    # Hard-coded solution from Python skggm package
+    # obtained by calling `quic(emp_cov, lam=.1, tol=1e-8)`
+    cov_skggm = np.array([[3.09550269, 1.186972], [1.186972, 0.57713289]])
+
+    icov_skggm = np.array([[1.52836773, -3.14334831], [-3.14334831, 8.19753385]])
+    X = datasets.load_iris().data[:, 2:]
+    emp_cov = empirical_covariance(X)
+    for method in ("cd", "lars"):
+        cov, icov = graphical_lasso(emp_cov, alpha=0.1, return_costs=False, mode=method)
+        assert_array_almost_equal(cov, cov_skggm)
+        assert_array_almost_equal(icov, icov_skggm)
+
+
+def test_graphical_lasso_iris_singular():
+    # Small subset of rows to test the rank-deficient case
+    # Need to choose samples such that none of the variances are zero
+    indices = np.arange(10, 13)
+
+    # Hard-coded solution from R glasso package for alpha=0.01
+    cov_R = np.array(
+        [
+            [0.08, 0.056666662595, 0.00229729713223, 0.00153153142149],
+            [0.056666662595, 0.082222222222, 0.00333333333333, 0.00222222222222],
+            [0.002297297132, 0.003333333333, 0.00666666666667, 0.00009009009009],
+            [0.001531531421, 0.002222222222, 0.00009009009009, 0.00222222222222],
+        ]
+    )
+    icov_R = np.array(
+        [
+            [24.42244057, -16.831679593, 0.0, 0.0],
+            [-16.83168201, 24.351841681, -6.206896552, -12.5],
+            [0.0, -6.206896171, 153.103448276, 0.0],
+            [0.0, -12.499999143, 0.0, 462.5],
+        ]
+    )
+    X = datasets.load_iris().data[indices, :]
+    emp_cov = empirical_covariance(X)
+    for method in ("cd", "lars"):
+        cov, icov = graphical_lasso(
+            emp_cov, alpha=0.01, return_costs=False, mode=method
+        )
+        assert_array_almost_equal(cov, cov_R, decimal=5)
+        assert_array_almost_equal(icov, icov_R, decimal=5)
+
+
+def test_graphical_lasso_cv(random_state=1):
+    # Sample data from a sparse multivariate normal
+    dim = 5
+    n_samples = 6
+    random_state = check_random_state(random_state)
+    prec = make_sparse_spd_matrix(dim, alpha=0.96, random_state=random_state)
+    cov = linalg.inv(prec)
+    X = random_state.multivariate_normal(np.zeros(dim), cov, size=n_samples)
+    # Capture stdout, to smoke test the verbose mode
+    orig_stdout = sys.stdout
+    try:
+        sys.stdout = StringIO()
+        # We need verbose very high so that Parallel prints on stdout
+        GraphicalLassoCV(verbose=100, alphas=5, tol=1e-1).fit(X)
+    finally:
+        sys.stdout = orig_stdout
+
+
+@pytest.mark.parametrize("alphas_container_type", ["list", "tuple", "array"])
+def test_graphical_lasso_cv_alphas_iterable(alphas_container_type):
+    """Check that we can pass an array-like to `alphas`.
+
+    Non-regression test for:
+    https://github.com/scikit-learn/scikit-learn/issues/22489
+    """
+    true_cov = np.array(
+        [
+            [0.8, 0.0, 0.2, 0.0],
+            [0.0, 0.4, 0.0, 0.0],
+            [0.2, 0.0, 0.3, 0.1],
+            [0.0, 0.0, 0.1, 0.7],
+        ]
+    )
+    rng = np.random.RandomState(0)
+    X = rng.multivariate_normal(mean=[0, 0, 0, 0], cov=true_cov, size=200)
+    alphas = _convert_container([0.02, 0.03], alphas_container_type)
+    GraphicalLassoCV(alphas=alphas, tol=1e-1, n_jobs=1).fit(X)
+
+
+@pytest.mark.parametrize(
+    "alphas,err_type,err_msg",
+    [
+        ([-0.02, 0.03], ValueError, "must be > 0"),
+        ([0, 0.03], ValueError, "must be > 0"),
+        (["not_number", 0.03], TypeError, "must be an instance of float"),
+    ],
+)
+def test_graphical_lasso_cv_alphas_invalid_array(alphas, err_type, err_msg):
+    """Check that a ValueError is raised if an array-like passed to `alphas`
+    contains a value outside of (0, inf], and that a TypeError is raised if
+    a string is passed.
+    """
+    true_cov = np.array(
+        [
+            [0.8, 0.0, 0.2, 0.0],
+            [0.0, 0.4, 0.0, 0.0],
+            [0.2, 0.0, 0.3, 0.1],
+            [0.0, 0.0, 0.1, 0.7],
+        ]
+    )
+    rng = np.random.RandomState(0)
+    X = rng.multivariate_normal(mean=[0, 0, 0, 0], cov=true_cov, size=200)
+
+    with pytest.raises(err_type, match=err_msg):
+        GraphicalLassoCV(alphas=alphas, tol=1e-1, n_jobs=1).fit(X)
+
+
+def test_graphical_lasso_cv_scores():
+    splits = 4
+    n_alphas = 5
+    n_refinements = 3
+    true_cov = np.array(
+        [
+            [0.8, 0.0, 0.2, 0.0],
+            [0.0, 0.4, 0.0, 0.0],
+            [0.2, 0.0, 0.3, 0.1],
+            [0.0, 0.0, 0.1, 0.7],
+        ]
+    )
+    rng = np.random.RandomState(0)
+    X = rng.multivariate_normal(mean=[0, 0, 0, 0], cov=true_cov, size=200)
+    cov = GraphicalLassoCV(cv=splits, alphas=n_alphas, n_refinements=n_refinements).fit(
+        X
+    )
+
+    _assert_graphical_lasso_cv_scores(
+        cov=cov,
+        n_splits=splits,
+        n_refinements=n_refinements,
+        n_alphas=n_alphas,
+    )
+
+
+@config_context(enable_metadata_routing=True)
+def test_graphical_lasso_cv_scores_with_routing(global_random_seed):
+    """Check that `GraphicalLassoCV` internally dispatches metadata to
+    the splitter.
+    """
+    splits = 5
+    n_alphas = 5
+    n_refinements = 3
+    true_cov = np.array(
+        [
+            [0.8, 0.0, 0.2, 0.0],
+            [0.0, 0.4, 0.0, 0.0],
+            [0.2, 0.0, 0.3, 0.1],
+            [0.0, 0.0, 0.1, 0.7],
+        ]
+    )
+    rng = np.random.RandomState(global_random_seed)
+    X = rng.multivariate_normal(mean=[0, 0, 0, 0], cov=true_cov, size=300)
+    n_samples = X.shape[0]
+    groups = rng.randint(0, 5, n_samples)
+    params = {"groups": groups}
+    cv = GroupKFold(n_splits=splits)
+    cv.set_split_request(groups=True)
+
+    cov = GraphicalLassoCV(cv=cv, alphas=n_alphas, n_refinements=n_refinements).fit(
+        X, **params
+    )
+
+    _assert_graphical_lasso_cv_scores(
+        cov=cov,
+        n_splits=splits,
+        n_refinements=n_refinements,
+        n_alphas=n_alphas,
+    )
+
+
+def _assert_graphical_lasso_cv_scores(cov, n_splits, n_refinements, n_alphas):
+    cv_results = cov.cv_results_
+    # alpha and one for each split
+
+    total_alphas = n_refinements * n_alphas + 1
+    keys = ["alphas"]
+    split_keys = [f"split{i}_test_score" for i in range(n_splits)]
+    for key in keys + split_keys:
+        assert key in cv_results
+        assert len(cv_results[key]) == total_alphas
+
+    cv_scores = np.asarray([cov.cv_results_[key] for key in split_keys])
+    expected_mean = cv_scores.mean(axis=0)
+    expected_std = cv_scores.std(axis=0)
+
+    assert_allclose(cov.cv_results_["mean_test_score"], expected_mean)
+    assert_allclose(cov.cv_results_["std_test_score"], expected_std)
evalkit_tf437/lib/python3.10/site-packages/sklearn/covariance/tests/test_robust_covariance.py ADDED
@@ -0,0 +1,168 @@
+# Authors: The scikit-learn developers
+# SPDX-License-Identifier: BSD-3-Clause
+
+import itertools
+
+import numpy as np
+import pytest
+
+from sklearn import datasets
+from sklearn.covariance import MinCovDet, empirical_covariance, fast_mcd
+from sklearn.utils._testing import assert_array_almost_equal
+
+X = datasets.load_iris().data
+X_1d = X[:, 0]
+n_samples, n_features = X.shape
+
+
+def test_mcd(global_random_seed):
+    # Tests the FastMCD algorithm implementation
+    # Small data set
+    # test without outliers (random independent normal data)
+    launch_mcd_on_dataset(100, 5, 0, 0.02, 0.1, 75, global_random_seed)
+    # test with a contaminated data set (medium contamination)
+    launch_mcd_on_dataset(100, 5, 20, 0.3, 0.3, 65, global_random_seed)
+    # test with a contaminated data set (strong contamination)
+    launch_mcd_on_dataset(100, 5, 40, 0.1, 0.1, 50, global_random_seed)
+
+    # Medium data set
+    launch_mcd_on_dataset(1000, 5, 450, 0.1, 0.1, 540, global_random_seed)
+
+    # Large data set
+    launch_mcd_on_dataset(1700, 5, 800, 0.1, 0.1, 870, global_random_seed)
+
+    # 1D data set
+    launch_mcd_on_dataset(500, 1, 100, 0.02, 0.02, 350, global_random_seed)
+
+
+def test_fast_mcd_on_invalid_input():
+    X = np.arange(100)
+    msg = "Expected 2D array, got 1D array instead"
+    with pytest.raises(ValueError, match=msg):
+        fast_mcd(X)
+
+
+def test_mcd_class_on_invalid_input():
+    X = np.arange(100)
+    mcd = MinCovDet()
+    msg = "Expected 2D array, got 1D array instead"
+    with pytest.raises(ValueError, match=msg):
+        mcd.fit(X)
+
+
+def launch_mcd_on_dataset(
+    n_samples, n_features, n_outliers, tol_loc, tol_cov, tol_support, seed
+):
+    rand_gen = np.random.RandomState(seed)
+    data = rand_gen.randn(n_samples, n_features)
+    # add some outliers
+    outliers_index = rand_gen.permutation(n_samples)[:n_outliers]
+    outliers_offset = 10.0 * (rand_gen.randint(2, size=(n_outliers, n_features)) - 0.5)
+    data[outliers_index] += outliers_offset
+    inliers_mask = np.ones(n_samples).astype(bool)
+    inliers_mask[outliers_index] = False
+
+    pure_data = data[inliers_mask]
+    # compute MCD by fitting an object
+    mcd_fit = MinCovDet(random_state=seed).fit(data)
+    T = mcd_fit.location_
+    S = mcd_fit.covariance_
+    H = mcd_fit.support_
+    # compare with the estimates learnt from the inliers
+    error_location = np.mean((pure_data.mean(0) - T) ** 2)
+    assert error_location < tol_loc
+    error_cov = np.mean((empirical_covariance(pure_data) - S) ** 2)
+    assert error_cov < tol_cov
+    assert np.sum(H) >= tol_support
+    assert_array_almost_equal(mcd_fit.mahalanobis(data), mcd_fit.dist_)
+
+
+def test_mcd_issue1127():
+    # Check that the code does not break with X.shape = (3, 1)
+    # (i.e. n_support = n_samples)
+    rnd = np.random.RandomState(0)
+    X = rnd.normal(size=(3, 1))
+    mcd = MinCovDet()
+    mcd.fit(X)
+
+
+def test_mcd_issue3367(global_random_seed):
+    # Check that MCD completes when the covariance matrix is singular,
+    # i.e. one of the rows and columns is all zeros
+    rand_gen = np.random.RandomState(global_random_seed)
+
+    # Think of these as the values for X and Y -> 10 values between -5 and 5
+    data_values = np.linspace(-5, 5, 10).tolist()
+    # Get the cartesian product of all possible coordinate pairs from above set
+    data = np.array(list(itertools.product(data_values, data_values)))
+
+    # Add a third column that's all zeros to make our data a set of points
+    # within a plane, which means that the covariance matrix will be singular
+    data = np.hstack((data, np.zeros((data.shape[0], 1))))
+
+    # The below line of code should raise an exception if the covariance matrix
+    # is singular. As a further test, since we have points in XYZ, the
+    # principal components (Eigenvectors) of these directly relate to the
+    # geometry of the points. Since it's a plane, we should be able to test
+    # that the Eigenvector that corresponds to the smallest Eigenvalue is the
+    # plane normal, specifically [0, 0, 1], since everything is in the XY plane
+    # (as I've set it up above). To do this one would start by:
+    #
+    #     evals, evecs = np.linalg.eigh(mcd_fit.covariance_)
+    #     normal = evecs[:, np.argmin(evals)]
+    #
+    # After which we need to assert that our `normal` is equal to [0, 0, 1].
+    # Do note that there is floating point error associated with this, so it's
+    # best to subtract the two and then compare some small tolerance (e.g.
+    # 1e-12).
+    MinCovDet(random_state=rand_gen).fit(data)
+
+
+def test_mcd_support_covariance_is_zero():
+    # Check that MCD returns a ValueError with informative message when the
+    # covariance of the support data is equal to 0.
+    X_1 = np.array([0.5, 0.1, 0.1, 0.1, 0.957, 0.1, 0.1, 0.1, 0.4285, 0.1])
+    X_1 = X_1.reshape(-1, 1)
+    X_2 = np.array([0.5, 0.3, 0.3, 0.3, 0.957, 0.3, 0.3, 0.3, 0.4285, 0.3])
+    X_2 = X_2.reshape(-1, 1)
+    msg = (
+        "The covariance matrix of the support data is equal to 0, try to "
+        "increase support_fraction"
+    )
+    for X in [X_1, X_2]:
+        with pytest.raises(ValueError, match=msg):
+            MinCovDet().fit(X)
+
+
+def test_mcd_increasing_det_warning(global_random_seed):
+    # Check that a warning is raised if we observe increasing determinants
+    # during the c_step. In theory the sequence of determinants should be
+    # decreasing. Increasing determinants are likely due to ill-conditioned
+    # covariance matrices that result in poor precision matrices.
+
+    X = [
+        [5.1, 3.5, 1.4, 0.2],
+        [4.9, 3.0, 1.4, 0.2],
+        [4.7, 3.2, 1.3, 0.2],
+        [4.6, 3.1, 1.5, 0.2],
+        [5.0, 3.6, 1.4, 0.2],
+        [4.6, 3.4, 1.4, 0.3],
+        [5.0, 3.4, 1.5, 0.2],
+        [4.4, 2.9, 1.4, 0.2],
+        [4.9, 3.1, 1.5, 0.1],
+        [5.4, 3.7, 1.5, 0.2],
+        [4.8, 3.4, 1.6, 0.2],
+        [4.8, 3.0, 1.4, 0.1],
+        [4.3, 3.0, 1.1, 0.1],
+        [5.1, 3.5, 1.4, 0.3],
+        [5.7, 3.8, 1.7, 0.3],
+        [5.4, 3.4, 1.7, 0.2],
+        [4.6, 3.6, 1.0, 0.2],
+        [5.0, 3.0, 1.6, 0.2],
+        [5.2, 3.5, 1.5, 0.2],
+    ]
+
+    mcd = MinCovDet(support_fraction=0.5, random_state=global_random_seed)
+    warn_msg = "Determinant has increased"
+    with pytest.warns(RuntimeWarning, match=warn_msg):
+        mcd.fit(X)
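For readers skimming these tests: `MinCovDet` fits location and scatter on the most concentrated subset of the samples, so a handful of gross outliers barely moves the estimate, unlike the plain empirical covariance. A small illustrative sketch follows; the contamination scheme and variable names are ours, not from the test file.

import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
X[:5] += 10  # contaminate the first five samples

robust = MinCovDet(random_state=0).fit(X)
classic = EmpiricalCovariance().fit(X)
# The robust location stays near the origin, while the classic mean
# is pulled toward the outliers.
print(robust.location_, classic.location_)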
evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_base.cpython-310.pyc ADDED
Binary file (5.94 kB)
evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_dict_learning.cpython-310.pyc ADDED
Binary file (61.6 kB)
evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_factor_analysis.cpython-310.pyc ADDED
Binary file (13.5 kB)
evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_fastica.cpython-310.pyc ADDED
Binary file (23.2 kB)
evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_kernel_pca.cpython-310.pyc ADDED
Binary file (18.7 kB)
evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_lda.cpython-310.pyc ADDED
Binary file (27 kB)
evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_nmf.cpython-310.pyc ADDED
Binary file (64.5 kB)
evalkit_tf437/lib/python3.10/site-packages/sklearn/decomposition/__pycache__/_pca.cpython-310.pyc ADDED
Binary file (23.5 kB)