florisvb / PyNumDiff / 19557917805

21 Nov 2025 02:27AM UTC coverage: 90.496% (+11.0%) from 79.519%
Merge pull request #176 from florisvb/improve-test-coverage

improving coverage with some tricks and judicious choices

17 of 22 new or added lines in 2 files covered. (77.27%)

1 existing line in 1 file now uncovered.

857 of 947 relevant lines covered (90.5%)

0.9 hits per line

Source file: /pynumdiff/total_variation_regularization/_chartrand_tvregdiff.py (55.74% covered)
# pylint: skip-file

# u = TVRegDiff( data, iter, alph, u0, scale, ep, dx, plotflag, diagflag );
#
# Rick Chartrand (rickc@lanl.gov), Apr. 10, 2011
# Please cite Rick Chartrand, "Numerical differentiation of noisy,
# nonsmooth data," ISRN Applied Mathematics, Vol. 2011, Article ID 164564,
# 2011.
#
# Inputs:  (First three required; omitting the final N parameters for N < 7
#           or passing in [] results in default values being used.)
#       data        Vector of data to be differentiated.
#
#       iter        Number of iterations to run the main loop.  A stopping
#                   condition based on the norm of the gradient vector g
#                   below would be an easy modification. No default value.
#
#       alph        Regularization parameter. This is the main parameter
#                   to fiddle with. Start by varying by orders of
#                   magnitude until reasonable results are obtained.  A
#                   value to the nearest power of 10 is usually adequate.
#                   No default value. Higher values increase
#                   regularization strength and improve conditioning.
#
#       u0          Initialization of the iteration.  Default value is the
#                   naive derivative (without scaling), of appropriate
#                   length (this being different for the two methods).
#                   Although the solution is theoretically independent of
#                   the initialization, a poor choice can exacerbate
#                   conditioning issues when the linear system is solved.
#
#       scale       'large' or 'small' (case insensitive).  Default is
#                   'small'. 'small' has somewhat better boundary
#                   behavior, but becomes unwieldy for data larger than
#                   1000 entries or so. 'large' has simpler numerics but
#                   is more efficient for large-scale problems.  'large' is
#                   more readily modified for higher-order derivatives,
#                   since the implicit differentiation matrix is square.
#
#       ep          Parameter for avoiding division by zero.  Default value
#                   is 1e-6. Results should not be very sensitive to the
#                   value. Larger values improve conditioning and
#                   therefore speed, while smaller values give more
#                   accurate results with sharper jumps.
#
#       dx          Grid spacing, used in the definition of the derivative
#                   operators. Default is the reciprocal of the data size.
#
#       plotflag    Flag whether to display plot at each iteration.
#                   Default is True. Useful, but adds significant
#                   running time.
#
#       diagflag    Flag whether to display diagnostics at each
#                   iteration. Default is False. Useful for diagnosing
#                   preconditioning problems. When tolerance is not met,
#                   an early iterate being best is more worrying than a
#                   large relative residual.
#
# Output:
#
#       u           Estimate of the regularized derivative of data.  Due to
#                   different grid assumptions, length( u ) =
#                   length( data ) + 1 if scale = 'small', otherwise
#                   length( u ) = length( data ).

# Copyright notice:
# Copyright 2010. Los Alamos National Security, LLC. This material
# was produced under U.S. Government contract DE-AC52-06NA25396 for
# Los Alamos National Laboratory, which is operated by Los Alamos
# National Security, LLC, for the U.S. Department of Energy. The
# Government is granted for, itself and others acting on its
# behalf, a paid-up, nonexclusive, irrevocable worldwide license in
# this material to reproduce, prepare derivative works, and perform
# publicly and display publicly. Beginning five (5) years after
# (March 31, 2011) permission to assert copyright was obtained,
# subject to additional five-year worldwide renewals, the
# Government is granted for itself and others acting on its behalf
# a paid-up, nonexclusive, irrevocable worldwide license in this
# material to reproduce, prepare derivative works, distribute
# copies to the public, perform publicly and display publicly, and
# to permit others to do so. NEITHER THE UNITED STATES NOR THE
# UNITED STATES DEPARTMENT OF ENERGY, NOR LOS ALAMOS NATIONAL
# SECURITY, LLC, NOR ANY OF THEIR EMPLOYEES, MAKES ANY WARRANTY,
# EXPRESS OR IMPLIED, OR ASSUMES ANY LEGAL LIABILITY OR
# RESPONSIBILITY FOR THE ACCURACY, COMPLETENESS, OR USEFULNESS OF
# ANY INFORMATION, APPARATUS, PRODUCT, OR PROCESS DISCLOSED, OR
# REPRESENTS THAT ITS USE WOULD NOT INFRINGE PRIVATELY OWNED
# RIGHTS.

# BSD License notice:
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#      Redistributions of source code must retain the above
#      copyright notice, this list of conditions and the following
#      disclaimer.
#      Redistributions in binary form must reproduce the above
#      copyright notice, this list of conditions and the following
#      disclaimer in the documentation and/or other materials
#      provided with the distribution.
#      Neither the name of Los Alamos National Security nor the names of its
#      contributors may be used to endorse or promote products
#      derived from this software without specific prior written
#      permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
# CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
# USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
# AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
#########################################################
#                                                       #
# Python translation by Simone Sturniolo                #
# Rutherford Appleton Laboratory, STFC, UK (2014)       #
# simonesturniolo@gmail.com                             #
#                                                       #
#########################################################

#########################################################
#                                                       #
# Summary of changes made by Floris van Breugel:        #
#                                                       #
# (1)                                                   #
# scipy.sparse.linalg.cg seems to require more          #
# iterations to reach the same result as                #
# MATLAB's pcg. (scipy version 1.1.0)                   #
# A good guess is the length of the data                #
#                                                       #
# (2)                                                   #
# Drop last entry of derivative (u) for small scale     #
# This way both small and large scale results are same  #
# length as data                                        #
#                                                       #
#########################################################

import sys
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from scipy import sparse
from scipy.sparse import linalg as splin


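# Orientation sketch (added for illustration; the helper name is ours, not
# Chartrand's): the small-scale branch of TVRegDiff below builds D as a
# forward-difference operator taking a length-(n + 1) vector to its n scaled
# first differences, so applying it to a ramp of slope 1 should return ones.
def _check_forward_difference(n=10, dx=0.5):
    c = np.ones(n + 1) / dx
    D = sparse.spdiags([-c, c], [0, 1], n, n + 1)
    x = dx * np.arange(n + 1)  # ramp with slope 1
    return D @ x               # expect a vector of n ones

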
def TVRegDiff(data, itern, alph, u0=None, scale='small', ep=1e-6, dx=None,
              plotflag=True, diagflag=False, maxit=None):
    # Input checking
    data = np.array(data)
    if (len(data.shape) != 1): raise ValueError("Data must be a column vector")
    if maxit is None: maxit = len(data)
    n = len(data)
    if dx is None: dx = 1.0 / n

    # Different methods for small- and large-scale problems.
    if scale.lower() == 'small':
        # Construct differentiation matrix.
        c = np.ones(n + 1) / dx
        D = sparse.spdiags([-c, c], [0, 1], n, n + 1)

        # Construct antidifferentiation operator and its adjoint.
        def A(x): return (np.cumsum(x) - 0.5 * (x + x[0]))[1:] * dx

        def AT(w): return (sum(w) * np.ones(n + 1) -
                           np.transpose(np.concatenate(([sum(w) / 2.0],
                                                        np.cumsum(w) -
                                                        w / 2.0)))) * dx

        # Default initialization is naive derivative
        if u0 is None: u0 = np.concatenate(([0], np.diff(data), [0]))
        u = u0
        # Since Au( 0 ) = 0, we need to adjust.
        ofst = data[0]
        # Precompute.
        ATb = AT(ofst - data)  # input: size n

        # Main loop.
        for ii in range(1, itern + 1):
            # Diagonal matrix of weights, for linearizing E-L equation.
            Q = sparse.spdiags(1. / (np.sqrt((D * u)**2 + ep)), 0, n, n)
            # Linearized diffusion matrix, also approximation of Hessian.
            L = dx * D.T * Q * D

            # Gradient of functional.
            g = AT(A(u)) + ATb + alph * L * u

            # Prepare to solve linear equation.
            rtol = 1e-4
            # Simple preconditioner.
            P = alph * sparse.spdiags(L.diagonal() + 1, 0, n + 1, n + 1)

            def linop(v): return (alph * L * v + AT(A(v)))
            linop = splin.LinearOperator((n + 1, n + 1), linop)

            [s, info_i] = sparse.linalg.cg(linop, g, x0=None, rtol=rtol, maxiter=maxit, callback=None, M=P, atol=0)
            if diagflag:
                print('iteration {0:4d}: relative change = {1:.3e}, gradient norm = {2:.3e}\n'.format(
                    ii, np.linalg.norm(s) / np.linalg.norm(u), np.linalg.norm(g)))
                if (info_i > 0): print("WARNING - convergence to tolerance not achieved!")
                elif (info_i < 0): print("WARNING - illegal input or breakdown")

            # Update solution.
            u = u - s
            # Display plot.
            if plotflag: plt.plot(u); plt.show()
        u = u[:-1]

    elif scale.lower() == 'large':
        # Construct antidifferentiation operator and its adjoint.
        def A(v): return np.cumsum(v)

        def AT(w): return (sum(w) * np.ones(len(w)) -
                           np.transpose(np.concatenate(([0.0],
                                                        np.cumsum(w[:-1])))))
        # Construct differentiation matrix.
        c = np.ones(n)
        D = sparse.spdiags([-c, c], [0, 1], n, n) / dx
        mask = np.ones((n, n))
        mask[-1, -1] = 0.0
        D = sparse.dia_matrix(D.multiply(mask))
        # Since Au( 0 ) = 0, we need to adjust.
        data = data - data[0]
        # Default initialization is naive derivative.
        if u0 is None: u0 = np.concatenate(([0], np.diff(data)))
        u = u0
        # Precompute.
        ATd = AT(data)

        # Main loop.
        for ii in range(1, itern + 1):
            # Diagonal matrix of weights, for linearizing E-L equation.
            Q = sparse.spdiags(1. / np.sqrt((D * u)**2.0 + ep), 0, n, n)
            # Linearized diffusion matrix, also approximation of Hessian.
            L = D.T * Q * D
            # Gradient of functional.
            g = AT(A(u)) - ATd
            g = g + alph * L * u
            # Build preconditioner.
            c = np.cumsum(range(n, 0, -1))
            B = alph * L + sparse.spdiags(c[::-1], 0, n, n)
            # droptol = 1.0e-2
            R = sparse.dia_matrix(np.linalg.cholesky(B.todense()))
            # Prepare to solve linear equation.
            rtol = 1.0e-4

            def linop(v): return (alph * L * v + AT(A(v)))
            linop = splin.LinearOperator((n, n), linop)

            [s, info_i] = sparse.linalg.cg(linop, -g, x0=None, rtol=rtol, maxiter=maxit, callback=None, M=(R.T @ R), atol=0)
            if diagflag:
                print('iteration {0:4d}: relative change = {1:.3e}, gradient norm = {2:.3e}\n'.format(
                    ii, np.linalg.norm(s) / np.linalg.norm(u), np.linalg.norm(g)))
                if (info_i > 0): print("WARNING - convergence to tolerance not achieved!")
                elif (info_i < 0): print("WARNING - illegal input or breakdown")

            # Update current solution
            u = u + s
            # Display plot.
            if plotflag: plt.plot(u / dx); plt.show()
        u = u / dx

    return u

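# Consistency sketch (our own check, not part of the original code): in the
# small-scale branch, AT is meant to be the discrete adjoint of the
# trapezoid-rule antidifferentiation operator A, i.e. dot(A(x), w) equals
# dot(x, AT(w)) up to rounding. This standalone helper rebuilds both
# operators and returns the discrepancy for random inputs.
def _check_small_scale_adjoint(n=50, dx=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n + 1)  # A maps length n + 1 to length n
    w = rng.standard_normal(n)

    def A(x): return (np.cumsum(x) - 0.5 * (x + x[0]))[1:] * dx

    def AT(w): return (sum(w) * np.ones(n + 1) -
                       np.transpose(np.concatenate(([sum(w) / 2.0],
                                                    np.cumsum(w) - w / 2.0)))) * dx

    return abs(np.dot(A(x), w) - np.dot(x, AT(w)))

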
if __name__ == "__main__":  # Small testing script
    dx = 0.05
    x0 = np.arange(0, 2.0 * np.pi, dx)

    testf = np.sin(x0)
    testf += np.random.normal(0.0, 0.04, x0.shape)

    deriv_sm = TVRegDiff(testf, 1, 5e-2, dx=dx, ep=1e-1, scale='small', plotflag=False, diagflag=True)
    deriv_lrg = TVRegDiff(testf, 100, 1e-1, dx=dx, ep=1e-2, scale='large', plotflag=False, diagflag=True)

    plt.plot(np.cos(x0), label='Analytical', c=(0, 0, 0))
    plt.plot(np.gradient(testf, dx), label='numpy.gradient')
    plt.plot(deriv_sm, label='TVRegDiff (small)')
    plt.plot(deriv_lrg, label='TVRegDiff (large)')
    plt.legend()
    plt.show()
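
# Pattern sketch (illustrative, ours): the main loops above solve their
# linearized systems with scipy's conjugate gradient, passing a
# LinearOperator instead of an explicit matrix and a preconditioner via M
# (P for 'small', R.T @ R for 'large'). The same pattern in miniature, on a
# small SPD tridiagonal system with a Jacobi (diagonal) preconditioner;
# returns the final residual norm.
def _cg_pattern_demo(n=20):
    A_mat = sparse.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
                         [0, 1, -1])  # SPD 1-D Laplacian-style test matrix
    b = np.ones(n)
    op = splin.LinearOperator((n, n), lambda v: A_mat @ v)
    M = sparse.diags(1.0 / A_mat.diagonal())  # Jacobi preconditioner
    x, info = splin.cg(op, b, maxiter=10 * n, M=M)
    assert info == 0  # converged to the default tolerance
    return np.linalg.norm(A_mat @ x - b)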