aewallin / allantools / build 6997134815 (push, github, coverage workflow)
26 Nov 2023 05:57PM UTC coverage: 74.663% (-10.1%) from 84.742%
1273 of 1705 relevant lines covered (74.66%), 2.17 hits per line

Source file: /allantools/allantools.py (76.42%)
"""
Allan deviation tools
=====================

**Author:** Anders Wallin (anders.e.e.wallin "at" gmail.com)

Version history
---------------

**unreleased**
- ITU PRC, PRTC, ePRTC masks for TDEV and MTIE in new file mask.py
- psd2allan() - convert PSD to ADEV/MDEV
- GCODEV

**2019.09** 2019 September
- packaging changes, for conda package
  (see https://anaconda.org/conda-forge/allantools)

**2019.07** 2019 August 3
- move edf-functions and noise-ID functions to ci.py
- mtotdev() htotdev() speed improvements
- save dataset results to text file
- real-time adev/mdev/hdev, in new file realtime.py
- travis testing on Linux, OSX, and Windows

**2018.03** 2018 March 27
- change license to LGPL v3 or later
- lag-1 autocorrelation noise identification function
- B1 noise identification
- R(n) noise identification
- Noise() class using Kasdin & Walter algorithm
- work on Greenhall's EDF and confidence intervals
- tests for confidence intervals

**2016.11** 2016 November 18
- Dataset class
- plotting with a Plot class
- confidence intervals based on Greenhall's EDF algorithm
- testing on multiple python versions with tox
- continuous integration with https://travis-ci.org/aewallin/allantools
- test coverage report on
  https://coveralls.io/github/aewallin/allantools?branch=master

**2016.4** 2016 April 8
- convert tests to use pytest
- split tests into individual pytests, make them all pass
- accept a numpy.array as taus parameter
- switch to new signature (https://github.com/aewallin/allantools/issues/29)
- remove old and slow pure-python implementation

**2016.3** 2016 March
- improve documentation and add __version__
- added Theo1 deviation theo1()
- added Hadamard Total Deviation htotdev()
- added Modified Total Deviation mtotdev(), and Time Total Deviation ttotdev()
  http://www.anderswallin.net/2016/03/modified-total-deviation-in-allantools/
- automatic tau-lists:  taus=[ "all" | "octave" | "decade" ]
- merge adev() and adev_phase() into one, requiring phase= or
  frequency= argument
- add GPS dataset as example and test

**2016.2** 2016 February
- update release on PyPi https://pypi.python.org/pypi/AllanTools
- pytest and coverage
- setuptools
- change version number to year.month

**v1.2.1** 2015 July
- Python 3 compatibility using 2to3 tool, by kuzavas
- IPython notebook examples
- sphinx documentation, auto-built on readthedocs

**v1.2** 2014 November, Cantwell G. Carson contributed:
- A gap-robust version of ADEV based on the paper by Sesia et al.
  gradev_phase() and gradev()
- Improved uncertainty estimates: uncertainty_estimate()
  This introduces a new dependency: scipy.stats.chi2()

**v1.1** 2014 August
- Danny Price converted the library to use numpy.
- many functions in allantools are now 100x faster than before.
- see http://www.anderswallin.net/2014/08/faster-allantools-with-numpy/

**v1.01** 2014 August
- PEP8 compliance improvements by Danny Price.

**v1.00** 2014 January, first version of allantools.
- see http://www.anderswallin.net/2014/01/allantools/

License
-------

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.

"""

import os
import json
import numpy as np
from scipy import interpolate      # used in psd2allan()
from scipy.integrate import simps  # used in psd2allan()

from . import ci  # edf, confidence intervals

# Get version number from json metadata
pkginfo_path = os.path.join(os.path.dirname(__file__),
                            'allantools_info.json')
with open(pkginfo_path) as fp:
    pkginfo = json.load(fp)
__version__ = pkginfo["version"]


def tdev(data, rate=1.0, data_type="phase", taus=None):
    """ Time deviation.
        Based on modified Allan variance.

    .. math::

        \\sigma^2_{TDEV}( \\tau ) = { \\tau^2 \\over 3 }
        \\sigma^2_{MDEV}( \\tau )

    Note that TDEV has a unit of seconds.

    NIST [SP1065]_ eqn (15), page 18.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Returns
    -------
    (taus, tdev, tdev_error, ns): tuple
          Tuple of values
    taus: np.array
        Tau values for which td computed
    tdev: np.array
        Computed time deviations (in seconds) for each tau value
    tdev_errors: np.array
        Time deviation errors
    ns: np.array
        Values of N used in mdev()

    Notes
    -----
    http://en.wikipedia.org/wiki/Time_deviation
    """
    phase = input_to_phase(data, rate, data_type)
    (taus, md, mde, ns) = mdev(phase, rate=rate, taus=taus)
    td = taus * md / np.sqrt(3.0)
    tde = td / np.sqrt(ns)
    return taus, td, tde, ns


def mdev(data, rate=1.0, data_type="phase", taus=None):
    """  Modified Allan deviation.
         Used to distinguish between White and Flicker Phase Modulation.

    .. math::

        \\sigma^2_{MDEV}(m\\tau_0) = { 1 \\over 2 m^2 (m \\tau_0 )^2 (N-3m+1) }
        \\sum_{j=1}^{N-3m+1} \\lbrace
        \\sum_{i=j}^{j+m-1} {x}_{i+2m} - 2x_{i+m} + x_{i} \\rbrace^2

    see http://www.leapsecond.com/tools/adev_lib.c

    NIST [SP1065]_ eqn (14), page 17.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Returns
    -------
    (taus2, md, mde, ns): tuple
          Tuple of values
    taus2: np.array
        Tau values for which md computed
    md: np.array
        Computed mdev for each tau value
    mde: np.array
        mdev errors
    ns: np.array
        Values of N used in each mdev calculation

    """
    phase = input_to_phase(data, rate, data_type)
    (phase, ms, taus_used) = tau_generator(phase, rate, taus=taus)

    md = np.zeros_like(ms)
    mderr = np.zeros_like(ms)
    ns = np.zeros_like(ms)

    # this is a 'loop-unrolled' algorithm following
    # http://www.leapsecond.com/tools/adev_lib.c
    for idx, m in enumerate(ms):
        m = int(m)  # without this we get: VisibleDeprecationWarning:
        # using a non-integer number instead of an integer
        # will result in an error in the future
        tau = taus_used[idx]

        # First loop sum
        d0 = phase[0:m]
        d1 = phase[m:2*m]
        d2 = phase[2*m:3*m]
        e = min(len(d0), len(d1), len(d2))
        v = np.sum(d2[:e] - 2*d1[:e] + d0[:e])
        s = v * v

        # Second part of sum
        d3 = phase[3*m:]
        d2 = phase[2*m:]
        d1 = phase[1*m:]
        d0 = phase[0:]

        e = min(len(d0), len(d1), len(d2), len(d3))
        n = e + 1

        v_arr = v + np.cumsum(d3[:e] - 3 * d2[:e] + 3 * d1[:e] - d0[:e])

        s = s + np.sum(v_arr * v_arr)
        s /= 2.0 * m * m * tau * tau * n
        s = np.sqrt(s)

        md[idx] = s
        mderr[idx] = (s / np.sqrt(n))
        ns[idx] = n

    return remove_small_ns(taus_used, md, mderr, ns)


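The loop above avoids an O(N*m) inner sum by carrying a running value v forward with a cumulative sum of third differences. A standalone sketch in plain numpy (hypothetical helper names, not this module's API) checking that the cumsum trick reproduces the naive double sum:

```python
import numpy as np

def mvar_naive(x, m, tau0=1.0):
    # direct evaluation of the double-sum MDEV definition
    N = len(x)
    K = N - 3*m + 1
    s = 0.0
    for j in range(K):
        v = np.sum(x[j+2*m:j+3*m] - 2*x[j+m:j+2*m] + x[j:j+m])
        s += v * v
    return s / (2.0 * m**2 * (m*tau0)**2 * K)

def mvar_unrolled(x, m, tau0=1.0):
    # same quantity via the running-sum recursion used in mdev() above:
    # S_{j+1} - S_j = x[j+3m] - 3x[j+2m] + 3x[j+m] - x[j]
    tau = m * tau0
    v = np.sum(x[2*m:3*m] - 2*x[m:2*m] + x[0:m])
    s = v * v
    e = len(x) - 3*m
    v_arr = v + np.cumsum(x[3*m:3*m+e] - 3*x[2*m:2*m+e]
                          + 3*x[m:m+e] - x[:e])
    s += np.sum(v_arr * v_arr)
    return s / (2.0 * m * m * tau * tau * (e + 1))

rng = np.random.default_rng(42)
x = rng.normal(size=200)
assert np.isclose(mvar_naive(x, 5), mvar_unrolled(x, 5))
```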
def adev(data, rate=1.0, data_type="phase", taus=None):
    """ Allan deviation.
        Classic - use only if required - relatively poor confidence.

    .. math::

        \\sigma^2_{ADEV}(\\tau) = { 1 \\over 2 \\tau^2 }
        \\langle ( {x}_{n+2} - 2x_{n+1} + x_{n} )^2 \\rangle
        = { 1 \\over 2 (N-2) \\tau^2 }
        \\sum_{n=1}^{N-2} ( {x}_{n+2} - 2x_{n+1} + x_{n} )^2

    where :math:`x_n` is the time-series of phase observations, spaced
    by the measurement interval :math:`\\tau`, and with length :math:`N`.

    Or alternatively calculated from a time-series of fractional frequency:

    .. math::

        \\sigma^{2}_{ADEV}(\\tau) =  { 1 \\over 2 }
        \\langle ( \\bar{y}_{n+1} - \\bar{y}_n )^2 \\rangle

    where :math:`\\bar{y}_n` is the time-series of fractional frequency
    at averaging time :math:`\\tau`

    NIST [SP1065]_ eqn (6) and (7), pages 14 and 15.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Returns
    -------
    (taus2, ad, ade, ns): tuple
          Tuple of values
    taus2: np.array
        Tau values for which ad computed
    ad: np.array
        Computed adev for each tau value
    ade: np.array
        adev errors
    ns: np.array
        Values of N used in each adev calculation

    """
    phase = input_to_phase(data, rate, data_type)
    (phase, m, taus_used) = tau_generator(phase, rate, taus)

    ad = np.zeros_like(taus_used)
    ade = np.zeros_like(taus_used)
    adn = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):  # loop through each tau value m(j)
        (ad[idx], ade[idx], adn[idx]) = calc_adev_phase(phase, rate, mj, mj)

    return remove_small_ns(taus_used, ad, ade, adn)


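The two definitions above, phase second differences versus first differences of fractional frequency, agree exactly at tau = tau0. A self-contained numpy check of that equivalence (not using this module's API):

```python
import numpy as np

rng = np.random.default_rng(1)
tau0 = 0.5
x = rng.normal(scale=1e-9, size=1000)  # phase data, in seconds

# phase form: second differences of x, N-2 terms
avar_phase = np.sum(np.diff(x, n=2)**2) / (2.0 * (len(x) - 2) * tau0**2)

# frequency form: first differences of fractional frequency y
y = np.diff(x) / tau0
avar_freq = np.mean(np.diff(y)**2) / 2.0

assert np.isclose(avar_phase, avar_freq)
```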
def calc_adev_phase(phase, rate, mj, stride):
    """  Main algorithm for adev() (stride=mj) and oadev() (stride=1)

        see http://www.leapsecond.com/tools/adev_lib.c

    Parameters
    ----------
    phase: np.array
        Phase data in seconds.
    rate: float
        The sampling rate for phase or frequency, in Hz
    mj: int
        M index value for stride
    stride: int
        Size of stride

    Returns
    -------
    (dev, deverr, n): tuple
        Array of computed values.

    Notes
    -----
    stride = mj for nonoverlapping Allan deviation
    stride = 1 for overlapping Allan deviation

    References
    ----------
    * http://en.wikipedia.org/wiki/Allan_variance
    * http://www.leapsecond.com/tools/adev_lib.c
    NIST [SP1065]_ eqn (7) and (11) page 16
    """
    mj = int(mj)
    stride = int(stride)
    d2 = phase[2 * mj::stride]
    d1 = phase[1 * mj::stride]
    d0 = phase[::stride]

    n = min(len(d0), len(d1), len(d2))

    if n == 0:
        # the original code constructed a RuntimeWarning without raising
        # or emitting it; actually warn the user
        import warnings
        warnings.warn("Data array length is too small: %i" % len(phase),
                      RuntimeWarning)
        n = 1

    v_arr = d2[:n] - 2 * d1[:n] + d0[:n]
    s = np.sum(v_arr * v_arr)

    dev = np.sqrt(s / (2.0 * n)) * (rate / mj)  # tau = mj/rate
    deverr = dev / np.sqrt(n)

    return dev, deverr, n


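A quick standalone illustration of the stride parameter: stride=mj decimates the phase record (non-overlapping ADEV), while stride=1 uses every sample (overlapping ADEV). The sample counts below, mirroring the slicing in calc_adev_phase(), show where the confidence advantage of oadev() comes from:

```python
import numpy as np

N, m = 1000, 10
x = np.arange(N, dtype=float)

def second_diff_count(x, mj, stride):
    # mirrors the slicing in calc_adev_phase()
    d2, d1, d0 = x[2*mj::stride], x[mj::stride], x[::stride]
    return min(len(d0), len(d1), len(d2))

n_nonoverlap = second_diff_count(x, m, stride=m)  # as used by adev()
n_overlap = second_diff_count(x, m, stride=1)     # as used by oadev()

assert n_overlap == N - 2*m       # 980 squared differences
assert n_nonoverlap == N//m - 2   # only 98 squared differences
```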
def pdev(data, rate=1.0, data_type="phase", taus=None):
    """ Parabolic deviation.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Returns
    -------
    (taus2, pd, pde, ns): tuple
          Tuple of values
    taus2: np.array
        Tau values for which pd computed
    pd: np.array
        Computed pdev for each tau value
    pde: np.array
        pdev errors
    ns: np.array
        Values of N used in each pdev calculation

    """
    phase = input_to_phase(data, rate, data_type)
    (phase, m, taus_used) = tau_generator(phase, rate, taus)

    ad = np.zeros_like(taus_used)
    ade = np.zeros_like(taus_used)
    adn = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):  # loop through each tau value m(j)
        (ad[idx], ade[idx], adn[idx]) = calc_pdev_phase(phase, rate, mj)

    return remove_small_ns(taus_used, ad, ade, adn)


def calc_pdev_phase(phase, rate, mj):
    """  Parabolic deviation

    Parameters
    ----------
    phase: np.array
        Phase data in seconds.
    rate: float
        The sampling rate for phase or frequency, in Hz
    mj: int
        M index value for stride

    Returns
    -------
    (dev, deverr, n): tuple
        Array of computed values.

    References
    ----------
    * https://arxiv.org/pdf/1506.00687.pdf

    """
    mj = int(mj)
    stride = 1
    tau0 = 1.0 / rate
    if mj == 1:  # same as OADEV
        d2 = phase[2 * mj::stride]
        d1 = phase[1 * mj::stride]
        d0 = phase[::stride]
        n = min(len(d0), len(d1), len(d2))

        if n == 0:
            # the original code constructed a RuntimeWarning without
            # raising or emitting it; actually warn the user
            import warnings
            warnings.warn("Data array length is too small: %i" % len(phase),
                          RuntimeWarning)
            n = 1

        v_arr = d2[:n] - 2 * d1[:n] + d0[:n]
        s = np.sum(v_arr * v_arr)

        dev = np.sqrt(s / (2.0 * n)) * (rate / mj)
        deverr = dev / np.sqrt(n)
    else:
        N = len(phase)  # number of phase samples
        M = N - 2*mj  # Vernotte2020 has the correct(?) M = N - 2m
        # Vernotte2015 has M = N-2m+2, which seems wrong: it gives an
        # index out-of-bounds in the sum
        if M < 1:
            return 0, 0, 0
        Msum = 0.0
        # A naive double for-loop over (i, k) is very slow; the sliced
        # vector sum below computes the same inner sum
        #   asum = sum_{k=0}^{mj-1} ((mj-1)/2 - k)
        #          * (phase[i+k] - phase[i+mj+k])
        # much faster.
        krange = np.arange(mj)
        for i in range(M):  # 0..M-1
            asum = np.sum(((mj - 1.0)/2.0 - krange)
                          * (phase[i:i+mj] - phase[i+mj:i+2*mj]))
            Msum += asum**2
        dev = np.sqrt(72.0 * Msum / (M * mj**4 * (mj*tau0)**2))
        deverr = dev / np.sqrt(M)
        n = M

    return dev, deverr, n


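The sliced-vector sum in the branch above replaces a naive O(M*mj) double loop. A self-contained check in plain numpy (hypothetical helper names) that the two forms agree, using the same weights (mj-1)/2 - k as the code above:

```python
import numpy as np

def pdev_inner_naive(x, i, mj):
    # naive inner sum over k = 0..mj-1
    asum = 0.0
    for k in range(mj):
        prefactor = (mj - 1.0)/2.0 - k
        asum += prefactor * (x[i+k] - x[i+mj+k])
    return asum

def pdev_inner_fast(x, i, mj):
    # vectorized equivalent via a sliced sum
    k = np.arange(mj)
    return np.sum(((mj - 1.0)/2.0 - k) * (x[i:i+mj] - x[i+mj:i+2*mj]))

rng = np.random.default_rng(7)
x = rng.normal(size=64)
for i in (0, 5, 20):
    assert np.isclose(pdev_inner_naive(x, i, 8), pdev_inner_fast(x, i, 8))
```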
def calc_gcodev_phase(phase_1, phase_2, rate, mj, stride):
    """
    Main algorithm for the Groslambert codeviation (see arXiv:1904.05849)

    Parameters
    ----------
    phase_1 : np.array
        Phase data of oscillator 1.
    phase_2 : np.array
        Phase data of oscillator 2.
    rate: float
        The sampling rate for phase data, in Hz.
    mj: int
        M index value for stride.
    stride: int
        Size of stride.

    Returns
    -------
    (dev, deverr, n): tuple
        Array of computed values.

    Notes
    -----
    stride = mj for nonoverlapping Allan deviation
    stride = 1 for overlapping Allan deviation (used for GCODEV by default)

    """
    mj = int(mj)
    stride = int(stride)
    d2_1 = phase_1[2 * mj::stride]
    d1_1 = phase_1[1 * mj::stride]
    d0_1 = phase_1[::stride]

    d2_2 = phase_2[2 * mj::stride]
    d1_2 = phase_2[1 * mj::stride]
    d0_2 = phase_2[::stride]

    n_1 = min(len(d0_1), len(d1_1), len(d2_1))
    n_2 = min(len(d0_2), len(d1_2), len(d2_2))

    if n_1 == 0:
        # the original code constructed a RuntimeWarning without raising
        # or emitting it; actually warn the user
        import warnings
        warnings.warn("Data array length is too small: %i" % len(phase_1),
                      RuntimeWarning)
        n_1 = 1
        n_2 = 1

    v_arr_1 = d2_1[:n_1] - 2 * d1_1[:n_1] + d0_1[:n_1]
    v_arr_2 = d2_2[:n_2] - 2 * d1_2[:n_2] + d0_2[:n_2]
    s = np.sum(v_arr_1 * v_arr_2)

    # the cross-sum can be negative; keep the sign of s
    if s >= 0:
        dev = np.sqrt(s / (2.0 * n_1)) * (rate / mj)
    else:
        dev = -np.sqrt(np.abs(s) / (2.0 * n_1)) * (rate / mj)

    deverr = dev / np.sqrt(n_1)

    return dev, deverr, n_1


def gcodev(data_1, data_2, rate=1.0, data_type="phase", taus=None):
    """ Groslambert codeviation, a.k.a. Allan covariance

    Ref: (arXiv:1904.05849, https://arxiv.org/abs/1904.05849)

    Parameters
    ----------
    data_1: np.array
        Oscillator 1 input data. Provide either phase or frequency
    data_2: np.array
        Oscillator 2 input data. Provide either phase or frequency
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Returns
    -------
    (taus, gd, gde, ns): tuple
          Tuple of values
    taus: np.array
        Tau values for which gd computed
    gd: np.array
        Computed gcodev for each tau value
    gde: np.array
        gcodev errors
    ns: np.array
        Values of N used in each gcodev calculation

    """
    phase_1 = input_to_phase(data_1, rate, data_type)
    phase_2 = input_to_phase(data_2, rate, data_type)
    (phase_1, m, taus_used) = tau_generator(phase_1, rate, taus)
    (phase_2, m, taus_used) = tau_generator(phase_2, rate, taus)

    gd = np.zeros_like(taus_used)
    gde = np.zeros_like(taus_used)
    gdn = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):  # stride=1 for overlapping ADEV
        (gd[idx], gde[idx], gdn[idx]) = calc_gcodev_phase(phase_1,
                                                          phase_2,
                                                          rate,
                                                          mj,
                                                          stride=1)

    return remove_small_ns(taus_used, gd, gde, gdn)


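When both oscillators supply the same phase record, the cross-products in the codeviation become squares and the Groslambert codeviation reduces to the overlapping Allan deviation. A self-contained numpy sketch of that sanity check (`gcodev_single` is a hypothetical one-tau helper, not this module's API):

```python
import numpy as np

def gcodev_single(x1, x2, m, tau0=1.0):
    # overlapping (stride=1) Groslambert codeviation at one m,
    # mirroring the cross-sum in calc_gcodev_phase()
    v1 = x1[2*m:] - 2*x1[m:-m] + x1[:-2*m]
    v2 = x2[2*m:] - 2*x2[m:-m] + x2[:-2*m]
    n = len(v1)
    s = np.sum(v1 * v2)
    return np.sign(s) * np.sqrt(abs(s) / (2.0 * n)) / (m * tau0)

rng = np.random.default_rng(3)
x = rng.normal(size=500)
m = 10

# direct overlapping ADEV of the same record
v = x[2*m:] - 2*x[m:-m] + x[:-2*m]
oadev_direct = np.sqrt(np.sum(v * v) / (2.0 * len(v))) / m

assert np.isclose(gcodev_single(x, x, m), oadev_direct)
```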
def oadev(data, rate=1.0, data_type="phase", taus=None):
    """ overlapping Allan deviation.
        General purpose - most widely used - first choice

    .. math::

        \\sigma^2_{OADEV}(m\\tau_0) = { 1 \\over 2 (m \\tau_0 )^2 (N-2m) }
        \\sum_{n=1}^{N-2m} ( {x}_{n+2m} - 2x_{n+m} + x_{n} )^2

    where :math:`\\sigma^2_{OADEV}(m\\tau_0)` is the overlapping Allan
    variance at an averaging time of :math:`\\tau=m\\tau_0`, and
    :math:`x_n` is the time-series of phase observations, spaced by the
    measurement interval :math:`\\tau_0`, with length :math:`N`.

    NIST [SP1065]_ eqn (11), page 16.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Returns
    -------
    (taus2, ad, ade, ns): tuple
          Tuple of values
    taus2: np.array
        Tau values for which ad computed
    ad: np.array
        Computed oadev for each tau value
    ade: np.array
        oadev errors
    ns: np.array
        Values of N used in each oadev calculation

    """
    phase = input_to_phase(data, rate, data_type)
    (phase, m, taus_used) = tau_generator(phase, rate, taus)
    ad = np.zeros_like(taus_used)
    ade = np.zeros_like(taus_used)
    adn = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):  # stride=1 for overlapping ADEV
        (ad[idx], ade[idx], adn[idx]) = calc_adev_phase(phase, rate, mj, 1)

    return remove_small_ns(taus_used, ad, ade, adn)


def ohdev(data, rate=1.0, data_type="phase", taus=None):
    """ Overlapping Hadamard deviation.
        Better confidence than normal Hadamard.

    .. math::

        \\sigma^2_{OHDEV}(m\\tau_0) = { 1 \\over 6 (m \\tau_0 )^2 (N-3m) }
        \\sum_{i=1}^{N-3m} ( {x}_{i+3m} - 3x_{i+2m} + 3x_{i+m} - x_{i} )^2

    where :math:`x_i` is the time-series of phase observations, spaced
    by the measurement interval :math:`\\tau_0`, and with length :math:`N`.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Returns
    -------
    (taus2, hd, hde, ns): tuple
          Tuple of values
    taus2: np.array
        Tau values for which hd computed
    hd: np.array
        Computed hdev for each tau value
    hde: np.array
        hdev errors
    ns: np.array
        Values of N used in each hdev calculation

    """
    phase = input_to_phase(data, rate, data_type)
    (phase, m, taus_used) = tau_generator(phase, rate, taus)
    hdevs = np.zeros_like(taus_used)
    hdeverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):
        (hdevs[idx],
         hdeverrs[idx],
         ns[idx]) = calc_hdev_phase(phase, rate, mj, 1)

    return remove_small_ns(taus_used, hdevs, hdeverrs, ns)


def hdev(data, rate=1.0, data_type="phase", taus=None):
    """ Hadamard deviation.
        Rejects frequency drift, and handles divergent noise.

    .. math::

        \\sigma^2_{HDEV}( \\tau ) = { 1 \\over 6 \\tau^2 (N-3) }
        \\sum_{i=1}^{N-3} ( {x}_{i+3} - 3x_{i+2} + 3x_{i+1} - x_{i} )^2

    where :math:`x_i` is the time-series of phase observations, spaced
    by the measurement interval :math:`\\tau`, and with length :math:`N`.

    NIST [SP1065]_ eqn (17) and (18), page 20

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Returns
    -------
    (taus2, hd, hde, ns): tuple
          Tuple of values
    taus2: np.array
        Tau values for which hd computed
    hd: np.array
        Computed hdev for each tau value
    hde: np.array
        hdev errors
    ns: np.array
        Values of N used in each hdev calculation
    """
    phase = input_to_phase(data, rate, data_type)
    (phase, m, taus_used) = tau_generator(phase, rate, taus)
    hdevs = np.zeros_like(taus_used)
    hdeverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):
        (hdevs[idx],
         hdeverrs[idx],
         ns[idx]) = calc_hdev_phase(phase, rate, mj, mj)  # stride = mj

    return remove_small_ns(taus_used, hdevs, hdeverrs, ns)


def calc_hdev_phase(phase, rate, mj, stride):
    """ main calculation function for HDEV and OHDEV

    Parameters
    ----------
    phase: np.array
        Phase data in seconds.
    rate: float
        The sampling rate for phase or frequency, in Hz
    mj: int
        M index value for stride
    stride: int
        Size of stride

    Returns
    -------
    (dev, deverr, n): tuple
        Array of computed values.

    Notes
    -----
    http://www.leapsecond.com/tools/adev_lib.c

    .. math::

        \\sigma^2_{y}(m\\tau_0) = { 1 \\over 6 (m\\tau_0)^2 (N-3m) }
        \\sum_{i=1}^{N-3m} ( x_{i+3m} - 3x_{i+2m} + 3x_{i+m} - x_{i} )^2

    N=M+1 phase measurements
    m is averaging factor

    NIST [SP1065]_ eqn (18) and (20) pages 20 and 21
    """

    tau0 = 1.0 / float(rate)
    mj = int(mj)
    stride = int(stride)
    d3 = phase[3 * mj::stride]
    d2 = phase[2 * mj::stride]
    d1 = phase[1 * mj::stride]
    d0 = phase[::stride]

    n = min(len(d0), len(d1), len(d2), len(d3))

    v_arr = d3[:n] - 3 * d2[:n] + 3 * d1[:n] - d0[:n]

    s = np.sum(v_arr * v_arr)

    if n == 0:
        n = 1

    h = np.sqrt(s / 6.0 / float(n)) / float(tau0 * mj)
    e = h / np.sqrt(n)
    return h, e, n


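The third difference above annihilates any quadratic phase trend, which is why the Hadamard deviation rejects linear frequency drift while the Allan deviation does not. A standalone numpy check of that property:

```python
import numpy as np

# quadratic phase, i.e. linear frequency drift
x = 0.5 * 2e-4 * np.arange(200, dtype=float)**2

third = x[3:] - 3*x[2:-1] + 3*x[1:-2] - x[:-3]  # Hadamard core
second = x[2:] - 2*x[1:-1] + x[:-2]             # Allan core

assert np.allclose(third, 0.0)       # drift fully rejected by HDEV
assert not np.allclose(second, 0.0)  # ADEV still sees the drift
```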
def totdev(data, rate=1.0, data_type="phase", taus=None):
    """ Total deviation.
        Better confidence at long averages for Allan deviation.

    .. math::

        \\sigma^2_{TOTDEV}( m\\tau_0 ) = { 1 \\over 2 (m\\tau_0)^2 (N-2) }
            \\sum_{i=2}^{N-1} ( {x}^*_{i-m} - 2x^*_{i} + x^*_{i+m} )^2

    Where :math:`x^*_i` is a new time-series of length :math:`3N-4`
    derived from the original phase time-series :math:`x_n` of
    length :math:`N` by reflection at both ends: the original data x
    sits in the center of x*, with

    x*(1-j) = 2x(1) - x(1+j)  for j=1..N-2
    x*(i)   = x(i)            for i=1..N
    x*(N+j) = 2x(N) - x(N-j)  for j=1..N-2

    so that x* has length 3N-4, and tau = m*tau0.

    NIST [SP1065]_ eqn (25) page 23

    FIXME: bias correction http://www.wriley.com/CI2.pdf page 5

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    """
    phase = input_to_phase(data, rate, data_type)
    (phase, m, taus_used) = tau_generator(phase, rate, taus)
    N = len(phase)

    # totdev requires a new dataset
    # Begin by adding reflected data before dataset
    x1 = 2.0 * phase[0] * np.ones((N - 2,))
    x1 = x1 - phase[1:-1]
    x1 = x1[::-1]

    # Reflected data at end of dataset
    x2 = 2.0 * phase[-1] * np.ones((N - 2,))
    x2 = x2 - phase[1:-1][::-1]

    # check length of new dataset
    assert len(x1) + len(phase) + len(x2) == 3*N - 4
    # Combine into a single array
    x = np.zeros((3*N - 4))
    x[0:N-2] = x1
    x[N-2:2*(N-2)+2] = phase  # original data in the middle
    x[2*(N-2)+2:] = x2

    devs = np.zeros_like(taus_used)
    deverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    mid = len(x1)

    for idx, mj in enumerate(m):
        mj = int(mj)
        d0 = x[mid + 1:]
        d1 = x[mid + mj + 1:]
        d1n = x[mid - mj + 1:]
        e = min(len(d0), len(d1), len(d1n))

        v_arr = d1n[:e] - 2.0 * d0[:e] + d1[:e]
        dev = np.sum(v_arr[:mid] * v_arr[:mid])

        dev /= float(2 * pow(mj / rate, 2) * (N - 2))
        dev = np.sqrt(dev)
        devs[idx] = dev
        deverrs[idx] = dev / np.sqrt(mid)
        ns[idx] = mid

    return remove_small_ns(taus_used, devs, deverrs, ns)


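The reflection bookkeeping is easy to get off by one. This standalone sketch in plain numpy builds the extended series the same way totdev() does and checks its length, that the original data sits in the middle, and the value of the first reflected sample:

```python
import numpy as np

rng = np.random.default_rng(9)
phase = rng.normal(size=50)
N = len(phase)

# reflect about the first point: x*(1-j) = 2 x(1) - x(1+j)
x1 = (2.0 * phase[0] - phase[1:-1])[::-1]
# reflect about the last point: x*(N+j) = 2 x(N) - x(N-j)
x2 = 2.0 * phase[-1] - phase[1:-1][::-1]

x = np.concatenate([x1, phase, x2])
assert len(x) == 3*N - 4
assert np.allclose(x[N-2:2*N-2], phase)           # original data centered
assert np.isclose(x[N-3], 2*phase[0] - phase[1])  # nearest left reflection
```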
def ttotdev(data, rate=1.0, data_type="phase", taus=None):
    """ Time Total Deviation

        Modified total variance scaled by tau^2 / 3

        NIST [SP1065]_ eqn (28) page 26.
        Note that [SP1065]_ erroneously has tau-cubed here (!).
    """

    (taus, mtotdevs, mde, ns) = mtotdev(data, data_type=data_type,
                                        rate=rate, taus=taus)
    td = taus * mtotdevs / np.sqrt(3.0)
    tde = td / np.sqrt(ns)
    return taus, td, tde, ns


def mtotdev(data, rate=1.0, data_type="phase", taus=None):
    """ PRELIMINARY - REQUIRES FURTHER TESTING.
        Modified Total deviation.
        Better confidence at long averages than the modified Allan deviation.

        FIXME: bias-correction http://www.wriley.com/CI2.pdf page 6

        The variance is scaled up (divided by this number) based on the
        noise-type identified.
        WPM 0.94
        FPM 0.83
        WFM 0.73
        FFM 0.70
        RWFM 0.69

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    NIST [SP1065]_ eqn (27) page 25

    """
    phase = input_to_phase(data, rate, data_type)
    (phase, ms, taus_used) = tau_generator(phase, rate, taus,
                                           maximum_m=float(len(phase))/3.0)
    devs = np.zeros_like(taus_used)
    deverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    for idx, mj in enumerate(ms):
        devs[idx], deverrs[idx], ns[idx] = calc_mtotdev_phase(phase, rate, mj)

    return remove_small_ns(taus_used, devs, deverrs, ns)

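# Illustrative usage sketch for mtotdev()/ttotdev() above (comment only;
# hypothetical data, assumes numpy as np and the module imported as
# allantools). Both share the standard (data, rate, data_type, taus)
# signature and return (taus, devs, errs, ns):
#
#     phase = np.cumsum(1e-9 * np.random.randn(2000))
#     taus, mtds, errs, ns = allantools.mtotdev(phase, rate=1.0, taus="octave")
#     taus, ttds, errs, ns = allantools.ttotdev(phase, rate=1.0, taus="octave")
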
def calc_mtotdev_phase(phase, rate, m):
    """ PRELIMINARY - REQUIRES FURTHER TESTING.
        calculation of mtotdev for one averaging factor m
        tau = m*tau0

        NIST [SP1065]_ Eqn (27), page 25.

        Computed from a set of N - 3m + 1 subsequences of 3m points.
        1. A linear trend (frequency offset) is removed from the subsequence
           by averaging the first and last halves of the subsequence and
           dividing by half the interval.
        2. The offset-removed subsequence is extended at both ends
           by uninverted, even reflection.

        [Howe1999]_
        D.A. Howe and F. Vernotte, "Generalization of the Total Variance
        Approach to the Modified Allan Variance," Proc.
        31st PTTI Meeting, pp. 267-276, Dec. 1999.
    """
    tau0 = 1.0/rate
    N = len(phase)  # phase data, N points
    m = int(m)

    n = 0      # number of terms in the sum, for error estimation
    dev = 0.0  # the deviation we are computing
    for i in range(0, N-3*m+1):
        # subsequence of length 3m, from the original phase data
        xs = phase[i:i+3*m]
        assert len(xs) == 3*m
        # Step 1.
        # remove linear trend, by averaging first/last half,
        # computing slope, and subtracting
        half1_idx = int(np.floor(3*m/2.0))
        half2_idx = int(np.ceil(3*m/2.0))
        # m
        # 1    0:1   2:2
        mean1 = np.mean(xs[:half1_idx])
        mean2 = np.mean(xs[half2_idx:])

        if int(3*m) % 2 == 1:  # m is odd
            # 3m = 2k+1 is odd, with the averages at both ends over k points
            # the distance between the averages is then k+1 = (3m-1)/2 +1
            slope = (mean2-mean1) / ((0.5*(3*m-1)+1)*tau0)
        else:  # m is even
            # 3m = 2k is even, so the distance between the averages is k = 3m/2
            slope = (mean2-mean1) / (0.5*3*m*tau0)

        # remove the linear trend
        x0 = [x - slope*idx*tau0 for (idx, x) in enumerate(xs)]
        x0_flip = x0[::-1]  # left-right flipped version of array

        # Step 2.
        # extend sequence, by uninverted even reflection
        # extended sequence xstar, of length 9m,
        xstar = np.concatenate((x0_flip, x0, x0_flip))
        assert len(xstar) == 9*m

        # now compute mdev on these 9m points
        # 6m unique groups of m-point averages,
        # use all possible overlapping second differences
        # one term in the 6m sum:  [ x_i - 2 x_i+m + x_i+2m ]^2
        squaresum = 0.0

        # below we want the following sums
        # (averages, see squaresum below where we divide by m)
        # xmean1=np.sum(xstar[j     :   j+m])
        # xmean2=np.sum(xstar[j+m   : j+2*m])
        # xmean3=np.sum(xstar[j+2*m : j+3*m])
        # for speed these are not computed with np.sum or np.mean in each loop
        # instead they are initialized at j=0, and then just updated
        for j in range(0, 6*m):  # summation of the 6m terms.
            # faster inner sum, based on Stable32 MTC.c code
            if j == 0:
                # initialize the sum
                xmean1 = np.sum(xstar[0:m])
                xmean2 = np.sum(xstar[m:2*m])
                xmean3 = np.sum(xstar[2*m:3*m])
            else:
                # j>=1, subtract old point, add new point
                xmean1 = xmean1 - xstar[j-1] + xstar[j+m-1]
                xmean2 = xmean2 - xstar[m+j-1] + xstar[j+2*m-1]
                xmean3 = xmean3 - xstar[2*m+j-1] + xstar[j+3*m-1]

            squaresum += pow((xmean1 - 2.0*xmean2 + xmean3)/float(m), 2)

        squaresum = (1.0/(6.0*m)) * squaresum
        dev += squaresum
        n = n+1

    # scaling in front of double-sum
    assert n == N-3*m+1  # sanity check on the number of terms n
    dev = dev * 1.0 / (2.0*pow(m*tau0, 2)*(N-3*m+1))
    dev = np.sqrt(dev)
    error = dev / np.sqrt(n)
    return (dev, error, n)

def htotdev(data, rate=1.0, data_type="phase", taus=None):
    """ PRELIMINARY - REQUIRES FURTHER TESTING.
        Hadamard Total deviation.
        Better confidence at long averages for Hadamard deviation

        Computed for N fractional frequency points y_i with sampling
        period tau0, analyzed at tau = m*tau0
        1. remove linear trend by averaging first and last half,
        and dividing by interval
        2. extend sequence by uninverted even reflection
        3. compute Hadamard for extended, length 9m, sequence.

        FIXME: bias corrections from http://www.wriley.com/CI2.pdf
        W FM    0.995      alpha= 0
        F FM    0.851      alpha=-1
        RW FM   0.771      alpha=-2
        FW FM   0.717      alpha=-3
        RR FM   0.679      alpha=-4

        Parameters
        ----------
        data: np.array
            Input data. Provide either phase or frequency (fractional,
            adimensional).
        rate: float
            The sampling rate for data, in Hz. Defaults to 1.0
        data_type: {'phase', 'freq'}
            Data type, i.e. phase or frequency. Defaults to "phase".
        taus: np.array
            Array of tau values, in seconds, for which to compute statistic.
            Optionally set taus=["all"|"octave"|"decade"] for automatic
            tau-list generation.

    """
    if data_type == "phase":
        phase = data
        freq = phase2frequency(phase, rate)
    elif data_type == "freq":
        phase = frequency2phase(data, rate)
        freq = data
    else:
        raise Exception("unknown data_type: " + data_type)

    rate = float(rate)
    (freq, ms, taus_used) = tau_generator(freq, rate, taus,
                                          maximum_m=float(len(freq))/3.0)
    phase = np.array(phase)
    freq = np.array(freq)
    devs = np.zeros_like(taus_used)
    deverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    # NOTE at mj==1 we use ohdev(), based on comment from here:
    # http://www.wriley.com/paper4ht.htm
    # "For best consistency, the overlapping Hadamard variance is used
    # instead of the Hadamard total variance at m=1"
    # FIXME: this uses both freq and phase datasets,
    # which uses double the memory really needed...
    for idx, mj in enumerate(ms):
        if int(mj) == 1:
            (devs[idx],
             deverrs[idx],
             ns[idx]) = calc_hdev_phase(phase, rate, mj, 1)
        else:
            (devs[idx],
             deverrs[idx],
             ns[idx]) = calc_htotdev_freq(freq, mj)

    return remove_small_ns(taus_used, devs, deverrs, ns)

def calc_htotdev_freq(freq, m):
    """ PRELIMINARY - REQUIRES FURTHER TESTING.
        calculation of htotdev for one averaging factor m
        tau = m*tau0

        Parameters
        ----------
        freq: np.array
            Fractional frequency data (nondimensional).
        m: int
            Averaging factor. tau = m*tau0, where tau0=1/rate.
    """

    N = int(len(freq))  # frequency data, N points
    m = int(m)
    n = 0      # number of terms in the sum, for error estimation
    dev = 0.0  # the deviation we are computing
    for i in range(0, N-3*m+1):
        # subsequence of length 3m, from the original frequency data
        xs = freq[i:i+3*m]
        assert len(xs) == 3*m
        # remove linear trend, by averaging first/last half,
        # computing slope, and subtracting
        half1_idx = int(np.floor(3*m/2.0))
        half2_idx = int(np.ceil(3*m/2.0))
        # m
        # 1    0:1   2:2
        mean1 = np.mean(xs[:half1_idx])
        mean2 = np.mean(xs[half2_idx:])

        if int(3*m) % 2 == 1:  # m is odd
            # 3m = 2k+1 is odd, with the averages at both ends over k points
            # the distance between the averages is then k+1 = (3m-1)/2 +1
            slope = (mean2-mean1) / ((0.5*(3*m-1)+1))
        else:  # m is even
            # 3m = 2k is even, so distance between averages is k=3m/2
            slope = (mean2-mean1) / (0.5*3*m)

        # remove the linear trend
        x0 = [x - slope*(idx-np.floor(3*m/2)) for (idx, x) in enumerate(xs)]
        x0_flip = x0[::-1]  # left-right flipped version of array
        # extended sequence, to length 9m, by uninverted even reflection
        xstar = np.concatenate((x0_flip, x0, x0_flip))
        assert len(xstar) == 9*m

        # now compute totdev on these 9m points
        # 6m unique groups of m-point averages,
        # all possible overlapping second differences
        # one term in the 6m sum:  [ x_i - 2 x_i+m + x_i+2m ]^2
        squaresum = 0.0
        k = 0
        for j in range(0, 6*int(m)):  # summation of the 6m terms.
            # old naive code
            # xmean1 = np.mean(xstar[j+0*m : j+1*m])
            # xmean2 = np.mean(xstar[j+1*m : j+2*m])
            # xmean3 = np.mean(xstar[j+2*m : j+3*m])
            # squaresum += pow(xmean1 - 2.0*xmean2 + xmean3, 2)
            # new faster way of doing the sums
            if j == 0:
                # initialize the sum
                xmean1 = np.sum(xstar[0:m])
                xmean2 = np.sum(xstar[m:2*m])
                xmean3 = np.sum(xstar[2*m:3*m])
            else:
                # j>=1, subtract old point, add new point
                xmean1 = xmean1 - xstar[j-1] + xstar[j+m-1]
                xmean2 = xmean2 - xstar[m+j-1] + xstar[j+2*m-1]
                xmean3 = xmean3 - xstar[2*m+j-1] + xstar[j+3*m-1]

            squaresum += pow((xmean1 - 2.0*xmean2 + xmean3)/float(m), 2)

            k = k+1
        assert k == 6*m  # check number of terms in the sum
        squaresum = (1.0/(6.0*k)) * squaresum
        dev += squaresum
        n = n+1

    # scaling in front of double-sum
    assert n == N-3*m+1  # sanity check on the number of terms n
    dev = dev * 1.0/(N-3*m+1)
    dev = np.sqrt(dev)
    error = dev / np.sqrt(n)
    return (dev, error, n)

def theo1(data, rate=1.0, data_type="phase", taus=None):
    """ PRELIMINARY - REQUIRES FURTHER TESTING.
        Theo1 is a two-sample variance with improved confidence and
        extended averaging factor range.

        .. math::

            \\sigma^2_{THEO1}(m\\tau_0) = { 1 \\over  (m \\tau_0 )^2 (N-m) }
                \\sum_{i=1}^{N-m}   \\sum_{\\delta=0}^{m/2-1}
                {1\\over m/2-\\delta}\\lbrace
                    ({x}_{i} - x_{i-\\delta +m/2}) +
                    (x_{i+m}- x_{i+\\delta +m/2}) \\rbrace^2


        Where :math:`10<=m<=N-1` is even.

        FIXME: bias correction

        NIST [SP1065]_ eq (30) page 29

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    """
    phase = input_to_phase(data, rate, data_type)

    tau0 = 1.0/rate
    (phase, ms, taus_used) = tau_generator(phase, rate, taus, even=True)

    devs = np.zeros_like(taus_used)
    deverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    N = len(phase)
    for idx, m in enumerate(ms):
        m = int(m)  # to avoid: VisibleDeprecationWarning: using a
        # non-integer number instead of an integer will
        # result in an error in the future
        assert m % 2 == 0  # m must be even
        dev = 0
        n = 0
        for i in range(int(N-m)):
            s = 0
            for d in range(int(m/2)):  # inner sum
                pre = 1.0 / (float(m)/2 - float(d))
                s += pre*pow(phase[i]-phase[i-d+int(m/2)] +
                             phase[i+m]-phase[i+d+int(m/2)], 2)
                n = n+1
            dev += s
        assert n == (N-m)*m/2  # N-m outer sums, m/2 inner sums
        dev = dev/(0.75*(N-m)*pow(m*tau0, 2))
        # factor 0.75 used here? http://tf.nist.gov/general/pdf/1990.pdf
        # but not here? http://tf.nist.gov/timefreq/general/pdf/2220.pdf
        # (page 29)
        devs[idx] = np.sqrt(dev)
        deverrs[idx] = devs[idx] / np.sqrt(N-m)
        ns[idx] = n

    return remove_small_ns(taus_used, devs, deverrs, ns)

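# Illustrative usage sketch for theo1() above (comment only; hypothetical
# data, assumes numpy as np and the module imported as allantools). Note
# that theo1 uses only even averaging factors m, which tau_generator()
# enforces via even=True:
#
#     phase = np.cumsum(1e-9 * np.random.randn(1000))
#     taus, devs, errs, ns = allantools.theo1(phase, rate=1.0, taus="octave")
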
def tierms(data, rate=1.0, data_type="phase", taus=None):
    """ Time Interval Error RMS.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    """
    phase = input_to_phase(data, rate, data_type)
    (data, m, taus_used) = tau_generator(phase, rate, taus)

    count = len(phase)

    devs = np.zeros_like(taus_used)
    deverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):
        mj = int(mj)

        # This seems like an unusual way to compute
        # abs(phase[mj:] - phase[:-mj]), i.e. the time interval error
        # over a span of mj samples.
        phases = np.column_stack((phase[:-mj], phase[mj:]))
        p_max = np.max(phases, axis=1)
        p_min = np.min(phases, axis=1)
        phases = p_max - p_min
        tie = np.sqrt(np.mean(phases * phases))

        ncount = count - mj

        devs[idx] = tie
        deverrs[idx] = 0 / np.sqrt(ncount)  # TODO! I THINK THIS IS WRONG!
        ns[idx] = ncount

    return remove_small_ns(taus_used, devs, deverrs, ns)

def mtie_rolling_window(a, window):
    """
    Make an ndarray with a rolling window of the last dimension, from
    http://mail.scipy.org/pipermail/numpy-discussion/2011-January/054401.html

    Parameters
    ----------
    a : array_like
        Array to add rolling window to
    window : int
        Size of rolling window

    Returns
    -------
    Array that is a view of the original array with an added dimension
    of size window.

    Note
    ----
    This may consume large amounts of memory. See discussion:
    https://mail.python.org/pipermail/numpy-discussion/2011-January/054364.html
    https://mail.python.org/pipermail/numpy-discussion/2011-January/054370.html

    """
    if window < 1:
        raise ValueError("`window` must be at least 1.")
    if window > a.shape[-1]:
        raise ValueError("`window` is too long.")
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

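# Worked example for mtie_rolling_window() above (comment only): for
# a = np.array([0., 1., 2., 3.]) and window=2 the returned view is
#     [[0., 1.],
#      [1., 2.],
#      [2., 3.]]
# so np.max(rw, axis=1) - np.min(rw, axis=1) gives the per-window
# peak-to-peak values that mtie() below reduces with np.max().
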
def mtie(data, rate=1.0, data_type="phase", taus=None):
    """ Maximum Time Interval Error.

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.

    Notes
    -----
    this seems to correspond to Stable32 setting "Fast(u)"
    Stable32 also has "Decade" and "Octave" modes where the
    dataset is extended somehow?
    """
    phase = input_to_phase(data, rate, data_type)
    (phase, m, taus_used) = tau_generator(phase, rate, taus)
    devs = np.zeros_like(taus_used)
    deverrs = np.zeros_like(taus_used)
    ns = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):
        try:
            # the older algorithm uses a lot of memory
            # but can be used for short datasets.
            rw = mtie_rolling_window(phase, int(mj + 1))
            win_max = np.max(rw, axis=1)
            win_min = np.min(rw, axis=1)
            tie = win_max - win_min
            dev = np.max(tie)
        except ValueError:
            if int(mj + 1) < 1:
                raise ValueError("`window` must be at least 1.")
            if int(mj + 1) > phase.shape[-1]:
                raise ValueError("`window` is too long.")

            mj = int(mj)
            currMax = np.max(phase[0:mj])
            currMin = np.min(phase[0:mj])
            dev = currMax - currMin
            for winStartIdx in range(1, int(phase.shape[0] - mj)):
                winEndIdx = mj + winStartIdx
                if currMax == phase[winStartIdx - 1]:
                    currMax = np.max(phase[winStartIdx:winEndIdx])
                elif currMax < phase[winEndIdx]:
                    currMax = phase[winEndIdx]

                if currMin == phase[winStartIdx - 1]:
                    currMin = np.min(phase[winStartIdx:winEndIdx])
                elif currMin > phase[winEndIdx]:
                    currMin = phase[winEndIdx]

                if dev < currMax - currMin:
                    dev = currMax - currMin

        ncount = phase.shape[0] - mj
        devs[idx] = dev
        deverrs[idx] = dev / np.sqrt(ncount)
        ns[idx] = ncount

    return remove_small_ns(taus_used, devs, deverrs, ns)

#
# !!!!!!!
# FIXME: mtie_phase_fast() is incomplete.
# !!!!!!!
#

def mtie_phase_fast(phase, rate=1.0, data_type="phase", taus=None):
    """ fast binary decomposition algorithm for MTIE

        See: [Bregni2001]_ STEFANO BREGNI
        "Fast Algorithms for TVAR and MTIE Computation in
        Characterization of Network Synchronization Performance"
    """
    rate = float(rate)
    phase = np.asarray(phase)
    k_max = int(np.floor(np.log2(len(phase))))
    phase = phase[0:pow(2, k_max)]  # truncate data to 2**k_max datapoints
    assert len(phase) == pow(2, k_max)
    taus = [pow(2, k) for k in range(k_max)]

    print("taus N=", len(taus), " ", taus)
    devs = np.zeros(len(taus))
    deverrs = np.zeros(len(taus))
    ns = np.zeros(len(taus))
    taus_used = np.array(taus)  # [(1.0/rate)*t for t in taus]
    # matrices to store results
    mtie_max = np.zeros((len(phase)-1, k_max))
    mtie_min = np.zeros((len(phase)-1, k_max))
    for kidx in range(k_max):
        k = kidx+1
        imax = len(phase)-pow(2, k)+1
        # print k, imax
        tie = np.zeros(imax)
        ns[kidx] = imax
        # print np.max( tie )
        for i in range(imax):
            if k == 1:
                mtie_max[i, kidx] = max(phase[i], phase[i+1])
                mtie_min[i, kidx] = min(phase[i], phase[i+1])
            else:
                p = int(pow(2, k-1))
                mtie_max[i, kidx] = max(mtie_max[i, kidx-1],
                                        mtie_max[i+p, kidx-1])
                mtie_min[i, kidx] = min(mtie_min[i, kidx-1],
                                        mtie_min[i+p, kidx-1])

        # for i in range(imax):
            tie[i] = mtie_max[i, kidx] - mtie_min[i, kidx]
            # print tie[i]
        devs[kidx] = np.amax(tie)  # maximum along axis

    devs = np.array(devs)
    print("devs N=", len(devs), " ", devs)
    print("taus N=", len(taus_used), " ", taus_used)
    return remove_small_ns(taus_used, devs, deverrs, ns)

########################################################################
#
#  gap resistant Allan deviation
#

def gradev(data, rate=1.0, data_type="phase", taus=None,
           ci=0.9, noisetype='wp'):
    """ gap resistant overlapping Allan deviation

    Parameters
    ----------
    data: np.array
        Input data. Provide either phase or frequency (fractional,
        adimensional). Warning : phase data works better (frequency data is
        first transformed into phase using the numpy.cumsum() function, which
        can lead to poor results).
    rate: float
        The sampling rate for data, in Hz. Defaults to 1.0
    data_type: {'phase', 'freq'}
        Data type, i.e. phase or frequency. Defaults to "phase".
    taus: np.array
        Array of tau values, in seconds, for which to compute statistic.
        Optionally set taus=["all"|"octave"|"decade"] for automatic
        tau-list generation.
    ci: float
        the total confidence interval desired, i.e. if ci = 0.9, the bounds
        will be at 0.05 and 0.95.
    noisetype: string
        the type of noise desired:
        'wp' returns white phase noise.
        'wf' returns white frequency noise.
        'fp' returns flicker phase noise.
        'ff' returns flicker frequency noise.
        'rf' returns random walk frequency noise.
        If the input is not recognized, it defaults to idealized, uncorrelated
        noise with (N-1) degrees of freedom.

    Returns
    -------
    taus: np.array
        list of tau values in seconds
    adev: np.array
        deviations
    [err_l, err_h] : list of len()==2, np.array
        the upper and lower bounds of the confidence interval taken as
        distances from the estimated two sample variance.
    ns: np.array
        number of terms n in the adev estimate.

    """
    if data_type == "freq":
        print("Warning : phase data is preferred as input to gradev()")
    phase = input_to_phase(data, rate, data_type)
    (data, m, taus_used) = tau_generator(phase, rate, taus)

    ad = np.zeros_like(taus_used)
    ade_l = np.zeros_like(taus_used)
    ade_h = np.zeros_like(taus_used)
    adn = np.zeros_like(taus_used)

    for idx, mj in enumerate(m):
        (dev, deverr, n) = calc_gradev_phase(data,
                                             rate,
                                             mj,
                                             1,
                                             ci,
                                             noisetype)
        # stride=1 for overlapping ADEV
        ad[idx] = dev
        ade_l[idx] = deverr[0]
        ade_h[idx] = deverr[1]
        adn[idx] = n

    # Note that errors are split in 2 arrays
    return remove_small_ns(taus_used, ad, [ade_l, ade_h], adn)

def calc_gradev_phase(data, rate, mj, stride, confidence, noisetype):
    """ see http://www.leapsecond.com/tools/adev_lib.c
        stride = mj for nonoverlapping allan deviation
        stride = 1 for overlapping allan deviation

        see http://en.wikipedia.org/wiki/Allan_variance

                      1
         s2y(t) = --------- sum [ x(i+2) - 2x(i+1) + x(i) ]^2
                  2*tau^2
    """

    d2 = data[2 * int(mj)::int(stride)]
    d1 = data[1 * int(mj)::int(stride)]
    d0 = data[::int(stride)]

    n = min(len(d0), len(d1), len(d2))

    v_arr = d2[:n] - 2 * d1[:n] + d0[:n]

    # only average for non-nans
    n = len(np.where(np.isnan(v_arr) == False)[0])  # noqa

    if n == 0:
        # emit the warning properly; a bare RuntimeWarning(...) constructor
        # call would be a no-op
        import warnings
        warnings.warn("Data array length is too small: %i" % len(data),
                      RuntimeWarning)
        n = 1

    N = len(np.where(np.isnan(data) == False)[0])  # noqa

    # a summation robust to nans
    s = np.nansum(v_arr * v_arr)

    dev = np.sqrt(s / (2.0 * n)) / mj * rate
    # deverr = dev / np.sqrt(n) # old simple errorbars
    if noisetype == 'wp':
        alpha = 2
    elif noisetype == 'wf':
        alpha = 0
    elif noisetype == 'fp':
        alpha = -2
    else:
        alpha = None

    if n > 1:
        edf = ci.edf_simple(N, mj, alpha)
        deverr = ci.confidence_interval(dev, confidence, edf)
    else:
        deverr = [0, 0]

    return dev, deverr, n

def psd2allan(S_y, f=1.0, kind='adev', base=2):
3✔
1624
    """ Convert a given (one-sided) power spectral density :math:`S_y(f)` to Allan
1625
        deviation or modified Allan deviation
1626

1627
    For ergodic noise, the Allan variance or modified Allan variance
1628
    is related to the power spectral density :math:`S_y` of the fractional
1629
    frequency deviation:
1630

1631
    .. math::
1632

1633
        \\sigma^2_y(\\tau) = 2 \\int_0^\\infty S_y(f)
1634
        \\left| \\sin(\\pi f \\tau)^{(k+1)} \\over (\\pi f \\tau)^k \\right|^2 df,
1635

1636

1637
    where :math:`f` is the Fourier frequency and :math:`\\tau` the averaging
1638
    time. The exponent :math:`k` is 1 for the Allan variance and 2 for the
1639
    modified Allan variance.
1640

1641
    NIST [SP1065]_ eqs (65-66), page 73. See also [Benkler2015]_ eqn (23).
1642

1643
    psd2allan() implements the integral by discrete numerical integration via
1644
    a sum.
1645

1646
    Parameters
1647
    ----------
1648
    S_y: np.array
1649
        Single-sided power spectral density (PSD) of fractional frequency
1650
        deviation S_y in 1/Hz^2. First element is S_y(f=0).
1651
    f: np.array or scalar numeric (float or int)
1652
        if np.array: Spectral frequency vector in Hz
1653
        if numeric scalar: Spectral frequency step in Hz
1654
        default: Spectral frequency step 1 Hz
1655
    kind: {'adev', 'mdev'}
1656
        Which kind of Allan deviation to compute. Defaults to 'adev'
1657
    base: float
1658
        Base for logarithmic spacing of tau values. E.g. base= 10: decade,
1659
        base= 2: octave, base <= 1: all
1660

1661
    Returns
1662
    -------
1663
    (taus_used, ad): tuple
1664
          Tuple of 2 values
1665
    taus_used: np.array
1666
        tau values for which ad computed
1667
    ad: np.array
1668
        Computed Allan deviation of requested kind for each tau value
1669
    """
    # determine taus from df
    # first oversample S_y in order to avoid numerical problems
    # at tau > 1/(2*df)
    if isinstance(S_y, np.ndarray):
        if isinstance(f, np.ndarray):  # f is frequency vector
            df = f[1] - f[0]
        elif np.isscalar(f):
            # assume that f is the frequency step, not frequency vector
            df = f
        else:
            raise ValueError(
                "f must be an np.ndarray or a scalar (float or int)")
    else:
        raise ValueError("S_y must be an np.ndarray")
    oversamplingfactor = 4
    df0 = oversamplingfactor * df
    f0 = np.arange(S_y.size * df0, step=df0)
    f = np.arange(df, (S_y.size - 1) * df0 + df, df)
    S_y = interpolate.interp1d(f0, S_y, kind='cubic')(f)
    f = f / oversamplingfactor

    tau0 = 1/np.max(f)  # minimum tau derived from the given frequency vector
    n = 1/df/tau0/2
    if base > 1:
        m = np.unique(np.round(np.append(base**np.arange(
            np.floor(np.log(n)/np.log(base))), n)))
    else:
        m = np.arange(1, n)
    taus_used = m*tau0

    # TODO: In principle, this approach can be extended to the other kinds of
    # Allan deviations, we just need to determine the respective transfer
    # function in the frequency domain.

    if kind[0].lower() == 'a':   # for ADEV
        exponent = 1.0
    elif kind[0].lower() == 'm':  # for modADEV
        exponent = 2.0
    else:
        raise ValueError("kind must be 'adev' or 'mdev'")

    integrand = np.array([
        S_y *
        np.abs(np.sin(np.pi * f * taus_used[idx])**(exponent + 1.0)
               / (np.pi * f * taus_used[idx])**exponent)**2.0
        for idx, mj in enumerate(m)])
    integrand = np.insert(integrand, 0, 0.0, axis=1)
    f = np.insert(f, 0, 0.0)
    ad = np.sqrt(2.0 * simps(integrand, f))
    return taus_used, ad
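An independent, numpy-only sanity check of the sine-filter integral that psd2allan() evaluates (a sketch that does not call psd2allan itself; the noise level `h0` and the frequency grid are hypothetical): for white FM noise with constant S_y(f) = h0, the ADEV integral has the known closed form sigma_y(tau) = sqrt(h0 / (2*tau)).

```python
import numpy as np

def adev_from_psd(S_y, f, tau):
    """sigma_y(tau) from sigma_y^2 = 2 * Int S_y(f) sin^4(pi f tau)/(pi f tau)^2 df,
    evaluated with the trapezoidal rule (written out for numpy-version portability)."""
    integrand = S_y * np.sin(np.pi * f * tau)**4 / (np.pi * f * tau)**2
    return np.sqrt(2.0 * np.sum(0.5 * (integrand[1:] + integrand[:-1])
                                * np.diff(f)))

h0 = 1e-24                             # hypothetical white-FM noise level, 1/Hz
f = np.linspace(1e-3, 1e4, 2_000_000)  # dense spectral frequency grid, Hz
tau = 1.0
sigma = adev_from_psd(np.full_like(f, h0), f, tau)
expected = np.sqrt(h0 / (2.0 * tau))   # known white-FM result
print(sigma, expected)                 # agree to well under 0.1 %
```

The agreement improves as the grid extends to higher frequencies, since the truncated tail of the integrand falls off as 1/f^2.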


########################################################################
#
#  Various helper functions and utilities
#


def input_to_phase(data, rate, data_type):
    """ Take either phase or frequency as input and return phase
    """
    if data_type == "phase":
        return data
    elif data_type == "freq":
        return frequency2phase(data, rate)
    else:
        raise Exception("unknown data_type: " + data_type)


def tau_generator(data, rate, taus=None, v=False, even=False, maximum_m=-1):
    """ pre-processing of the tau-list given by the user (Helper function)

    Does sanity checks, sorts data, removes duplicates and invalid values.
    Generates a tau-list based on keywords 'all', 'octave', 'decade', 'log10'.
    Uses 'octave' by default if no taus= argument is given.

    Parameters
    ----------
    data: np.array
        data array
    rate: float
        Sample rate of data in Hz. Time interval between measurements
        is 1/rate seconds.
    taus: np.array
        Array of tau values for which to compute measurement.
        Alternatively one of the keywords: "all", "octave", "decade", "log10".
        Defaults to "octave" if omitted.

        +----------+--------------------------------+
        | keyword  |   averaging-factors            |
        +==========+================================+
        | "all"    |  1, 2, 3, 4, ..., len(data)    |
        +----------+--------------------------------+
        | "octave" |  1, 2, 4, 8, 16, 32, ...       |
        +----------+--------------------------------+
        | "decade" |  1, 2, 4, 10, 20, 40, 100, ... |
        +----------+--------------------------------+
        | "log10"  |  approx. 10 points per decade  |
        +----------+--------------------------------+
    v:
        verbose output if True
    even:
        require even m, where tau=m*tau0, for Theo1 statistic
    maximum_m:
        limit m, where tau=m*tau0, to this value.
        used by mtotdev() and htotdev() to limit maximum tau.

    Returns
    -------
    (data, m, taus): tuple
        List of computed values
    data: np.array
        Data
    m: np.array
        Tau in units of data points
    taus: np.array
        Cleaned up list of tau values
    """

    if rate == 0:
        raise RuntimeError("Warning! rate==0")

    if taus is None:  # empty or no tau-list supplied
        taus = "octave"  # default to octave
    elif isinstance(taus, list) and taus == []:  # empty list
        taus = "octave"

    # numpy array or non-empty list detected first
    if isinstance(taus, np.ndarray) or isinstance(taus, list) and len(taus):
        pass
    elif taus == "all":  # was 'is'
        taus = (1.0/rate)*np.linspace(1.0, len(data), len(data))
    elif taus == "octave":
        maxn = np.floor(np.log2(len(data)))
        taus = (1.0/rate)*np.logspace(0, int(maxn), int(maxn+1), base=2.0)
    elif taus == "log10":
        maxn = np.log10(len(data))
        taus = (1.0/rate)*np.logspace(0, maxn, int(10*maxn), base=10.0)
        if v:
            print("tau_generator: maxn %.1f" % maxn)
            print("tau_generator: taus=", taus)
    elif taus == "decade":  # 1, 2, 4, 10, 20, 40, spacing similar to Stable32
        maxn = np.floor(np.log10(len(data)))
        taus = []
        for k in range(int(maxn+1)):
            taus.append(1.0*(1.0/rate)*pow(10.0, k))
            taus.append(2.0*(1.0/rate)*pow(10.0, k))
            taus.append(4.0*(1.0/rate)*pow(10.0, k))

    data, taus = np.array(data), np.array(taus)
    rate = float(rate)
    m = []  # integer averaging factor. tau = m*tau0

    if maximum_m == -1:  # if no limit given
        maximum_m = len(data)
    # FIXME: should we use a "stop-ratio" like Stable32
    # found in Table III, page 9 of
    # "Evolution of frequency stability analysis software"
    # max(AF) = len(phase)/stop_ratio, where
    # function  stop_ratio
    # adev      5
    # oadev     4
    # mdev      4
    # tdev      4
    # hdev      5
    # ohdev     4
    # totdev    2
    # tierms    4
    # htotdev   3
    # mtie      2
    # theo1     1
    # theoH     1
    # mtotdev   2
    # ttotdev   2

    m = np.round(taus * rate)
    taus_valid1 = m < len(data)
    taus_valid2 = m > 0
    taus_valid3 = m <= maximum_m
    taus_valid = taus_valid1 & taus_valid2 & taus_valid3
    m = m[taus_valid]
    m = m[m != 0]       # m is tau in units of datapoints
    m = np.unique(m)    # remove duplicates and sort

    if v:
        print("tau_generator: ", m)

    if len(m) == 0:
        print("Warning: sanity-check on tau failed!")
        print("   len(data)=", len(data), " rate=", rate, "taus= ", taus)

    taus2 = m / float(rate)

    if even:  # used by Theo1
        m_even_mask = ((m % 2) == 0)
        m = m[m_even_mask]
        taus2 = taus2[m_even_mask]

    return data, m, taus2


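The "octave" branch of tau_generator() can be sketched stand-alone (a hypothetical 1000-point dataset sampled at 1 Hz is assumed): the averaging factors are the powers of two that fit within the data length.

```python
import numpy as np

# Stand-alone sketch of tau_generator's "octave" branch.
rate = 1.0
n_points = 1000
maxn = np.floor(np.log2(n_points))  # 9, since 2**9 = 512 < 1000 <= 2**10
taus = (1.0 / rate) * np.logspace(0, int(maxn), int(maxn) + 1, base=2.0)
print(taus)  # [1. 2. 4. 8. 16. 32. 64. 128. 256. 512.]
```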
def tau_reduction(ms, rate, n_per_decade):
    """Reduce the number of taus to maximum of n per decade (Helper function)

    takes in a tau list and reduces the number of taus to a maximum amount per
    decade. This is only useful if more than the "decade" and "octave" but
    less than the "all" taus are wanted. E.g. to show certain features of
    the data one might want 100 points per decade.

    NOTE: The algorithm is slightly inaccurate for ms under n_per_decade, and
    will also remove some points in this range, which is usually fine.

    Typical use would be something like:
    (data,m,taus)=tau_generator(data,rate,taus="all")
    (m,taus)=tau_reduction(m,rate,n_per_decade)

    Parameters
    ----------
    ms: array of integers
        List of m values (assumed to be an "all" list) to remove points from.
    rate: float
        Sample rate of data in Hz. Time interval between measurements
        is 1/rate seconds. Used to convert to taus.
    n_per_decade: int
        Number of ms/taus to keep per decade.

    Returns
    -------
    m: np.array
        Reduced list of m values
    taus: np.array
        Reduced list of tau values
    """
    ms = np.int64(ms)
    keep = (np.rint(n_per_decade*np.log10(ms[1:])) -
            np.rint(n_per_decade*np.log10(ms[:-1]))).astype(bool)
    # Adjust ms size to fit above-defined mask
    ms = ms[:-1]
    assert len(ms) == len(keep)
    ms = ms[keep]
    taus = ms/float(rate)

    return ms, taus


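The keep-mask above can be illustrated in isolation: an averaging factor m survives only when rounding n_per_decade*log10(m) gives a different value for m than for its successor, which leaves roughly n_per_decade points per decade. A sketch with hypothetical inputs:

```python
import numpy as np

# Stand-alone illustration of tau_reduction's keep-mask.
ms = np.arange(1, 1001)
n_per_decade = 2
keep = (np.rint(n_per_decade * np.log10(ms[1:])) -
        np.rint(n_per_decade * np.log10(ms[:-1]))).astype(bool)
reduced = ms[:-1][keep]
print(reduced)  # [  1   5  17  56 177 562] -- about 2 points per decade
```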
def remove_small_ns(taus, devs, deverrs, ns):
    """ Remove results with small number of samples.
        If n is small (==1), reject the result

    Parameters
    ----------
    taus: array
        List of tau values for which deviations were computed
    devs: array
        List of deviations
    deverrs: array or list of arrays
        List of estimated errors (possibly a list containing two arrays :
        upper and lower values)
    ns: array
        Number of samples for each point

    Returns
    -------
    (taus, devs, deverrs, ns): tuple
        Identical to input, except that values with low ns have been removed.

    """
    ns_big_enough = ns > 1

    o_taus = taus[ns_big_enough]
    o_devs = devs[ns_big_enough]
    o_ns = ns[ns_big_enough]
    if isinstance(deverrs, list):
        assert len(deverrs) < 3
        o_deverrs = [deverrs[0][ns_big_enough], deverrs[1][ns_big_enough]]
    else:
        o_deverrs = deverrs[ns_big_enough]
    if len(o_devs) == 0:
        print("remove_small_ns() nothing remains!?")
        raise UserWarning

    return o_taus, o_devs, o_deverrs, o_ns


def trim_data(x):
    """
    Trim leading and trailing NaNs from dataset.
    This is done by browsing the array from each end, storing the index of
    the first non-NaN in each case, then returning the appropriate slice of
    the array.
    """
    # Find indices for first and last valid data
    first = 0
    while np.isnan(x[first]):
        first += 1
    last = len(x)
    while np.isnan(x[last - 1]):
        last -= 1
    return x[first:last]


def three_cornered_hat_phase(phasedata_ab, phasedata_bc,
                             phasedata_ca, rate, taus, function):
    """
    Three Cornered Hat Method

    Given three clocks A, B, C, we seek to find their variances
    :math:`\\sigma^2_A`, :math:`\\sigma^2_B`, :math:`\\sigma^2_C`.
    We measure three phase differences; assuming no correlation between
    the clocks, the measurements have variances:

    .. math::

        \\sigma^2_{AB} = \\sigma^2_{A} + \\sigma^2_{B}

        \\sigma^2_{BC} = \\sigma^2_{B} + \\sigma^2_{C}

        \\sigma^2_{CA} = \\sigma^2_{C} + \\sigma^2_{A}

    Which allows solving for the variance of one clock as:

    .. math::

        \\sigma^2_{A}  = {1 \\over 2} ( \\sigma^2_{AB} +
        \\sigma^2_{CA} - \\sigma^2_{BC} )

    and similarly cyclic permutations for :math:`\\sigma^2_B` and
    :math:`\\sigma^2_C`

    Parameters
    ----------
    phasedata_ab: np.array
        phase measurements between clock A and B, in seconds
    phasedata_bc: np.array
        phase measurements between clock B and C, in seconds
    phasedata_ca: np.array
        phase measurements between clock C and A, in seconds
    rate: float
        The sampling rate for phase, in Hz
    taus: np.array
        The tau values for deviations, in seconds
    function: allantools deviation function
        The type of statistic to compute, e.g. allantools.oadev

    Returns
    -------
    tau_ab: np.array
        Tau values corresponding to output deviations
    dev_a: np.array
        List of computed values for clock A

    References
    ----------
    http://www.wriley.com/3-CornHat.htm
    """

    (tau_ab, dev_ab, err_ab, ns_ab) = function(phasedata_ab,
                                               data_type='phase',
                                               rate=rate, taus=taus)
    (tau_bc, dev_bc, err_bc, ns_bc) = function(phasedata_bc,
                                               data_type='phase',
                                               rate=rate, taus=taus)
    (tau_ca, dev_ca, err_ca, ns_ca) = function(phasedata_ca,
                                               data_type='phase',
                                               rate=rate, taus=taus)

    var_ab = dev_ab * dev_ab
    var_bc = dev_bc * dev_bc
    var_ca = dev_ca * dev_ca
    assert len(var_ab) == len(var_bc) == len(var_ca)
    var_a = 0.5 * (var_ab + var_ca - var_bc)

    var_a[var_a < 0] = 0  # don't return imaginary deviations (?)
    dev_a = np.sqrt(var_a)
    err_a = [d/np.sqrt(nn) for (d, nn) in zip(dev_a, ns_ab)]

    return tau_ab, dev_a, err_a, ns_ab

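The variance algebra in the docstring above can be verified numerically with synthetic, uncorrelated clocks (a sketch with hypothetical noise levels, using plain sample variances instead of an allantools deviation function):

```python
import numpy as np

# Three synthetic, uncorrelated white-noise "clocks" with known sigmas.
rng = np.random.default_rng(42)
n = 100_000
a = 1.0 * rng.standard_normal(n)  # clock A, sigma_A = 1.0
b = 2.0 * rng.standard_normal(n)  # clock B, sigma_B = 2.0
c = 3.0 * rng.standard_normal(n)  # clock C, sigma_C = 3.0

var_ab = np.var(a - b)  # approximately sigma_A^2 + sigma_B^2
var_bc = np.var(b - c)
var_ca = np.var(c - a)

# Solve for clock A exactly as in three_cornered_hat_phase():
var_a = 0.5 * (var_ab + var_ca - var_bc)
print(np.sqrt(var_a))  # close to sigma_A = 1.0
```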
########################################################################
#
# simple conversions between frequency, phase(seconds), phase(radians)
#


def frequency2phase(freqdata, rate):
    """ integrate fractional frequency data and output phase data

    Parameters
    ----------
    freqdata: np.array
        Data array of fractional frequency measurements (nondimensional)
    rate: float
        The sampling rate for phase or frequency, in Hz

    Returns
    -------
    phasedata: np.array
        Time integral of fractional frequency data, i.e. phase (time) data
        in units of seconds.
        For phase in units of radians, see phase2radians()
    """
    dt = 1.0 / float(rate)
    # Protect against NaN values in input array (issue #60)
    # Reintroduces data trimming as in commit 503cb82
    freqdata = trim_data(freqdata)
    # Erik Benkler (PTB): Subtract mean value before cumsum in order to
    # avoid precision issues when we have small frequency fluctuations on
    # a large average frequency
    freqdata = freqdata - np.nanmean(freqdata)
    phasedata = np.cumsum(freqdata) * dt
    phasedata = np.insert(phasedata, 0, 0)  # FIXME: why do we do this?
    # so that phase starts at zero and len(phase)=len(freq)+1 ??
    return phasedata


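The integration above and the differentiation in phase2frequency() below are inverses of each other (up to the subtracted mean). A stand-alone sketch with a hypothetical 10 Hz dataset:

```python
import numpy as np

# Round trip: integrate fractional frequency to phase (cumsum * dt, phase
# starting at zero), then differentiate back (rate * diff). This recovers
# the mean-removed frequency data, as frequency2phase() subtracts the mean.
rate = 10.0  # hypothetical 10 Hz sampling
dt = 1.0 / rate
rng = np.random.default_rng(0)
freq = rng.standard_normal(100)

freq0 = freq - freq.mean()                        # mean removal step
phase = np.insert(np.cumsum(freq0) * dt, 0, 0.0)  # frequency2phase step
freq_back = rate * np.diff(phase)                 # phase2frequency step

print(np.allclose(freq_back, freq0))  # True
print(len(phase) == len(freq) + 1)    # True: phase has one extra sample
```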
def phase2radians(phasedata, v0):
    """ Convert phase in seconds to phase in radians

    Parameters
    ----------
    phasedata: np.array
        Data array of phase in seconds
    v0: float
        Nominal oscillator frequency in Hz

    Returns
    -------
    fi:
        phase data in radians
    """
    fi = [2*np.pi*v0*xx for xx in phasedata]
    return fi


def phase2frequency(phase, rate):
    """ Convert phase in seconds to fractional frequency

    Parameters
    ----------
    phase: np.array
        Data array of phase in seconds
    rate: float
        The sampling rate for phase, in Hz

    Returns
    -------
    y:
        Data array of fractional frequency
    """
    y = rate*np.diff(phase)
    return y


def frequency2fractional(frequency, mean_frequency=-1):
    """ Convert frequency in Hz to fractional frequency

    Parameters
    ----------
    frequency: np.array
        Data array of frequency in Hz
    mean_frequency: float
        (optional) The nominal mean frequency, in Hz.
        If omitted, defaults to mean_frequency = np.mean(frequency)

    Returns
    -------
    y:
        Data array of fractional frequency
    """
    if mean_frequency == -1:
        mu = np.mean(frequency)
    else:
        mu = mean_frequency
    y = [(x-mu)/mu for x in frequency]
    return y

# end of file allantools.py