Defining new types

Defining new types is done by implementing new reading methods in the corresponding modules (pygeodyn/inout/priors.py for priors and pygeodyn/inout/observations.py for observations).

Warning

For modifications to the source code to be taken into account, the package must be installed in editable mode (pip install -e)!

See the installation section for more info.

Priors

For examples, you can refer to the priors bundled with the program (which are outputs of geodynamo simulations).

The reading methods are found in pygeodyn/inout/priors.py. To use your own data, you must implement your reading method in the same file, following the same format.

Namely:

# The method name must have this name format and signature
def read_<new_prior_type>(data_directory, dt_sampling, return_U=True):
    if return_U:
        # Read your prior data...
        # ...and return the 2D array of realisations (nb_realisations x nb_coeffs) for U
        return U
    else:
        # Read your prior data...
        # ...and return the 1D array of times (nb_realisations) and the 2D arrays of realisations (nb_realisations x nb_coeffs) for B and Er
        return times, B, Er

If your prior does not supply time data, return None instead. It will only prevent the use of dense matrices in the Auto-Regressive processes.
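As an illustration, here is a minimal sketch of such a reading method for a hypothetical prior called my_sim stored as NumPy .npy files. The file names, array shapes and the optional times file are assumptions made for this sketch; only the signature and the returned values follow the format above.

import os
import numpy as np

def read_my_sim(data_directory, dt_sampling, return_U=True):
    # dt_sampling is part of the required signature (unused in this sketch)
    if return_U:
        # 2D array of realisations (nb_realisations x nb_coeffs) for U
        return np.load(os.path.join(data_directory, 'U.npy'))
    else:
        B = np.load(os.path.join(data_directory, 'B.npy'))
        Er = np.load(os.path.join(data_directory, 'Er.npy'))
        times_file = os.path.join(data_directory, 'times.npy')
        # Per the note above, return None when the prior has no time data
        times = np.load(times_file) if os.path.isfile(times_file) else None
        return times, B, Er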

Now, all that is left is to set prior_type in the config file to the name <new_prior_type> that you chose (and prior_data_dir to the directory where the prior data are stored).

Then, try to launch a computation and check whether any errors are raised (NotImplementedError if your prior is not associated with a reading function, IOError if the files were not found…). If not, your prior data will be used as average data in the AR-1 process and to compute the covariance matrices.

Observations

Examples of valid reading methods are given for the observations bundled with the program.

The strategy for using another type of observations follows the same paradigm as for priors: you must define a new observation type by implementing its reading method in pygeodyn/inout/observations.py. In more detail, the format must be:

# The method name must have this name format and signature
def build_<new_obs_type>_observations(cfg, nb_realisations, measure_type):
    # Read your observation data for measure_type='MF' or 'SV'...
    # ...and return a dict of Observation objects with numpy.datetime64 objects as keys
    return observations

The implementation is trickier in this case and several points need to be highlighted:

  • Your new reading method must take measure_type as an argument; it will be equal to either ‘MF’ (for magnetic field observations) or ‘SV’ (for secular variation observations).

  • The returned object must be a dictionary whose keys are numpy.datetime64 objects corresponding to the dates of the observations.

  • This dictionary must contain Observation objects, which are a handy way to store all the observation parameters:
    • The data \(Y\) as 2D numpy.ndarrays (nb_realisations x nb_observed_coefs)

    • The observation operator \(H\) (nb_observed_coefs x nb_coefs) that verifies \(Y = HX\) where \(X\) is a vector containing the spectral coefficients of the magnetic field or secular variation (size: nb_coefs).

    • The observation error matrices \(R\) (nb_observed_coefs x nb_observed_coefs).

For reference, here is the basic way to create an Observation object and the observations dictionary:

import numpy as np
# Observation is the container class provided by pygeodyn (see pygeodyn's observation module for the import)

observations = {}
# ...
# Assuming you have extracted all the aforementioned parameters
observation_at_t = Observation(obs_data, H, R)
date = np.datetime64('1997-03')  # Ex: create a datetime64 for Mar 1997
observations[date] = observation_at_t
# ... Do this for all observation dates and return the observations dict
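For a more complete picture, here is a hedged sketch that assembles the whole dictionary over several epochs. It assumes, purely for illustration, that the first nb_observed_coefs spectral coefficients are observed directly (so \(H\) is a truncated identity) with uncorrelated errors (diagonal \(R\)); the sizes, dates and dummy data are hypothetical, and a real reader would load them from your files.

import numpy as np

def build_my_obs_observations(cfg, nb_realisations, measure_type):
    # Sizes and error level are assumptions for this sketch
    nb_coefs = 120           # size of the state vector X
    nb_observed_coefs = 24   # number of observed coefficients
    sigma = 5.0              # observation error standard deviation

    H = np.eye(nb_observed_coefs, nb_coefs)        # so that Y = H X
    R = (sigma ** 2) * np.eye(nb_observed_coefs)   # diagonal error covariance

    observations = {}
    for year in range(1997, 2000):
        date = np.datetime64('{}-03'.format(year))
        # Dummy data: a real reader would load measure_type ('MF' or 'SV')
        # data of shape (nb_realisations x nb_observed_coefs) from disk
        obs_data = np.zeros((nb_realisations, nb_observed_coefs))
        # Observation is the pygeodyn container class used in the snippet above
        observations[date] = Observation(obs_data, H, R)
    return observations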

After implementing your new method, set obs_type in the config file to the name <new_obs_type> that you chose (and obs_dir to the directory where the data are stored).

Then, try to launch a computation and check whether any errors are raised or logged (if no reading function is found, you will get a warning in the logs saying that all analyses will be skipped). If not, your observation data will be used as observations in the analysis step of the algorithm.

Example

Work in progress

Write an example with all the steps
