The interaction of ocean waves running into each other is the major source of seismic noise all over the Earth. This animation shows the pressure fluctuations and dominant frequency that these ocean wave interactions generated from 2003 to 2015. The data shown in this video were computed with the wave model WAVEWATCH III.

# Pythology Blog

I am a geophysics researcher at the Institut de Physique du Globe de Paris. This blog is intended to provide a space to share random things with the world. Due to my work, these random things are mostly computer related, especially Python related (hence the title).

## Tuesday, February 21, 2017

## Sunday, January 29, 2017

### Visualize your DICOM CT Scan in 3D with Paraview

CT Scan examinations often come with a data CD that contains the raw images of the scan. These can be easily visualized in 3D using freely available software such as Paraview. The steps to do this are as follows:

**Preparation:**

- Find the raw images. These can have a .dcm ending, but sometimes the files have no ending at all. The folder structure that I received on a CD from my particular examination looked as follows:

```
IMAGES/
└── PAT00000
    └── ST000000
        ├── SE000000
        │   └── CT000000
        ├── SE000001
        │   ├── CT000000
        │   ├── CT000001
        │   ├── CT000002
        │   ├── CT000003
        │   ├── CT000[...]
        │   └── CT000213
        └── SE000002
            └── CT000000
```

The folder SE000001 contains the set of sections that you want to visualize. Alternatively, you can explore the OsiriX database, which contains publicly available datasets.

- Copy all images to a new folder on your computer and change the file endings to .dcm.
- dcm images can be raw or compressed (as is the case for the OsiriX database). If the files are compressed, you might get an error later in Paraview. In this case, you can uncompress them in the terminal (bash) with the tool *gdcmconv*: `for file in *.dcm; do gdcmconv -i $file --raw -o $file; done`
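
The copy-and-rename step above can also be done with a small Python helper. This is just a sketch: `collect_series` is a hypothetical name, and the paths below are the example folder structure from above.

```python
from pathlib import Path
import shutil

def collect_series(src, dst):
    """Copy every file in the series folder `src` to `dst`,
    adding a .dcm ending so DICOM readers recognize the files."""
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    for f in sorted(src.iterdir()):
        if f.is_file():
            shutil.copy(f, dst / (f.name + ".dcm"))
    return sorted(dst.glob("*.dcm"))

# e.g.: collect_series("IMAGES/PAT00000/ST000000/SE000001", "ct_series")
```
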

**Open the files in Paraview:**

- Open Paraview and click on File -> Open. Browse to the folder that contains the set of sections. The files should show up as a collapsed list, e.g. as CT...dcm. Open them and select "DICOM Files (directory)" as the reader that should be used. If you get an error (warnings are often OK), check that the files are really uncompressed, as explained above.
- If everything went well, you should see an object appear in the Paraview "Pipeline Browser" (usually on the left).
- Further down, in the "Properties" window, click "Apply" to display the data.
- Once the data is loaded, you can select "Volume" rendering in the top toolbar instead of "Outline".
- In the Properties window, click "Edit" under the "Coloring" tab. A new window should appear on the right-hand side that you can use to adapt the opacity and color of the volume.
- Click the colorbar to add new points. Lower them to make regions with certain data values more transparent (see image below).

## Wednesday, May 18, 2016

### Designing 2D colormaps and using them with matplotlib

UPDATE: check out the GitHub project with the blender scripts and python module.

A 2d colormap is designed to represent two-dimensional data on a two-dimensional domain. Common examples are wind direction and magnitude, signal amplitude and frequency, or a complex function's magnitude and argument. Typically, direction or frequency is represented by the color hue, and magnitude by the lightness and/or saturation of the colors.

A simple 2d colormap can easily be designed in the HSV (hue, saturation, value) color space. Unfortunately, HSV is not perceptually uniform: equal distances in the HSV domain can be perceived as visually different, so it distorts the represented data. The python module colorspacious (pip install colorspacious) provides access to a colorspace in which equal distances map to similar perceived distances. I have written three short blender scripts (available in the GitHub project mentioned in the update above) that allow you to visualize this colorspace in blender and to export color values on paths and surfaces to colormaps.

In contrast to a 1d colormap, which can be represented as a path in the 3d colorspace, a 2d colormap can be represented as a surface. A surface with a topology that can easily be mapped to a rectangle is particularly useful, so that the two dimensions are clear. Blender offers so-called NURBS surfaces, defined from a rectangular grid of spline nodes, that are perfect for this purpose.

The simplest and obvious choice for a 2d colormap is to use lightness as one axis and color hue as the other. The color saturation can be adapted for each lightness to span the maximum possible saturation range. However, this approach has several disadvantages. The uniform colorspace has 6 corners that contain the most saturated colors: blue, cyan, green, yellow, red, magenta. This is used in hexagon representations of the colorspace.

To make use of the full extent of the colorspace, we should therefore try to adapt the saturation to the possible values of different colors and go into these corners. On the other hand, this shouldn't distort distances too much, because the perceived color differences would become less uniform. More problematic is that each corner has a different lightness: blue and red are darker colors than yellow and cyan, for example. The following two surfaces try to find a good balance between all of these constraints. The topology of each surface is represented as a wireframe mesh. Both surfaces are not fully uniform, which is good because it allows us to better explore the color space. This colormap goes from black to white and has a maximally saturated area in the middle (like HSL).
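
For comparison, the "simplest and obvious" construction can be sketched in a few lines with matplotlib alone: hue along one axis, HSV value (lightness) along the other, at full saturation. This is the naive, non-uniform map that the blender surfaces above try to improve on.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

# naive 2d colormap in HSV: value along axis 0, hue along axis 1
nl, nh = 64, 256
value, hue = np.meshgrid(np.linspace(0., 1., nl),
                         np.linspace(0., 1., nh), indexing='ij')
hsv = np.dstack([hue, np.ones_like(hue), value])  # full saturation everywhere
cmap = hsv_to_rgb(hsv)                            # [nl, nh, 3] rgb colormap
```

Because this grid lives in HSV, equal steps along either axis are not perceptually equal, which is exactly the distortion discussed above.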

This colormap goes from black to white and has a maximally saturated area in the middle (like HSV):

Here is a comparison of these colormaps, showing a complex polynomial. In the case of poles and zeros, the black-to-white colormap is the best. For other applications, other colormaps will be better. The main point is that they are more uniform than HSV, which is shown in the last row. HSV enhances a few colors and therefore distorts the data:

A simple way to map data to these colors in python is shown in the following code snippet. `data` is a `[2, nwidth, nheight]` array with the data (scaled to the [0, 1] range) and `cmap` a `[ndim1, ndim2, 3]` array with the rgb values of the colormap. The resulting `[nwidth, nheight, 3]` rgb image can be displayed with `plt.imshow(rgb)`.


```
import numpy as np
from scipy.ndimage import map_coordinates

def cmap2d_to_rgb(data, cmap):
    """Map a [2, nrows, ncols] data array (values in [0, 1]) through a
    [ndim1, ndim2, 3] rgb colormap via bilinear interpolation."""
    data_dim, nrows, ncols = data.shape
    # convert the [0, 1] data values to fractional index coordinates
    idata = np.copy(data)
    idata[0] *= cmap.shape[0] - 1
    idata[1] *= cmap.shape[1] - 1
    # interpolate each color channel separately
    r = map_coordinates(cmap[:, :, 0], idata, order=1, mode='nearest')
    g = map_coordinates(cmap[:, :, 1], idata, order=1, mode='nearest')
    b = map_coordinates(cmap[:, :, 2], idata, order=1, mode='nearest')
    # reassemble to an [nrows, ncols, 3] rgb image
    return np.array([r, g, b]).transpose(1, 2, 0)
```
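
As a usage sketch, the same bilinear lookup can color a complex polynomial's magnitude and argument. The colormap here is a hypothetical, analytically generated black-to-white map (lightness along axis 0, a cosine-based hue along axis 1), not one of the blender-exported surfaces from the post.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# analytic toy colormap: lightness on axis 0, hue angle on axis 1
n1, n2 = 64, 256
light = np.linspace(0., 1., n1)[:, None, None]
phase = np.linspace(0., 2. * np.pi, n2)[None, :, None]
shifts = np.array([0., 2. * np.pi / 3., 4. * np.pi / 3.])
cmap = light * (0.5 + 0.5 * np.cos(phase - shifts))  # [n1, n2, 3]

# complex polynomial z^2 - 1 on a grid
x, y = np.meshgrid(np.linspace(-2., 2., 200), np.linspace(-2., 2., 200))
z = (x + 1j * y)**2 - 1.
mag = np.abs(z) / np.abs(z).max()          # in [0, 1] -> colormap axis 0
arg = (np.angle(z) / (2. * np.pi)) % 1.0   # in [0, 1) -> colormap axis 1
data = np.array([mag, arg])                # [2, nrows, ncols]

# the same lookup that cmap2d_to_rgb performs
idata = data * np.array([n1 - 1, n2 - 1])[:, None, None]
rgb = np.stack([map_coordinates(cmap[:, :, c], idata, order=1, mode='nearest')
                for c in range(3)], axis=-1)
# display with plt.imshow(rgb): zeros appear dark, the pole at infinity bright
```
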

## Wednesday, March 23, 2016

### Non-stationary, modulated discrete wavelet statistics

This is a simple test that I have done that plots wavelet statistics for a stationary white noise random model realisation. In this case, the expected distribution of the model amplitudes is Gaussian (normal). Due to the orthogonality of the Daubechies wavelet basis (using PyWavelets), the coefficient power of a single wavelet follows a chi-square distribution with 1 degree of freedom; this is simply the distribution of the square of a normally distributed random variable. Finally, the power averaged over all wavelets at one time follows a chi-square distribution with the number of wavelets as degrees of freedom (16 in this case). The plot and the attached script show these predictions compared to the measured distribution of synthesized white noise.
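
These predicted distributions can be checked independently of the wavelet transform with plain numpy/scipy. A minimal sketch; the sample sizes and the seed are arbitrary choices:

```python
import numpy as np
from scipy.stats import chi2, kstest

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 100000))  # 16 "wavelets", many time samples

# the square of one standard normal variable follows chi-square with 1 dof
p_single = kstest(x[0]**2, chi2(df=1).cdf).pvalue

# the mean of 16 squared normals, rescaled by 16, follows chi-square with 16 dof
p_mean = kstest(16. * (x**2).mean(axis=0), chi2(df=16).cdf).pvalue

# both p-values should be large, i.e. consistent with the predicted laws
```
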

Now imagine that the Gaussian distributed white noise is modulated (multiplied) in time by another (smooth) oscillatory function. This will not necessarily be visible in the distribution of the model itself, because the overall amplitudes might stay unchanged. Different statistics, however, do change: first, modulation in time affects *all* frequencies in the same manner. This means that we introduce a correlation between frequencies that Gaussian white noise doesn't have. This correlation in turn means that the averages of the 16 wavelets at each time are *not* chi-square distributed anymore, which can be tested. This example is shown in the image below. Another possible statistic would be to check whether wavelet components that follow each other are correlated and no longer chi-square distributed. This could be another hint of a modulating background function.

```
#!/usr/bin/env python
"""
This script demonstrates wavelet statistics of a white noise stationary
random model.
"""
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi2, norm
import pywt


def main():
    # prepare time grid:
    # np.random.seed(0)
    tmax = 1.
    npts = 2**15
    times = np.linspace(-tmax, tmax, npts)
    dt = times[1] - times[0]

    # generate real stationary time series:
    freqs = np.fft.rfftfreq(npts, d=dt)
    nfreqs = len(freqs)
    coeffs = np.random.normal(loc=0., scale=1., size=nfreqs) + \
        1j * np.random.normal(loc=0., scale=1., size=nfreqs)
    power = np.abs(coeffs)**2
    coeffs /= np.sqrt(2. * np.sum(power))
    coeffs *= npts
    model = np.fft.irfft(coeffs)
    coeffs = np.fft.fft(model)

    # discrete wavelet packet transform:
    wavelet = 'db4'
    level = 4
    order = "freq"
    wp = pywt.WaveletPacket(model, wavelet, 'symmetric', maxlevel=level)
    nodes = wp.get_level(level, order=order)
    labels = [n.path for n in nodes]
    wl_coeffs = np.array([n.data for n in nodes], 'd')
    wl_coeffs = np.abs(wl_coeffs)**2
    nlevels, nnodes = wl_coeffs.shape

    # statistics:
    # 1a) model histogram
    model_bins = np.linspace(-5., 5., 30)
    model_hist, _ = np.histogram(model, bins=model_bins)
    model_width = model_bins[1] - model_bins[0]
    model_hist = model_hist.astype(float) / model_width / npts
    # 1b) model distribution
    values = np.linspace(-5., 5., 100)
    sigma = np.sum(np.abs(coeffs)**2) / npts**2
    model_pdf = norm.pdf(values, loc=0, scale=sigma)
    # 2a) wavelet histogram
    wl_bins = np.linspace(0., 3., 50)
    wl_hist1, _ = np.histogram(wl_coeffs[0, :], bins=wl_bins)
    wl_hist2, _ = np.histogram(np.mean(wl_coeffs, axis=0), bins=wl_bins)
    wl_width = wl_bins[1] - wl_bins[0]
    wl_hist1 = wl_hist1.astype(float) / nnodes / wl_width
    wl_hist2 = wl_hist2.astype(float) / nnodes / wl_width
    # 2b) wavelet distribution
    chi2_pdf1 = chi2.pdf(wl_bins, 1)
    chi2_pdf2 = chi2.pdf(wl_bins * nlevels, nlevels) * nlevels

    # plotting:
    fig, axes = plt.subplots(2, 2, figsize=(14, 7))
    axes[0, 0].plot(times, model, label='stationary random model')
    axes[0, 0].set(xlabel='time', ylabel='amplitude',
                   title='stationary Gaussian model white noise realisation')
    axes[1, 0].bar(model_bins[:-1], model_hist, alpha=0.5, width=model_width,
                   color='green')
    axes[1, 0].plot(values, model_pdf, lw=2)
    axes[1, 0].set(xlabel='model amplitude', ylabel='frequency',
                   title=r'amplitude distribution ($\mathcal{N}$)')
    axes[0, 1].imshow(wl_coeffs, interpolation='nearest', cmap='viridis',
                      aspect="auto", origin="lower",
                      extent=[-tmax, tmax, 0, len(wl_coeffs)])
    axes[0, 1].set(xlabel='time', ylabel='wavelet number',
                   title='discrete wavelet coefficient power')
    axes[1, 1].bar(wl_bins[:-1], wl_hist1, width=wl_width, alpha=0.5,
                   color='blue')
    axes[1, 1].bar(wl_bins[:-1], wl_hist2, width=wl_width, alpha=0.5,
                   color='green')
    axes[1, 1].plot(wl_bins, chi2_pdf1, lw=2, c='red',
                    label=r'isolated wavelets ($\chi^2_{%d}$)' % 1)
    axes[1, 1].plot(wl_bins, chi2_pdf2, lw=2, c='magenta',
                    label=r'averaged wavelets ($\chi^2_{%d}$)' % nlevels)
    axes[1, 1].set(xlabel='variance', ylabel='frequency',
                   title='wavelet variance distribution')
    axes[1, 1].legend()
    fig.tight_layout(pad=0.1)
    plt.show()


if __name__ == "__main__":
    main()
```
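
The frequency-correlation test suggested above can be sketched as follows. The envelope and its strength are arbitrary choices, and `band_powers`/`mean_band_correlation` are hypothetical helper names; the decomposition reuses the same db4 wavelet packets as the script.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
npts = 2**14
noise = rng.normal(size=npts)
# smooth oscillatory envelope: 4 cycles over the whole record
envelope = 1. + 0.8 * np.sin(2. * np.pi * 4. * np.arange(npts) / npts)

def band_powers(signal, wavelet='db4', level=4):
    """Coefficient power of each level-4 wavelet packet band over time."""
    wp = pywt.WaveletPacket(signal, wavelet, 'symmetric', maxlevel=level)
    return np.array([np.abs(n.data)**2 for n in wp.get_level(level, 'freq')])

def mean_band_correlation(powers):
    """Mean absolute correlation between distinct frequency bands."""
    c = np.corrcoef(powers)
    return np.mean(np.abs(c[np.triu_indices_from(c, k=1)]))

r_stat = mean_band_correlation(band_powers(noise))            # near zero
r_mod = mean_band_correlation(band_powers(envelope * noise))  # clearly larger
```

The modulated signal should show a markedly higher cross-band correlation, since the envelope acts on all frequencies at once.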

## Wednesday, March 9, 2016

### Seismic Waves: Visualization of Different Phases

This is an illustration of different paths that seismic waves take when they propagate through the Earth.

This video shows the individual phases in an animation:

## Monday, November 23, 2015

### Earthquake Moment Tensor Radiation Pattern

This post is about the radiation pattern of elastic waves due to different sources that can be expressed as a moment tensor (see https://en.wikipedia.org/wiki/Focal_mechanism).

I have used mayavi for the visualization, equations from the Aki & Richards seismology book for the radiation patterns, and the script MoPaD (included in obspy) to compute the nodal lines of the moment tensor. The complete script is available here: github

UPDATE: This script is included in the obspy library

#### Explosive Source

This is a simple explosive source. P-wave energy radiates equally in all directions.

#### Double Couple Source

Double couple sources correspond to fractures. P-wave radiation is shown on the left side and S-wave radiation on the right. The green lines mark the 'nodal' lines where the polarity of the P-wave radiation changes from positive to negative; this is a parameter that is relatively simple to observe. In this plot, one of the green lines furthermore corresponds to the fault plane that fractured. Due to the symmetry of the radiation pattern, it is not possible to determine from it alone which of the two possible fault planes fractured.
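
As a minimal numerical sketch (not the mayavi script itself): following Aki & Richards, the far-field P amplitude in a unit direction gamma is proportional to the quadratic form gamma^T M gamma, so an explosive source radiates isotropically while a double couple has four lobes separated by nodal planes where the sign changes.

```python
import numpy as np

def p_radiation(M, theta, phi):
    """Far-field P-wave radiation amplitude of moment tensor M
    in the unit direction given by colatitude theta and azimuth phi."""
    gamma = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
    return gamma @ M @ gamma

explosion = np.eye(3)                    # isotropic source: amplitude 1 everywhere
double_couple = np.array([[0., 1., 0.],  # M_xy couple
                          [1., 0., 0.],
                          [0., 0., 0.]])

# the double couple peaks at 45 degrees between the x and y axes
# and vanishes on the nodal planes (e.g. along the x axis itself)
```
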

#### Egg source

This source corresponds to an expansion in the horizontal direction and a retraction in the vertical (in the image coordinate system). Such a source can be found, for example, in volcanoes (see http://www.ldeo.columbia.edu/~ekstrom/Projects/EQS/Iceland/).

#### Moment Tensor Inversion

From the elastic vibrations that are recorded at stations around the globe, source mechanisms of earthquakes are routinely determined. This is an example of an earthquake that occurred in 1996 in the Philippines. It resembles an imperfect double couple source. A complete catalogue can be found here: http://www.globalcmt.org/ . The 3D focal spheres that I show here are typically projected onto a 2D circle to plot their distribution on a map ('beachballs').

## Friday, October 2, 2015

### Spherical Harmonic Polar and Equatorial Asymptotics

Spherical harmonics are eigenfunctions of the Laplace-Beltrami operator in spherical coordinates. As such they are analogous to the Laplace-Beltrami eigenfunctions in Cartesian coordinates (sines/cosines) or polar coordinates (Bessel functions). The spherical coordinate system can be asymptotically understood in terms of these two 'flat' coordinate systems. Correspondingly, the shape of the latitudinal part of the spherical harmonics (the associated Legendre functions) behaves like the 'flat' eigenfunctions, as shown in the two figures below.

From this perspective, many *apparently* particular features of the spherical harmonic transform, such as rotations/Wigner symbols, the response to windowing and others, have simple analogues in the classical (2D) Fourier transform. Of course, a major difference to flat space are the particular periodic boundary conditions of the spherical coordinate system, which induce some of the special spherical properties (e.g. antipodal symmetry). Asymptotics are important because they allow us to understand spherical harmonic operations intuitively in terms of 'flat' space operations. I have used the simplest asymptotics in this post, which treat the equator and the poles as perfectly flat regions. More accurate, higher-order approximations cover a larger region of the sphere but are more difficult to understand in terms of flat Fourier transform properties.
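
The polar asymptotics can be illustrated numerically: near the pole, the Legendre polynomial P_l(cos theta) approaches the Bessel function J_0((l + 1/2) theta) (the Mehler-Heine limit). A minimal sketch with scipy; the degree and colatitude range are arbitrary choices:

```python
import numpy as np
from scipy.special import eval_legendre, j0

l = 200                              # spherical harmonic degree
theta = np.linspace(0., 0.2, 100)    # colatitude near the pole

legendre = eval_legendre(l, np.cos(theta))
bessel = j0((l + 0.5) * theta)
# for large l and small theta the two curves nearly coincide
```
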