Advanced Nabu usage¶

  • Modify flats/darks used in the processing pipeline
  • Using nabu through the Python API

Modify flats/darks¶

Usually, a full-field acquisition consists of:

  • A series of darks (e.g. 50)
  • A series of flats (e.g. 100)
  • A series of projections
  • ...

When it comes to flats/darks, the first step is to reduce the darks and flats.

  • This reduction can be a median, a mean, or a more complicated function (see the sketch after this list).
  • Each series of flats/darks yields one final flat/dark
  • There can be several series of flats/darks, therefore several "final" flats/darks.
    • An interpolation is done between them for flat-field normalization
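
For intuition, here is a minimal numpy sketch of the reduction and of the resulting flat-field normalization (the array names, shapes and values below are made up for illustration):

import numpy as np

# Hypothetical stacks of raw frames: (n_frames, n_rows, n_cols)
flats_series = np.ones((100, 256, 256), dtype=np.float32)
darks_series = np.zeros((50, 256, 256), dtype=np.float32)

# Reduction: each series is collapsed into one final frame
final_flat = np.median(flats_series, axis=0)
final_dark = np.mean(darks_series, axis=0)

# Flat-field normalization of one projection (standard formula)
projection = np.ones((256, 256), dtype=np.float32)
projection_norm = (projection - final_dark) / (final_flat - final_dark)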

The flats/darks reduction step occurs at the very beginning of nabu processing.
This "flats/darks reduction" step is the "DKRF widget" in Tomwer

By default, flatfield is enabled:

[preproc]
flatfield = 1

Nabu will

  • Load the final flats/darks from an existing file, if possible
    • Always the case if using Tomwer
  • Compute them otherwise

Exercise (optional)¶

Run a slice reconstruction and examine the beginning of the logs.
Something along these lines should appear:

Loading darks and refs from ['/scisoft/tomo_data/nabu/bambou_hercules_0001_onefile_darks.hdf5', '/scisoft/tomo_data/nabu/bambou_hercules_0001_onefile_flats.hdf5']

Forcing re-compute of flats/darks¶

In some cases, the final flats/darks should be re-computed (e.g. if the existing darks.hdf5 and flats.hdf5 files are invalid).
To do so, use the parameter flatfield = force-compute:

[preproc]
flatfield = force-compute

These final flats/darks can be saved in a custom location by specifying processes_file:

[preproc]
processes_file = /path/to/my_darks_flats.h5

Specifying custom flats/darks¶

There are two ways of providing flats/darks that you created.

Method 1: one single file

You can save both flats and darks in a single file, and use the processes_file parameter:

[preproc]
processes_file = /path/to/my_darks_flats.h5

Method 2: two files

By default, nabu looks in the same directory as the dataset for files named dataset_name_flats.hdf5 and dataset_name_darks.hdf5.
If you overwrite these files with your own, then nabu will use them for flat-fielding.
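
For example (hypothetical paths), for a dataset located at /data/sample/bamboo.nx, nabu would look for:

/data/sample/bamboo_darks.hdf5
/data/sample/bamboo_flats.hdf5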

To create those files automatically from Python, you can use the following piece of code:

from tomwer.core.scan.hdf5scan import HDF5TomoScan
# from tomwer.core.scan.edfscan import EDFTomoScan

# create an HDF5TomoScan from the NeXus file and its entry
scan = HDF5TomoScan("my_nexus_file.nx", "my_nx_tomo_entry")

# create one dark at the beginning of the acquisition
dark_frame = ... # must be a 2D numpy array
scan.save_reduced_darks(
    {
        0: dark_frame
    }
)

# create one flat near the beginning (frame index 1) and one at frame index 1000
flat_frame_1 = ... # must be a 2D numpy array
flat_frame_1000 = ... # must be a 2D numpy array
scan.save_reduced_flats(
    {
        1: flat_frame_1,
        1000: flat_frame_1000,
    }
)
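
To check the result, the reduced frames can be read back from the scan object. This is a minimal sketch, assuming load_reduced_darks / load_reduced_flats are the counterparts of the save methods used above:

# read the reduced frames back (assumed counterparts of the save_reduced_* methods)
reduced_darks = scan.load_reduced_darks()
reduced_flats = scan.load_reduced_flats()
print(sorted(reduced_darks.keys()), sorted(reduced_flats.keys()))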

Using Nabu API¶

This is an overview of Nabu API usage.
For comprehensive API documentation, please refer to the API reference.

Nabu provides building blocks that aim to be easy to use:

  • Input parameters are numpy arrays
  • Each function/class has a clear and decoupled role

Reconstruction¶

A frequent use of the nabu API is to simply reconstruct a sinogram.

import numpy as np
from nabu.testutils import get_data
from nabu.reconstruction.fbp import Backprojector

sino = get_data("mri_sino500.npz")["data"]
# Backprojection operator
B = Backprojector(sino.shape)
rec = B.fbp(sino)
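
If the rotation axis is not at the horizontal center of the sinogram, the center of rotation can be given when building the operator. A minimal sketch, assuming the rot_center parameter of Backprojector (the value below is made up, in pixels):

# hypothetical center of rotation, in pixels (sub-pixel values are allowed)
B = Backprojector(sino.shape, rot_center=255.5)
rec = B.fbp(sino)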

Phase retrieval¶

from nabu.preproc.phase import PaganinPhaseRetrieval

radio = ... # must be a 2D numpy array (one projection)

P = PaganinPhaseRetrieval(
    radio.shape,
    distance=0.5, # meters
    energy=20, # keV
    delta_beta=50.0,
    pixel_size=1e-06, # meters
)
radio_phase = P.apply_filter(radio)
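
The same operator can be reused for a whole stack of radios. A minimal sketch, assuming radios is a 3D numpy array of shape (n_angles, n_rows, n_cols):

import numpy as np

radios = ... # must be a 3D numpy array: (n_angles, n_rows, n_cols)
radios_phase = np.zeros_like(radios)
for i in range(radios.shape[0]):
    # apply the same Paganin filter to each projection
    radios_phase[i] = P.apply_filter(radios[i])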

Pipeline processing from a configuration file (from within Python!)¶

from nabu.pipeline.fullfield.processconfig import ProcessConfig
from nabu.pipeline.fullfield.local_reconstruction import ChunkedReconstructor

# Parse the configuration file, build the processing steps/options
conf = ProcessConfig("/path/to/nabu.conf")

# Spawn a "reconstructor"
R = ChunkedReconstructor(conf)
print(R.backend) # cuda or numpy
R._print_tasks() # preview how the volume will be reconstructed

# Launch reconstruction
R.reconstruct()