Last modified: 24 October 2019

URL: https://cxc.cfa.harvard.edu/csc/proc/source.html

Source Properties Pipeline


The Source Properties pipeline is run for each master source produced by the Master Match pipeline. It operates on the source and background regions, rather than on the full-field data used in earlier steps of catalog processing.

[NOTE]
Note on Errors

All values calculated in the Source Properties pipeline, except for spatial quantities (position and size), are reported with two-sided confidence limits.

Rebundle sources

The construction of source-bundle regions and the assignment of sources to bundles are repeated following the final master match step.

Per-stack aperture photometry

Calculate probability density functions (PDFs) for count rate and flux for each band, per observation and per stack.
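A minimal sketch of how such a count-rate PDF could be built, assuming Poisson counts in the source and background apertures and a flat prior, with the background marginalized numerically. The function name, grids, and prior are illustrative; the pipeline's actual method differs in detail.

```python
import numpy as np
from scipy.stats import poisson

def rate_pdf(n_src, n_bkg, area_ratio, exposure, rates):
    """P(rate | data) with a flat prior and the background marginalized.

    n_src, n_bkg -- counts observed in the source and background apertures
    area_ratio   -- background-to-source aperture area ratio
    exposure     -- effective exposure time [s]
    rates        -- uniform grid of candidate source count rates [counts/s]
    """
    # grid of possible background counts inside the source aperture
    b = np.linspace(0.0, 3.0 * max(n_bkg, 1) + 10.0, 400)
    post = np.array([
        np.sum(poisson.pmf(n_src, r * exposure + b) *
               poisson.pmf(n_bkg, b * area_ratio))
        for r in rates
    ])
    post /= post.sum() * (rates[1] - rates[0])  # normalize to unit area
    return post
```

From such a PDF, the mode and two-sided confidence limits (as noted above, all non-spatial quantities carry two-sided limits) follow by locating the peak and integrating the tails.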

Per-stack associated properties

Calculate remaining source properties per observation and per stack.

  • Measure the source and PSF size; flag extended sources.
  • Create light curves and look for variability.
  • Perform spectral fitting to get flux.
  • Calculate model-independent fluxes.
  • Calculate additional source properties (e.g. position errors).

An example of the data products is shown in Figure 1.

[ACIS counts image, exposure map, and PSF image]

Figure 1. Counts image, exposure map, and PSF image of an ACIS detection.

Estimate compact source extent

A Mexican-Hat optimization method is used to measure the apparent source and PSF sizes, based on the raw extent of a source, i.e. its extent before subtraction of overlapping source regions. This is a refinement of the wavdetect results. The method is described in detail in the Measuring Detected Source Extent Using Mexican-Hat Optimization memo.
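The core idea can be illustrated in one dimension: correlate the profile with a Mexican-Hat (Ricker) wavelet over a range of scales and take the scale that maximizes the normalized response as the size estimate; a wider profile yields a larger best-fit scale. This is a toy 1D sketch, not the memo's 2D image-based implementation, and the function name is illustrative.

```python
import numpy as np

def best_mh_scale(profile, x, scales):
    """Return the wavelet scale maximizing the L2-normalized response."""
    best, best_a = -np.inf, None
    for a in scales:
        # Mexican-Hat (Ricker) wavelet of scale a
        w = (1.0 - (x / a) ** 2) * np.exp(-x**2 / (2.0 * a**2))
        w /= np.sqrt(np.sum(w**2))        # unit L2 norm
        resp = np.sum(w * profile)        # correlation with the profile
        if resp > best:
            best, best_a = resp, a
    return best_a
```

For a Gaussian profile of width sigma, the normalized response peaks at a scale proportional to sigma, so comparing the best-fit scales of the source and PSF images gives a size comparison and an extent flag.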

Create light curves and look for variability

For each source, the events across all chips are reduced to a common set of good time intervals (GTIs). The time-resolved fraction of aperture area is then calculated, i.e. how much exposure the source "lost" by dithering off the chip or across a bad column.

[Plot of dither: fraction of aperture area vs time offsets [s]]

Figure 2. Plot of dither: fraction of aperture area vs time offsets [s].
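Reducing per-chip GTI lists to a common set can be sketched as a sorted-interval intersection; the interval format and function name here are illustrative.

```python
from functools import reduce

def intersect_gtis(gtis_a, gtis_b):
    """Intersect two time-sorted lists of (start, stop) intervals [s]."""
    out, i, j = [], 0, 0
    while i < len(gtis_a) and j < len(gtis_b):
        start = max(gtis_a[i][0], gtis_b[j][0])
        stop = min(gtis_a[i][1], gtis_b[j][1])
        if start < stop:
            out.append((start, stop))
        # advance whichever interval ends first
        if gtis_a[i][1] < gtis_b[j][1]:
            i += 1
        else:
            j += 1
    return out

def common_gtis(per_chip_gtis):
    """Reduce GTI lists from all chips to the common valid intervals."""
    return reduce(intersect_gtis, per_chip_gtis)
```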

Several variability tests are run on the data, taking the dither information into account:

[Gregory-Loredo light curve (top) compared to the same light curve with dither removed (bottom)]

Figure 3. Gregory-Loredo light curve (top) compared to the same light curve with dither removed (bottom).

The Gregory-Loredo light curve (lc3.fits) is included in the data products.
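As a much simpler illustration than Gregory-Loredo, a variability test that accounts for the dither information might divide the binned count rate by the time-dependent aperture-area fraction before testing for constancy. This is a hedged sketch with illustrative names, not one of the catalog's actual tests.

```python
import numpy as np
from scipy.stats import chi2

def variability_pvalue(counts, frac_area, bin_exposure):
    """Chi-square constancy test on dither-corrected binned rates.

    counts       -- events per time bin
    frac_area    -- fraction of the aperture on the detector, per bin
    bin_exposure -- duration of each time bin [s]
    """
    good = frac_area > 0                      # skip bins fully dithered off
    rate = counts[good] / (frac_area[good] * bin_exposure)
    err2 = np.maximum(counts[good], 1) / (frac_area[good] * bin_exposure) ** 2
    mean = np.average(rate, weights=1.0 / err2)
    stat = np.sum((rate - mean) ** 2 / err2)
    return chi2.sf(stat, rate.size - 1)       # small p-value -> likely variable
```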

Perform spectral fitting to get flux

For each source, a PI spectrum and the corresponding ARF and RMF calibration files are created; these files are included in the distributed data products.

If the spectrum has at least 150 net counts in the 0.5-7.0 keV range, several spectral models are fit to the data: a black body model, a power law model, and a thermal plasma model. Corrections for the PSF aperture fraction, livetime, and ARF are applied when fitting the models. For more information on spectral fitting, refer to the Spectral Properties page.

The free parameters in the power law fit are the total integrated flux, the total neutral hydrogen absorbing column density, and the power law photon index. In the black body model fit, the free parameters are the total integrated flux, the total neutral hydrogen absorbing column density, and the black body temperature. The initial value of the hydrogen column density (nH) is taken from Colden, the CXC's Galactic neutral hydrogen column density calculator. Note that spectral fit parameters may be unreliable for sources at large off-axis angles, where background levels can be high. A background-fitting approach will be considered for future releases of the Catalog.
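The shape of a power law fit can be sketched with a least-squares fit of a normalization and photon index to a binned spectrum. This toy example ignores absorption and the instrument response (the pipeline fits models folded through the ARF and RMF); the symbols amp and gamma are illustrative stand-ins for the normalization and photon index.

```python
import numpy as np
from scipy.optimize import curve_fit

def powerlaw(energy, amp, gamma):
    """Photon spectrum: amp * E**(-gamma), E in keV."""
    return amp * energy ** (-gamma)

# illustrative binned spectrum over the 0.5-7.0 keV fitting range
energies = np.linspace(0.5, 7.0, 30)
observed = powerlaw(energies, 10.0, 1.7)   # noiseless toy data

popt, pcov = curve_fit(powerlaw, energies, observed, p0=(1.0, 1.0))
amp_fit, gamma_fit = popt
```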

Calculate additional source properties

A number of additional source properties are calculated in the Source Properties pipeline. These are described in detail in the Column Descriptions section.

Determine Bayesian blocks

Compare fluxes between observations and define groups of observations ('blocks') whose fluxes are consistent with each other (i.e. consistent with no variability between the observations in the block). Blocks may overlap in time; for example, if the time-ordered observations are 1, 2, 3, 4, 5, block 1 may consist of observations 1 and 4, and block 2 of observations 2, 3, and 5.

Define the 'best block' as the one with the greatest exposure time. Most source properties in the main master source table will be calculated for the best block; source properties for other blocks are stored in associated data products.
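The grouping and best-block selection described above can be sketched with a simple greedy scheme: assign each observation to a block whose mean flux it is statistically consistent with (blocks need not be contiguous in time), then pick the block with the largest total exposure. This is an illustrative stand-in, not the catalog's Bayesian blocks algorithm; the function names and the nsigma threshold are assumptions.

```python
import numpy as np

def group_consistent(fluxes, errors, nsigma=3.0):
    """Greedily group observations whose fluxes agree with a common mean."""
    fluxes = np.asarray(fluxes, float)
    errors = np.asarray(errors, float)
    unassigned = list(range(len(fluxes)))
    blocks = []
    while unassigned:
        block = [unassigned.pop(0)]           # seed a new block
        for i in list(unassigned):
            mean = np.mean(fluxes[block])
            if abs(fluxes[i] - mean) < nsigma * errors[i]:
                block.append(i)
                unassigned.remove(i)
        blocks.append(block)
    return blocks

def best_block(blocks, exposures):
    """The 'best block' is the one with the greatest total exposure."""
    return max(blocks, key=lambda b: sum(exposures[i] for i in b))
```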

The method used in the Bayesian blocks analysis is explained in this memo.

Determine per-block source properties for each master source

  • Calculate combined flux PDFs for each block, for each master source.
  • Calculate other source properties for each block.
  • Store properties for all blocks in data products.
  • Store properties for best block in master source table.

Determine master flux averages

In addition to the 'best block' values, a master flux averaged over all observations in the ensemble is calculated for each source.
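One plausible averaging scheme is an inverse-variance weighted mean of the per-observation fluxes; this is a sketch under that assumption, whereas the catalog combines the per-observation flux PDFs.

```python
import numpy as np

def master_flux(fluxes, errors):
    """Inverse-variance weighted average flux and its 1-sigma error."""
    f = np.asarray(fluxes, float)
    w = 1.0 / np.asarray(errors, float) ** 2   # weights = 1 / sigma**2
    avg = np.sum(w * f) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return avg, err
```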

Output data products

The pipeline produces these data products: