Chandra Source Catalog 2.0 Processing
Version 2.0 of the CSC is created by processing each Chandra dataset with a series of automated data analysis pipelines. Collectively, the pipelines are known as "Level 3 Processing" and the data products reflect that in their filenames—e.g. the event file suffix is evt3.fits. For more on this nomenclature, see Chandra Standard Data Processing, which also describes the Level 1 and 2 Chandra data products.
This page provides an overview of the various pipeline stages. More details on a particular pipeline can be found by following the links from each of the headings below:
- Observation Selection
- Pre-Calibrate/Pre-Detect Pipeline
- Fine Astrometry Pipeline
- Calibrate Pipeline
- ComboDet Pipeline
- Source Validation Pipeline
- MLE Pipeline Run 1
- MLE Pipeline Run 2
- Stacker Pipeline
- Master Match Pipeline
- Source Properties Pipeline
- Convex Hull Source Properties Pipeline
- Limiting Sensitivity Pipeline
There are a number of common terms and concepts that are used in the pipelines, including:
- an observation interval (OBI), which represents a period of time when Chandra was observing a single part of the sky with the same instrument;
- a stack of observations, which represents all the observations of a patch of sky (as discussed below, the observation intervals must be closely aligned to form a stack);
- detections, which are regions in the stack where the local event density has been identified by our detection algorithms to be significantly higher than the background;
- sources, which are the astronomical objects that produce the X-ray emission, and are identified by matching detections in overlapping stacks;
- compact sources, which are associated with detections that show at most a small extent when compared to the Chandra Point Spread Function (these form the bulk of the catalog);
- and convex-hull sources, which are associated with detections that are much larger than the Chandra PSF.
There are two different types of extended sources in CSC 2.0, each associated with a different characterization approach. Sources that are small enough to be characterized by fitting them with an elliptical source model (after detection using either wavdetect or mkvtbkg) can be extended if their deconvolved size (after accounting for the Chandra PSF) is greater than zero. We call these simply 'extended sources'. On the other hand, there are sources that are larger and more irregular in shape, and cannot be described by an elliptical model. These sources are detected as large polygons using the mkvtbkg algorithm, and later characterized by transforming the polygons into convex hulls that surround the extended emission. We therefore call these sources 'convex hull sources' (CHS).
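As a rough illustration only (the pipeline actually determines extent by maximum-likelihood fitting of an elliptical model, not by this simple formula), the 'deconvolved size greater than zero' criterion can be pictured with Gaussian widths adding in quadrature; the function names here are hypothetical:

```python
import math

def deconvolved_size(observed_sigma, psf_sigma):
    """Rough deconvolved source size assuming Gaussian source and PSF,
    where widths add in quadrature: observed^2 = intrinsic^2 + psf^2.
    Illustrative only; the catalog derives extent from an MLE fit."""
    return math.sqrt(max(observed_sigma ** 2 - psf_sigma ** 2, 0.0))

def is_extended(observed_sigma, psf_sigma):
    """A source is flagged extended when its deconvolved size exceeds zero."""
    return deconvolved_size(observed_sigma, psf_sigma) > 0.0
```

A source whose fitted width matches the local PSF width deconvolves to zero size and is treated as point-like.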
The CSC 2.0 release is the first to detect and report the properties of large X-ray sources. The convex-hull source properties should therefore be considered preliminary.
Observation Selection
The Observation Selection page describes which observation intervals (OBIs) are chosen for catalog processing.
Each observation interval is assigned to a 'stack' such that all coaligned (within 1 arcmin) observations from the same instrument (e.g. ACIS or HRC) are in the same stack and can therefore be processed as a group. Stacks may therefore contain one or more observation intervals, and there can be multiple stacks which cover the same part of the sky.
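The stack-assignment rule above can be sketched as a greedy grouping: an OBI joins an existing stack when it uses the same instrument and points within the coalignment tolerance, otherwise it seeds a new stack. This is a simplified illustration, and the dictionary keys are hypothetical, not the pipeline's actual data model:

```python
import math

def angular_sep(p1, p2):
    """Great-circle separation in arcmin between (ra, dec) pairs in degrees."""
    ra1, dec1, ra2, dec2 = (math.radians(v) for v in (*p1, *p2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_sep)))) * 60.0

def assign_stacks(obis, tol_arcmin=1.0):
    """Greedily group OBIs into stacks of coaligned, same-instrument pointings.

    Each OBI is a dict with 'id', 'instrument', and 'pointing' = (ra, dec) deg.
    """
    stacks = []
    for obi in obis:
        for stack in stacks:
            if (stack['instrument'] == obi['instrument']
                    and angular_sep(stack['pointing'], obi['pointing']) < tol_arcmin):
                stack['obis'].append(obi['id'])
                break
        else:
            # no coaligned stack with this instrument yet: start a new one
            stacks.append({'instrument': obi['instrument'],
                           'pointing': obi['pointing'],
                           'obis': [obi['id']]})
    return stacks
```

With this rule two ACIS pointings 0.6 arcmin apart share a stack, while an HRC observation of the same field forms its own stack.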
Pre-Calibrate/Pre-Detect Pipeline
The Pre-Calibrate pipeline is run for each OBI that is a member of a stack with more than one OBI.
The Pre-Detect step uses a run of the wavdetect program with conservative parameter settings to identify bright point sources suitable for astrometrically matching the observations that comprise each observation stack.
- Reprocess the selected datasets with the same CALDB.
- Run wavdetect to create a bright-source list.
Fine Astrometry Pipeline
The Fine Astrometry pipeline is run to compute the astrometric corrections needed to align each observation in a stack to the same astrometric frame. It is run on the observations that went through the Pre-Calibrate/Pre-Detect pipeline.
- Calculate astrometric translations for each observation (usually less than 1 pixel).
- Update aspect solution files to be consistent with the correction.
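A minimal sketch of the translation step: given bright sources matched between a reference frame and an observation, a robust (median) offset gives the shift to apply. This is an assumption-laden illustration, not the actual fine-astrometry algorithm, and the function name is hypothetical:

```python
import statistics

def astrometric_shift(ref_positions, obs_positions):
    """Estimate the (dx, dy) translation aligning obs to ref (same units,
    e.g. pixels). Positions are matched pairs: obs_positions[i] detects the
    same bright source as ref_positions[i]. The median keeps a few bad
    matches from pulling the solution."""
    dxs = [rx - ox for (rx, _), (ox, _) in zip(ref_positions, obs_positions)]
    dys = [ry - oy for (_, ry), (_, oy) in zip(ref_positions, obs_positions)]
    return statistics.median(dxs), statistics.median(dys)
```

As the text notes, the recovered shifts are typically below one pixel; the aspect solution is then updated so downstream products inherit the correction.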
Calibrate Pipeline
The Calibrate pipeline is run for each OBI in the catalog.
- Reprocess the selected datasets with the same CALDB.
- Identify and remove background flares.
- Create data products for use in the other pipelines.
- Create background maps.
ComboDet Pipeline
The ComboDet (combine and detect) pipeline is run for each calibrated OBI from the Calibrate pipeline to create combined data products and identify candidate detections for both the compact and convex-hull source lists.
- Reproject observations to common tangent plane for stack.
- Detect faint source candidates with wavdetect.
- Combine wavdetect detections at different scales.
- Detect additional candidates with mkvtbkg.
- Calculate the source and background regions.
- Calculate the limiting sensitivity for each OBI and then for the stack.
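The reprojection step maps each observation's celestial coordinates onto the stack's common tangent plane. A standard gnomonic (TAN) projection, which is what a tangent-plane reprojection uses in general, can be sketched as follows (illustrative; the pipeline works on event files, not bare coordinate pairs):

```python
import math

def gnomonic(ra, dec, ra0, dec0):
    """Project (ra, dec) onto the tangent plane at (ra0, dec0); all in degrees.

    Returns standard coordinates (xi, eta) in degrees, with (0, 0) at the
    tangent point shared by all observations in the stack.
    """
    ra, dec, ra0, dec0 = map(math.radians, (ra, dec, ra0, dec0))
    cos_c = (math.sin(dec0) * math.sin(dec)
             + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    xi = math.cos(dec) * math.sin(ra - ra0) / cos_c
    eta = (math.cos(dec0) * math.sin(dec)
           - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0)) / cos_c
    return math.degrees(xi), math.degrees(eta)
```

Because every OBI in the stack is projected about the same tangent point, their events land on a shared pixel grid and can be combined directly.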
Source Validation Pipeline
The Source Validation pipeline is run to reconcile the detections from wavdetect and mkvtbkg that form the list of compact sources, and to review the convex-hull detections.
- Define 'bundles' of source detections which are overlapping or nearly so (this is only for compact sources).
- Flag detection problems (such as pileup).
- Perform QA to inspect, add, remove, modify, flag sources as needed.
- Perform QA to inspect, add or modify convex-hull sources.
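The 'bundle' definition above is essentially a connected-components problem: link any two detections whose regions overlap or nearly overlap, then take each connected group as a bundle. A union-find sketch, under the simplifying assumption that overlap is judged by a single center-to-center radius:

```python
def bundle_detections(detections, overlap_radius):
    """Group detections into 'bundles' of overlapping or nearly-overlapping
    regions. detections: list of (x, y) centers in a common frame; two
    detections are linked when closer than overlap_radius, and bundles are
    the connected components of that graph (union-find)."""
    parent = list(range(len(detections)))

    def find(i):
        # follow parent pointers to the root, with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(detections):
        for j in range(i + 1, len(detections)):
            xj, yj = detections[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 < overlap_radius ** 2:
                parent[find(i)] = find(j)  # merge the two components

    bundles = {}
    for i in range(len(detections)):
        bundles.setdefault(find(i), []).append(i)
    return sorted(bundles.values())
```

In the real pipeline the overlap test involves the actual source regions (which scale with the PSF), but the grouping logic is the same.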
MLE Pipeline Run 1
The MLE (Maximum Likelihood Estimator) pipeline takes the candidate compact sources in each bundle and assesses them using a source region significantly larger than the PSF, updating the source positions and evaluating their likelihood values.
- Create ray trace PSF models in each energy band for each candidate source in the bundle.
- Create background model for source neighbourhood using the adaptively smoothed backgrounds.
- Perform maximum likelihood simultaneous fit for combined data in source bundle to derive best fit source positions, possible extent, and corresponding model likelihood.
- Classify detection as true, marginal or false based on likelihood thresholds from simulations.
- For true or marginal detections, compute MCMC confidence intervals for parameters.
- Generate per-detection data products.
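The classification step can be illustrated with a simple Poisson detection statistic: the likelihood of seeing at least the observed counts from background alone, compared against thresholds. The thresholds below are made-up placeholders; the real ones come from the simulations mentioned above and depend on band, instrument, and off-axis angle:

```python
import math

def detection_likelihood(n_counts, b_expected):
    """-ln of the Poisson probability of observing >= n_counts given an
    expected background of b_expected counts (a common detection statistic)."""
    p_less = sum(math.exp(-b_expected) * b_expected ** k / math.factorial(k)
                 for k in range(n_counts))
    # clamp to avoid log(0) for overwhelmingly significant detections
    return -math.log(max(1.0 - p_less, 1e-300))

def classify(n_counts, b_expected, thresh_true=15.0, thresh_marginal=8.0):
    """Classify a detection as TRUE, MARGINAL, or FALSE.

    thresh_true and thresh_marginal are illustrative values only; the
    catalog's thresholds are calibrated from simulations."""
    L = detection_likelihood(n_counts, b_expected)
    if L >= thresh_true:
        return "TRUE"
    if L >= thresh_marginal:
        return "MARGINAL"
    return "FALSE"
```

A strong excess over background (20 counts on an expected 2) classifies as TRUE, while a mild excess (3 counts on 2) does not survive the cut.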
The Rebundle step checks the new source positions and recalculates the assignment of sources to bundles.
MLE Pipeline Run 2 (Recenter)
The MLE (Maximum Likelihood Estimator) pipeline takes the candidate sources in each reassigned bundle and assesses them, using smaller source regions. The source positions are further updated. The steps are the same as for the first run.
After the run, QA is performed to inspect and adjust bundle positions where needed.
Stacker Pipeline
The Stacker pipeline creates a merged detection list for the compact sources in an observation stack.
- Combine outputs for all MLE pipelines in stack.
- Generate per-stack detection list.
- Perform QA to reject or flag problem sources.
Master Match Pipeline
The Master Match pipeline reconciles detections of the same compact source in different stacks. The method is similar to that used in Release 1.
- Identify stacks which overlap despite being either (a) from different instruments or (b) with pointings more than 1 arcminute apart.
- Taking different PSF sizes into account, identify detections of the same astronomical source in different stacks and define 'master sources'.
- Assign 2CXO catalog names to master sources.
- Perform manual QA to handle complex match cases.
Source Properties Pipeline
The Source Properties pipeline is run for each master (compact) source and energy band.
- Rebundle sources (again).
- Calculate aperture photometry properties (flux PDFs) per observation and stack.
- Calculate remaining source properties per observation and stack.
- Group observations which are consistent with each other into 'blocks'.
- Calculate source properties per block.
- Calculate master average flux properties.
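The 'flux PDF' idea can be sketched with the simplest possible Bayesian aperture-photometry model: a Poisson posterior for the expected source counts in an aperture, given the observed counts and a known expected background. The catalog's actual treatment is more elaborate (it marginalizes over the background and converts counts to flux), so take this grid-based toy as an illustration only:

```python
import math

def flux_pdf(n_counts, b_expected, s_grid):
    """Posterior PDF for the expected source counts s in an aperture, with a
    flat prior and a fixed expected background b:
        p(s | n, b) proportional to (s + b)^n * exp(-(s + b))
    Returned normalized over the supplied grid of s values."""
    pdf = [(s + b_expected) ** n_counts * math.exp(-(s + b_expected))
           for s in s_grid]
    norm = sum(pdf)
    return [p / norm for p in pdf]
```

For 10 observed counts on an expected background of 2, the posterior peaks near a net source contribution of 8 counts, and the full curve (not just the peak) is what gets propagated to the block and master averages.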
Convex Hull Source Properties Pipeline
The convex hull pipeline is run for each band and set of overlapping stacks to complete the analysis of highly extended sources (these are called convex-hull sources to distinguish them from the compact sources).
- Augment the detection list to include additional convex-hull polygons.
- Create a set of master-source candidates by matching the detections across overlapping stacks, and combining the polygons to come up with a master-match convex hull.
- Perform quality assurance to remove invalid detections and to adjust the master-match convex hull where necessary.
- Calculate source properties for the convex-hull sources. Note that the set of properties is limited to flux, likelihood, and position, and many other values, such as variability and extent, are not calculated for convex-hull sources.
- Assign 2CXO source name to the master source and add to catalog (to differentiate them from compact sources, the names of convex-hull sources end in an X character).
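Combining the per-stack polygons into a master-match convex hull amounts to pooling all the matched polygons' vertices and taking their convex hull. A standard implementation (Andrew's monotone chain) as a self-contained sketch:

```python
def convex_hull(points):
    """Convex hull of a set of 2-D points via Andrew's monotone chain,
    returned in counter-clockwise order. Combining overlapping detection
    polygons into one master hull amounts to taking the hull of all their
    vertices pooled together."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 for a counter-clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop each chain's last point (it repeats the other chain's first)
    return lower[:-1] + upper[:-1]
```

Pooling the corners of two overlapping squares, for example, yields a single hexagonal hull enclosing both, with interior vertices discarded.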
Limiting Sensitivity Pipeline
The limiting sensitivity pipeline calculates the sensitivity, in each band, for point-source detection for each location covered by the catalog. It handles overlapping stacks by taking the lowest value from all stacks that cover the same point on the sky.
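The overlap rule above reduces to a per-pixel minimum over the per-stack sensitivity maps, since a lower limiting flux means a fainter source can be detected. A sketch, assuming the maps share a common grid and use None for uncovered pixels:

```python
def combined_sensitivity(maps):
    """Merge per-stack limiting-sensitivity maps by taking, at each pixel,
    the lowest (best) value among the stacks that cover it.

    maps: list of equal-length sequences; None marks a pixel a stack
    does not cover."""
    combined = []
    for pixel_values in zip(*maps):
        covered = [v for v in pixel_values if v is not None]
        combined.append(min(covered) if covered else None)
    return combined
```

A pixel covered by two stacks inherits the deeper of the two limits; a pixel covered by only one stack keeps that stack's value.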