The CSC is created by processing each Chandra dataset with a series of automated data analysis pipelines. Collectively, the pipelines are known as "Level 3 Processing" and the data products reflect that in their filenames—e.g. the event file suffix is evt3.fits. For more on this nomenclature, see Chandra Standard Data Processing, which also describes the Level 1 and 2 Chandra data products.
The pipelines run in order:
Observation Selection » Pre-Calibrate/Pre-Detect Pipeline » Fine Astrometry Pipeline » Calibrate Pipeline » ComboDet Pipeline » Source Validation Pipeline » MLE Pipeline » Rebundle » MLE Pipeline run 2 » Stacker Pipeline » Master Match Pipeline » Source Properties Pipeline
Each observation interval is assigned to a 'stack' such that all co-aligned observations (pointings within 1 arcmin) are in the same stack and can be processed as a group. A stack may therefore contain one or more observation intervals.
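The stack-assignment rule above can be sketched as a transitive grouping of pointings: any two observations whose aim points lie within 1 arcmin of each other end up in the same stack. The helper names and the simple union-find below are illustrative assumptions, not the pipeline's actual implementation:

```python
import math

def angular_sep_arcmin(a, b):
    """Great-circle separation between two (ra_deg, dec_deg) points, in arcmin."""
    ra1, dec1, ra2, dec2 = map(math.radians, (*a, *b))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep)))) * 60.0

def assign_stacks(pointings, max_sep_arcmin=1.0):
    """Group observation intervals into stacks: any two pointings within
    max_sep_arcmin share a stack, transitively (union-find sketch)."""
    parent = list(range(len(pointings)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(len(pointings)):
        for j in range(i + 1, len(pointings)):
            if angular_sep_arcmin(pointings[i], pointings[j]) <= max_sep_arcmin:
                parent[find(i)] = find(j)
    stacks = {}
    for i in range(len(pointings)):
        stacks.setdefault(find(i), []).append(i)
    return list(stacks.values())
```

Note that the grouping is transitive, so a chain of pairwise-close pointings forms one stack even if its extremes are more than 1 arcmin apart.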
The Pre-Calibrate pipeline is run for each observation interval (OBI) that belongs to a stack containing more than one OBI.
The Pre-Detect step runs the wavdetect program with conservative parameter settings to identify bright point sources suitable for astrometrically matching the observations that make up each stack.
- Reprocess the selected datasets with the same CALDB.
- Run wavdetect to create bright source list.
The Fine Astrometry pipeline computes the astrometric corrections needed to align each observation in a stack to a common astrometric frame. It runs on the observations processed by the Pre-Calibrate/Pre-Detect pipeline.
- Calculate astrometric translations for each observation (usually less than 1 pixel).
- Update aspect solution files to be consistent with correction.
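The translation fit can be illustrated with a toy nearest-neighbour match between a reference bright-source list and an observation's source list, taking the mean offset of the matches. The function name, tuple layout, and match radius are illustrative assumptions, not the pipeline's actual algorithm:

```python
def fit_translation(ref, obs, match_radius=2.0):
    """Estimate the (dx, dy) translation that aligns obs source positions
    (tangent-plane pixels) to the reference list, as the mean offset of
    nearest-neighbour matches within match_radius pixels."""
    dxs, dys = [], []
    for (xr, yr) in ref:
        # nearest observed source to this reference source
        best = min(obs, key=lambda p: (p[0] - xr) ** 2 + (p[1] - yr) ** 2)
        if ((best[0] - xr) ** 2 + (best[1] - yr) ** 2) ** 0.5 <= match_radius:
            dxs.append(xr - best[0])
            dys.append(yr - best[1])
    if not dxs:
        return 0.0, 0.0  # no matches: apply no correction
    return sum(dxs) / len(dxs), sum(dys) / len(dys)
```

The resulting shifts are typically sub-pixel, as the text notes, and would then be folded back into each observation's aspect solution.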
The Calibrate pipeline is run for each OBI chosen by the observation selection process.
- Reprocess the selected datasets with the same CALDB.
- Identify and remove background flares.
- Create data products for use in the other pipelines.
- Create background maps.
The ComboDet (combine and detect) pipeline is run for each calibrated OBI from the Calibrate pipeline to create combined data products and identify candidate source detections.
- Reproject observations to common tangent plane for stack.
- Detect faint source candidates with wavdetect.
- Combine wavdetect detections at different scales.
- Detect extended source candidates with mkvtbkg.
- Calculate the source and background regions.
- Calculate the limiting sensitivity.
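The multi-scale combination step can be illustrated with a toy merge that keeps the detection from the smallest wavelet scale when detections at different scales coincide. The function name, tuple layout, and match radius are illustrative assumptions, not ComboDet's actual logic:

```python
def merge_scales(detections, match_radius=2.0):
    """Merge wavdetect candidates from different wavelet scales.
    Each detection is (x, y, scale); when two detections coincide
    within match_radius pixels, keep the one from the smaller scale."""
    merged = []
    for x, y, scale in sorted(detections, key=lambda d: d[2]):
        # keep this detection only if no already-kept (smaller-scale)
        # detection lies within the match radius
        if all((x - mx) ** 2 + (y - my) ** 2 > match_radius ** 2
               for mx, my, _ in merged):
            merged.append((x, y, scale))
    return merged
```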
The Source Validation pipeline is run to reconcile the source lists.
- Define 'bundles' of source detections which are overlapping or nearly so.
- Flag detection problems (pileup, etc.).
- Perform QA to inspect, add, remove, modify, flag sources as needed.
- Perform QA to inspect, add or modify convex-hull extended sources.
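The bundling of overlapping detections can be sketched as a transitive grouping of circular source regions; treating regions as plain circles and the pad parameter for "nearly overlapping" are simplifying assumptions of this sketch:

```python
def bundle(regions, pad=0.0):
    """Group detections into 'bundles': detections whose source regions
    (x, y, r) circles overlap, or come within pad of overlapping,
    share a bundle, transitively."""
    n = len(regions)
    bundles = [{i} for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi, ri), (xj, yj, rj) = regions[i], regions[j]
            if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= ri + rj + pad:
                bi = next(b for b in bundles if i in b)
                bj = next(b for b in bundles if j in b)
                if bi is not bj:
                    bi |= bj          # merge the two bundles
                    bundles.remove(bj)
    return bundles
```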
The MLE (Maximum Likelihood Estimator) pipeline takes the candidate sources in each bundle and assesses them using a source region significantly larger than the PSF, updating the source positions and evaluating their likelihood values.
- Create ray trace PSF models in each energy band for each candidate source in the bundle.
- Create background model for source neighborhood using the adaptively smoothed backgrounds.
- Perform maximum likelihood simultaneous fit for combined data in source bundle to derive best fit source positions, possible extent, and corresponding model likelihood.
- Classify detection as true, marginal or false based on likelihood thresholds from simulations.
- For true or marginal detections, compute MCMC confidence intervals for parameters.
- Generate per-detection data products.
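The classification step reduces to comparing each detection's fitted likelihood against simulation-derived thresholds. The sketch below shows the idea; the threshold values used in practice depend on band and stack and are not the placeholders shown here:

```python
def classify(log_likelihood, true_thresh, marginal_thresh):
    """Classify a detection from its MLE fit likelihood.
    true_thresh and marginal_thresh are placeholders standing in for
    the thresholds derived from false-source simulations."""
    if log_likelihood >= true_thresh:
        return "TRUE"
    if log_likelihood >= marginal_thresh:
        return "MARGINAL"
    return "FALSE"
```

Only detections classified TRUE or MARGINAL go on to the MCMC confidence-interval step.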
The Rebundle step checks the new source positions and recalculates the assignment of sources to bundles.
MLE Pipeline Run 2 (Recenter)
The MLE (Maximum Likelihood Estimator) pipeline takes the candidate sources in each reassigned bundle and assesses them, using smaller source regions. The source positions are further updated. The steps are the same as for the first run.
After the run, QA is performed to inspect and adjust bundle positions where needed.
The Stacker pipeline creates a merged detection list for an observation stack.
- Combine outputs for all MLE pipelines in stack.
- Generate per-stack detection list.
- Perform QA to reject or flag problem sources.
The Master Match pipeline reconciles detections of the same source in different stacks. The method is similar to that used in Release 1.
- Identify pairs of stacks that overlap on the sky but could not be grouped, because they are either (a) from different instruments or (b) pointed more than 1 arcminute apart.
- Taking different PSF sizes into account, identify detections of the same astronomical source in different stacks and define 'master sources'.
- Assign 2CXO catalog names to master sources.
- Perform manual QA to handle 'too hard' match cases.
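The PSF-aware matching idea can be sketched as a pairwise cross-match in which two detections from different stacks match when their separation is below a multiple of the larger of the two PSF radii. All names, the tuple layout, and the match criterion are illustrative assumptions, not the Release 1 algorithm itself:

```python
def master_match(det_a, det_b, k=1.0):
    """Cross-match detections from two overlapping stacks.
    Each detection is (x_arcsec, y_arcsec, psf_r_arcsec) on a shared
    tangent plane; a pair matches if its separation is at most
    k times the larger of the two PSF radii."""
    matches = []
    for i, (xa, ya, ra_psf) in enumerate(det_a):
        for j, (xb, yb, rb_psf) in enumerate(det_b):
            sep = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
            if sep <= k * max(ra_psf, rb_psf):
                matches.append((i, j))
    return matches
```

Scaling the match radius with PSF size matters because the Chandra PSF grows rapidly off-axis, so the same source can have very different positional uncertainties in different stacks.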
The Source Properties pipeline is run for each master source and energy band.
- Rebundle sources (again).
- Calculate aperture photometry properties (flux PDFs) per observation and stack.
- Calculate remaining source properties per observation and stack.
- Group observations which are consistent with each other into 'blocks'.
- Calculate source properties per block.
- Calculate master average flux properties.
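The idea of averaging fluxes across mutually consistent blocks can be illustrated with an inverse-variance weighted mean. The catalog itself combines full flux probability density functions, so this Gaussian-error version is only a simplified stand-in:

```python
def master_flux(blocks):
    """Combine per-block flux estimates into a master average.
    blocks is a list of (flux, one_sigma_error) tuples; returns the
    inverse-variance weighted mean and its propagated error."""
    weights = [1.0 / (err * err) for _, err in blocks]
    total = sum(weights)
    mean = sum(w * f for w, (f, _) in zip(weights, blocks)) / total
    return mean, (1.0 / total) ** 0.5
```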
Convex Hull Pipeline
The convex hull pipeline will be run for each band and ensemble to complete the analysis of highly extended sources. It is still under development.
- Perform quality assurance on the per-stack convex hull regions
- Run master match algorithm to match sources across stacks within an ensemble, creating master sources.
- Assign 2CXO source name to master source and add to catalog.
- Calculate certain source properties (flux, likelihood, positions). Variability and extent properties are not calculated for convex hull sources.
Limiting Sensitivity Pipeline
The limiting sensitivity pipeline will calculate the sensitivity in each band for each location covered by the catalog. This will be provided to the community as a separate data product. The pipeline is under development.
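As a rough illustration of what a limiting-sensitivity calculation involves, the sketch below solves a simple Gaussian n-sigma detection criterion for the faintest detectable source counts given the background in an aperture. This criterion is an assumption of the sketch, not the pipeline's actual method, which (as noted above) is still under development:

```python
import math

def limiting_counts(bkg_counts, n_sigma=3.0):
    """Smallest source counts S detectable above a background of
    bkg_counts B in the aperture, using the Gaussian criterion
    S >= n_sigma * sqrt(S + B).
    Solving S = n*sqrt(S + B) gives S^2 - n^2*S - n^2*B = 0."""
    n2 = n_sigma * n_sigma
    return 0.5 * (n2 + math.sqrt(n2 * n2 + 4.0 * n2 * bkg_counts))
```

Converting such a count limit to a flux limit at each sky position would additionally require the local exposure and the assumed source spectrum.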