Last modified: December 2006

URL: http://cxc.harvard.edu/ciao3.4/dmextract.html
AHELP for CIAO 3.4: dmextract (Context: tools)

Synopsis

Make a histogram table file (e.g. a PHA file or lightcurve file) from a table column. Generate a count histogram over supplied regions for a spatial table or image file.

Syntax

dmextract  infile outfile [bkg] [error] [bkgerror] [bkgnorm] [exp]
[bkgexp] [sys_err] [opt] [defaults] [wmap] [clobber] [verbose]

Description

`dmextract' creates a histogram from a column of data in a table. Both "scalar" (PHA, TIME, etc.) and "vector" (DET, SKY, etc.) columns are supported. dmextract thus includes the capability to create PHA files, lightcurves, and spatial radial profiles.

In the case of scalar columns, a simple linear binning of the data is available, while for vector columns ('spatial extraction') the user can define the bins via a set of regions. Support for background correction is available for both scalar and vector columns. Vector column extraction also supports exposure corrections and images as input.

1. HISTOGRAM OF A 'SCALAR' COLUMN

`dmextract' takes as input a table or set of tables, often an event list. It generates an `extraction', a histogram of the data binned on any one column in the table. The resulting file is also a table, containing the histogram and associated errors and rates. For a scalar column, the extraction is defined using the CXC Data Model (DM) binning syntax (see "ahelp dmsyntax") in the input file string (see examples).

1.1 'PHA' FILE EXTRACTION

The default mode of dmextract ("opt=pha1") is to generate a HEASARC/OGIP-compatible Type I PHA file. For X-ray event data, the PHA (pulse height) is an instrumental quantity related to the photon energy; the mapping from PHA to energy varies with detector position and sometimes with time. The PI (pulse invariant) value is the PHA corrected to a standard energy scale. PHA files are used in combination with response matrices (RMF files), which are made for either PHA or PI binning. See the threads on spectral fitting for advice on how to use PHA files.
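
For instance, a minimal sketch of a Type I PHA extraction (the event file name and source region are illustrative; the 1:1024:1 PI grid is the ACIS default mentioned under the defaults parameter below):

  # illustrative file and region names
  unix% dmextract "acis_evt2.fits[sky=circle(4096,4096,20)][bin pi=1:1024:1]" src.pi opt=pha1 clobber=yes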

1.2 'TYPE 2 PHA' FILE EXTRACTION

Using "opt=pha2" makes a Type II PHA file, in which each line of the file contains a complete spectrum with errors. This option is used when the input is a stack of virtual files. One line of the output file is created for each input extraction, so the infile in this case will usually be a stack. For columns other than pha and pi, opt=generic2 makes a similar file without PHA specific keywords.

1.3 LIGHTCURVE EXTRACTION

Using "opt=ltc1" is appropriate when creating a lightcurve by binning on the TIME or EXPNO columns. Normally, COUNT_RATE is taken to be counts per bin unit per total observational good time. Choosing opt=ltc1 ensures that COUNT_RATE is counts per total time associated with the bin, rather than total time associated with the observation.

If one is binning on either TIME or EXPNO, the exposure per bin is properly calculated based upon the selection criteria. For example, if one has an observation with the nominal 3.24104 sec frame time and 3.2 sec live time, and then chooses to bin in units of 10 exposure numbers, the count rate per bin will be the total counts per bin divided by 32 seconds.

If one is binning on the TIME column, the exposure is further merged with the intersecting good time intervals, taken from the first GTI in the case of multiple GTIs (e.g., for regions that span multiple ACIS chips).

Choosing "opt=ltc2" is the lightcurve equivalent of "opt=pha2" or "opt=generic2" in that a list of input files can be extracted into multiple rows of a single Type II fits file. The individual lightcurves, however, must be binned on the same grid.

The bkg parameter definition (below) has information on creating background-subtracted lightcurves.

1.4 GENERIC HISTOGRAM EXTRACTION

Using "opt=generic" is appropriate when binning on a column other than PHA, PI or TIME. This generates a file which is similar to a Type I PHA file, but doesn't contain keywords specific to PHA data. For instance, you can bin an aspect solution file on RA to see the histogram of the RA distribution.

1.5 BINNING SPECIFICATION

The complete binning specification is "[bin col=min:max:step]". For instance,

  unix% dmextract "evt1.fits[bin pha=1:2048:2]" out.pha

You can leave out any of min, max, or step; for instance

  [bin pha=:2048], [bin pha=::2], [bin pha=5::] 

Default values will be filled in from the maximum valid range of the column (in the case of a FITS file, the values of the TLMINn and TLMAXn keywords) and the default binning (the CXC-specific keyword TDBINn, if present). If all values are omitted, e.g. [bin pha], and the optional "defaults" parameter is supplied, dmextract will look in the defaults file for a suitable binning for that column.

All types of output files will have the columns CHANNEL, COUNTS, and COUNT_RATE. Type II output files will also have the columns SPEC_NUM and TOTCTS.

1.6 WMAP IN HISTOGRAM FILES

It can be useful to encode extra information about the extraction in the output file. For the ASCA mission, the idea of a WMAP (weight map) was introduced: associated with the extraction table is an image of the extraction region. Specifically, the PHA FITS file has in its primary header an image containing a low-resolution map of the source in detector coordinates. This allows downstream software to determine the appropriate weighting for calibrations which depend on detector position (for instance, effective areas may depend on the off-axis angle). The dmextract wmap parameter allows the user to define such a wmap; if you are making a PHA file which will be used with mkwarf (see "ahelp mkwarf"), we recommend 'wmap="det=8"' to create a map in DETX,DETY coordinates binned by a factor of 8. One can also use an alternate syntax to specify additional filters in the creation of a wmap. A standard example involves adding an energy filter to the binning command:

  wmap="[energy=500:2000][bin det=8]"

Using this filter, the wmap will better represent the event distribution from 500-2000 eV. The wmap option is currently only useful when making a PHA file, although in principle it could be used by software making exposure calculations for other quantities.

1.7 SYSTEMATIC ERRORS

`dmextract' optionally writes a SYS_ERR keyword to the file header, using the value of the sys_err parameter provided by the user. The SYS_ERR keyword denotes the fractional systematic error associated with the data; thus "sys_err=0.05" indicates a 5 percent error. This error may be added in quadrature to the statistical errors provided in the extraction by downstream software.
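
For example, to record a 5 percent systematic error in a PHA extraction (file and region names are illustrative):

  # illustrative file and region names
  unix% dmextract "acis_evt2.fits[sky=circle(4096,4096,20)][bin pi=1:1024:1]" src.pi sys_err=0.05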

1.8 THE DEFAULTS FILE

The dmextract defaults file is compatible with the XSELECT mission database (mdb) file. It is a text file containing lines of the form telescop:instrume:key value where telescop and instrume are the values of the FITS TELESCOP and INSTRUME keywords, and dmextract recognizes keys of the form colname_binning. Example:

  Chandra:ACIS:pha_binning 1:4096:2

For example, to add a default for HRC time binning of 20s bins, one might add

  Chandra:HRC:time_binning  ::20.0
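
With such a file (here assumed to have been saved as mydefaults.mdb; the event file name is also illustrative), the binning values can then be omitted from the bin specification:

  # mydefaults.mdb and hrc_evt2.fits are illustrative names
  unix% dmextract "hrc_evt2.fits[bin time]" lc.fits opt=ltc1 defaults=mydefaults.mdb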

Error conditions:

  • If the requested column does not exist, the minimum bin value exceeds the maximum bin value, or the step size is negative, an appropriate error message is displayed and the program halts.
  • If the input file does not contain a LIVETIME keyword, livetime is set to 1.0.

2. EXTRACTION ON A 'VECTOR COLUMN' (SPATIAL EXTRACTION)

The second method of extraction for `dmextract' bins on a 2-dimensional quantity like 'sky(x,y)' or 'det(detx,dety)'. It takes an event or image file and a set of regions, then extracts and sums the counts in each of those regions. The user can supply a background file or regions to subtract from the source in order to produce net count rates.

2.1 SPATIAL EXTRACTION SYNTAX

There are several options for the extraction syntax.

  filename[bin sky=shape(x0,y0,...)]

indicates a count extraction on the sky column with a single bin specified by the shape (for example circle(4096,4096,20)). The user may supply a stack of regions, with each region a separate line in an ASCII file region.lis:

  filename[bin sky=@region.lis]

or, for the special case of the ANNULUS shape, can define a set of nested annuli with

  filename[bin sky=annulus(x0,y0,rmin:rmax:step)]

(NOTE: this syntax will only work in a command line, not in a region file).

Note that default binning parameters are never allowed for vector column extraction (e.g., filename[bin sky] or filename[bin det] are not allowed and will result in a fatal error). A valid region specifier must be included (e.g. sky=circle...).

2.2 BACKGROUND AND EXPOSURE FILES

The source extraction counts the number of events within each region. Errors on the counts are calculated using either the Gehrels or the Gaussian option. The count rate is determined using the exposure time for the file. If the user supplies a background file with regions, or a fixed background rate (counts/pixel/s), it is used to perform a background correction on the source counts. Information on the background counts is included in the output, along with net counts and net rates. Background information can be normalized; the default is to normalize the counts with respect to the relative exposure times and geometric areas of the input and background files.

The user can optionally supply an exposure map that corresponds to the input file. The regions chosen in the input will be extracted and a weighted exposure correction over that region will be applied to the counts. The background file can also have an exposure correction applied to it. An EXPOSURE (and BG_EXPOSURE if relevant) column will be added to the output table.
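
A sketch combining these options, supplying a background region and a matching exposure map for both the source and background extractions (all file names are placeholders):

  # img.fits, src.reg, bkg.reg, and expmap.fits are placeholders
  unix% dmextract "img.fits[bin sky=@src.reg]" prof.fits bkg="img.fits[bin sky=@bkg.reg]" exp=expmap.fits bkgexp=expmap.fits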

2.3 ERROR CALCULATION

There are three different methods of error calculation. Errors are generally calculated using Gaussian errors:

error = sqrt(N)

Alternatively, the Gehrels approximation can be used for low-count data:

error = 1 + sqrt(N + 0.75)

This option is equivalent to the "poisson" option in CIAO 3.2 and earlier. If the user has a variance image at the same pixel scale as the input file, this can be used instead of other error methods. The region from the input file is extracted from the variance image, and the sum of the variances in the region is used for error calculations.

The region areas are expressed in units of physical pixels and account for the binning in the case of an input image file.

Example 1

dmextract "input.fits[sky=circle(4095.0,4096.0,43.0)][bin
pha=1:3000:10]" output.pha clobber=yes verbose=2

Perform a scalar extraction on the single input file input.fits, taking photons from a circle around a particular source position specified in sky pixel coordinates, binning on column PHA from channels 1 to 3000 with a step size of 10, and writing the result to the Type I PHA output file output.pha.

Example 2

dmextract "input.fits[sky=annulus(4095.0,4096.0,60.0,100.0)][bin
pi=::100]" output.pha clobber=yes verbose=1

Perform a scalar extraction on the single input file input.fits, binning on column PI from channels TLMIN to TLMAX (both read from input.fits) with a step size of 100, and writing the result to the Type I PHA output file output.pha. Include only counts within the specified sky-coordinate annulus.

Example 3

dmextract "input.fits[sky=annulus(4095.0,4096.0,60.0,100.0)][bin
pi=::100]" output.pha wmap="det=8"

As with example 2, but create a WMAP block in the PHA file for use by mkwarf. The WMAP is created by binning the detector coordinates of those photons within the specified sky-coordinate annulus by a factor of 8.

Example 4

dmextract "input.fits[sky=annulus(4095.0,4096.0,60.0,100.0)][bin
pi=::100]" output.pha wmap="[energy=500:2000][bin det=8]"

As with example 3, but only use those photons with energies between 500 and 2000 eV when creating the WMAP.

Example 5

dmextract "input.fits[sky=annulus(4095.0,4096.0,60.0,100.0)][bin pi]"
output.pha clobber=yes verbose=1 defaults=cxo.mdb

Same as example 2, but taking the default PI binning from the mission database file cxo.mdb.

Example 6

dmextract @inlist @outlist clobber=yes opt=generic

Perform extractions on the input files in1.fits and in2.fits listed within the ASCII file inlist, binning on column DETX from channels 1000 to 1500 with a step size of 10, and output to corresponding type I output files out1.fits and out2.fits listed within the file outlist. The contents of the two "stack" files are:

  unix% cat inlist
  in1.fits[bin detx=1000:1500:10]
  in2.fits[bin detx=1000:1500:10]

and

  unix% cat outlist
  out1.fits
  out2.fits

Example 7

dmextract @inlist out.pha2 clobber=yes opt=generic2

Identical to the previous example, except that the output goes to the single Type II file out.pha2.

Example 8

dmextract "input.fits[bin sky=circle(4095.0,4096.0,43.0)]" output.fits
bkg=none error=gehrels clobber=yes verbose=2

Performs a single source count (vector) extraction in a circle centered at (4095,4096) with a radius of 43. No background correction is specified and the errors reported are calculated with the Gehrels approximation. Note that because sky is a vector (2D) column, using the syntax "bin sky" means this is a count extraction. Since only one region is specified, the output table will have a single row containing the counts within that region.

Example 9

dmextract "input.fits[bin sky=annulus(4095.0,4096.0,0.0:100.0:10)]"
output.fits bkg=none error=gehrels clobber=yes verbose=2

Performs a spatial source count (vector) extraction in a set of annuli centered at (4095,4096) with radii 0, 10, 20, ..., 100 as specified by the 0.0:100.0:10 binning declaration. No background correction is specified and the errors reported are calculated using the Gehrels approximation. The result is a radial intensity profile table.

Example 10

dmextract "input.fits[bin sky=circle(4095.0,4096.0,43.0)]" output.fits
bkg="input.fits[bin sky=circle(4000.0,4096.0,40)]" error=gaussian
bkgerror=gaussian bkgnorm="1.0" clobber=yes verbose=2

Perform a single source count extraction on the input file input.fits, counting photons from a circle around a particular source position specified in sky pixel coordinates, and background subtracting the normalized counts from the background region. Gaussian errors are reported for the counts and rates.

Example 11

dmextract "input.fits[bin sky=@region.lis]" output.fits bkg=none
exp="expmap.fits" clobber=yes verbose=2

If region.lis is a file that contains 1 region per line, e.g.:

  circle(4233.5,3753.5,28)
  circle(4233.5,3753.5,18)
  circle(4233.5,3753.5,10)
  circle(4233.5,3753.5,5)

then dmextract will perform a spatial source count extraction on the input file into bins specified by this list. The exposure map is opened and a weighted exposure correction is found for each region and applied in the calculation of the net counts.

Example 12

dmextract "input.fits[bin sky=@region.lis]" output.fits
error="variance.fits" clobber=yes verbose=2

Performs a spatial source count extraction on the input file. Errors for the counting are determined by the variance image "variance.fits". For each region, the variance in the total counts extracted is the sum of the variances in the corresponding pixels in the variance map. The number of output bins is equal to the number of regions in region.lis.

Example 13

dmextract "input.fits[sky=circle(4114,4173,4)][bin time=::3.24104]"
output.fits opt=ltc1

Extracts data from a circular region, and then creates a lightcurve spanning the entire observation that is binned on a 3.24104 second time scale (i.e., the nominal frame time for reading out a full ACIS chip).

Note that the same command using the lightcurve tool would have been:

  lightcurve "input.fits[sky=circle(4114,4173,4)]" output.fits nbins=0 binlength=3.24104

Example 14

dmextract @input.list @output.list opt=ltc1

with input.list being an ASCII file with the lines:

  acis_evt2.fits[sky=region(ps.reg)][bin time=::1000.]
  acis_evt2.fits[sky=region(back.reg)][bin time=::5.]

and output.list being an ASCII file with the lines:

  ps_output.fits
  back_output.fits

Extracts lightcurves from two separate regions defined by the region files ps.reg and back.reg (for example, for a point source and background region), and places the output lightcurves into Type I fits files called ps_output.fits and back_output.fits, respectively. The first lightcurve uses 1000 second bins, while the second uses 5 second bins.

Example 15

dmextract @input.list output.fits opt=ltc2

with input.list being an ASCII file with the lines:

  acis_evt2.fits[sky=region(ps.reg)][bin time=::1000.]
  acis_evt2.fits[sky=region(back.reg)][bin time=::1000.]

Extracts lightcurves from two separate regions defined by the region files ps.reg and back.reg (for example, for a point source and background region), bins the data into 1000 second time bins, and places the two lightcurves into separate rows of a Type II fits file. The lightcurves have the same binning, but one is not subtracted from the other; they remain as individual lightcurves.

Example 16

dmextract "acis_evt2.fits[sky=region(ps.reg)][bin time=::200]"
ps_bsub.fits error=gaussian bkg="acis_evt2.fits[sky=region(back.reg)]"
bkgerror=gehrels opt=ltc1

Extracts a lightcurve from a region defined by the region file ps.reg, bins it into 200 second time bins, and applies gaussian errors. A background lightcurve is also generated using the region file back.reg, and errors using the Gehrels approximation are calculated. Note that binning criteria are not input for the background, as dmextract uses the same criteria as for the infile.

The ps.reg lightcurve and background lightcurve use the same GTI information, as they are extracted from the same event file. If the regions are taken from different chips (for example, the ps.reg lightcurve is extracted from the backside-illuminated S3 chip, which is serving as the aimpoint chip, while the background lightcurve is extracted from the other backside-illuminated chip, S1), then a CCD_ID filter should first be applied to the event file being used for extraction of the background lightcurve. The background lightcurve is then subtracted from the ps.reg lightcurve, and the errors from the two lightcurves are combined. The results are placed in the Type I FITS file ps_bsub.fits.

Example 17

dmextract "acis_evt2.fits[sky=region(ps.reg)][bin time=::0.2]"
bad_idea.fits opt=ltc1

Extracts a lightcurve from a region defined by the region file ps.reg, and bins it into 0.2 second bins. The output is placed in the fits file bad_idea.fits. If acis_evt2.fits is not a data file taken in CC-mode, but rather is a Timed Event (TE) mode file, 0.2 seconds is shorter than the frame exposure time. In this case, dmextract will generate a warning that the lightcurve is being extracted with bin times shorter than the exposure time.

Parameters

name      type      ftype    def        min   max   reqd   stacks
infile    file      input                           yes    yes
outfile   file      output                          yes    yes
bkg       file      input                           no     yes
error     string             gaussian               no
bkgerror  string             gaussian               no
bkgnorm   real               1.0                    no
exp       file      input                           no     yes
bkgexp    file      input                           no     yes
sys_err   real               0
opt       string             pha1
defaults  file      ARD
wmap      string
clobber   boolean            no
verbose   integer            0          0     5

Detailed Parameter Descriptions

Parameter=infile (file required filetype=input stacks=yes)

The input virtual file or stack, e.g. event list, modified by a dmextract binning command.

Any table or stack of tables is valid input, with the

table[bin scalar_col=min:max:step]

type of extraction or the

table[bin vector_col=region_list]

type of extraction.

Images can also be input with the latter type of extraction:

image[bin vector_axis=region_list]

Three methods of stack input are allowed.

PHA/Histogram extraction TYPE I files:

For each file in the input, there should be a file specified in the outfile stack.

PHA/Histogram extraction TYPE II files:

There can be multiple input files, but only one output file. This will create a type II file as described above.

Radial Profile/Source Extraction:

For each file in the input, there should be a file specified in the outfile stack. However, if only one output file is specified, dmextract will place all output information in that single file, using the header information from the first input file.
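
For instance, a sketch of combining spatial extractions from two observations into a single output table (the stack file, event file names, and annulus grid are illustrative):

  # profiles.lis and its contents are illustrative
  unix% cat profiles.lis
  obs1_evt2.fits[bin sky=annulus(4096,4096,0:100:10)]
  obs2_evt2.fits[bin sky=annulus(4096,4096,0:100:10)]

  unix% dmextract @profiles.lis combined_prof.fits clobber=yes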

Parameter=outfile (file required filetype=output stacks=yes)

The output histogram file.

This is a histogram of counts and rates for each bin (a bin is a PHA channel in the case of a PHA file, a histogram bin in the case of a generic scalar extraction, or a region in the case of a vector (spatial) extraction).

Please see the infile parameter for a detailed description of outfile stacks.

Parameter=bkg (file not required filetype=input default= stacks=yes)

Background file with regions or a numeric value.

This is a string that is one of:
  • a background virtual file (defining a single background region that will be used for each extraction bin)
  • a background file stack (defining a set of regions to be used for each extraction bin; the number of entries in the stack must be the same as for the input stack in this case)
  • a numeric value in units of counts/pixel/s

This parameter may also be used to create a background-subtracted lightcurve. Not only is background area correctly accounted for, but also different GTIs are used for the source and background datasets (though it may be necessary to add a ccd_id filter to get the correct background GTI, if the background is taken from a different chip). The lightcurve is corrected bin-by-bin for the background rate during that time interval; the background binning is forced to be the same as the infile binning. Columns are created in the output file for NET_COUNTS, NET_RATE, and NET_ERR. The latter combines the statistical error associated with both the source and background lightcurves.

If instead of supplying a file name you supply a constant value in units of counts/sec, that number is used as the constant background rate for the lightcurve.
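
For example, a sketch of a lightcurve that assumes a constant background rate of 0.05 counts/sec (the event file, region, and rate are illustrative):

  # illustrative file, region, and background rate
  unix% dmextract "acis_evt2.fits[sky=region(src.reg)][bin time=::1000]" lc_net.fits bkg=0.05 opt=ltc1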

Parameter=error (string not required default=gaussian)

The error method for determining the error on the extracted counts.

These are the methods for error determination. The current options are "gaussian", "gehrels", or a variance image. A variance image is not yet supported in histogram or PHA binning; Gaussian errors will be used instead.

Parameter=bkgerror (string not required default=gaussian)

The error method for determining the error on the background counts.

These are the methods for background error determination. The current options are "gaussian", "gehrels", or a variance image. A variance image is not yet supported in histogram or PHA binning; Gaussian errors will be used instead.

Parameter=bkgnorm (real not required default=1.0)

Background normalization for spatial count extraction.

This fudge factor allows the user to adjust the background normalization relative to the source normalization in the calculation of net counts, for a spatial extraction. When the factor is 1.0, the background counts are normalized by the ratio of exposure times and spatial areas. If bkgnorm is different from 1, the normalized background counts are further multiplied by the value of bkgnorm.

Note that the value of bkgnorm is not saved as a separate keyword in the output fits file header; however, it will be present in the header as part of the output fits file's history.

Parameter=exp (file not required filetype=input default= stacks=yes)

Exposure map for the input file for spatial extraction.

The input regions are extracted from the exposure map that corresponds to the input file, and the weighted exposure correction over each region is applied to the counts. If a stack of exposure maps is supplied, it must contain the same number of files as the infile stack; the tool will report an error otherwise. The units of the exposure map are either cm**2 sec, cm**2 (normalized by time), or dimensionless. Exposure map and background exposure map units must be compatible; otherwise, an error will be reported and the calculation will continue without the exposure correction. For Chandra exposure maps generated with mkinstmap and mkexpmap, the source and background exposure maps must BOTH be either unnormalized (units of cm**2-sec) or normalized (units of cm**2).

Parameter=bkgexp (file not required filetype=input default= stacks=yes)

Exposure map for the background file for spatial extraction.

The input regions are extracted from the exposure map that corresponds to the background file, and the weighted exposure correction over each region is applied to the background counts. If a stack of background exposure maps is supplied, it must contain the same number of files as the bkg stack; the tool will report an error otherwise. The units of the exposure map are either cm**2 sec, cm**2 (normalized by time), or dimensionless. Exposure map and background exposure map units must be compatible; otherwise, an error will be reported and the calculation will continue without the exposure correction. For Chandra exposure maps generated with mkinstmap and mkexpmap, the source and background exposure maps must BOTH be either unnormalized (units of cm**2-sec) or normalized (units of cm**2).

Parameter=sys_err (real default=0)

Fixed systematic error

Parameter=opt (string default=pha1)

The output file type format: one of pha1, pha2, generic, generic2, ltc1 or ltc2.

Each of the following options is discussed in detail in the DESCRIPTION section (above).

There are three main kinds of output generated by dmextract: histograms, lightcurves, and PHA spectra. The 'generic' output is a histogram table with each row corresponding to a different bin. The 'generic2' output is a set of histograms, one per row, with the histograms stored in array columns.

In the ltc modes, a lightcurve will be generated. Binning should be on the 'TIME' column to be sure of getting the correct GTIs. Only data from the first GTI will be used if multiple GTIs are present, e.g. in ACIS datasets. Other "time" quantities, e.g. TIME_RO (for "Continuous Clocking" mode files where event times have been corrected) or PHASE (e.g., if such a column has been created with dmtcalc), will not automatically be treated like the 'TIME' column, even when using the ltc modes.

For most applications, 'opt=generic' is appropriate. However, when doing a spectral extraction on PHA or PI, you need to use 'opt=pha1' or 'opt=pha2' to generate a compliant PHA Type I or II file for use with Sherpa, ISIS, or XSPEC.

Parameter=defaults (file filetype=ARD default=)

The mission database file

The mission database file gives default binning instructions for certain extractions; for instance, it knows that PI extractions on ACIS files should use the binning 1:1024:1, instead of the default binning 0:1024:1 that dmextract would assume from the TLMIN/TLMAX values in the file header if no mission database file were specified.

The MDB file is just a text file, and you can generate your own MDB file to define your preferred defaults for various extractions.

Parameter=wmap (string default=)

The filtering/binning to use to make a WMAP

If you specify a wmap binning with "infile=inval wmap=wmapval", dmextract will make an image in the output file header which will be equivalent to the image generated by 'dmcopy "inval[bin wmapval]" wmapfile' (note that any binning specified in inval, such as '[bin pi]', will be removed when creating the WMAP image). If you specify "wmap=default", a value from the mission database file specified by the "defaults" parameter will be used; for Chandra ($ASCDS_CALIB/cxo.mdb) this is equivalent to 'wmap="chip=32"'. We suggest 'wmap="det=8"' if you wish to use the file as the input to mkwarf (the WMAP must be in detector coordinates for mkwarf). One can also apply filtering commands in the creation of a wmap, which are appended to those applied to the 'infile'. This requires a slightly more complete syntax, which is identical to the standard Data Model filtering syntax, e.g.:

  wmap="[energy=500:2000][bin det=8]" 

This filter will create a wmap which reflects the 500-2000 eV emission of the region used to create the PHA file.

Parameter=clobber (boolean default=no)

Specifies if an existing output file should be overwritten.

Parameter=verbose (integer default=0 min=0 max=5)

Specifies the level of verbosity (0-5) in displaying diagnostic messages.

CHANGES IN CIAO 3.3

Modified parameter names for calculating the statistical errors

The parameter value "poisson" for calculating the statistical errors has been renamed to "gehrels" to reflect the Gehrels equation used with this option. The default setting for the error and bkgerror parameters is now "gaussian".

Changes to the header keywords in PHA output files

The POISSERR keyword is linked to the error parameter setting. For error="gaussian", the PHA output file has POISSERR=TRUE and a STAT_ERR column IS NOT created. For error="gehrels", the header in the PHA output file is set to POISSERR=FALSE and a STAT_ERR column IS created.

CHANGES IN CIAO 3.2

Enhanced support for lightcurve extraction

`dmextract' now also supports lightcurve extraction based upon EXPNO when using either opt=ltc1 or opt=ltc2. The appropriate time filters will be correctly applied in these modes.

CHANGES IN CIAO 3.1

Default error determination

In CIAO 3.1, the default value for the "error" and "bkgerror" parameters was changed from gaussian to poisson.

CHANGES IN CIAO 3.0

Support for lightcurve extraction

'dmextract' now provides support for lightcurves created from the TIME column of an event file. In versions prior to CIAO 3.0, if one binned a 10 ksec observation into 10 sec intervals, for example, the resulting COUNT_RATE column would contain the counts per 10 sec bin divided by 10 ksec, i.e., the full duration of the observation. By choosing 'opt=ltc1' or 'opt=ltc2', COUNT_RATE will contain the proper rate for the duration of the bin, and furthermore, the rate will include deadtime and good time interval corrections. This functionality essentially replaces one of the binning options found in the 'lightcurve' tool (i.e., binning by bin length in seconds), and users are encouraged to use 'dmextract' with the ltc option in lieu of 'lightcurve'. Note that the lightcurve functionality only works for extractions based directly and explicitly on the TIME column, and does not work on other columns derived from the time column (e.g., TIME_RO, PHASE).

Simplified output columns

The meanings of various output columns generated in spatial extraction have been simplified in CIAO 3.0.

The definitions of the output "COUNTS" columns (e.g. COUNTS, NET_COUNTS, etc.) and "RATE" columns (COUNT_RATE, NET_RATE, etc.) when exposure maps are used have been changed. Now, the units of the "COUNTS" columns are always simply "counts", and the "RATE" columns units are always "counts/sec", regardless of whether an exposure map is used. The exposure time used in rate calculations is taken from the value of the EXPOSURE keyword in the header.

When exposure maps are used, additional "FLUX" columns (e.g., FLUX, NET_FLUX, etc.) are added to the output table. These columns always have units of "photons/cm**2/sec". MEAN_SRC_EXP and MEAN_BG_EXP columns are also added to the output in these cases. These columns represent the count-weighted averages of the exposure map values in the source and background regions, in units of cm**2. If unnormalized exposure maps have been used, these values are converted to cm**2 by dividing by the exposure time from the EXPOSURE keyword in the header. A SUR_FLUX column is also added in these cases, with units of "photons/cm**2/sec/pixel".

Normalized (units of cm**2), unnormalized (units of cm**2-sec), and dimensionless exposure maps are now supported. Exposure map units should be the same for both source and background exposure maps. If dimensionless exposure maps are used, they are treated as unnormalized and the program issues a warning. In such cases, users should proceed with caution in interpreting the meanings of the "FLUX" columns.

OUTPUT COLUMNS: RADIAL PROFILES

When you calculate region counts or a radial profile using dmextract with a "[bin sky=...]" directive and an exposure map, there are many output columns, most of which are simply related to each other. We can regard the basic output values as:

Value         Definition
EXPOSURE      The effective exposure time, corrected for deadtime but not for spatial sensitivity variations
AREA          Source region area in pixel units
MEAN_SRC_EXP  Mean value of exposure map in source region
COUNTS        Total source counts
ERR_COUNTS    Error on COUNTS; sqrt(N) or 1+sqrt(N+0.75)
BG_AREA       Background region area in pixel units
MEAN_BG_EXP   Mean value of exposure map in background region
BG_COUNTS     Total background region counts
BG_ERR        Error on BG_COUNTS; sqrt(N) or 1+sqrt(N+0.75)

From these we can derive the other output values. (Here we assume that the exposure times for background and source are the same, and there is no extra background correction factor. If this is not true, simply modify BG_AREA to compensate.)

COUNT_RATE = COUNTS / EXPOSURE
BG_RATE = BG_COUNTS / EXPOSURE

NET_COUNTS = COUNTS - (AREA / BG_AREA) * BG_COUNTS
NET_ERR = sqrt(ERR_COUNTS^2 + ((AREA/BG_AREA) * BG_ERR)^2)
NET_RATE = NET_COUNTS / EXPOSURE
ERR_RATE = NET_ERR / EXPOSURE

FLUX = COUNTS / (EXPOSURE * MEAN_SRC_EXP)
NET_FLUX = FLUX - (AREA / BG_AREA) * BG_COUNTS / (EXPOSURE * MEAN_BG_EXP)
NET_FLUX_ERR = sqrt((ERR_COUNTS / MEAN_SRC_EXP)^2 +
	       ((AREA / BG_AREA) * (BG_ERR / MEAN_BG_EXP))^2) / EXPOSURE

SUR_BRI = NET_COUNTS / AREA
SUR_BRI_ERR = NET_ERR / AREA
BG_SUR_BRI = BG_COUNTS / BG_AREA
BG_SUR_BRI_ERR = BG_ERR / BG_AREA

SUR_FLUX = NET_FLUX / AREA
SUR_FLUX_ERR = NET_FLUX_ERR / AREA
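
As a worked numeric example of these relations (the values are invented for illustration), take COUNTS=100, ERR_COUNTS=10, AREA=400, BG_COUNTS=64, BG_ERR=8, BG_AREA=1600, and EXPOSURE=10000 sec:

NET_COUNTS = 100 - (400/1600) * 64 = 84
NET_ERR    = sqrt(10^2 + ((400/1600) * 8)^2) = sqrt(104) = 10.2
NET_RATE   = 84 / 10000 = 8.4e-03 counts/sec
SUR_BRI    = 84 / 400 = 0.21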

OUTPUT COLUMNS: LIGHTCURVES

When you calculate a lightcurve with opt="ltc1" or "ltc2", the basic output values are:

Value        Definition
COUNTS       Total source counts
STAT_ERR     Error on COUNTS; sqrt(N) or 1+sqrt(N+0.75)
EXPOSURE     The effective exposure time in the source area
BG_AREA      Background region area in pixel units
BG_EXPOSURE  The effective exposure time in the background area
BG_COUNTS    Total background region counts
BG_ERR       Error on BG_COUNTS; sqrt(N) or 1+sqrt(N+0.75)

[It is a known bug that the source area and the background counts in the source area (i.e. the counts subtracted from COUNTS to get NET_COUNTS) are not currently reported in the output of dmextract in lightcurve mode.] From these we can derive the other output values.

COUNT_RATE = COUNTS / EXPOSURE
COUNT_RATE_ERR = STAT_ERR / EXPOSURE

BG_RATE = BG_COUNTS / EXPOSURE

NET_COUNTS = COUNTS - [(BG_COUNTS/BG_AREA/BG_EXPOSURE)*(SRC_AREA)*(EXPOSURE)] 
NET_ERR = sqrt(ERR_COUNTS^2 + ((AREA/BG_AREA) * BG_ERR)^2)
NET_RATE = NET_COUNTS / EXPOSURE
ERR_RATE = NET_ERR / EXPOSURE

EXPOSURE TIME FOR MULTI-CHIP OBSERVATIONS

Setting the exposure time to the value of the EXPOSURE keyword in the header to calculate the "RATE" and "FLUX" columns may lead to erroneous results in spatial extraction with multi-chip images and exposure maps, if the exposure times of the various chips differ significantly. If unnormalized exposure maps are used, the "FLUX" columns will be correct, but the "RATE" and "MEAN_SRC_EXP" columns may not be if they refer to regions on chips other than the one used to set the EXPOSURE keyword and the exposure on those chips differs from that value. For normalized exposure maps, the "MEAN_SRC_EXP" column will be correct, but the "RATE" and "FLUX" columns may not be. A simple, albeit tedious, work-around is to generate the image and exposure maps separately for each chip, so that the EXPOSURE keyword accurately reflects the exposure time for that chip.
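
As a sketch of this work-around (the chip number, file names, and region list are illustrative, and it assumes that per-chip images and exposure maps have already been generated so that each image's EXPOSURE keyword refers to that chip):

  # per-chip products img_ccd7.fits and expmap_ccd7.fits are assumed to exist
  unix% dmextract "img_ccd7.fits[bin sky=@annuli.reg]" prof_ccd7.fits exp=expmap_ccd7.fits clobber=yes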

Bugs

See the bugs page for this tool on the CIAO website for an up-to-date listing of known bugs.
