'get_src_region' determines a
background threshold value from an image or event file and
outputs regions with counts higher than that threshold.
If the input is an event file, the user must supply a Data
Model filter so that get_src_region can create a virtual
image; see "ahelp dmsyntax" for more information on
DM filtering.
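For example, a call along the following lines supplies a Data Model
binning specification that defines the virtual image (the event file
name and the binning factor here are only illustrative; choose a
filter appropriate to your data):

unix% get_src_region \
infile=evt2.fits"[bin sky=8]" \
outfile=src_regions.fits sigma_factor='5' niter='5' \
clobber='yes'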
From the bins of the image, the average and standard deviation
of background counts are determined.
The tool can be made to
iterate, excluding pixels more than N-sigma from the mean.
The background threshold value is set to
average background counts + sigma_factor * standard deviation of
background counts, where sigma_factor is a user input parameter
(default=5). For example, an average background of 2 counts per bin
and a standard deviation of 1 count give, with the default
sigma_factor of 5, a threshold of 2 + 5 * 1 = 7 counts.
Any region that has counts higher than the background
threshold value is classified as a source region. A list
of source "box" regions will be written to the output file.
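As a rough illustration (not the tool's actual code), the
thresholding can be sketched in Python/NumPy on a toy image; the
array size and count level below are made up for the example:

import numpy as np

rng = np.random.default_rng(0)
image = rng.poisson(2.0, size=(256, 256)).astype(float)  # toy image, ~2 counts/pixel

# single-pass estimates; the tool's iterative estimate is described below
mean_bkg = image.mean()
sigma_bkg = image.std()

sigma_factor = 5.0                       # user input parameter, default 5
threshold = mean_bkg + sigma_factor * sigma_bkg

# pixels above the background threshold are candidate source pixels;
# get_src_region itself groups such pixels and writes "box" regions
source_mask = image > threshold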
The following algorithm is used to determine the average
and standard deviation of the background counts (a Python sketch of
the iteration is given after the list):
1) Using the binning specification, create a histogram of the
pixel values in a 2-D image.
2) Sum up the number of counts in the bins of the input image.
3) Divide the sum by the total number of bins in the image
to get the average number of counts.
4) Calculate the standard deviation.
5) Set min_value to:
(int)(average number of counts - sigma_factor * standard deviation
+ 0.5)
(Note: sigma_factor is a user input parameter and its default
value is 5)
If min_value is smaller than 0, reset it to 0.
6) Set max_value to:
(int)(average number of counts + sigma_factor * standard deviation
+ 0.5)
If max_value is greater than the maximum counts in the bins,
reset it to the maximum counts in the bins.
7) If niter=1 (i.e. the number of iterations, a user input
parameter), the calculation of the average and standard deviation
of the background counts is done: the average number of counts
obtained in step 3) is taken as the average background counts, and
the standard deviation obtained in step 4) as the standard
deviation of the background counts.
If niter > 1, go to step 8).
8) Recalculate the average number of counts, this time including
only the bins whose counts are greater than or equal to min_value
and less than or equal to max_value. Repeat steps 4 to 6 to obtain
a new standard deviation, a new min_value, and a new max_value.
9) Repeat step 8 until the new min_value equals the previous
min_value and the new max_value equals the previous max_value, or
until niter iterations have been performed. The average background
counts will be the last calculated average number of counts, and the
standard deviation of the background counts will be the last
calculated standard deviation.
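The iteration above can be summarized with a short Python/NumPy
sketch. This is a simplified illustration of steps 1 to 9, not the
tool's actual implementation; the toy image and values are made up
for the example:

import numpy as np

def background_stats(image, sigma_factor=5.0, niter=5):
    """Iteratively estimate the background mean and sigma following
    steps 1-9 above (illustrative sketch only)."""
    vals = image.ravel()
    max_count = vals.max()
    mean = vals.mean()                                         # step 3
    sigma = vals.std()                                         # step 4
    min_val = max(int(mean - sigma_factor * sigma + 0.5), 0)   # step 5
    max_val = min(int(mean + sigma_factor * sigma + 0.5), max_count)  # step 6
    for _ in range(1, niter):                                  # steps 7-9
        keep = vals[(vals >= min_val) & (vals <= max_val)]     # step 8
        mean = keep.mean()
        sigma = keep.std()
        new_min = max(int(mean - sigma_factor * sigma + 0.5), 0)
        new_max = min(int(mean + sigma_factor * sigma + 0.5), max_count)
        if new_min == min_val and new_max == max_val:          # step 9: converged
            break
        min_val, max_val = new_min, new_max
    return mean, sigma

# toy usage: Poisson background of ~2 counts/pixel plus one bright patch
rng = np.random.default_rng(1)
img = rng.poisson(2.0, size=(128, 128)).astype(float)
img[60:64, 60:64] += 50.0
mean_bkg, sigma_bkg = background_stats(img)
threshold = mean_bkg + 5.0 * sigma_bkg    # background threshold defined earlier

In this toy case the clipping converges after a couple of passes, the
bright patch is excluded from the background statistics, and the
resulting threshold (roughly 2 + 5 * 1.4, about 9 counts) flags the
patch as source.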
get_src_region no longer excludes zero-valued pixels by default when
calculating the background mean and sigma. This means that for images
dominated by off-detector area, the mean and sigma on iteration 1
will be << 1. Further iterations will then throw out all
non-zero pixels before calculating the mean and sigma, leading to a
nonsensical result (sigma=0, mean=NaN).
The correct thing to do in these cases is to run the program with a
restricted subspace that eliminates the off-detector area. You can
determine the approximate region using ds9, for example. A command
such as:
unix% get_src_region \
infile=image.fits"[sky=rotbox(4089.3704,3773.4914,5199.7363,1006.885,306.01803)]" \
outfile=get_src_region_1.fits sigma_factor='5' niter='5' \
kernel='default' clobber='yes' verbose='5' mode='ql'
yields reasonable results (although not exactly the same as in CIAO
3.1, since the mean and sigma will still be different).
get_src_region now supports non-integer input images.