sherpa> COVARIANCE [<dataset_range> | ALLSETS] [ <arg_1> , ... ]
where <dataset_range> = #, or more generally #:#,#:#,..., such that #
specifies a dataset number, and #:# represents an inclusive range of
datasets; one may specify multiple inclusive ranges by separating them
with commas. The default is to estimate limits using data from all
appropriate datasets.
The command-line arguments <arg_n> may be:
COVARIANCE Command Arguments
<sherpa_modelname>.{<paramname> | <#>} | A specified model component parameter (e.g., GAUSS.pos).
<modelname>.{<paramname> | <#>} | A specified model component parameter (e.g., g.pos).
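For example, after fitting a single Gaussian model, COVARIANCE may be run
for all thawed parameters, for selected parameters, or for particular
datasets. A minimal illustrative session (the data file name is a
placeholder, and the multi-parameter and dataset-range forms simply
follow the syntax given above):
sherpa> DATA example.dat
sherpa> PARAMPROMPT OFF
sherpa> SOURCE = GAUSS[g]
sherpa> FIT
sherpa> COVARIANCE
sherpa> COVARIANCE g.pos
sherpa> COVARIANCE g.fwhm, g.pos
sherpa> COVARIANCE 1 g.pos
The first COVARIANCE call estimates intervals for every thawed parameter
using all appropriate datasets; the later calls restrict the estimate to
the named parameters and, in the last case, to dataset 1 only.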
The user may configure COVARIANCE via
the Sherpa state object structure cov.
The current values of the fields of this structure may be
displayed using the command print(sherpa.cov),
or using the more verbose Sherpa/S-Lang module
function list_cov().
The structure field is:
cov Structure Field
sigma | Specifies the number of sigma (i.e., the corresponding change in statistic).
Field values may be set directly, e.g.,
sherpa> sherpa.cov.sigma = 2.6
NOTE: strict checking of value inputs is not done,
i.e., the user can errantly change arrays to scalars,
etc. To restore the default settings of the structure
at any time, use the Sherpa/S-Lang module function
restore_cov().
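For example, one might inspect the current settings, request 1.6-sigma
(approximately 90%) intervals, rerun the estimate, and then restore the
defaults:
sherpa> print(sherpa.cov)
sherpa> sherpa.cov.sigma = 1.6
sherpa> COVARIANCE
sherpa> restore_cov()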
The confidence interval estimates are computed quickly, as
described below, but are
generally more accurate than those found using the command UNCERTAINTY; see
also PROJECTION.
Because COVARIANCE estimates confidence intervals for each
parameter independently, the relationship
between sigma and the change in statistic value
delta_S can be particularly simple:
sigma = sqrt(delta_S)
for statistics sampled from the chi-square
distribution and for the Cash statistic, and approximately
sigma = sqrt(2 * delta_S)
for fits based on the general log-likelihood.
Confidence Intervals for the covariance command
Confidence level | sigma | delta_S (chi-square and Cash statistics) | delta_S (general log-likelihood)
68.3% | 1.0 | 1.00 | 0.50
90.0% | 1.6 | 2.71 | 1.36
95.5% | 2.0 | 4.00 | 2.00
99.0% | 2.6 | 6.63 | 3.32
99.7% | 3.0 | 9.00 | 4.50
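For example, for a fit that uses the chi-square statistic, a 90.0%
confidence interval corresponds to sigma ~ 1.6 (more precisely 1.645):
the reported bounds lie where the statistic has increased by
delta_S = sigma^2 = 1.645^2 ~ 2.71 above its best-fit value, and setting
sherpa.cov.sigma = 1.6 requests this interval. For a fit based on the
general log-likelihood, the corresponding change is approximately
delta_S = sigma^2 / 2 ~ 1.36.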
There are a number of computations associated with the
COVARIANCE command, which are described in detail in
the Sherpa manual.
Output files include the information and covariance matrices, along with
the eigenvectors and eigenvalues of the covariance matrix.
These are recorded in three temporary ASCII files in the
$ASCDS_WORK_PATH directory: ascfit.inf_matrix.<number>,
ascfit.cov_matrix.<number>, and
ascfit.eig_vector.<number>,
where <number> refers to the process ID (pid) number for
the Sherpa run. These files may be saved by copying them
from the $ASCDS_WORK_PATH directory during the Sherpa
session. The files are deleted from the working directory when the
Sherpa session is finished.
The current setting of this environment variable may be determined as follows:
unix% echo $ASCDS_WORK_PATH
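For example, to keep copies of these matrices in the current working
directory before exiting Sherpa (replace <number> with the pid of the
running Sherpa process):
unix% cp $ASCDS_WORK_PATH/ascfit.inf_matrix.<number> .
unix% cp $ASCDS_WORK_PATH/ascfit.cov_matrix.<number> .
unix% cp $ASCDS_WORK_PATH/ascfit.eig_vector.<number> .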
An estimated confidence interval is accurate if and only if:
- the chi-square or log(L) surface in parameter space is
approximately shaped like a multi-dimensional paraboloid, and
- the best-fit point is sufficiently far from parameter space boundaries.
One may determine whether these conditions hold by plotting
the fit statistic as a function of each parameter's values (the curve should
approximate a parabola), and by examining contour plots of the fit statistic
made by varying the values of two parameters at a time (the contours
should be elliptical, and parameter space boundaries should be no closer
than approximately 3-sigma from the best-fit point); see the example
commands below.
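One way to perform these checks is with the INTERVAL-PROJECTION and
REGION-PROJECTION commands, assuming they are available in this Sherpa
release; the parameter names below are placeholders for those of the
actual model:
sherpa> INTERVAL-PROJECTION g.pos
sherpa> REGION-PROJECTION g.pos g.fwhm
The first command plots the fit statistic as a function of g.pos (the
curve should approximate a parabola near the best fit), and the second
plots statistic contours in the (g.pos, g.fwhm) plane (the contours
should be approximately elliptical).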
Note that these conditions are the same as those which dictate
whether the use of PROJECTION
will yield
accurate errors. While PROJECTION is more general
(e.g. allowing the user to examine the parameter space away from the
best-fit point), it is in the strictest sense no more accurate
than COVARIANCE for determining confidence intervals.
If either of the conditions given above does not hold,
then the output from
COVARIANCE may be meaningless except to give an
idea of the scale of the confidence intervals. To accurately
determine the confidence intervals, one would have to reparameterize
the model, or use Monte Carlo simulations or Bayesian methods.