12-13 January 2004

Chandra User's Committee

Members attending: M. Arnaud, J. P. Henry, J. P. Hughes (chair), W. Latter,
J. Lee, K. Long, K. Mitsuda, G. Taylor, C. Reynolds, and S. Snowden

Members absent: Y.-H. Chu and V. Kalogera

Others: A. Tennant, H. Tananbaum, B. Wilkes, F. Seward, P. Green, D. Kniffen
(by telecon)

During Roger Brissenden's report the following issues were raised and
addressed.  In response to questions about the IRU swap Roger reported
that the gyros on-board Chandra were not of the HST type and that on
other spacecraft the Chandra-type gyros lasted for 17 years. There is
currently 1 spare gyro that can be cross-strapped if necessary in the future 
and an additional one available for short-term backup.  The CXC is working on 
other cross-strapping scenarios and looking into additional fail-safe modes, 
e.g., using a single gyro and the aspect camera.  Operational constraints 
being implemented to limit the temperature of the EPHIN require that 
continuous observations at pitch angles of approximately 90 degrees be kept 
below 50 ks.  It was noted that this will impact simultaneous Chandra and 
XMM-Newton observations, since XMM-Newton prefers to observe at pitch angles 
near 90 degrees.  This information needs to be communicated to the Chandra
user community. Due to the "fail to insert" anomaly it was decided to
disallow use of the HRC-Y shutter. This was rarely used anyway.
Chandra was apparently struck by a meteoroid during the last Leonid
shower (Nov 15, 2003).  No failure was noted in any S/C or SI system
and, based on estimates of where the impact could have occurred, it is
believed that no important S/C component was damaged.  Cooling of the
aspect camera has reduced the dark current and the number of warm
pixels, resulting in improved centroiding.  Capability for further
cooling (down one more level, 5 C) is available.

Harvey Tananbaum spoke about the Chandra/XMM-Newton Duplicate Targets
issue.  In 2003 the Chandra Cycle 5 and XMM-Newton Cycle 3 reviews
were held at approximately the same time so that information from one
review was not available to the other.  It was decided by Harvey
Tananbaum and Fred Jansen after the reviews that identical targets
awarded by both peer reviews would be further reviewed to ensure that
duplications were scientifically justified.  The CUC was not consulted
on this issue at the time it originated, although a survey of the
members after the fact indicated strong support for some sort of a
review of duplicate awarded targets.  Ultimately 5 investigators (1
was voluntary and 4 were decided by the Observatory Directors) dropped
one or the other observation while 10 investigators retained both,
which resulted in a net savings of 80 ks of Chandra time.  All things
considered, the results of the post-review review were satisfactory.

   [1] For future observing cycles, the CUC recommends that target
   conflict information (e.g., whether the target was previously
   observed by Chandra or XMM-Newton) needs to be made more visible to
   the peer review.  The large printout of targets and conflicts given
   to the review is not very readable and it generally falls upon each
   individual reviewer to take note of conflicts, which can vary
   depending on the diligence of the reviewer. The CXC should
   investigate how to make target conflict information more visible to
   the peer review. Furthermore, the CUC would like the CXC to consider
   implementing a more structured proposal form in the future with,
   among other items, a category for duplications that proposers should
   complete.  This is not intended to replace the target checking done
   by the CXC (which still must be done), but rather to focus
   proposers on the need to address previous observations.  To make
   this easier for the user community there should be links on the
   Chandra Proposal Information web page to the XMM-Newton search
   facility.  If possible, the proposer should also be given access to
   the same checking software that will be used by the CXC for the
   peer review.

Fred Seward reported on the cycle 5 peer review. He noted that 5% of
the GO budget goes to theory proposals and 10% to archive
proposals. Since the oversubscription factor for these proposal
categories is similar to that for observing proposals, it seems that
the fraction of funding going to these programs is appropriate for
now.  Fred noted that 6 institutions win roughly 1/2 of all Chandra
time through the peer review. However, the success rate for CXC
proposers is roughly the same as that for GOs outside the Center.
Fred announced plans to retire next year.  All responsibilities for
cycle 6 have been handed over to Belinda Wilkes.

The CUC acknowledges the tremendous service that Fred Seward has
provided over the years to the Chandra user community.  He has carried
out his responsibilities selflessly, graciously, and with humor, and for
this we are extremely grateful. We are pleased to learn that he will
be devoting more time to his research and we wish him great success
with it.

The summary of the Chandra Cycle 5 "chair and pundit" questionnaire by
Belinda was informative.  It seems like a useful activity to carry
forward to future reviews.

   [2] The CUC considered whether to suggest modifications to the peer
   review for the upcoming cycle 6 review.  It was agreed that the most
   important change would be to lengthen the review by 1/2 day. Based
   on the questionnaire summary there did not seem to be enough time
   to properly read and review the LP and VLP proposals in the
   interval between the individual panel reviews and the Big Panel
   review the next day. All CUC members who have served on past
   reviews admitted that they would be willing to serve an extra 1/2
   day in order to review these properly.  Therefore the CUC
   recommends that CXC consider adding an extra 1/2 day for the panel
   chairs for the Big Panel review.

The CXC is taking appropriate steps in responding to the IIR
observation regarding control of observers' confidential proposal
information.  In addition the CUC feels that the phase II budget
proposal process is not onerous and therefore does not need to be
modified at this time.

The CUC was asked to consider whether Chandra should implement a new
class of proposals for "high risk science."

   [3] The CUC feels that now would be a good time to implement such a
   category of proposals.  Chandra is a mature observatory.  We also
   note that HST has such a separate proposal category. We think it
   would be best that these proposals be reviewed in the individual
   science panels first.  Then a subset of the reviewers (perhaps the
   deputy chairs from each panel, but NOT the panel chairs, who are
   otherwise too busy) should convene, merging the various science
   categories, to decide which of these proposals to carry out.

Paul Green presented a summary of the performance metrics being
developed to judge Chandra science across various proposal types.
Harvey cautioned all that the metrics are not yet finalized nor have
they been optimized.  

   [4] The CUC appreciates that the presentation summarized work in
   progress.  Performance metrics are the main tool to assess the
   science return of Chandra and have been needed for several years
   now, so we urge the CXC to establish a functional set of metrics
   quickly enough that modifications to the Chandra observing program,
   if indicated, can be implemented for cycle 7.

   Several different metrics need to be studied. For the question of
   VLP, LP, and GO proposals it is essential to track science
   productivity/impact against the proposals that were approved by the
   peer review (i.e., citations per total approved time for a
   proposal).  The CXC needs to take care to include sufficient
   information on the sources of observations so that these
   comparisons can be made.
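   The "citations per total approved time" metric described above can be
   sketched in a few lines of code. This is purely illustrative: the
   record fields (id, approved_time_ks, citations) and the sample values
   are assumptions for the example, not an actual CXC data format or
   real proposal data.

   ```python
   def citations_per_ks(proposals):
       """Return {proposal id: citations per approved kilosecond}.

       Proposals with no approved time are skipped to avoid
       division by zero.
       """
       metric = {}
       for p in proposals:
           if p["approved_time_ks"] > 0:
               metric[p["id"]] = p["citations"] / p["approved_time_ks"]
       return metric

   # Hypothetical sample records (values are illustrative only).
   sample = [
       {"id": "VLP-01", "approved_time_ks": 500, "citations": 120},
       {"id": "GO-02", "approved_time_ks": 40, "citations": 18},
   ]
   print(citations_per_ks(sample))
   ```

   A ranking by this metric across VLP, LP, and GO categories would let
   the science return per unit of approved time be compared directly, as
   the recommendation envisions.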

   The CXC should report back to the CUC on the several joint
   Chandra/XXX proposal opportunities (where XXX= NOAO, XMM-Newton,
   etc.) with basic information on the programs, such as proposal
   pressure, breadth of community involvement, and some assessment of
   the efficacy of these programs.

Martin Elvis reviewed the CXC Science Data Systems (SDS) priorities,
scope, status, and plans.  The CUC was generally pleased with the
material presented and the priorities for software development. We
applaud the SDS group's clear willingness and desire to be responsive
to user concerns.  

The CUC had requested software demos in an honest attempt to learn
about cutting edge analyses of Chandra data from the experts.
Overall, the demos were well done and generally well received, although
several CUC members felt that S-Lang was over-emphasized.  Nevertheless,
the committee would be interested in seeing more demos in the future,
with greater emphasis on how each analysis tool relates specifically to
the science and less emphasis on the scripting language itself.

This committee has raised concerns about S-Lang for the last several
meetings and the SDS group has responded each time.  It is clear that
the SDS group stands by its selection and use of S-Lang and has
presented evidence to support this choice. Furthermore the committee
itself is split on the importance of the issue. A survey of the CUC
members present at the meeting revealed that all used a scripting
language and that a few of us used several.  Nine used some shell
scripting language, seven used IDL, three used Perl, and one each used
S-Lang, Python, awk, or glish.  One other CUC member has tried to use
S-Lang.  If these numbers are representative of the Chandra user
community in general, then they offer a warning, but also hope, to the
SDS group.  The warning is that the SDS group has not done a
particularly good job at getting the community to embrace and use
S-Lang. Some effort (along the lines of workshops, etc.) might be
helpful.  The hope is that since S-Lang is "IDL-like" it might attain
wider importance in the community, which already utilizes IDL.

In our discussions about scripting languages and software, the CUC
realized that there was a deeper, more fundamental, underlying issue
at stake that can be put in stark relief merely by examining the user
interface for analysis software for Chandra and XMM-Newton, the two
active imaging X-ray observatories.  Although all of the functional
activities of the software are essentially identical (time filtering,
extracting images or spectra, and so on), the software and user
interface are entirely different. And we realize that Astro-E2
software and its user interface will be different still.  The user
communities for these missions are generally the same people, who
suffer from a lack of commonality in software in two major ways: (1)
project funds that are used to build an extensive new software
infrastructure from the ground up are not available to users for
analysis or to the community at large for the development of new
missions, and (2) each new mission requires its users to learn new
tasks for performing the same old routine analysis steps.

   [5] As one of the principal X-ray data centers in the US, the CXC
   in cooperation with the HEASARC should take the lead at planning
   for commonality of software, scripting languages, etc. across
   future NASA high energy astrophysics missions.  Working toward
   commonality with ESA and JAXA/ISAS high energy astrophysics
   missions would be extremely valuable as well.

Dan Schwartz presented plans for trends analysis oversight in response
to a previous CUC request.  The stated mission goal (to identify any
changes in instrument performance before they occur) and the plans
presented fully address the CUC concerns.

Paul Plucinsky gave an update on the ACIS contamination issue and
discussed the possibility for a bake-out.  He asked for input from the
CUC on whether the decreased low energy quantum efficiency had a
deleterious impact on our own science investigations. Eight members
agreed that it did have such an impact, one said no, and another

   [6] The CUC endorses the CXC's policy to avoid rushing into a hasty
   decision to carry out a bakeout, since the effect seems to be
   leveling off.  The calibration plan (presented by Larry David) is
   improved from before and generally looks appropriate.

Larry David reviewed the new calibration web pages and summary
document. We encourage the calibration group to update this document
as they work toward the Chandra calibration goals.

   [7] The current status of calibration has now been well represented
   by the calibration document and the CUC wishes to acknowledge the
   efforts of the calibration group to produce this document.  This is
   a much welcome resource that should prove invaluable to users as
   well as to the larger Chandra community. At our next meeting the
   CUC would like to hear a presentation that describes the
   prioritized plan for how to go from the current status to the goals
   in order that calibration group resources are directed toward
   efforts that produce new calibration results for the broadest range
   of Chandra users.

Diab Jerius spoke about efforts to model the on-axis Chandra
point-spread-function.  This work is impressive.  Understanding the
detailed imaging properties of the Chandra X-ray mirrors would result
in very useful information that could be applied widely.  It is clear
that getting to this goal will require a concerted effort over some
time and that a well-formulated plan would help guide the upcoming work.

   [8] The CXC is clearly drawn to pursue studies of the Chandra PSF
   and with good reason - these are the finest X-ray mirrors ever
   built. Still the CUC would like to see a plan that assesses the
   value and direction of further detailed modeling of the Chandra PSF
   and that lays out some practical goals for the work.  Questions to
   consider include: What is the precise science value to Chandra
   users of pursuing further work in this area?  What specific
   science studies are not possible now and would be made feasible
   with further effort? Realistic observational effects (aspect
   reconstruction errors, event reconstruction errors, pixelation,
   pile-up, sub-pixel structure, source energy distributions, etc.)
   should be included so that the results of the work will have
   application to the widest range of Chandra investigations.

Kester Allen gave an interesting presentation on parameterizing the
Chandra PSF as a function of energy and off-axis angle.  During this
presentation the CUC learned that this work was largely motivated by a
request from an individual Chandra user.  Although the work will
surely have value for many users, the way it originated caused the CUC
to wonder about priorities and access to resources.

   [9] The CXC should look at the "big picture" questions of how
   priorities among various calibration activities are set and 
   how to best use the available CXC resources for calibration and 
   analysis activities.

Jeremy Drake talked about improvements in the LETGS dispersion relation.

   [10] The CUC would like to see the CXC take a similar approach to
   that outlined in recommendation [8] above before putting further
   effort into improving the LETG/HRC-S wavelength scale.

The CUC would like to hear about the following topics at its next meeting:

 (1) EPO efforts
 (2) Chandra/XMM-Newton cross calibration
 (3) Constrained and coordinated observations: How many observations
  are actually constrained, and in what category (e.g., approved by
  peer review, preferences expressed on forms, constraints added by
  users after their programs were approved, DDT observations, etc.)?
  What is the appropriate overall number of constrained observations
  (health and safety vs. scientific value), and how many should be
  given out by the peer review vs. DDT?  This should include discussion
  of the load on CXC planning and operating staff. Can the constraints be
  changed for the ease of users (e.g., by allowing time constraints to be
  set in GST)?

Written by Jack Hughes, Chair CUC (20 February, 2004)