Roger (at our request) gave a detailed analysis of the time lag between observations and access by the user. The mean lag is 7 days with turnarounds as fast as one day (TOOs) and maximum delays of 13 days. We thank Roger and his crew for this outstanding work.
[1] Occasionally a few datasets are delivered well beyond the mean lag of 7 days. We suggest that each OBSID have a built-in alarm so that the appropriate scientists are informed when an OBSID has been unduly delayed.
We understand that the Director keeps a keen eye on results that may also lead to potentially interesting press releases. While the CXC regularly informs users that it is interested in their results, not all PIs are well informed regarding which results could be newsworthy.
[2] We recommend that more channels be opened by which either the Director or the CXC Outreach office can learn of potentially interesting results. Possibilities include charging CUC members with some responsibility for identifying appropriate science results and sensitizing PIs of successful large proposals to this issue.
The DD program appears to have been responsive to a wide range of unanticipated (or at least not proposed) TOOs.
The original contract clearly states the fraction of time allocated to the instrument teams (this is the origin of the GTO allocation).
However, the document (as we understand it) is subject to interpretation about the GTO allocation beyond prime phase. Don Kniffen, representing NASA HQ, sought our opinion on this matter.
Clearly the CUC cannot (and will not) comment on the legal issues. The instrument teams have intimate knowledge of the instruments and provide service to the CXC under a contract from HQ. It is not entirely clear how the continuation or termination of GTO beyond prime phase would affect the instrument teams and Chandra operations. The CUC attempted to begin addressing this question by using CXC statistics provided by Fred Seward to examine GTO versus GO success in AO-3. The statistics indicate that GTO proposals won targets with approximately three times higher success rates than GO proposals. This suggests that the fraction of Chandra time going to the instrument teams might not change much if GTO were terminated at the end of prime phase. Additionally, there is broad agreement that the fiscal and scientific fitness of the instrument teams must be maintained as the Chandra mission moves beyond prime phase.
The CUC agreed to gather additional information before the next meeting regarding the effects of terminating GTO at the end of prime phase. A recommendation on GTO beyond prime phase will be issued following that discussion.
The proposed motion did not find much support among CUC members. It was argued that users from smaller colleges (with heavier teaching loads) and young researchers (who do not have the luxury of a large standing research team) would be particularly disadvantaged by a shorter proprietary period. Furthermore, proper analysis of many datasets is still limited by the significant calibration uncertainties (see below).
[3] We concur and reaffirm the current practice.
As mentioned above, Fred Seward provided statistics on this first round of GTO versus GO competition. The success rate of the GTO proposals is difficult to compare to the GO proposal success rate, because of the small numbers. However, an analysis of the target success rate is possible, because the number of conflicted targets is much higher. An analysis of the statistics indicates that the GTO success rate in winning a conflicted target is three times higher than the GO success rate in winning a conflicted target. Possible explanations for this large difference in success include: (1) the current process of target selection favors GTO proposals and (2) GTO teams are much stronger, on average, than GO teams.
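To make the comparison concrete, the ratio being quoted can be sketched with purely hypothetical counts (the actual AO-3 numbers from Fred Seward's statistics are not reproduced here):

```python
# Purely hypothetical counts for illustration only; the actual AO-3
# statistics compiled by Fred Seward are not reproduced in this report.
gto_conflicted, gto_wins = 40, 30    # assumed GTO conflicted targets / targets won
go_conflicted, go_wins = 240, 60     # assumed GO conflicted targets / targets won

gto_rate = gto_wins / gto_conflicted   # fraction of conflicted targets won by GTO
go_rate = go_wins / go_conflicted      # fraction of conflicted targets won by GO

# With these assumed counts the ratio is 3, corresponding to the
# "three times higher" GTO success rate quoted above.
ratio = gto_rate / go_rate
```

The point of the sketch is only that the comparison is between per-target success fractions on conflicted targets, not raw proposal counts.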
Comparing the scientific strength of GO and GTO teams is clearly beyond our means, so the CUC discussed components of the current target selection process that might favor GTOs. It was suggested that the fact that a panel accepting a GTO proposal is not "charged" with any time for that observation may bias panels toward GTO proposals. It was also pointed out that self-selection by GTO teams (i.e., GTO teams only choose to write proposals for those conflicted targets which they presumably believe they have a high chance of winning) would enhance GTO success rates.
[4] We examined possible changes to this process. Having GTO teams write proposals for all their targets might provide the most equitable treatment for GOs and GTOs, but this seems wasteful, given that a large fraction of the GTO targets will be unconflicted and automatically granted time. We note, however, that review panels already receive GO proposals from GTO team leaders in addition to the GTO proposals, and these GO proposals are treated no differently from the bulk of GO proposals. Moreover, it is simply not necessary for the review panels to know which proposals are GTO and which are GO, because determining whether to charge the time in a given proposal to the GO or GTO pool can be handled afterwards. Therefore, we recommend that GTO proposals be handled in the same way as GO proposals: review panels will simply rank GO and GTO proposals together, with the expectation that a certain number of them are indeed GTO proposals. After the panel rankings are complete, the CXC will carry out the bookkeeping to determine whether the time for a given observation is charged to the GO pool or to a particular GTO pool.
[5] The CUC wholeheartedly supports the suggestion by B. Wilkes that all future Chandra proposals require the PI to list previous Chandra observations they have been granted, together with a list of their publications that use those data.
Wilkes raised the prospect of allowing for multi-year proposals in Cycle 5. The CUC did not have time to discuss the proposal and understand the full scope of the proposed option. However, there are a number of observations that would benefit from this option, e.g., astrometry, timing, and monitoring. We would like to hear further details at the next CUC meeting.
[6] Our main suggestions regarding the archive interfaces are: to add the capability of searching by target category, to add NED to the choice of target resolvers, and to add a "suggestions for improvements" type of link to the interfaces. An "advanced" version of the WebChaser interface should allow searches that query all parameters associated with the data headers and the proposal inputs. We also request that Chaser provide a web service that would allow retrievals and basic data searches via alternative interfaces, such as the HEASARC interface and StarView. Such services are not difficult to implement given the proper contacts and coordination, and are a basic capability expected of a 21st-century archive center. (Niall Gaffney, gaffney@stsci.edu, is the Java developer for StarView.) Finally, the CUC asks that direct data access (i.e., a direct data deposit to the user's disk) be made available through WebChaser. If both interfaces access the same backend archive services, it should be possible to open an ftp connection from those services, regardless of whether the Java interface or WebChaser is used to make the request.
The calibration group believes that a few other observations are necessary. An additional 3 E0102 positions (24 ksec total) on I3 node 0 would provide better measurements of the effects of the radiation damage at the position it was most severe. A 100 ksec observation of 3C273 or Her X-1 would provide a better measurement of the wings of the PSF. A 40 ksec HRC-I integration on the Vela remnant would map the low energy QE uniformity on an arcminute scale. Finally, an additional 50 ksec of HETG/ACIS-S observations of 3C273 at different SIM_Z offsets would provide better QE versus wavelength measurements at pointing positions used by a number of guest observers.
Dick Edgar presented various aspects of the ACIS calibration. A few minor problems have turned up in the released S3 -120 response. Examination of the E0102 calibration observations shows a +16 eV zero point gain shift is required to fit the oxygen emission lines. The current gain relation doesn't include a jump at the Si edge and this, in combination with sparse sampling, causes spurious features in high S/N spectra in the 1-2 keV region. Both these problems should be fixed in new FEFs, hopefully included in the planned March CALDB release. The same release should also include FEFs for S3 at -110. The response for S1 has been improved. This is better than the current response for HETG order sorting but is not good enough for imaging spectroscopy.
An attempt to generate FEFs for non-CTI corrected data from the FI chips was not successful. Work has now started on generating FEFs for CTI-corrected data with the aim of a possible release in the summer. However, there are node-to-node variations in the E0102 observations that are not understood at present. [Following the completion of the report we became aware that the node-to-node variations in E0102 are now understood to be in PI space and not the instrument-produced PHA.]
The spectrum of the ACIS particle background has been determined using event histogram data accumulated when HRC-S is in focus and ACIS is screened from cosmic X-rays and its calibration source. The resulting spectrum is consistent with that obtained from the dark moon observation (incidentally showing that the ROSAT detection of X-rays from the dark moon was actually geocoronal). A memo with details is available from the calibration section of the web site.
Chandra ACIS-S3 and XMM-Newton EPIC spectra of G21.5, E0102, and MS1054.4-0321 (a high-redshift cluster) have been compared in collaboration with the XMM-Newton project. The relative fluxes are in excellent agreement, with differences at the level of 5% or less. The spectral shapes are broadly similar, with the largest discrepancy being in the column density measured to G21.5: the ACIS-S3 value is about 2E21 cm^-2 larger than that for XMM-Newton EPIC.
Terry Gaetz reported on progress on the calibration of the on-axis PSF. Observations of 3C273 (ACIS-S3), LMC X-1 (ACIS-I), and AR Lac (HRC-I) have been compared with ground calibration results. The core is well understood but there are still residual uncertainties in the wings probably due to ground calibration systematics. An additional 100 ksec ACIS-I observation of 3C273 would calibrate the PSF wings more accurately and improve the ability to measure dust scattering halos and low surface brightness emission around bright point sources.
Herman Marshall described the state of the HETGS calibration. The main issue is the QE versus energy for the ACIS chips. The ratio of BI/FI data shows that systematic errors are <10% above 1 keV but rise to around 20% at 0.6 keV. The plan is to correct the BI QE and publish the corrections. Comparing MEG and HEG data has led to small corrections in the MEG efficiencies which will be tested using the observations of PKS2155 and 3C273.
Hank Donnelly presented recent work on the HRC imaging calibration. The HRC team continues to make incremental improvements. The only major issue is in the QE uniformity at low energies. An observation of the Vela remnant has been proposed to characterize this.
Jeremy Drake reported on the status of the LETG calibration. The ground calibration left more uncertainties in the calibration for the LETG than for the HETGS. The CXC team has taken a conservative approach and not adopted the technique advocated by the LETG team at SRON of modifying the efficiencies based on observations. The two calibrations differ by <15% when averaged over wavelength bands; however, there are larger differences over small wavelength ranges.
The dispersion relation for the LETG+ACIS-S observations of Capella indicates a different dispersion relation for the ACIS-S and HRC-S detectors. This may mean an error in the current pixel size for either or both detectors.
The line response function of the LETG is well described by a beta model, I(lambda) = [1 + (lambda/lambda_0)^2]^beta, with beta = -2.5 +/- 0.2.
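As a minimal numerical sketch of this profile (the width scale lambda_0 below is an assumed, illustrative value; only the functional form and the index beta = -2.5 come from the report):

```python
def letg_lrf(dlam, dlam0=0.05, beta=-2.5):
    """Beta-model line response profile: I = [1 + (dlam/dlam0)**2]**beta.

    dlam  -- wavelength offset from line center (same units as dlam0)
    dlam0 -- core width scale; 0.05 A is an illustrative value, not from the report
    beta  -- power-law index; the report quotes beta = -2.5 +/- 0.2
    """
    return (1.0 + (dlam / dlam0) ** 2) ** beta

# The profile peaks at line center and falls off in the wings as |dlam|**(2*beta).
profile = [letg_lrf(0.05 * i) for i in range(4)]
```

Note that with a negative beta the profile decays away from line center, which is why the index is written as beta (rather than -beta) in the exponent above.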
[7] We were concerned about the science justification for additional calibration observations. It was not always clear to us whether the proposed observations were necessary for scientific results or would just allow some aspect of the system to be measured more precisely.
[8] We were very pleased to hear that a variation of the PSU CTI-corrector is now implemented within CIAO. We note that it would be useful to release this even if the calibration information is not yet available because it can be used to make narrow energy-band images in ACIS-I. We also note that the PSU CTI-corrector works on both parallel and serial CTI and hope that the CXC version will also do so.
[9] We appreciate the tremendous amount of work going into generating the responses for the ACIS chips. However, we do wonder whether resources are being used in an optimum manner and are concerned that the timescale to generate new responses is approaching that on which the response itself changes due to further radiation damage. Fitting multiple gaussians to simulation output is intrinsically unstable and requires a lot of manual intervention to give usable results. Alternative methods would include using a more physically-motivated model or building the response directly from the simulations.
[10] We would like to see some thought given to making calibration information available in tabular form on the web. The spirit of this request is that some products are of immediate value to the community, and putting out such products would save an interested user from re-reducing the calibration data. A case in point is the PSF wings, which are not in CALDB but are a by-product of Gaetz's analysis. An ASCII table of these data would thus be a quick solution. Longer-term fixes include adding them to CALDB and/or the proposed web interface to the SAOSAC ray trace.
[11] Andrea Prestwich presented plans for a new version of the POG. This version will include many examples, primarily as a teaching aid for new researchers. We applaud Andrea's enthusiasm and the hard work she is already putting into the effort. We understand that the new POG will be web based. We have one request: the POG should not be changed substantially from AO to AO, except of course to update performance and other numbers.