Since infancy (or the first year of grad school, whichever comes first), astronomers are taught one basic rule: "If it takes statistics to prove it, you can't believe it". A laudable attitude, wisdom distilled and passed along from advisor to student over the generations, and one guaranteed to ensure that referees don't snicker at your manuscripts. On the other hand, the same capricious referee will refuse to let you publish a result if you don't put an error bar around it. And nowadays, in these trying times when we don't live in the Gaussian regime, it is all too easy to put the wrong error bar on a critical result. Life isn't fair (± 0.314?). We have all experienced those sources at the edge of detectability, jets that look like fluctuations, spectral lines that ought to be there, pulsing signals that are clearly evident to the naked eye and yet remain tantalizingly consistent with zero; and conversely, absorption features that turn out to be Poisson fluctuations, and abundance anomalies due to a misplaced continuum, and so on, and so forth. Now, Chandra data are pushing the envelope on how far astronomers can go without having to care about the underlying statistics. We can no longer turn the crank on that black box and afford to blindly trust the results that pop out.
In an effort to address these issues coherently, the CXC has established a collaboration with the Statistics Department at Harvard University, led by David van Dyk (Harvard University) and Aneta Siemiginowska (CXC)1. The AstroStatistics Working Group maintains a WWW site that contains details of the group's activities, including preprints and journal articles2.
It took a while for us to get past: (1) the language difficulties: "λ ... umm ... you mean wavelength?" "NO! That is the Poisson model intensity!"; (2) the horror of the statisticians at how the Typical Astronomer wields the statistical axe: a search through ApJ volumes of the past 5 years revealed that the vast majority of the 170-odd papers that used the F-test for model comparison did so improperly, inappropriately, incorrectly, or, to put it another way, erroneously (see Protassov et al. 2002, ApJ, 571, 545); and (3) the obstinacy of the astronomers looking for a quick fix: "Why can't we use the χ2? It worked fine yesterday!" and "Takes 5 minutes to run that program? That is NO GOOD. Should run in 5 seconds. 10, tops!". The collaboration is now working smoothly, dealing with problems ranging from spectral fitting in the low-counts regime and handling pileup in intense sources to image deconvolution (with error bars), as well as incorporating atomic data errors in spectral fitting, modeling log(N)-log(S) curves at very low sensitivities, etc.
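The complaint about χ2 is not merely a matter of taste: in the low-counts regime the Gaussian approximation behind the χ2 statistic breaks down, and fits that assign each bin a data-based error bar are systematically biased. The following sketch (our own illustration, not code from the Working Group; the rate and sample size are made up for the demonstration) shows the effect for the simplest possible case, fitting a constant rate to Poisson-distributed counts:

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 2.0                      # mean counts per bin -- low-count regime
counts = rng.poisson(true_rate, size=10_000)

# Chi-square fit of a constant model with data-based errors sigma^2 = max(n, 1):
# minimizing sum_i (n_i - mu)^2 / max(n_i, 1) over mu yields a weighted mean in
# which bins that fluctuate downward get small error bars and hence large
# weights, dragging the estimate below the true rate.
w = 1.0 / np.maximum(counts, 1)
mu_chi2 = np.sum(w * counts) / np.sum(w)

# Poisson maximum likelihood (the Cash statistic): for a constant model the
# maximum-likelihood estimate is simply the arithmetic mean of the counts,
# which remains unbiased however low the counts get.
mu_cash = counts.mean()

print(f"true rate      : {true_rate:.3f}")
print(f"chi-square fit : {mu_chi2:.3f}")   # biased low
print(f"Poisson MLE    : {mu_cash:.3f}")   # close to the truth
```

With a true rate of 2 counts per bin, the χ2 estimate lands noticeably below 2 while the Poisson (Cash-statistic) estimate recovers the truth; the discrepancy shrinks only as the counts per bin grow large enough for the Gaussian approximation to hold.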
Recently, we organized a workshop on Current Challenges in Multi-Scale Analysis, held on Jan 15-16, 2003 in Cambridge, MA, following up on a similarly themed Special Session at AAS 201 (Principled "Model-Free Deconvolution" via Multi-scale Methods). This event was sparked by an unusual confluence: the AAS speakers, their collaborators, and local multi-scale and deconvolution experts from several disciplines were all in the Boston area following the AAS, and we took the opportunity to hold two days of in-depth talks by key speakers, commentaries by visiting experts, and discussions by all. The goals of the workshop included:
Vinay Kashyap & Aneta Siemiginowska
1 Other members of the AstroStat Working Group include Alanna Connors (Wellesley), Peter Freeman (CXC), Vinay Kashyap (CXC), Andreas Zezas (SAO), Margarita Karovska (CXC), Eric Kolaczyk (Boston University), and numerous grad and undergrad students at Harvard University: Rostislav Protassov, David Esch, Yaming Yu, Hosung Kang, Epaminondas Sourlas, et al.