Proceedings Volume 7016

Observatory Operations: Strategies, Processes, and Systems II

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 24 July 2008
Contents: 14 Sessions, 73 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2008
Volume Number: 7016

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7016
  • Plenary Session
  • General Operations I
  • General Operations II
  • Data Management and Quality Control
  • Observatory Scheduling
  • User Support
  • Operational Process
  • Operational Statistics
  • Posters: Data Management and Quality Control
  • Posters: General Operations
  • Posters: Observatory Scheduling
  • Posters: Operational Statistics
  • Posters: User Support
Front Matter: Volume 7016
This PDF file contains the front matter associated with SPIE Proceedings Volume 7016, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Plenary Session
High redshift galaxy surveys
A brief overview of the current status of the census of the early-universe population is given. Observational surveys of high-redshift galaxies provide direct opportunities to witness the cosmic dawn and to gain a better understanding of how and when infant galaxies evolve into mature ones. This is a more astronomical approach, in contrast to the physical approach of studying the spatial fluctuations of the cosmic microwave background radiation. Recent findings in these two areas have greatly advanced our understanding of the early Universe. I will describe the basic properties of several target objects we are looking for and the concrete methods astronomers are using to discover those objects in the early Universe. My talk starts with Lyman α emitters and Lyman break galaxies, then introduces a clever approach that uses the gravitational lensing effect of clusters of galaxies to detect distant faint galaxies behind the clusters. Finally I will touch on the status and prospects of surveys for quasars and gamma-ray bursts.
General Operations I
W. M. Keck Observatory operations
The operations model of the Keck Observatory and the factors that allow it to operate with unprecedented scientific success while maintaining the lowest operating cost to capital ratio of the 8m-10m class of telescopes are examined. We describe matching of resources to operating requirements and steps taken to optimize the effectiveness of the overall operation. We describe how strategic goals, operating philosophy and detailed planning mesh to match science objectives with technological capability. We conclude by examining how operations design drives both long term operating cost and realization of the potential inherent in the initial capital investment.
AO operations at the W. M. Keck Observatory
Randall D. Campbell, David Le Mignant, Marcos A. van Dam, et al.
Natural Guide Star (NGS) and Laser Guide Star (LGS) Adaptive Optics (AO) have been offered for routine science operations to the W. M. Keck Observatory community since 2000 and late 2004, respectively. The AO operations team is now supporting ~100 nights of AO observing with four different instruments, including over fifty nights of LGS AO per semester. In this paper we describe improvements to AO operations to handle the large number of nights and to accommodate the recent upgrade to the wavefront sensor and wavefront controller. We report on the observing efficiency, image quality, scientific productivity, impact analysis from satellite safety procedures and discuss the support load required to operate AO at Keck. We conclude the paper by presenting our plans for dual LGS AO operations with Keck I - Keck II LGS, starting in 2009.
La Silla Paranal Observatory operations
The European Southern Observatory has recently merged the operations of its three Chilean sites, La Silla, Paranal/VLT, and Chajnantor APEX, into one observatory, the La Silla Paranal Observatory. Recent developments, latest achievements and future challenges at the different sites are presented in this paper.
Maintenance management at La Silla Paranal Observatory
Nelson Montano
From the beginning of the VLT project, the European Southern Observatory (ESO) considered the application of a competent maintenance strategy a fundamental aspect of future operations of the Paranal Observatory. For that purpose, a special maintenance philosophy was developed during the project stage and applied during the initial years of operations. The merging of the La Silla and Paranal Observatories in 2005 added a new managerial challenge to the regular operational requirements (high availability and reliability), which motivated ESO Management to develop a stronger strategy for the operations of the newly merged Observatory. Part of the new strategy was the creation of a dedicated department for the management of all maintenance activities, separating this support from the traditional scheme in which the Engineering Department was responsible for all technical support to operations. In order to maintain a competent level of maintenance operations for the new unified Observatory, the La Silla Paranal (LSP) Maintenance Department has been using as a guide a well-known maintenance management model applied in various industries. Today the operations of the Maintenance Department are concentrated on developing and implementing practices around concepts such as Maintenance Tactics, Planning, Data Management, Performance Indicators and Material Management. In addition, advances related to Reliability Analysis have been made in order to reach a superior level of excellence. The results achieved by the LSP Maintenance Department are reflected in a reduced rate of functional failures, allowing uninterrupted operations of the observation sites.
Overview of engineering activities at the SMA
R. D. Christensen, D. Y. Kubo, Ramprasad Rao
The Submillimeter Array (SMA) consists of eight 6-meter telescopes on the summit of Mauna Kea. The array has been designed to operate from the summit of Mauna Kea and from three remote facilities: Hilo, Hawaii; Cambridge, Massachusetts; and Taipei, Taiwan. The SMA provides high-resolution scientific observations in most of the major atmospheric windows from 180 to 700 GHz. Each telescope can house up to 8 receivers in a single cryostat and can operate with one or two receiver bands simultaneously. Because the array is a fully operational observatory, the demand for science time is extremely high. As a result, specific time frames have been set aside during both the day and night for engineering activities. This ensures that the proper amount of time can be spent on maintaining existing equipment or upgrading the system to provide high-quality scientific output during nighttime observations. This paper describes the methods employed at the SMA to optimize engineering development of the telescopes and systems such that the time available for scientific observations is not compromised. It will also examine some of the tools used to monitor the SMA during engineering and science observations, both at the site and at the remote facilities.
Magellan Telescopes operations 2008
The twin 6.5m Magellan Telescopes have been in routine operation at the Las Campanas Observatory in the Chilean Andes since 2001 and 2002, respectively. The telescopes are owned and operated by Carnegie for the benefit of the Magellan consortium members (Carnegie Institution of Washington, Harvard University, the University of Arizona, Massachusetts Institute of Technology, and the University of Michigan). This paper provides an up-to-date review of the scientific, technical, and administrative structure of the 'Magellan Model' for observatory operations. With a modest operations budget and a reasonably small staff, the observatory is operated in the "classical" mode, wherein the visiting observer is a key member of the operations team. Under this model, all instrumentation is supplied entirely by the consortium members and the various instrument teams continue to play a critical support role beyond initial deployment and commissioning activities. Here, we present a critical analysis of the Magellan operations model and suggest lessons learned and changes implemented as we continue to evolve an organizational structure that can efficiently deliver a high scientific return for the investment of the partners.
General Operations II
Commissioning and early operations of the Large Binocular Telescope
Richard F. Green, John M. Hill, James H. Slagle, et al.
By June 2008, the Large Binocular Telescope Observatory will have supported two full semesters of observing with prime focus imaging. Interspersed were optical alignment and initiation of binocular mode for the prime focus, as well as installation and initial commissioning of the first bent Gregorian focal station. We examine the lost-time statistics and distribution of issues that reduced on-sky access in the context of the limited technical support provided for observing. We also note some of the restrictions imposed by the alternation of engineering and commissioning activities with scheduled observing time. The goal is to apply the lessons learned to the continuing period of observation plus commissioning anticipated as new spectroscopic, adaptive optics, and interferometric capabilities are added through 2010.
Lessons learned from Sloan Digital Sky Survey operations
S. J. Kleinman, J. E. Gunn, B. Boroski, et al.
Astronomy is changing. Large projects, large collaborations, and large budgets are becoming the norm. The Sloan Digital Sky Survey (SDSS) is one example of this new astronomy, and in operating the original survey, we put in place and learned many valuable operating principles. Scientists sometimes have the tendency to invent everything themselves, but when budgets are large, deadlines are many, and both are tight, learning from others and applying it appropriately can make the difference between success and failure. We offer here our experiences as well as our thoughts, opinions, and beliefs on what we learned in operating the SDSS.
Spitzer Science operations: the good, the bad, and the ugly
We review the Spitzer Space Telescope Science Center operations teams and processes and their interfaces with other Project elements -- what we planned early in the development of the science center, what we had at launch, what we have now, and why. We also explore the checks and balances behind building an organizational structure that supports constructive airing of conflicts and timely resolution that balances the inputs and provides for very efficient on-orbit operations. For example, we examine which organizational roles are involved in reviewing observing schedules, what constituencies they represent, and who has authority to approve or disapprove the schedule.
Spitzer's model for dealing with the end of the cryogenic mission
Suzanne R. Dodd, Lisa Storrie-Lombardi, Charles P. Scott
The Spitzer Space Telescope is a cryogenically cooled telescope operating three instruments in wavelengths ranging from 3.6 microns to 160 microns. Spitzer, the last of NASA's Great Observatories, was launched in August 2003 and has been operating for 4.5 years of an expected 5.5 year cryogen mission. The highly efficient Observatory has provided NASA and the science community with unprecedented data on galaxies, star formation, interstellar medium, exoplanets, and other fundamental astronomical topics. Spitzer's helium lifetime is predicted to end on April 18, 2009, with an uncertainty of +/- 3 months. Planning for this cryogen end involves many diverse areas of the project and is complicated due to the uncertainty in the actual date of helium depletion. This paper will describe how the Spitzer team is accommodating the unknown end date in the areas of observation selection, planning and scheduling, spacecraft and instrument monitoring, data processing and archiving, and finally, budgeting and staffing. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
James Webb Space Telescope: applying lessons learned to I&T
Alan Johns, Bonita Seaton, Jonathan Gal-Edd, et al.
The James Webb Space Telescope (JWST) is part of a new generation of spacecraft acquiring large data volumes from remote regions in space. To support a mission such as the JWST, it is imperative that lessons learned from the development of previous missions such as the Hubble Space Telescope and the Earth Observing System mission set be applied throughout the development and operational lifecycles. One example of a key lesson that should be applied is that core components, such as the command and telemetry system and the project database, should be developed early, used throughout development and testing, and evolved into the operational system. The purpose of applying lessons learned is to reap benefits in programmatic or technical parameters such as risk reduction, end product quality, cost efficiency, and schedule optimization. In the cited example, the early development and use of the operational command and telemetry system as well as the establishment of the intended operational database will allow these components to be used by the developers of various spacecraft components such that development, testing, and operations will all use the same core components. This will reduce risk through the elimination of transitions between development and operational components and improve end product quality by extending the verification of those components through continual use. This paper will discuss key lessons learned that have been or are being applied to the JWST Ground Segment integration and test program.
Thirty Meter Telescope: current operations concepts and plans
The Thirty Meter Telescope (TMT) will be a ground-based, 30-m optical-IR telescope with a highly segmented primary mirror located in a remote location. From the start of operations, TMT will provide a rich and diverse mix of seeing-limited and diffraction-limited instrumentation. Initially, only classical observing will be supported, although remote observing will follow almost immediately. Queue (or service) observing may be supported at a later date. TMT users will expect high facility uptime and observing efficiency as well as effective user support for planning and execution of observations. Those expectations are captured in the high-level Operations Concept Definition (OCD) document. The services and staffing needed to implement those concepts are described in the TMT Operations Plan. In this paper, high-level TMT operational concepts are summarized followed by a description of the current operations plan, including staffing model.
Running PILOT: operational challenges and plans for an Antarctic Observatory
We highlight the operational challenges faced by an optical observatory taking advantage of the superior astronomical observing potential of the Antarctic plateau, and the solutions planned to meet them. Unique operational aspects of an Antarctic optical observatory arise from its remoteness, the polar environment, and the unusual observing cycle afforded by long continuous periods of darkness and daylight. PILOT is planned to be run with remote observing via satellite communications, and must overcome both limited physical access and limited data transfer. Commissioning and lifetime operations must deal with extended logistics chains, continual wintertime darkness, extremely low temperatures and frost accumulation, among other challenging issues considered in the PILOT operational plan and discussed in this presentation.
Small IRAIT: telescope operations during the polar night
R. Briguglio, G. Tosti, M. Busso, et al.
Small IRAIT is a 25 cm Cassegrain telescope installed at Dome C, on the high Antarctic plateau, during the 2007 winter campaign. It performed a first test of multiband (UBVRI) photometry from Dome C, taking advantage of its remote control system, which allowed a 10-day, 98% duty-cycle run on the chromospherically active, spotted star V841 Cen; it also tested multiband acquisition on open clusters, AGB stars, blazars (PKS 2155), and eclipsing binaries. In-situ optimization made the telescope able to operate in the cold, harsh Antarctic environment.
Data Management and Quality Control
The evolving role of a data centre in support of observatory operations and community needs
Séverin Gaudet, Daniel Durand, David Schade
The Canadian Astronomy Data Centre (CADC) manages the data collections of the CFHT, JCMT, HST and Gemini telescopes plus data from several other projects. In the past five years, the role of the CADC has changed. It is now an integral part of telescope operations and provides support for PIs, project teams and the Virtual Observatory. This paper will describe the drivers for this new role, how the CADC has responded to these needs and the operational experience with three major telescope facilities. The advantages and disadvantages of this role for a multi-mission data centre will also be discussed.
The Dark Energy Survey data management system
Joseph J. Mohr, Darren Adams, Wayne Barkhouse, et al.
The Dark Energy Survey (DES) collaboration will study cosmic acceleration with a 5000 deg² grizY survey in the southern sky over 525 nights from 2011 to 2016. The DES data management (DESDM) system will be used to process and archive these data and the resulting science-ready data products. The DESDM system consists of an integrated archive, a processing framework, an ensemble of astronomy codes and a data access framework. We are developing the DESDM system for operation in the high performance computing (HPC) environments at the National Center for Supercomputing Applications (NCSA) and Fermilab. Operating the DESDM system in an HPC environment offers both speed and flexibility. We will employ it for our regular nightly processing needs, and for more compute-intensive tasks such as large-scale image coaddition campaigns, extraction of weak lensing shear from the full survey dataset, and massive seasonal reprocessing of the DES data. Data products will be available to the Collaboration and later to the public through a virtual-observatory compatible web portal. Our approach leverages investments in publicly available HPC systems, greatly reducing hardware and maintenance costs to the project, which must deploy and maintain only the storage, database platforms and orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we tested the current DESDM system on both simulated and real survey data. We used TeraGrid to process 10 simulated DES nights (3 TB of raw data), ingesting and calibrating approximately 250 million objects into the DES Archive database. We also used DESDM to process and calibrate over 50 nights of survey data acquired with the Mosaic2 camera. Comparison to truth tables in the case of the simulated data and internal crosschecks in the case of the real data indicate that astrometric and photometric data quality is excellent.
STARS 2.0: 2nd-generation open-source archiving and query software
The Subaru Telescope is in the process of developing an open-source alternative to the 1st-generation software and databases (STARS 1) used for archiving and query. For STARS 2, we have chosen PHP and Python for scripting and MySQL as the database software. We have collected feedback from staff and observers, and used this feedback to significantly improve the design and functionality of our future archiving and query software. Archiving - We identified two weaknesses in the 1st-generation STARS archiving software: a complex and inflexible table structure, and uncoordinated system administration for our business model of taking pictures at the summit and archiving them in both Hawaii and Japan. We adopted a simplified and normalized table structure with passive keyword collection, and we are designing an archive-to-archive file transfer system that automatically reports real-time status and error conditions and permits error recovery. Query - We identified several weaknesses in the 1st-generation STARS query software: inflexible query tools, poor sharing of calibration data, and no automatic file transfer mechanisms to observers. We are developing improved query tools, better sharing of calibration data, and multi-protocol unassisted file transfer mechanisms for observers. In the process, we have redefined a 'query': from an invisible search result that can be transferred only once, in-house, and immediately, with little status and error reporting and no error recovery, to a stored search result that can be monitored, transferred to different locations with multiple protocols, and that reports status and error conditions and permits recovery from errors.
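The stored-query redesign lends itself to a small illustration. Below is a minimal sketch, in Python, of a query as a persistent object whose transfers are recorded and can be retried after failures; the class name, frame IDs, and toy transfer function are all invented for illustration and are not the actual STARS 2 implementation.

```python
# Sketch of a 'stored query': a persistent search result whose deliveries
# can be monitored, retried, and sent to multiple destinations.
class StoredQuery:
    def __init__(self, frame_ids):
        self.frame_ids = list(frame_ids)
        self.transfers = []          # history: (destination, protocol, frame, status)

    def transfer(self, destination, protocol, send):
        """Attempt delivery of every frame; record status so failures can be retried."""
        for frame in self.frame_ids:
            try:
                send(frame, destination, protocol)
                status = "delivered"
            except OSError as err:
                status = f"failed: {err}"
            self.transfers.append((destination, protocol, frame, status))

    def failed(self):
        return [t for t in self.transfers if t[-1].startswith("failed")]

def fake_send(frame, destination, protocol):
    if frame == "SUPA0042":                      # simulate one bad transfer
        raise OSError("connection reset")

q = StoredQuery(["SUPA0041", "SUPA0042"])
q.transfer("observer-host", "sftp", fake_send)
print("needs retry:", q.failed())
```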
Operating a petabyte class archive at ESO
Dieter Suchar, John S. Lockhart, Andrew Burrows
The challenges of setting up and operating a Petabyte Class Archive will be described in terms of computer systems within a complex Data Centre environment. The computer systems, including the ESO Primary and Secondary Archive and the associated computational environments such as relational databases will be explained. This encompasses the entire system project cycle, including the technical specifications, procurement process, equipment installation and all further operational phases. The ESO Data Centre construction and the complexity of managing the environment will be presented. Many factors had to be considered during the construction phase, such as power consumption, targeted cooling and the accumulated load on the building structure to enable the smooth running of a Petabyte class Archive.
NRAO VLA archive survey
Jared H. Crossley, Loránt O. Sjouwerman, Edward B. Fomalont, et al.
The Very Large Array (VLA) radio telescope, operated by the National Radio Astronomy Observatory (NRAO), has been collecting interferometric data (visibilities) since the late 1970s. Converting visibility data into images requires careful calibration of the data, fast Fourier transform processing, and deconvolution methods. To make VLA data accessible to the astronomical community, the NRAO has undertaken the NRAO VLA Archive Survey (NVAS). The goal of NVAS is to produce images, calibrated data, and diagnostics from the visibility data archive and make these data products available to all astronomers. Survey results are obtained from a software pipeline, the details of which are described here.
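As a rough illustration of the imaging steps such a pipeline automates, the sketch below runs a one-dimensional toy version of the process: the dirty image is formed by Fourier transforming incompletely sampled visibilities, and a Högbom-style CLEAN iteratively subtracts the dirty beam to deconvolve it. The sky, sampling pattern, and loop gain are invented; the NVAS pipeline itself is far more elaborate.

```python
# Toy 1-D interferometric imaging: FFT of sparse visibilities, then CLEAN.
import numpy as np

n = 64
sky = np.zeros(n); sky[20] = 1.0; sky[45] = 0.5                 # two point sources
sampling = np.zeros(n); sampling[:18] = 1; sampling[-17:] = 1   # incomplete uv coverage

vis = np.fft.fft(sky) * sampling                 # "measured" visibilities
beam = np.real(np.fft.ifft(sampling))            # dirty beam (PSF of the uv coverage)
dirty = np.real(np.fft.ifft(vis)) / beam[0]      # dirty image, unit-gain normalization
beam = beam / beam[0]                            # unit-peak dirty beam

model = np.zeros(n)
residual = dirty.copy()
for _ in range(500):                             # Hogbom CLEAN loop, gain 0.1
    peak = int(np.argmax(np.abs(residual)))
    flux = 0.1 * residual[peak]
    model[peak] += flux
    residual -= flux * np.roll(beam, peak)       # subtract a scaled, shifted beam
    if np.max(np.abs(residual)) < 0.01:
        break

print("recovered fluxes at the source pixels:", np.round(model[[20, 45]], 2))
```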
How to handle calibration uncertainties in high-energy astrophysics
Vinay L. Kashyap, Hyunsook Lee, Aneta Siemiginowska, et al.
Unlike statistical errors, whose importance has been well established in astronomical applications, uncertainties in instrument calibration are generally ignored. Despite wide recognition that uncertainties in calibration can cause large systematic errors, robust and principled methods to account for them have not been developed, and consequently there is no mechanism by which they can be incorporated into standard astronomical data analysis. Here we present a framework where they can be encoded such that they can be brought within the scope of analysis. We describe this framework, which is based on a modified MCMC algorithm, and propose a format standard derived from experience with effective area measurements of the ACIS-S detector on Chandra that can be applied to any instrument or method of codifying systematic errors. Calibration uncertainties can then be propagated into model parameter estimates to produce error bars that include systematic error information.
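The following is a minimal sketch of the kind of modified MCMC the abstract describes, under deliberately simple assumptions: a one-parameter Poisson model and a small ensemble of plausible effective-area values standing in for real calibration products. Drawing a calibration realization at each step lets the resulting error bar absorb the systematic scatter; none of the numbers or names refer to the actual Chandra/ACIS-S analysis.

```python
# Sketch: marginalizing calibration uncertainty inside MCMC (toy model).
import random
import math

random.seed(42)

area_ensemble = [0.95, 0.98, 1.00, 1.02, 1.05]   # plausible relative effective areas
exposure = 1000.0                                 # seconds
observed_counts = 980

def log_likelihood(flux, area):
    """Poisson log-likelihood (up to a constant) of the counts given flux and area."""
    mu = flux * area * exposure
    return observed_counts * math.log(mu) - mu

flux = 1.0                 # initial guess (counts/s at unit area)
samples = []
for step in range(20000):
    area = random.choice(area_ensemble)          # draw one calibration realization
    proposal = flux + random.gauss(0.0, 0.02)    # random-walk proposal
    if proposal > 0:
        log_r = log_likelihood(proposal, area) - log_likelihood(flux, area)
        if random.random() < math.exp(min(0.0, log_r)):
            flux = proposal
    samples.append(flux)

burn = samples[5000:]
mean = sum(burn) / len(burn)
sd = math.sqrt(sum((s - mean) ** 2 for s in burn) / len(burn))
print(f"flux = {mean:.3f} +/- {sd:.3f}  (error bar includes calibration scatter)")
```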
Scoring: a novel approach toward automated and reliable certification of pipeline products
Reinhard W. Hanuschik, Mark Neeser, Wolfgang Hummel, et al.
By 2010, the Paranal Observatory will host at least 15 instruments. The continuous increase in both the complexity and the number of detectors has required the implementation of novel methods for quality control of the resulting stream of data. We present the new and powerful concept of scoring, which is used both for the certification process and for the Health Check monitor. Scoring can reliably and automatically measure and assess the quality of arbitrarily large amounts of data.
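As an illustration of the idea, the sketch below scores each quality-control parameter of a pipeline product against configured limits and flags the product if any parameter falls outside them. The parameter names and thresholds are invented, not Paranal's actual QC configuration.

```python
# Sketch of threshold-based scoring for pipeline products (illustrative only).
SCORE_LIMITS = {
    "bias_level":    (195.0, 210.0),   # ADU
    "readout_noise": (2.0, 4.5),       # e-
    "dark_current":  (0.0, 0.12),      # e-/s
}

def score_parameter(name, value):
    """Return 0 if the value is inside its configured limits, else 1."""
    lo, hi = SCORE_LIMITS[name]
    return 0 if lo <= value <= hi else 1

def score_product(measurements):
    """Score every parameter of one product; the product score is the worst case."""
    scores = {name: score_parameter(name, value)
              for name, value in measurements.items()}
    return max(scores.values()), scores

product_score, detail = score_product(
    {"bias_level": 203.1, "readout_noise": 5.2, "dark_current": 0.03})
print("product score:", product_score)     # 1 -> needs human inspection
print("offending parameters:", [k for k, v in detail.items() if v])
```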
Detector monitoring as part of VLT science and data flow operations
Wolfgang Hummel, Lander de Bilbao, Andrea Modigliani, et al.
The ESO Paranal observatory operates a heterogeneous set of science detectors. The maintenance and quality control of science detectors is an important routine task to retain the technical and scientific performance of the instrumentation. In 2006 a detector monitoring working group was formed and charged with the following tasks: making an inventory of the existing detector calibration plans and monitored quality characteristics; completing and homogenizing the detector calibration plans; and designing and implementing cross-instrument templates, data reduction pipeline recipes, and monitoring tools. The instrument calibration plans include monthly and daily scheduled detector calibrations. The monthly calibrations measure linearity, contamination, and gain, including the inter-pixel capacitance correction factor. A reference recipe has been defined that is applicable to all operational VLT instruments and has been tested on archive calibration frames for optical, near- and mid-infrared science detectors. The daily calibrations measure BIAS or DARK level and read-out noise in different ways, which has until now prevented cross-detector comparison of performance values. The upgrade of the daily detector calibration plan therefore consists of homogenizing the measurement method in the existing pipeline recipes.
Solutions for quality control of multi-detector instruments and their application to CRIRES and VIMOS
Quality Control (QC) of calibration and science data is an integral part of the data flow process for the ESO Very Large Telescope (VLT) and has guaranteed continuous data quality since the start of operations. For each VLT instrument, dedicated checks of pipeline products have been developed and numerical QC parameters to monitor instrumental behavior have been defined. The advent of the survey telescopes VISTA and VST with multi-detector instruments imposes the challenge of transforming the established QC process from a detector-by-detector approach to operations that are able to handle high data rates and guarantee consistent data quality. In this paper, we present solutions for QC of multi-detector instruments and report on experience with these concepts for the operational instruments CRIRES and VIMOS. Since QC parameters scale with the number of detectors, we have introduced the concept of calculating averages (and standard deviations) of parameters across detectors. This approach is a powerful tool to evaluate trends that involve all detectors but is also able to detect outliers on single detectors. Furthermore, a scoring system has been developed which compares QC parameters for new products to those from already existing ones and gives an automated judgment about data quality. This is part of the general concept of information on demand: detailed investigations are only triggered on a selected number of products.
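A minimal sketch of the cross-detector averaging follows: a QC parameter measured on every detector of a mosaic is condensed into a mean and standard deviation, and detectors deviating by more than k sigma are flagged as outliers. The values are invented for illustration.

```python
# Sketch of cross-detector QC condensation with outlier flagging.
import statistics

def summarize_detectors(values, k=3.0):
    """Average a QC parameter across detectors and flag k-sigma outliers."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    outliers = [i for i, v in enumerate(values)
                if sd > 0 and abs(v - mean) > k * sd]
    return mean, sd, outliers

# e.g. read-out noise (e-) measured on a 16-detector mosaic
ron = [4.1, 4.2, 4.0, 4.3, 4.1, 4.2, 4.1, 4.0,
       4.2, 4.3, 4.1, 4.2, 9.8, 4.1, 4.0, 4.2]   # detector 12 misbehaves
mean, sd, outliers = summarize_detectors(ron)
print(f"mean = {mean:.2f}, sigma = {sd:.2f}, outlier detectors = {outliers}")
```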
Observatory Scheduling
SkyProbeBV: dual-color absolute sky transparency monitor to optimize science operations
Jean-Charles Cuillandre, Eugene Magnier, Dan Sabin, et al.
Mauna Kea (4200 m elevation, Hawaii) is known for its pristine seeing conditions, but sky transparency can be an issue for science operations: 25% of the nights are not photometric, with cloud coverage mostly due to high-altitude thin cirrus. The Canada-France-Hawaii Telescope (CFHT) is upgrading its real-time sky transparency monitor in the optical domain (V band) into a dual-color system by adding a B-band channel and redesigning the entire optical and mechanical assembly. Since 2000, the original single-channel SkyProbe has gathered one exposure every minute during each observing night, using a small CCD camera with a very wide field of view (35 sq. deg.) encompassing the region at which the telescope is pointed for science operations, and exposures long enough (30 seconds) to capture at least 100 stars of the Tycho catalog of the Hipparcos mission at high galactic latitudes (and up to 600 stars at low galactic latitudes). A key advantage of SkyProbe over direct thermal-infrared imaging detection of clouds is that it allows an accurate absolute measurement, within 5%, of the true atmospheric absorption by clouds affecting the data being gathered by the telescope's main science instrument. This system has proven crucial for decision making in the CFHT queued service observing (QSO), which today represents 95% of the telescope time: science exposures taken in non-photometric conditions are automatically registered to be re-observed later (at 1/10th of the original exposure time per pointing in the observed filters) to ensure a proper final absolute photometric calibration. If the absorption is too high, exposures can be repeated, or the observing can be switched to a lower-ranked science program. The new dual-color system (simultaneous B and V bands) will allow a better characterization of the sky properties above Mauna Kea and should enable better detection of the thinnest cirrus (absorption down to 0.02 mag, i.e. 2%). SkyProbe is operated within the Elixir pipeline, a collection of tools used for handling the CFHT CCD mosaics (CFH12K and MegaCam), from data pre-processing to astrometric and photometric calibration.
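The core of the measurement can be sketched compactly: the median offset between catalog and instrumental magnitudes of the stars in each exposure yields a zero point, and its difference from the photometric-night zero point is the absolute cloud absorption. The numbers below are invented, not SkyProbe's actual calibration.

```python
# Sketch of a SkyProbe-style absolute absorption estimate (toy numbers).
import statistics

PHOTOMETRIC_ZEROPOINT = 14.20   # mag, established on clear nights

def cloud_absorption(catalog_mags, instrumental_mags):
    """Photometric zero point minus the median zero point of this exposure."""
    zeropoints = [cat - inst for cat, inst in zip(catalog_mags, instrumental_mags)]
    return PHOTOMETRIC_ZEROPOINT - statistics.median(zeropoints)

# simulated 30 s exposure: thin cirrus absorbing ~0.15 mag
catalog = [8.2, 9.1, 9.8, 10.3, 10.9]
instrumental = [-5.85, -4.95, -4.25, -3.75, -3.15]
print(f"absorption = {cloud_absorption(catalog, instrumental):.2f} mag")
```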
Gemini queue planning
Bryan W. Miller, Robert Norris
The Gemini telescopes were designed to be queue scheduled and currently more than 90% of the telescope time is devoted to queue observing. In queue mode observations are done in the conditions that are appropriate for them and it is easier to accommodate programs that require flexible scheduling such as Target of Opportunity observations of gamma ray bursts. Queue observing is most efficient when the number of available options is maximized. A small number of programs usually cannot fill all combinations of RA/Dec and observing conditions constraints. One way to maximize the available options is to allow the use of more than one instrument on a given night. The Gemini telescopes were also designed with this in mind; two or three instruments are usually active on any given night. Large numbers of programs and multiple instruments complicate the processes of planning, managing, and executing the queue. Therefore, Gemini is developing software tools to aid the queue planning. This presentation will outline the Gemini queue planning process and give an overview of the Gemini queue planning tool and the plans for its near-term development.
Chandra mission scheduling on-orbit experience
Sabina Bucher, Brent Williams, Misty Pendexter, et al.
Scheduling observatory time to maximize both day-to-day science target integration time and the lifetime of the observatory is a formidable challenge. Furthermore, it is not a static problem. Of course, every schedule brings a new set of observations, but the boundaries of the problem change as well. As a spacecraft ages, its capabilities may degrade. As in-flight experience grows, capabilities may expand. As observing programs are completed, the needs and expectations of the science community may evolve. Changes such as these impact the rules by which a mission is scheduled. In eight years on orbit, the Chandra X-Ray Observatory Mission Planning process has adapted to meet the challenge of maximizing day-to-day and mission-lifetime science return, despite a consistently evolving set of scheduling constraints. The success of the planning team has been achieved not through the use of complex algorithms and optimization routines, but through processes and home-grown tools that help individuals make smart short-term and long-term Mission Planning decisions. This paper walks through the processes and tools used to plan and produce mission schedules for the Chandra X-Ray Observatory. Nominal planning and scheduling, target-of-opportunity response, and recovery from on-board autonomous safing actions are all addressed. Evolution of tools and processes, best practices, and lessons learned are highlighted along the way.
User Support
Operating a wide-area remote observing system for the W. M. Keck Observatory
For over a decade, the W. M. Keck Observatory's two 10-meter telescopes have been operated remotely from its Waimea headquarters. Over the last 6 years, WMKO remote observing has expanded to allow teams at dedicated sites in California to observe either in collaboration with colleagues in Waimea or entirely from the U.S. mainland. Once an experimental effort, the Observatory's mainland observing capability is now fully operational, supported on all science instruments (except the interferometer) and regularly used by astronomers at eight mainland sites. Establishing a convenient and secure observing capability from those sites required careful planning to ensure that they are properly equipped and configured. It also entailed a significant investment in hardware and software, including both custom scripts to simplify launching the instrument interface at remote sites and automated routers employing ISDN backup lines to ensure continuation of observing during Internet outages. Observers often wait until shortly before their runs to request use of the mainland facilities. Scheduling these requests and ensuring proper system operation prior to observing requires close coordination between personnel at WMKO and the mainland sites. An established protocol for approving requests and carrying out pre-run checkout has proven useful in ensuring success. The Observatory anticipates enhancing and expanding its remote observing system. Future plans include deploying dedicated summit computers for running VNC server software, implementing a web-based tracking system for mainland-based observing requests, expanding the system to additional mainland sites, and converting to full-time VNC operation for all instruments.
Users' feedback: What is "good enough"?
Francesca Primas, Stéphane Marteau, Ferdinando Patat
Users' feedback is a vital component of the success of any service organization, but response rates are usually not very comforting and receiving feedback on a regular basis is a rather challenging task. This article presents the main findings of the Feedback Campaign we launched in early 2007 and attempts to analyse its significance. The Campaign targeted all Principal Investigators of ESO Service Mode programmes approved over the period 2001 - 2006. Possible future evolutions of this type of campaigns are briefly discussed, based on the experience we have gained.
ESO's User Portal: lessons learned
A. M. Chavan, L. E. Tacconi-Garman, M. Peron, et al.
ESO introduced a User Portal for its scientific services in November 2007. Registered users have a central entry point for the Observatory's offerings, the extent of which depends on the users' roles - see [1]. The project faced and overcame a number of challenging hurdles between inception and deployment, and ESO learned a number of useful lessons along the way. The most significant challenges were not only technical in nature; organization and coordination issues took a significant toll as well. We also indicate the project's roadmap for the future.
European ALMA operations: the interaction with and support to the users
The Atacama Large Millimetre/submillimetre Array (ALMA) is one of the largest and most complicated observatories ever built. Constructing and operating an observatory at high altitude (5000 m) in a cost-effective and safe manner, with minimal effect on the environment, creates interesting challenges. Since the array will have to adapt quickly to prevailing weather conditions, ALMA will be operated exclusively in service mode. By the time of full science operations, the fundamental ALMA data products will be calibrated, deconvolved data cubes and images, but raw data and data reduction software will be made available to users as well. User support is provided by the ALMA Regional Centres (ARCs) located in Europe, North America and Japan. These ARCs constitute the interface between the user community and the ALMA observatory in Chile. For European users, the European ARC is being set up as a cluster of nodes located throughout Europe, with the main centre at the ESO Headquarters in Garching. The main centre serves as the access portal, and in synergy with the distributed network of ARC nodes, the main aim of the ARC is to optimize the ALMA science output and to fully exploit this unique and powerful facility. The aim of this article is to introduce the process of proposing for observing time, the subsequent execution of the observations, and the obtaining and processing of the data in the ALMA epoch. The complete end-to-end process of the ALMA data flow, from proposal submission to data delivery, is described.
Operational Process
End-to-end operations at the National Radio Astronomy Observatory
In 2006 NRAO launched a formal organization, the Office of End to End Operations (OEO), to broaden access to its instruments (VLA/EVLA, VLBA, GBT and ALMA) in the most cost-effective ways possible. The VLA, VLBA and GBT are mature instruments, and the EVLA and ALMA are currently under construction, which presents unique challenges for integrating software across the Observatory. This article 1) provides a survey of the new developments over the past year, and those planned for the next year, 2) describes the business model used to deliver many of these services, and 3) discusses the management models being applied to ensure continuous innovation in operations, while preserving the flexibility and autonomy of telescope software development groups.
Subaru Telescope Network III (STN-III): more effective, more operation-oriented, and more inexpensive solutions for the observatory's needs
Junichi Noumaru, Jun A. Kawai, Kiaina Schubert, et al.
Subaru Telescope has recently replaced most of the equipment of Subaru Telescope Network II with new equipment that includes a 124 TB RAID system for the data archive. Switching the data storage from tape to RAID enables users to access the data faster. STN-III dropped some important components of STN-II, such as the supercomputers, the development and testing subsystem for the Subaru Observation Control System, and the data processing subsystem. On the other hand, we devoted more computers to the remote operation system. Thanks to IT innovations, our LAN, as well as the network between Hilo and the summit, was upgraded to gigabit speeds at a similar or even reduced cost compared with the previous system. As a result of redesigning the computer system with a sharper focus on observatory operations, we greatly reduced the total cost of computer rental, purchase, and maintenance.
Future perfect: optimal observing and data reduction strategies
This paper describes a new workflow for defining the observation process from initial application through to data reduction. It aims to optimize the time on target for each observation, enabling an observatory to perform more science. It also aims to improve the quality of decision making by ensuring an expert determines each parameter. We also describe a new science product, the instrument function, which is normally left out of most data packages.
Magdalena Ridge Observatory: the start-up of a new observatory
This paper discusses the challenges faced in designing and building a new astronomical observatory. Which factors drive an organization (e.g. a university) to invest considerable funding and human resources, and to accept considerable risk, to establish a new research facility? We identify four main drivers for establishing a new observatory: support for education, research, economic development, and technology development. For public observatories, research is generally the main driver. For non-public observatories, the situation is more complex and differs from case to case. A detailed description is presented of the drivers and opportunities that resulted in establishing the Magdalena Ridge Observatory. Three main opportunities are identified: a developed site, surplus equipment, and economic development of the Socorro area.
Project tracking at the Submillimeter Array: from proposals to publication
Charles Katz, Glen Petitpas, Mark Gurwell, et al.
We present a new suite of web-based software tools developed at the Submillimeter Array which allow the tracking of projects from the proposal stage all the way to successful completion of the observations. The web-based nature of these tools allows easy world-wide coordination and collaboration through all aspects of a science project, from proposal writing, time allocation, observing script preparation, and scheduling to the observations themselves. These tools are based on a project-system data flow which was developed after extensive discussion with proposing scientists, time allocation committee members, support astronomers, and engineers responsible for data quality. This system allows every stage of a project to be tracked, with proposals, time allocation comments, observing scripts, observation schedules, observing logs, data files, data quality reports, etc., all organized in a simple and convenient structure. In addition to making the data more readily accessible to the scientists, this system allows very accurate tracking of other telescope operational parameters, such as efficiency, shareholders' time fractions, and instrument performance, to name just a few. We will present the underlying design for the project-system data flow, and show the software used to ensure each project is tracked completely during its path from proposal to completed science observation.
Operational Statistics
Duty cycle metrics system for the W. M. Keck Observatory
We describe the system to monitor and analyze the duty cycle of observing nights at the W. M. Keck Observatory. The system is almost completely automated, and relies predominantly on existing data. Lists of discrete "events" during the night are compiled (e.g. the start of a science exposure), and the sequence of events is interpreted as an "activity" (e.g. collecting science photons). The metrics system has proven extremely valuable, allowing scientists and engineers to identify the largest causes of inefficiency, and to quantify their impacts. This has led directly to prioritization decisions in upgrades and repairs at the Observatory.
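A minimal sketch of the event-to-activity interpretation, assuming hypothetical event names and a toy night of timestamps: consecutive events are paired into named activities and summed into a duty-cycle breakdown.

```python
# Sketch: pair timestamped events into activities and total the durations.
from datetime import datetime

events = [  # (UT time, event) -- a toy night
    ("2008-03-01 06:00", "slew_start"),
    ("2008-03-01 06:04", "slew_end"),
    ("2008-03-01 06:05", "exposure_start"),
    ("2008-03-01 06:35", "exposure_end"),
    ("2008-03-01 06:36", "slew_start"),
    ("2008-03-01 06:41", "slew_end"),
]

ACTIVITY = {("slew_start", "slew_end"): "slewing",
            ("exposure_start", "exposure_end"): "collecting science photons"}

def stamp(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M")

totals = {}
for (t0, e0), (t1, e1) in zip(events, events[1:]):
    name = ACTIVITY.get((e0, e1))
    if name:  # a recognized start/end pair forms one activity
        minutes = (stamp(t1) - stamp(t0)).total_seconds() / 60
        totals[name] = totals.get(name, 0) + minutes

for name, minutes in totals.items():
    print(f"{name:30s} {minutes:5.1f} min")
```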
The NASA/IPAC Infrared Science Archive (IRSA) as a resource in supporting observatory operations
IRSA's scalable and extensible architecture is inherited by new missions and data providers, and thus offers substantial cost savings to missions. It has built archives for the W. M. Keck Observatory and the Spitzer Space Telescope Legacy teams, among others. It provided archiving and database support for 2MASS while that survey was active, and will provide corresponding support for the forthcoming WISE mission. IRSA acts as a resource to projects and missions by advising on product design and providing tools for validating data products.
Maximizing scientific return for the Hubble Space Telescope in a post-SM4 world
David S. Adler, William M. Workman III
In 2008, the final servicing mission for the Hubble Space Telescope will take place. Replacement of the gyroscopes and batteries, as well as the addition of two new science instruments, will keep Hubble productive well into the next decade. In addition to the hardware upgrades, improvements to the planning and scheduling process will allow for increased observing efficiency and maximization of scientific return over Hubble's remaining lifetime.
Scientific productivity and impact of large telescopes
The primary scientific output from an astronomical telescope is the collection of papers published in refereed journals. A telescope's productivity is measured by the number of papers published which are based upon data taken with the telescope. The scientific impact of a paper can be measured quantitatively by the number of citations that the paper receives. In this paper I will examine the productivity and impact of the CFHT, Gemini, Keck, Magellan, Subaru, UKIRT and VLT telescopes using paper and citation counts.
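As a plain illustration of the two measures, the sketch below computes productivity (paper count) and mean citation impact from a hypothetical paper/citation table; the telescope names and numbers are placeholders, not the results of this study.

```python
# Sketch: productivity and impact metrics from paper/citation counts.
papers = {          # telescope -> citation count of each attributed paper
    "Telescope A": [12, 45, 3, 88, 20],
    "Telescope B": [5, 7, 150, 2],
}
for scope, cites in papers.items():
    productivity = len(cites)                 # papers published
    impact = sum(cites) / len(cites)          # mean citations per paper
    print(f"{scope}: {productivity} papers, {impact:.1f} citations/paper")
```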
Posters: Data Management and Quality Control
ESO scalable architecture for operational databases
The European Organisation for Astronomical Research in the Southern Hemisphere (ESO), headquartered in Garching, Germany, operates several state-of-the-art observing sites in Chile. To manage observatory operations and observation transfer, ESO developed an end-to-end Data Flow System, from Phase I proposal preparation to the final archiving of quality-controlled science, calibration and engineering data. All information pertinent to the data flow is stored in the central databases at ESO headquarters and replicated to and from the observatory database servers. In ESO's data flow model one can distinguish two groups of databases: the front-end databases, which are replicated from ESO headquarters to the observing sites, and the back-end databases, for which replication is directed from the observatories to headquarters. Part of the front-end database contains the Observation Blocks (OBs), which are sequences of operations necessary to perform an observation, such as instrument setting, target, filter and/or grism ID, exposure time, etc. Observatory operations rely on fast access to the OB database and quick recovery strategies in case of a database outage. After several years of operations, those databases have grown considerably, and it became necessary to review the database architecture to find a solution that supports scalability of the operational databases. We present the newly developed concept of distributing the OBs between two databases containing operational and historical information, and the architectural design in which OBs in the operational databases are archived periodically at ESO headquarters. This will remedy the scalability problems and keep the size of the operational databases small. The historical databases will exist only at headquarters, for archiving purposes.
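A minimal sketch of the operational/historical split follows, using SQLite purely as a stand-in database engine (the paper does not prescribe one to the reader): terminated OBs are periodically copied into a history table and purged from the small operational table. Table names and status values are invented for illustration.

```python
# Sketch: archive finished Observation Blocks out of the operational store.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE ob_operational (ob_id INTEGER PRIMARY KEY,
                                 status TEXT, payload TEXT);
    CREATE TABLE ob_historical  (ob_id INTEGER PRIMARY KEY,
                                 status TEXT, payload TEXT);
    INSERT INTO ob_operational VALUES
        (1, 'Completed', '...'), (2, 'Queued', '...'), (3, 'Aborted', '...');
""")

def archive_finished_obs(conn):
    """Copy terminated OBs into the historical table, then purge them."""
    with conn:
        conn.execute("""INSERT INTO ob_historical
                        SELECT * FROM ob_operational
                        WHERE status IN ('Completed', 'Aborted')""")
        conn.execute("""DELETE FROM ob_operational
                        WHERE status IN ('Completed', 'Aborted')""")

archive_finished_obs(db)
print(db.execute("SELECT COUNT(*) FROM ob_operational").fetchone()[0], "OBs stay operational")
print(db.execute("SELECT COUNT(*) FROM ob_historical").fetchone()[0], "OBs archived")
```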
Advanced load-testing techniques for a science archive
Performance goals for data archive systems need to be established early in the design process to ensure stability and acceptable response throughput. Load testing is one technique used to measure the progress towards these performance goals. Providing resources for load-test planning is critical, and this planning must include feasibility studies, tool analyses, and generation of an overall load-test strategy. This strategy is much different for science data archives than other systems, including commercial websites and high-volume data centers. This paper will provide an overview of the load testing performed on the Spitzer Space Telescope's science archive, which is part of Science Operations System at the Spitzer Science Center (SSC). Methods used for planning and conducting SSC load tests will be presented, and advanced load-testing techniques will be provided to address runtime issues and enhance verification results. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
James Webb Space Telescope: L2 communications for science data processing
Alan Johns, Bonita Seaton, Jonathan Gal-Edd, et al.
The James Webb Space Telescope (JWST) is the first NASA mission at the second Lagrange point (L2) to identify the need for data rates higher than 10 megabits per second (Mbps). The JWST will produce approximately 235 gigabits (Gb) of science data every day. In order to downlink this data to the Deep Space Network (DSN) at a sufficiently high data rate, a Ka-band 26 gigahertz (GHz) frequency (as opposed to an X-band frequency) will be utilized. To support JWST's utilization of the Ka-band, the DSN is upgrading its infrastructure. The range of frequencies in the Ka-band is becoming the new standard for high-data-rate science missions at L2. Given the Ka-band frequency range, the issues of alternative antenna deployment, off-nominal scenarios, NASA implementation of the Ka-band at 26 GHz, and navigation requirements will be discussed in this paper. The JWST is also using the Consultative Committee for Space Data Systems (CCSDS) standard process for reliable file transfer using the CCSDS File Delivery Protocol (CFDP). For the JWST mission, the use of the CFDP protocol enables level-zero processing at the DSN site. This paper will address NASA implementation of ground stations in support of Ka-band 26 GHz and lessons learned from implementing a file-based protocol (CFDP).
Design and realization of survey strategy system
Hai-Long Yuan, Jian Ren, Jian Wang, et al.
Its 4 m aperture, 5-degree field of view, and 4000 fibers make LAMOST one of the most important optical spectroscopic telescopes in the world. Over several years it will survey about 10,000,000 stars across 20,000 square degrees of the northern celestial sphere. In order to fully exploit the large number of target fibers of LAMOST, carry out the survey observations efficiently, and economize valuable observing time, it is essential to generate a series of observation plans with as high a fiber utilization ratio as possible, which is exactly the scientific goal of the Survey Strategy System (SSS) of LAMOST. The various static and dynamic constraints affecting a survey observation are analyzed and modeled. To find the tile with the largest target density, the Mean-Shift algorithm is adopted, effectively improving the fiber utilization ratio. As the LAMOST project progresses, new constraints and algorithms will be incorporated.
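The mean-shift idea can be sketched briefly: starting from a trial pointing, the tile center is moved repeatedly to the centroid of the targets it covers, converging on a local density maximum and hence a pointing with high fiber utilization. The sketch below assumes a flat (small-angle) sky, a circular tile, and invented target positions; it is not the SSS implementation.

```python
# Sketch: flat-kernel mean shift to place a tile on a target density peak.
import math
import random

random.seed(1)
targets = [(random.gauss(10.0, 0.8), random.gauss(30.0, 0.8)) for _ in range(2000)]
TILE_RADIUS = 2.5   # degrees (half of LAMOST's 5-degree field of view)

def mean_shift(center, points, radius, tol=1e-4):
    """Move the tile centre to the mean of the enclosed targets until stable."""
    while True:
        inside = [(x, y) for x, y in points
                  if math.hypot(x - center[0], y - center[1]) <= radius]
        if not inside:
            return center
        new = (sum(x for x, _ in inside) / len(inside),
               sum(y for _, y in inside) / len(inside))
        if math.hypot(new[0] - center[0], new[1] - center[1]) < tol:
            return new
        center = new

best = mean_shift((8.0, 28.0), targets, TILE_RADIUS)
covered = sum(1 for x, y in targets
              if math.hypot(x - best[0], y - best[1]) <= TILE_RADIUS)
print(f"tile centre {best[0]:.2f}, {best[1]:.2f} covers {covered} targets")
```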
Hyper Suprime-Cam: data analysis and management system
Hisanori Furusawa, Manobu Tanaka, Yoshiji Yasu, et al.
We report our activity on the development of a data analysis system dedicated to the Hyper Suprime-Cam (HSC), a future wide-field camera at the Subaru Telescope. The data analysis system (HSC-ANA) is intended to achieve the following: (1) automated processing of an unprecedentedly huge number of data frames, without frequent human interaction, to reach the required depth and area of the key survey projects; and (2) immediate release of best-effort object catalogs, together with calibration information, to user communities to maximize scientific output. The system also enables general users to use archive data efficiently by providing appropriate metadata describing data quality. We start by constructing a prototype data analysis system with the minimal functions needed to process data from the current prime-focus camera (Suprime-Cam). The prototype system is based on a combination of newly developed and existing software packages for imaging data and framework middleware that communicates with databases. This system is planned to help observers perform their observations with Suprime-Cam. Once the prototype system has been evaluated, it will be scaled up to the full HSC-ANA system.
Building up a database of spectro-photometric standard stars from the UV to the near-IR: a status report
J. Vernet, F. Kerber, F. Saitta, et al.
We present a project aimed at establishing a set of 12 spectro-photometric standards over a wide wavelength range from 320 to 2500 nm. Currently no such set of standard stars covering the near-IR is available. Our strategy is to extend the useful range of existing well-established optical flux standards into the near-IR by means of integral field spectroscopy with SINFONI at the VLT, combined with state-of-the-art white dwarf stellar atmospheric models. As a solid reference, we use two primary HST standard white dwarfs. This ESO "Observatory Programme" has been collecting data since February 2007. The analysis of the data obtained in the first year of the project shows that a careful selection of the atmospheric windows used to measure fluxes, together with the stability of SINFONI, makes it possible to achieve an accuracy of 3-6% depending on the wavelength band and stellar magnitude, well within our original goal of 10% accuracy. While this project was originally tailored to the needs of the wide wavelength range (320-2500 nm) of X-shooter on the VLT, it will also benefit any other near-IR spectrograph, providing a huge improvement over existing flux calibration methods.
New measures in controlling quality of VLT VISIR
Danuta Dobrzycka, Leonardo Vanzi, Lars Lundin, et al.
ESO's VISIR instrument at Paranal is dedicated to observations in two mid-infrared (MIR) atmospheric windows: the N band (8-13 micron) and the Q band (16.5-24.5 micron). It is equipped with two DRS (formerly Boeing) 256 × 256 BIB detectors operating at temperatures of about 5 K. As in the case of other Paranal instruments, VISIR data are regularly transferred to ESO Garching within the standard data flow operation. There, they are classified and pipeline-processed. Products of VISIR technical data are analyzed to trend instrument performance, while calibrations and science data are checked for quality and later distributed to the users. Over three years of VISIR operations we have steadily gained experience with methods of assessing the health of the instrument. In particular, we found that dark frames are especially useful for monitoring the VISIR detectors. We also discuss the performance of the "OCLI" silicate filters recently mounted in the instrument.
Reduction of polarimetric data using Mueller calculus applied to Nasmyth instruments
Franco Joos, Esther Buenzli, Hans Martin Schmid, et al.
We present a method based on Mueller calculus to calibrate linear polarimetric observations. The key advantages of the proposed calibration approach are: (1) it can be implemented in a data reduction pipeline; (2) it makes accurate polarimetry possible even for telescopes/instruments with polarimetrically unfriendly architecture (e.g. Nasmyth instruments); and (3) it is much less time-consuming than standard calibration procedures. The telescope/instrument is described polarimetrically by a train of Mueller matrices, whose components depend on wavelength, the incident angle of the incoming light, and surface properties. The result is that an observer gets polarimetrically calibrated data from a reduction pipeline: the data are corrected for the telescope/instrumental polarisation offset, and the position angle of polarisation is rotated into sky coordinates. Up to now these two calibration steps were mostly performed with the help of dedicated and time-consuming night-time calibration measurements of polarisation standard stars.
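A minimal sketch of the approach follows, with invented mirror parameters rather than a calibration of any real Nasmyth train: the instrument is modeled as a product of Mueller matrices, and the measured Stokes vector is mapped back to the sky with the inverse matrix.

```python
# Sketch: Mueller-calculus correction of a measured Stokes vector (toy model).
import numpy as np

def rotator(theta):
    """Mueller matrix for a rotation of the coordinate frame by theta (rad)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def mirror(diattenuation=0.02, retardance=np.deg2rad(175.0)):
    """Simplified Mueller matrix of an inclined metallic mirror (illustrative)."""
    d, cr, sr = diattenuation, np.cos(retardance), np.sin(retardance)
    t = np.sqrt(1 - d * d)
    return np.array([[1, d, 0, 0],
                     [d, 1, 0, 0],
                     [0, 0, t * cr, t * sr],
                     [0, 0, -t * sr, t * cr]])

# Toy Nasmyth train: tertiary mirror followed by an altitude-dependent rotation
altitude = np.deg2rad(55.0)
M_train = rotator(altitude) @ mirror()

s_sky = np.array([1.0, 0.01, 0.003, 0.0])     # "true" Stokes vector (I, Q, U, V)
s_measured = M_train @ s_sky                  # what the instrument records
s_recovered = np.linalg.solve(M_train, s_measured)
print("recovered Q,U:", np.round(s_recovered[1:3], 5))  # matches the input
```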
Changing software philosophy for ACIS operations as Chandra ages
The Chandra X-Ray Observatory is about to start its 10th year of operations. Over the time of the mission, the Science Operations Team, ACIS (Advanced CCD Imaging Spectrometer) Operations group, has participated in spacecraft command load reviews. These reviews ensure the spacecraft commanding is safe for the instrument and the ACIS configuration matches the planned observation. The effectiveness of spacecraft command load reviews for ACIS depends on the ability to adapt the software as operations change in response to the aging of the spacecraft. We have recently rewritten this software to start incorporating other spacecraft subsystems, including maneuvers and hardware commanding, to ensure the safety of ACIS. In addition, operational changes that optimize the science return of the spacecraft have created new constraints on commanding. This paper discusses the reorganization of the code and the multiple changes to the philosophy of the code. The result is stronger, more flexible software that will continue to assist us in protecting ACIS throughout the Chandra mission.
Data taking in Virtual Control Room: the SNfactory example
P. Antilogus, R. C. Thomas, G. Aldering, et al.
A Virtual Control Room allows a team of people in various locations to contribute fully to an instrument's data acquisition: less support is required on site, yet, thanks to the large support available off site, the data-taking quality can be even better than under the usual on-site support scheme. The acquisition for the SNfactory spectro-photometric follow-up is based on such a data-taking model. This acquisition and its performance are presented here.
Applications of the ESO metadata database
Myha Vuong, Alexis Brion, Adam Dobrzycki, et al.
We have designed a metadata database, using a Sybase IQ server, containing all information stored in almost 10 million FITS file headers. This repository includes metadata from raw observation frames and from the science and calibration pipeline products produced by the ESO Quality Control group. We present a few illustrative applications using data stored in this database. One application that is particularly attractive to the astronomical community is the ability to access FITS headers, with up-to-date information coming directly from the database, through the ESO Archive interface. The keyword repository can also feed local tables and/or views for specific uses, such as instrument-specific tables containing the parameters used in archive queries. Finally, the ESO observation keyword repository supplies Virtual Observatory applications with the metadata needed by visualisation tools such as VirGO or Aladin.
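The core idea, one row per header keyword that can be pivoted into instrument-specific tables or views, is easy to demonstrate. The sketch below uses SQLite in place of Sybase IQ, with an invented three-column schema and toy values; it is illustrative only and does not reflect ESO's actual schema.

    import sqlite3

    # One row per (file, keyword) pair -- a hypothetical stand-in schema.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE header_keywords "
                "(file_id TEXT, keyword TEXT, value TEXT)")
    con.executemany(
        "INSERT INTO header_keywords VALUES (?, ?, ?)",
        [("VISIR.2008-01-01T01:00:00", "INSTRUME", "VISIR"),
         ("VISIR.2008-01-01T01:00:00", "EXPTIME", "0.02"),
         ("ISAAC.2008-01-02T02:00:00", "INSTRUME", "ISAAC")])

    # An instrument-specific view: pivot keywords into per-file columns.
    rows = con.execute("""
        SELECT file_id,
               MAX(CASE WHEN keyword = 'EXPTIME' THEN value END) AS exptime
        FROM header_keywords
        WHERE file_id IN (SELECT file_id FROM header_keywords
                          WHERE keyword = 'INSTRUME' AND value = 'VISIR')
        GROUP BY file_id""").fetchall()
    print(rows)   # [('VISIR.2008-01-01T01:00:00', '0.02')]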
The WISE in-orbit calibration
Beth Fabinsky, Ingolf Heinrichsen, Amy Mainzer, et al.
The Wide-field Infrared Survey Explorer mission will be executed by an Earth-orbiting spacecraft carrying an infrared telescope cooled by a solid-hydrogen cryostat. The purpose of the mission is to conduct an all-sky survey at infrared wavelengths of 3.3, 4.7, 12, and 23 microns. The 7-month period of on-orbit operations includes one month of in-orbit checkout (IOC) and six months of all-sky survey scans from a dawn/dusk sun-synchronous orbit. The 30-day IOC is divided into two parts by the ejection of the telescope aperture cover some two weeks after launch. The first half of the IOC phase is primarily allocated to bus characterization; the latter half will be dedicated to cover-off instrument calibrations. In this discussion, we describe the instrument calibrations to be conducted during IOC and how these plans will be carried out efficiently during the limited checkout period. The on-orbit instrument checkout is an extension of the overall WISE calibration plan. The duration of onboard calibration activities is limited by the lifetime of the cryogen and the need to begin the survey quickly; key activities were selected because they must be done and can only be done in flight.
Planning and developing the Chandra Source Catalog
Ian N. Evans, Janet D. Evans, Giuseppina Fabbiano, et al.
The Chandra Source Catalog, presently being developed by the Chandra X-ray Center, will be the definitive catalog of all X-ray sources detected by the Chandra X-ray Observatory. The catalog interface will provide users with a simple mechanism to perform advanced queries on the data content of the archival holdings on a source-by-source basis for X-ray sources matching user-specified search criteria, and is intended to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. For each detected X-ray source, the catalog will record commonly tabulated quantities that can be queried, including source position, dimensions, multi-band fluxes, hardness ratios, and variability statistics, derived from all of the observations that include the source within the field of view. However, in addition to these traditional catalog elements, for each X-ray source the catalog will include an extensive set of file-based data products that can be manipulated interactively by the catalog user, including source images, event lists, light curves, and spectra from each observation in which a source is detected. In this paper, we emphasize the design and development of the Chandra Source Catalog. We describe the evaluation process used to plan the data content of the catalog, and the selection of the tabular properties and file-based data products to be provided to the user. We discuss our approach for managing catalog updates derived from either additional data from new observations or from improvements to calibrations and/or analysis algorithms.
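Queries of the kind described, filtering master sources on their tabulated properties, map naturally onto SQL. The toy sketch below uses SQLite with invented column names and values; the actual Chandra Source Catalog schema and query interface will differ.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE master_source (name TEXT, ra REAL, dec REAL, "
                "flux_b REAL, hard_hs REAL, var_flag INT)")
    con.executemany("INSERT INTO master_source VALUES (?, ?, ?, ?, ?, ?)", [
        ("CXO J1", 10.1, -5.2, 3.1e-14, 0.4, 1),    # invented sources
        ("CXO J2", 10.3, -5.1, 8.0e-15, -0.2, 0),
    ])

    # Example: variable, spectrally hard sources above a broad-band flux limit.
    rows = con.execute("SELECT name, ra, dec FROM master_source "
                       "WHERE var_flag = 1 AND hard_hs > 0 "
                       "AND flux_b > 1e-14").fetchall()
    print(rows)   # [('CXO J1', 10.1, -5.2)]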
Posters: General Operations
Evaluating requirements on the Spitzer Mission operations system based on flight operations experience
The Spitzer Space Telescope launched in August 2003 and has been in its nominal operations phase since December 2003. This paper reviews some of the pre-launch, high-level project requirements in light of our operations experience. We discuss how we addressed some of those requirements before launch, what post-launch development we have done based on our experience, and some recommendations for future missions. The requirements we examine relate to observing efficiency, completeness of data return, on-board storage of science data, response time for targets of opportunity, and data accountability. We also discuss the bearing that mission constraints have had on our solutions. These constraints include Spitzer's heliocentric orbit and the resulting decline in telecom performance, CPU utilization, a relatively high data rate for a deep-space mission, and the use of both on-board RF power amplifiers, among others.
The Gemini-South MCAO operational model: insights on a new era of telescope operation
The Gemini Observatory is implementing a Multi-Conjugate Adaptive Optics (MCAO) system as a facility instrument for the Gemini South telescope (GeMS). The system will include five laser guide stars, three natural guide stars, and three deformable mirrors optically conjugated at different altitudes to achieve near-uniform atmospheric compensation over a one-arcminute-square field of view. This setup implies a significant level of operational complexity. In this paper we describe how GeMS will be integrated into the flow of Gemini operations, from the observing procedures necessary to execute the programs in the queue (telescope control software, observing tools, sequence executor) to the safety systems needed, such as spotters/ASCAM, space command clearance, and laser traffic control software.
Service observing management at the APEX telescope
The execution of scientific observations in service observing mode requires an efficient transfer of information about project setup and observing procedures from the PI to the actual observer. At the APEX telescope, we have implemented an efficient, web-based system to manage the service observing of astronomical projects. This system includes the submission of relevant project information through a web form, the monitoring of observing progress through collaboration tools, and data handling and archiving. In this paper I give an overview of how service observing is managed and performed at APEX. I explain the implementation of the project submission facility, the information flow from submission to observation, and the various components involved. I conclude by highlighting the advantages of this system.
Operating the GONG worldwide network
At the instigation of the international scientific community, the US National Solar Observatory (NSO) began in 1984 to develop the six-site, semi-autonomously operating helioseismology network of the Global Oscillation Network Group (GONG); the network was officially established in 1995 when the sixth and last station was completed. Funded by the US National Science Foundation and enjoying in-kind support from numerous institutions, the project has become a notable international collaboration. The network provides essentially continuous, extremely sensitive observations of the velocity, intensity, and magnetic field of the Sun's surface every minute. Quick-look data are available in near-real-time for science and for diagnostics (http://gong.nso.edu), and the full data set is shipped to project headquarters weekly, where the processed data and science-grade analyses are made available to the international community. As originally proposed, GONG was to have a three-year observing run. Over a number of years of operation, however, both GONG and its space-borne sister, the ESA/NASA SOHO MDI instrument, clearly demonstrated the reality of internal solar-cycle structural changes; in addition, local helioseismology programs were successfully developed. In 2003, NSO decided to add GONG to its flagship facilities and extended the duration of the observing run indefinitely.
Improving the Wendelstein Observatory for a 2m-class telescope
Ulrich Hopp, Ralf Bender, Claus Goessl, et al.
The Ludwig-Maximilians-Universität München operates an observatory on the summit of Mt. Wendelstein in the Bavarian Alps, which will be equipped with a modern 2 m-class robotic telescope. We have performed extensive site evaluations and started various monitoring programs covering transmission, extinction, and seeing; the implementation and results of this monitoring are reported. We further present our strategy for preparing the observatory for this major upgrade, including hardware installations (besides the telescope), network and software infrastructure upgrades, and improvements in observatory operations. We aim for maximally efficient observations in a "low-person-power" situation at a site that allows only partially robotic operations. The basic telescope design and the strategy for its first generation of instruments are briefly discussed.
A fast link with Paranal: new operational opportunities
This paper describes how ESO intends to fully exploit the new opportunities that the high-bandwidth communication link delivered by the EVALSO project will make available to the ESO Paranal Observatory. EVALSO, a project funded by the European Union's Framework Programme 7, stands for 'Enabling Virtual Access to Latin-American Southern Observatories' (more at www.evalso.eu). Its goal is to enable fast access to two European optical astronomical facilities in the Atacama Desert in northern Chile: the world-class ESO Paranal Observatory and the facility run by the Ruhr-Universität Bochum at neighbouring Cerro Armazones. EVALSO plans to provide the still-missing physical infrastructure needed to efficiently connect these facilities to Europe via the international infrastructures created in recent years with European Commission support (ALICE, the trans-Atlantic link, GEANT2). ESO, as a member of the EVALSO Consortium, is involved in the implementation of the link and, together with the other members, has already begun analysing the operational opportunities this new capability will give the European astronomical community, not only in terms of faster access to the collected data, but also in opening the door to new and more efficient ways of operating remote facilities.
Posters: Observatory Scheduling
Rapid replacement of Spitzer Space Telescope sequences: targets of opportunity and anomalies
Steven Tyler, JoAnn O'Linger, Susan Comeau, et al.
The Spitzer Space Telescope, the fourth and final of NASA's Great Observatories, was launched in August 2003. It has been a major scientific and engineering success, performing science observations at wavelengths ranging from 3.6 to 160 microns and operating at present with a roughly 92% science duty cycle. This paper describes the essential role and procedures of the Spitzer Observatory Planning and Scheduling Team (OPST) in providing rapid rebuilds of sequences to enable the scheduling of Targets of Opportunity and to recover from anomalies. These procedures have allowed schedulers to reduce the nominal lead time for science inputs from six weeks to two or three days. We discuss procedures for modifying sequences both before and after radiation to the spacecraft, and lessons learned from their implementation.
Spitzer scheduling challenges: cold and warm
William A. Mahoney, Susan Comeau, Lisa J. Garcia, et al.
The primary scheduling requirement for the Spitzer Space Telescope has been to maximize observing efficiency while assuring spacecraft health and safety and meeting all observer- and project-imposed constraints. Scheduling drivers include adhering to the given Deep Space Network (DSN) allocations for all spacecraft communications, managing data volumes so the on-board data storage capacity is not exceeded, scheduling faint and bright objects so that latent images do not degrade observations, meeting sometimes difficult observational constraints, and maintaining the appropriate operational balance among the three instruments. The remaining flexibility is limited largely to the selection of unconstrained observations and the optimization of slews. In a few cases, the project has succeeded in negotiating DSN tracks to accommodate very long observations of transiting planets (up to 52 hours to date, with even longer requests anticipated). Observing efficiency has been excellent, with approximately 7000 hours of executed science observations per year.
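The last degree of freedom mentioned, choosing unconstrained observations to fill the remaining time, can be pictured with a toy greedy filler. This is a deliberately simplified sketch with invented durations, not the OPST's actual scheduling algorithm.

    # Fill the idle gap between two DSN passes with unconstrained
    # observations, longest first. Durations are in hours (invented).
    def fill_gap(gap_hours, pool):
        chosen, remaining = [], gap_hours
        for name, dur in sorted(pool, key=lambda x: -x[1]):
            if dur <= remaining:
                chosen.append(name)
                remaining -= dur
        return chosen, remaining

    pool = [("AOR-1", 5.0), ("AOR-2", 2.5), ("AOR-3", 1.0), ("AOR-4", 0.5)]
    print(fill_gap(8.0, pool))   # (['AOR-1', 'AOR-2', 'AOR-4'], 0.0)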
Observing distant solar system objects with James Webb Space Telescope (JWST)
The James Webb Space Telescope will provide a unique capability to observe Solar System objects such as Kuiper Belt Objects, comets, asteroids, and the outer planets and their moons in the near and mid-infrared. A recent study developed the conceptual design for a capability to track and observe these objects. In this paper, we describe how the requirements and operations concept were derived from the scientific goals and were distributed among the Observatory and Ground Segment components in order to remain consistent with the current event-driven operations concept of JWST. In the event-driven operations concept, the Ground Segment produces a high-level Observation Plan that is interpreted by on-board scripts to generate commands and monitor telemetry responses. This approach allows efficient and flexible execution of planned observations; precise or conservative timing models are not required, and observations may be skipped if guide star or target acquisition fails. The efficiency of this approach depends upon most observations having large time intervals in which they can execute. Solar System objects require a specification of how to track the object with the Observatory, and a guide star that remains within the field of view of the guider during the observation. We describe how tracking and guiding will be handled with JWST to retain the efficient and flexible execution characteristics of event-driven operations. We also describe how the implementation is distributed between the Spacecraft, Fine Guidance Sensor, On-board Scripts, and Proposal Planning Subsystem, preserving the JWST operations concept.
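The essence of event-driven execution, run each visit when its window opens and skip it when acquisition fails or the window has closed, can be captured in a short sketch. The plan structure, field names, and timing model below are invented for illustration and are not the actual JWST on-board scripts.

    def execute_plan(plan, now, acquire_guide_star):
        """Run visits in plan order; skip any visit whose window has
        closed or whose guide-star acquisition fails."""
        done = []
        for visit in plan:
            now = max(now, visit["window_open"])       # wait for the window
            if now + visit["duration"] > visit["window_close"]:
                continue                               # window missed: skip
            if not acquire_guide_star(visit):
                continue                               # acquisition failed: skip
            now += visit["duration"]                   # execute the observation
            done.append(visit["id"])
        return done, now

    plan = [
        {"id": "V1", "window_open": 0.0, "window_close": 10.0, "duration": 2.0},
        {"id": "V2", "window_open": 1.0, "window_close": 12.0, "duration": 1.5},
    ]
    print(execute_plan(plan, 0.0, lambda v: True))   # (['V1', 'V2'], 3.5)

Note that nothing in the loop depends on a precise timing model: a visit that runs long simply delays the next window check, which is why wide execution windows make this scheme efficient.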
Observing conditions and mid-IR data quality
Rachel Mason, Andre Wong, Tom Geballe, et al.
Ground-based mid-infrared (mid-IR) observations appear to be widely perceived as esoteric, demanding, and very sensitive to observing conditions. Although the principles of observing in the background-limited regime are well known, it is difficult for the non-specialist to find specific information on exactly how mid-IR data are affected by environmental conditions. Understanding these effects is important for the efficiency of mid-IR queue observing, for the ability of classical observers to adapt their programs to the prevailing conditions, and for the standard of data being delivered. Through operating mid-IR instruments in the queue at Gemini we have amassed a considerable database of standard-star observations taken under a wide range of atmospheric conditions and in a variety of instrumental configurations. These data can be used to illustrate the effect of factors such as water vapour column, airmass, and cloud cover on observed quantities such as raw sky background, residual background, atmospheric transmission, and image FWHM. Here we present some preliminary results from this study, which we hope will be of use to observatory users and staff as a guide to which environmental conditions are truly important to mid-IR imaging observations and which can safely be neglected.
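Analyses of this kind reduce to binning the standard-star measurements by one environmental variable at a time and comparing the resulting statistics. The sketch below uses entirely invented numbers purely to show the mechanics; it is not the Gemini dataset.

    import numpy as np

    # Hypothetical records: (PWV in mm, airmass, achieved sensitivity in mJy).
    obs = np.array([(1.0, 1.1, 8.0), (2.5, 1.3, 9.5), (4.0, 1.2, 12.0),
                    (1.5, 1.0, 8.2), (3.5, 1.5, 11.5), (5.0, 1.4, 14.0)])

    # Bin sensitivity by water vapour column.
    bins = np.array([0.0, 2.0, 4.0, 6.0])
    idx = np.digitize(obs[:, 0], bins)
    for i in range(1, len(bins)):
        sel = obs[idx == i, 2]
        if sel.size:
            print(f"PWV {bins[i-1]:.0f}-{bins[i]:.0f} mm: "
                  f"median {np.median(sel):.1f} mJy (n={sel.size})")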
Target of opportunity observing in queue mode at the Gemini North Observatory
Katherine Roth, Paul Price, Kim Gillies, et al.
The Gemini Observatories primarily operate a multi-instrument queue, with observers selecting the observations best suited to the prevailing weather and seeing conditions. Queue operations give higher-ranked programs a greater chance of completion than lower-ranked programs requesting the same conditions and instrument configuration. Queue observing naturally lends itself to Target of Opportunity (ToO) support, since the time required to switch between programs and instruments is very short and the staff observer is trained to operate all the available instruments and modes. Gemini Observatory has supported pre-approved ToO programs since beginning queue operations, and has offered a rapid ToO mode (response time under 15 minutes) since 2005. We discuss the ToO procedures, the statistics from more than two years of rapid ToOs at Gemini North, the science that this important mode has enabled, and some recent software modifications that have improved both standard and rapid ToO support in the Gemini Observing Tool.
Analysis of local meteorological conditions in Macón using the MM5 modeling system
Omar Cuevas, Arlette Chacón, Michel Curé
The Macón zone (24°S, 66°W) has been preselected by ESO as a possible site for the construction of the ELT (Extremely Large Telescope). Preliminary analysis shows that the area is well suited to astronomical activity, so turbulence data, an important factor for seeing and for adaptive optics (AO), have been collected. Campaigns of meteorological measurements were conducted in the area with an automatic weather station. Simulations with the MM5 weather modeling system were performed for the measurement periods and correlated with the corresponding satellite images. The analysis of atmospheric conditions found that the largest contributions to atmospheric instability in the area come from upper-level troughs and the jet stream. Trajectory analysis showed that the air reaching the Macón summit originates, on average, at about 4500 meters above sea level. This confirms that the turbulence that forms over the Salar de Arizaro (3500 m.a.s.l.) does not rise to the summit of Macón (4500 m.a.s.l.).
Meteorological study of Aklim site in Morocco
Candidate sites for the future European Extremely Large Telescope (E-ELT) need to be assessed and analytically compared in their observing characteristics. In the site-selection process, meteorological, photometric, and seeing qualities have to be studied and measured carefully. The Aklim site in Morocco is one of four candidates in the ELT project. In this paper, we present meteorological studies of the Aklim site over eleven years. The meteorological parameters include wind speed and direction, relative humidity, air temperature, cloud cover, and water vapour content. Most of these data are taken from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) Reanalysis. The meteorological analysis covers the vertical profile as well as the surface-layer meteorology. Furthermore, it has been shown in the literature that the global circulation of atmospheric wind at 200 mb can be used as a criterion for establishing a site's suitability for the development of adaptive optics techniques. Using the NOAA NCEP/NCAR reanalysis database, we analyse the monthly average wind velocity at 200 mb over the eleven-year period and compare it with that of well-known observatories.
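The 200 mb wind criterion amounts to a simple climatology: average the reanalysis wind speed at that level, month by month, over the study period. The sketch below substitutes random numbers for the reanalysis grid-point data purely to show the computation.

    import numpy as np

    # Hypothetical monthly-mean 200 mb wind speeds (m/s), shaped
    # (years, months); real values would come from the NCEP/NCAR
    # reanalysis grid point nearest the site.
    rng = np.random.default_rng(0)
    v200 = 25 + 10 * rng.standard_normal((11, 12))

    # Eleven-year climatological mean for each month; persistently low
    # 200 mb winds favour slow high-altitude turbulence, which is
    # advantageous for adaptive optics.
    monthly_mean = v200.mean(axis=0)
    for m, v in enumerate(monthly_mean, start=1):
        print(f"month {m:2d}: {v:5.1f} m/s")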
Posters: Operational Statistics
Proposal review rankings: the influence of reviewer discussions on proposal selection
Lisa J. Storrie-Lombardi, Nancy A. Silbermann, Luisa M. Rebull, et al.
The telescope time allocation process for NASA's Great Observatories involves a substantial commitment of time and expertise by the astronomical community. The annual review meetings typically have 100 external participants, and each reviewer spends three to six days at the meeting in addition to one to two weeks of preparation time reading and grading proposals. The reviewers grade the proposals based on their individual reading prior to the meeting, and grade them again after discussion within the broad, subject-based review panels. We summarize here how the outcome of the review process for three Spitzer observing cycles would have changed if the selection had been made strictly on the preliminary grades, without having the panels meet and discuss the proposals. The changes in grading during the review meeting have a substantial impact on the final list of selected proposals: approximately 30% of the selected proposals would not have been included if the preliminary rankings alone had been used to make the selection.
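The comparison itself is straightforward: select the same number of proposals from the preliminary and final grade lists and measure the overlap. The sketch below does this with synthetic grades, modelling the discussion effect as added noise; the 0.6 amplitude is tuned only to give a plausible-looking outcome, and the real ~30% figure comes from the actual Spitzer review data.

    import numpy as np

    rng = np.random.default_rng(1)
    n, n_select = 300, 90                        # proposals; awards (~30%)

    prelim = rng.normal(size=n)                  # pre-meeting grades
    final = prelim + 0.6 * rng.normal(size=n)    # grades shifted by discussion

    top_prelim = set(np.argsort(prelim)[-n_select:])   # higher grade = better
    top_final = set(np.argsort(final)[-n_select:])
    changed = 1 - len(top_prelim & top_final) / n_select
    print(f"selected proposals that change after discussion: {changed:.0%}")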
The Spitzer Science Center: using metrics analysis to improve system stability
The Spitzer Science Center (SSC) Science Operations System (SOS) is a large, complex software system: over 1.2 million lines of code had been written for the SOS by the time of launch (August 2003). The SSC uses a defect-tracking tool called GNATS to enter defect reports and change requests. GNATS has proved useful well beyond tracking defects to closure. Prior to launch, a number of charts and graphs were generated using metrics collected from GNATS. These reports showed trends and snapshots of the state of the SOS and enabled the SSC to better identify risks to the SOS and focus its testing efforts. This paper focuses primarily on the period of Spitzer's launch and In-Orbit Checkout. It discusses the metrics collected, the analyses done, the format in which the analyses were presented, and lessons learned. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
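A typical metric of this kind is the count of defects still open on a given date, broken down by severity and tracked over time to reveal trends. The sketch below shows the computation with invented records; it is not the SSC's actual GNATS reporting code.

    from collections import Counter
    from datetime import date

    # Hypothetical defect records: (opened, closed-or-None, severity).
    defects = [
        (date(2003, 6, 1), date(2003, 7, 15), "critical"),
        (date(2003, 6, 20), None, "serious"),
        (date(2003, 7, 2), date(2003, 8, 1), "serious"),
        (date(2003, 7, 20), None, "critical"),
    ]

    def open_counts(defects, snapshot):
        """Defects still open on the snapshot date, counted by severity."""
        return Counter(sev for opened, closed, sev in defects
                       if opened <= snapshot
                       and (closed is None or closed > snapshot))

    print(open_counts(defects, date(2003, 7, 10)))
    # Counter({'serious': 2, 'critical': 1})

Evaluating such snapshots at regular intervals yields the trend charts described above.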
Posters: User Support
The system support associate model at Gemini Observatory
At Gemini Observatory, the traditional employment position of telescope operator has been discarded in favor of a more diverse and flexible position known as System Support Associate (SSA). From the very beginning, the operational model of Gemini was designed to involve SSAs in observatory projects well beyond the strict operation of the telescope systems. We describe the motivation behind the original model, how it was eventually implemented, and how it has evolved. We describe how the schedule allows SSAs to assume different roles within Gemini, and how flexible time allows them to participate in a wide range of projects, increasing their motivation, deepening their knowledge, and strengthening communication between groups, as well as allowing management to allocate resources to projects that would otherwise lack manpower. We give examples of such projects and comment on the difficulties inherent in the model.
SPRITE: the Spitzer proposal review website
Megan K. Crane, Lisa J. Storrie-Lombardi, Nancy A. Silbermann, et al.
The Spitzer Science Center (SSC), located on the campus of the California Institute of Technology, supports the science operations of NASA's infrared Spitzer Space Telescope. The SSC issues an annual Call for Proposals inviting investigators worldwide to submit Spitzer Space Telescope proposals. The Spitzer Proposal Review Website (SPRITE) is a MySQL/PHP web database application designed to support the SSC proposal review process. Review panel members use the software to view, grade, and write comments about the proposals, and SSC support team members monitor the grading and ranking process and ultimately generate a ranked list of all the proposals. The software is also used to generate, edit, and email award letters to the proposers. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
MySQL/PHP web database applications for IPAC proposal submission
Megan K. Crane, Lisa J. Storrie-Lombardi, Nancy A. Silbermann, et al.
The Infrared Processing and Analysis Center (IPAC) is NASA's multi-mission center of expertise for long-wavelength astrophysics. Proposals for various IPAC missions and programs are ingested via MySQL/PHP web database applications. Proposers use web forms to enter coversheet information and upload PDF files related to the proposal. Upon proposal submission, a unique directory is created on the webserver into which all of the uploaded files are placed. The coversheet information is converted into a PDF file using a PHP extension called FPDF. The files are concatenated into one PDF file using the command-line tool pdftk and then forwarded to the review committee. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
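The assembly step described, rendering the coversheet to PDF and concatenating it with the uploaded files, is easy to sketch. The abstract's FPDF is a PHP library; to keep this volume's examples in one language, the sketch below uses the Python fpdf port together with the same pdftk command line, and all file names and fields are invented.

    import subprocess
    from fpdf import FPDF   # Python port of the PHP FPDF library

    def build_proposal(coversheet_fields, science_pdf, out_pdf):
        """Render coversheet fields to PDF, then concatenate them with
        the uploaded science PDF via the pdftk command-line tool."""
        cover = FPDF()
        cover.add_page()
        cover.set_font("Helvetica", size=12)
        for key, value in coversheet_fields.items():
            cover.multi_cell(0, 10, f"{key}: {value}")
        cover.output("coversheet.pdf")
        # pdftk <inputs...> cat output <result>
        subprocess.run(["pdftk", "coversheet.pdf", science_pdf,
                        "cat", "output", out_pdf], check=True)

    build_proposal({"Title": "Dust in nearby AGN", "PI": "A. Observer"},
                   "science.pdf", "proposal.pdf")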
Remote observing with the Nickel Telescope at Lick Observatory
Bryant Grigsby, Konstantinos Chloros, John Gates, et al.
We describe a project to enable remote observing on the Nickel 1-meter Telescope at Lick Observatory. The purpose was to increase the subscription rate and provide a more economical means for graduate and undergraduate students to observe with this telescope. The Nickel Telescope resides in a 125-year-old dome on Mount Hamilton. Remote observers may work from any of the University of California (UC) remote observing facilities created to support remote work at both Keck Observatory and Lick Observatory. The project included hardware and software upgrades to enable computer control of all equipment that must be operated by the astronomer; a remote observing architecture closely modeled on UCO/Lick's work to implement remote observing between UC campuses and Keck Observatory; new policies to ensure the safety of Observatory staff and equipment while ensuring that the telescope subsystems are suitably configured for remote use; and new software to enforce the safety-related policies. The result has been an increase in the subscription rate from a few nights per month to nearly full subscription, and the project has spurred the installation of remote observing sites at more UC campuses. Thanks to the increased automation and computer control, local observing has also benefited and is more efficient. Remote observing is now being implemented for the Shane 3-meter telescope.