Proceedings Volume 7737

Observatory Operations: Strategies, Processes, and Systems III

David R. Silva, Alison B. Peck, B. Thomas Soifer

Volume Details

Date Published: 15 July 2010
Contents: 12 Sessions, 62 Papers, 0 Presentations
Conference: SPIE Astronomical Telescopes + Instrumentation 2010
Volume Number: 7737

Table of Contents

  • Front Matter: Volume 7737
  • Site and Facility Operations I
  • Site and Facility Operations II
  • Science Operations Processes
  • Time Domain and Target of Opportunity
  • Transient Events and Observatory Operations
  • Dynamic Observatory Scheduling
  • Remote Robotic and Service
  • Archive Operations and Legacy
  • User Support
  • Operations and Data Quality Control
  • Poster Session
Front Matter: Volume 7737
Front Matter: Volume 7737
This PDF file contains the front matter associated with SPIE volume 7737, including the title page, copyright information, table of contents, and conference committee listing.
Site and Facility Operations I
Constructing the EVLA while operating the VLA
Robert Dickman, Mark McKinnon, Claire Chandler, et al.
Begun in 2001 with a total budget of around $100M, the Expanded Very Large Array (EVLA) project is the only major upgrade to the VLA undertaken since the interferometer was dedicated in 1980. The goal of this 11-year project is to improve all the observational capabilities of the original VLA - except for collecting area and spatial resolution - by at least an order of magnitude. To achieve this, the 28 VLA antennas have been modernized with new digital data transmission systems that link to a new, wideband, fiber optic digital LO/IF system, and eight new sets of cooled receivers are under construction that will offer full frequency coverage from 1 to 50 GHz, with instantaneous bandwidths up to 8 GHz provided by two independent dual-polarization frequency pairs. The new WIDAR correlator provided by NRAO's Canadian EVLA partner replaced the old VLA correlator in early 2010 and is currently undergoing commissioning. The long duration of the EVLA construction project, coupled with the need to maintain the scientific productivity and user base of the telescope, obviously precluded shutting down the old array while new infrastructure was built and commissioned. Consequently, the construction plan was based on the fundamental assumption that the old VLA would continue to operate as new EVLA capabilities gradually came online; in some cases, additional complexity had to be designed into new hardware in order to maintain transitional interoperability between the old analog and new digital systems as the latter were installed and commissioned. As construction has advanced, operations have increasingly had to coexist with EVLA commissioning and verification. Current commissioning plans attempt to balance making new EVLA capabilities available to the user community as soon as they have been installed and verified against maintaining a stable and robust end-to-end data acquisition and delivery process.
Mixing completion, commissioning, and operations at the LBT
By June 2010, the Large Binocular Telescope Observatory will have supported six semesters of observing with prime focus imaging, with the addition of IR imaging and spectroscopy in the most recent. Interspersed in the last year were installation and commissioning of one direct and one bent Gregorian focal station and extended commissioning of the first bent Gregorian focal station. We examine the lost time statistics and distribution of issues that reduced on-sky access in the context of the limited technical support provided for observing. We also note some of the restrictions imposed by the alternation of engineering and commissioning activities with scheduled observing time. The goal is to apply the lessons learned to the continuing period of observation plus commissioning anticipated as new spectroscopic, adaptive optics, and interferometric capabilities are added through 2012.
A new La Silla site operations paradigm
G. Ihle, B. Ahumada, J. Duk, et al.
In 2007 ESO Council endorsed a concept to maintain the La Silla site within the context of a streamlined operational and support scenario. La Silla remains part of the La Silla Paranal Observatory Division, and supports science projects of the ESO community using the 2.2m, NTT, and 3.6m telescopes. Infrastructure to host externally funded projects at national telescopes is provided. A detailed Site Operations Plan for La Silla 2010+ has been developed and has been implemented since October 2009. We describe its implications for staffing, infrastructure, and science operations, and report our first experience gathered under this new operations paradigm.
APEX: five years of operations
APEX, the Atacama Pathfinder EXperiment, has now been operated successfully for five years on Llano de Chajnantor at 5107 m altitude in the Chilean High Andes. This location is considered one of the world's outstanding sites for submillimeter astronomy, as the results described in this contribution underline. The 12 m diameter primary reflector is carefully maintained at a surface accuracy of about 15 μm by means of holography. This provides access to all atmospheric submillimeter windows accessible from the ground, down to 200 μm. Telescope and instrument performance, operational experiences, and a selection of scientific results are presented in this publication.
Laser Guide Star operations at the Gemini North Telescope
Anthony C. Matulonis
Laser Guide Star (LGS) operations at the Gemini North (GN) Telescope have improved sky coverage far beyond what is possible with a Natural Guide Star (NGS) for the high angular resolution Adaptive Optics (AO) science demanded by our astronomical community. An understanding of the current LGS logistics from an operational standpoint is imperative for any facility planning to incorporate the LGS approach. The details of LGS operations will be highlighted, in particular the role of the Systems Support Associate (SSA), who is responsible for the safe and efficient operation of the complex GN AO system, Altair. An overview of LGS-related monitoring tools, system limitations, safety protocols, SSA responsibilities, and required staff support will be included.
Laser operations at the 8-10m class telescopes Gemini, Keck, and the VLT: lessons learned, old and new challenges
Paola Amico, Randall D. Campbell, Julian C. Christou
Laser Guide Star (LGS) assisted Adaptive Optics routine operations have commenced at three of the major astronomical observatories: in 2004 at Keck, and in 2006 at the VLT and Gemini. Subaru is also on the verge of putting its LGS facility into operation. In this paper we concentrate on the operational aspects of the laser facilities: we discuss common problems such as weather constraints, beam collisions, aircraft avoidance, and optimal telescope scheduling. We highlight important differences between the observatories, especially in view of the valuable lessons learned. While it is true that the three observatories have made quick progress and achieved important scientific results during the first years of operations, there is much room left for improvement in terms of the efficiency that can be obtained on sky. We compare and contrast the more recently implemented LGS systems of the VLT and Gemini, operated in service and queue modes, with the more mature LGS operation at Keck, which employs classical PI-scheduled observing.
Site and Facility Operations II
Testing and validation of orbital operations plans for the MESSENGER mission
Alice F. Berman, Deborah L. Domingue, Mark E. Holdridge, et al.
Launched in 2004, the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft continues on its journey to become, in 2011, the first spacecraft to orbit the planet Mercury. The goal of MESSENGER's one-year orbital mission is to answer several key questions about the structure and history of Mercury and its environment. The science and mission operations teams are testing a concept of operations to use the instrument payload most efficiently and to achieve full mission success. To ensure that all essential observations are obtained and to allow for contingencies, an advance science planning (ASP) effort will develop the full yearlong mission baseline plan prior to orbit insertion. To ensure that the plan can be adapted in response to unexpected events over time, an adjusted baseline plan will be regenerated in the ASP process every five weeks during the actual orbital mission. The near-term science planning (NTSP) activity converts weeklong portions of the baseline plan into executable commands to conduct the orchestrated observations. A feedback process from NTSP to ASP will be used to ensure that the baseline observing plan accounts for and reschedules any unsuccessful observations. A testing and validation plan has been developed for the processes and software that underlie both advance and near-term science planning.
Using the Baldrige criteria for observatory strategic and operations planning
Nicole M. Radziwill, Lory Mitchell
In 1987, the U.S. Congress created the Malcolm Baldrige National Quality Award (MBNQA), a program that rewards businesses and nonprofits that demonstrate effective, efficient operations. Underlying the MBNQA are criteria to help organizations integrate seven key areas of operations: leadership, strategic planning, customer focus, information management, workforce planning, process management, and results. Independent of the award process, the Baldrige Criteria can be used to guide strategic and operations planning. This presentation includes an example of how the Baldrige Criteria were used to quickly develop a Workforce Management Plan for the National Radio Astronomy Observatory (NRAO) in response to funding agency requests.
Science Operations Processes
ALMA science operations
Lars-Åke Nyman, Paola Andreani, John Hibbard, et al.
The ALMA (Atacama Large Millimeter/submillimeter Array) project is an international collaboration between Europe, East Asia and North America in cooperation with the Republic of Chile. The ALMA Array Operations Site (AOS) is located at Chajnantor, a plateau at an altitude of 5000 m in the Atacama desert in Chile, and the ALMA Operations Support Facility (OSF) is located near the AOS at an altitude of 2900 m. ALMA will consist of an array of 66 antennas, with baselines up to 16 km and state-of-the-art receivers that cover all the atmospheric windows up to 1 THz. An important component of ALMA is the compact array of twelve 7-m and four 12-m antennas (the Atacama Compact Array, ACA), which will greatly enhance ALMA's ability to image extended sources. Construction of ALMA started in 2003 and will be completed in 2013. Commissioning started in January 2010 and Early Science Operations is expected to start during the second half of 2011. ALMA science operations are provided by the Joint ALMA Observatory (JAO) in Chile and the three ALMA Regional Centers (ARCs) located in each ALMA region - Europe, North America and East Asia. ALMA observations will take place 24 hours per day, interrupted by maintenance periods, and will be done in service observing mode with flexible (dynamic) scheduling. The observations are executed in the form of scheduling blocks (SBs), each of which contains all information necessary to schedule and execute the observations. The default output to the astronomer will be pipeline-reduced images calibrated according to the calibration plan. The JAO is responsible for data product quality. All science and calibration raw data are captured and archived in the ALMA archive, a distributed system with nodes at the OSF, the Santiago central office, and the ARCs. Observation preparation will follow a Phase 1/Phase 2 process. During Phase 1, observation proposals will be created using software tools provided by the JAO and submitted for scientific and technical review. Approved Phase 1 proposals will be admitted to Phase 2, where all observations will be specified as SBs using software tools provided by the JAO. User support will be provided at the ARCs through a helpdesk system as well as face-to-face support.
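The scheduling block is the key data structure here: it bundles one observation with the conditions under which it may run, which is what makes fully dynamic scheduling possible. A minimal sketch of the concept in Python; the field names below are invented for illustration and are not the actual ALMA Observing Tool schema:

    from dataclasses import dataclass, field

    @dataclass
    class SchedulingBlock:
        """Hypothetical container for everything needed to schedule
        and execute one observation (illustrative field names)."""
        project_id: str
        target: str                # source name
        frequency_ghz: float       # receiver tuning
        max_pwv_mm: float          # water-vapour condition limit
        min_elevation_deg: float   # observability constraint
        duration_min: int          # execution time
        calibrators: list = field(default_factory=list)

        def is_executable(self, current_pwv_mm, elevation_deg):
            # Dynamic scheduling: an SB runs only when the current
            # conditions satisfy its stored constraints.
            return (current_pwv_mm <= self.max_pwv_mm
                    and elevation_deg >= self.min_elevation_deg)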
Kepler Science Operations processes, procedures, and tools
Jennifer R. Hall, Khadeejah Ibrahim, Todd C. Klaus, et al.
The Kepler Science Operations Center (SOC) is responsible for the configuration and management of the SOC Science Processing Pipeline, processing of the science data, distributing data and reports to the Science Office, exporting processed data for archiving to the Data Management Center at the Space Telescope Science Institute, and generating and managing the target and aperture definitions. We present an overview of the SOC procedures and workflows for the data the SOC manages and processes. There are several levels of reviews, approvals, and processing for the various types of data. We describe the process flow from data receipt through data processing and export, as well as the procedures in place for accomplishing the tasks. The tools used to accomplish the goals of Kepler science operations will be presented and discussed as well. These include command-line tools and graphical user interfaces, as well as commercial products. The tools provide a wide range of functionality for the SOC, including pipeline operation, configuration management, and process workflow implementation. For a demonstration of the Kepler Science Operations Center's processes, procedures, and tools, we present the life of a quarter's worth of data, from target and aperture table generation through archiving the data collected with those tables.
The care and feeding of the JWST on-board event-driven system
Vicki Balzano, Dean Zak, William Whitman
The software architecture of the James Webb Space Telescope (JWST) includes an operational layer implemented by on-board JavaScripts that orchestrate event-driven operations. Request files specifying up to ten days of high-level science and engineering tasks and a time-ordered execution list are uploaded periodically to the on-board event-driven system. The processing of these files is dictated by on-board events. The tasks execute within their specified windows or could be skipped due to an isolated anomaly, such as a guide star locate failure. For each high-level task, the necessary flight software commands are constructed on-board according to operational rules, and positive completion confirmation is required before proceeding to the next flight software command. The event-driven nature of JWST operations presents challenges to the Science and Operations Center being constructed at the Space Telescope Science Institute. This paper will outline the design implications on science and engineering operations planning, flight real-time operations, and post-observation data management. Included will be descriptions of how the Operations Center 1) plans time-windowed tasks to ensure that the event-driven system will remain scientifically productive even when anomalies occur, 2) interfaces with and monitors JWST event-driven operations, and 3) records Observatory status information for each science image.
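The event-driven pattern described here is simple to state in code: each high-level task in the time-ordered list runs only within its window, every command requires positive completion confirmation, and an isolated anomaly skips that task rather than halting the plan. A minimal sketch under invented names; JWST's actual on-board scripts are JavaScript, and this is our illustration, not the flight software:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Task:
        window_open: float    # earliest allowed start (mission time)
        window_close: float   # latest allowed start
        commands: List[str]   # built on board per operational rules

    def run_observation_plan(tasks, clock, send_command):
        """Event-driven execution sketch (illustrative only)."""
        for task in tasks:                  # time-ordered execution list
            if clock() > task.window_close:
                continue                    # window expired: skip the task
            for cmd in task.commands:
                if not send_command(cmd):   # blocks until confirmation
                    # Isolated anomaly (e.g. a guide-star locate failure):
                    # abandon this task only and proceed with the plan.
                    break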
Gemini Observatory: five years of multi-instrument queue operations
Inger Jorgensen, Bernadette Rodgers, Dennis R. Crabtree
Gemini Observatory has operated Gemini North and South in multi-instrument queue mode since 2005. Each telescope has about 85% of its time scheduled for science, of which 90% is in queue mode. More than one instrument is used on 75-80% of all science nights. We present on-sky performance data from the last five years: completion rates for queue programs, open-shutter performance, and acquisition times. Open-shutter performance and acquisition times are competitive with other 8-10 meter-class telescopes for which data are available. We give an overview of how the queue is populated, planned, and executed.
Downsizing a great observatory: reinventing Spitzer in the warm mission
Lisa J. Storrie-Lombardi, Suzanne R. Dodd
The Spitzer Space Telescope transitioned from the cryogenic mission to the IRAC warm mission during 2009. This transition involved changing several areas of operations in order to cut the mission's annual operating costs to one third of the cryogenic mission amount. In spite of this substantial cutback, Spitzer continues to have one of the highest ratios of science return per dollar of any of NASA's extended missions. This paper describes the major operational changes made for the warm mission and how they affect the science return. The paper gives several measures showing that warm Spitzer continues as one of the most scientifically productive missions in NASA's portfolio. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
Time Domain and Target of Opportunity
The VLT rapid-response mode: implementation and scientific results
Paul M. Vreeswijk, Andreas Kaufer, Jason Spyromilio, et al.
The Rapid-Response Mode (RRM) at ESO's Very Large Telescope (VLT) allows for rapid automatic observations of any highly variable target such as Gamma-Ray Burst (GRB) afterglows. This mode has been available for various instruments at the VLT since April 2004, and can be easily implemented for any new instrumentation. Apart from discussing the operational side of this mode, we also present VLT/UVES GRB afterglow spectra observed using the RRM, which show clear variability of absorption lines at the redshift of the GRB host galaxy. Without the RRM this variability would not have been observed. Using photo-excitation and -ionization modelling, we show that this variability is due to the afterglow flux exciting and ionizing a gas cloud at distances varying from tens of parsecs to kiloparsecs away from the GRB.
Managing target of opportunity (ToO) observations in queue mode at Gemini Observatory
Katherine C. Roth, E. Rodrigo Carrasco, Bryan W. Miller, et al.
Target of opportunity (ToO) observations are an integral part of multi-instrument queue operations at Gemini Observatory. ToOs comprise a significant fraction of the queue (20-25% of the highest ranking band), and with the advent of large survey telescopes (e.g., Pan-STARRS, LSST) dedicated to searching for transient events this fraction may reasonably be expected to increase significantly in the coming years. While some important aspects of ToO execution at Gemini Observatory are managed automatically (e.g., trigger alerts, data distribution), other areas such as duplication checking, scheduling, and relative priority determination still require manual intervention. In order to increase efficiency and improve our commitment to ToOs and queue observing in general, these aspects need to be formalized and incorporated into improved phase 2 checking, automated queue scheduling, and on-the-fly nightly plan generation software. We discuss the different flavors of ToOs supported at Gemini Observatory and how each kind is scheduled with respect to existing queue observations. We present ideas for formalizing these practices into a system of dynamical prioritization which automatically self-adjusts as new ToO observations are triggered, high-priority targets become endangered, and timing windows near expiration.
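The self-adjusting prioritization described can be pictured as a scoring function re-evaluated every time the nightly plan is rebuilt. A minimal sketch, assuming a simple additive score; the weights and field names are invented for illustration, not Gemini's actual algorithm:

    def dynamic_priority(obs, now):
        """Illustrative self-adjusting score: active ToO triggers jump
        the queue, and urgency grows as a timing window nears expiry."""
        score = obs["base_rank"]               # scientific ranking band
        if obs.get("too_trigger"):
            score += 100                       # new ToO takes precedence
        window_end = obs.get("window_end")     # datetime or None
        if window_end is not None:
            hours_left = max((window_end - now).total_seconds() / 3600.0, 0.0)
            score += 50.0 / (1.0 + hours_left) # endangered targets rise
        return score

    def plan_night(queue, now):
        # Rebuild the nightly plan on the fly: highest score goes first.
        return sorted(queue, key=lambda o: dynamic_priority(o, now), reverse=True)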
LCOGT sites and site operations
John J. Martinez, Timothy M. Brown, Patrick Conway, et al.
LCOGT is currently building and deploying a world-wide network of at least twelve 1-meter and twenty-four 0.4-meter telescopes to as many as 4 sites in the Southern hemisphere (Chile, South Africa, Eastern Australia) and 4 in the Northern hemisphere (Hawaii, West Texas, Canary Islands). Our deployment and operations model emphasizes modularity and interchangeability of major components, maintenance and troubleshooting personnel who are local to the site, and autonomy of operation. We plan to ship, install, and spare large units (in many cases entire telescopes), with minimal assembly on site.
Scheduling observations on the LCOGT network
Eric Hawkins, Nairn Baliber, Mark Bowman, et al.
LCOGT is deploying a world-wide telescope network to enable near-continuous coverage of variable or transient sources. We desire the telescopes in this network to be scheduled for efficiency with respect to a coherent set of science goals. To achieve this, we are developing a software structure to carry observing programs from initial proposal through data acquisition and feedback to the schedule. Key elements in this structure are a database of observation requests, requirements, and status, a protocol to describe observations, and a set of planners that work by successive refinement of the schedule.
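The three elements named here (a request database, a protocol to describe observations, and planners that successively refine the schedule) fit together roughly as follows. A minimal sketch under invented names; the actual LCOGT request protocol and planner interfaces are assumptions here, not published detail:

    from dataclasses import dataclass

    @dataclass
    class ObservationRequest:
        """Illustrative request record; field names are assumptions."""
        request_id: str
        target_ra_deg: float
        target_dec_deg: float
        duration_hours: float
        status: str = "PENDING"  # PENDING -> SCHEDULED -> COMPLETE/FAILED

    def refine(schedule, requests, free_hours):
        """One successive-refinement pass: greedily place pending
        requests into remaining capacity. Later passes re-plan a
        shorter horizon at finer resolution, and execution status
        feeds back into the next pass."""
        for req in sorted(requests, key=lambda r: r.duration_hours):
            if req.status == "PENDING" and req.duration_hours <= free_hours:
                schedule.append(req.request_id)
                req.status = "SCHEDULED"
                free_hours -= req.duration_hours
        return schedule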
Transient Events and Observatory Operations
Transient alert operations for the Large Synoptic Survey Telescope
The astronomical time domain is entering an era of unprecedented growth. LSST will join current and future surveys at diverse wavelengths in exploring variable and transient celestial phenomena, characterizing astrophysical domains from the solar system to the edge of the observable universe. Added to the large but relatively well-defined load of a project on the scale of the Large Synoptic Survey Telescope will be many challenging issues in handling the dynamic empirical interplay between LSST and contingent follow-up facilities worldwide. We discuss concerns unique to this telescope, while exploring consequences common to emerging observational time-domain paradigms.
Dynamic Observatory Scheduling
New observing concepts for ESO survey telescopes
T. Bierwirth, T. Szeifert, D. Dorigo, et al.
The start of operations of the VISTA survey telescope will not only offer a new facility to the ESO community, but also a new way of observing. Survey observation programs typically observe large areas of the sky and might span several years, corresponding to the execution of hundreds of observations blocks (OBs) in service mode. However, the execution time of an individual survey OB will often be rather short. We expect that up to twelve OBs may be executed per hour, as opposed to about one OB per hour on ESO's Very Large Telescope (VLT). OBs of different programs are competing for observation time and must be executed with adequate priority. For these reasons, the scheduling of survey OBs is required to be almost fully automated. Two new key concepts are introduced to address these challenges: ESO's phase 2 proposal preparation tool P2PP allows PIs of survey programs to express advanced mid-term observing strategies using scheduling containers of OBs (groups, timelinks, concatenations). Telescope operators are provided with effective short-term decision support based on ranking observable OBs. The ranking takes into account both empirical probability distributions of various constraints and the observing strategy described by the scheduling containers. We introduce the three scheduling container types and describe how survey OBs are ranked. We demonstrate how the new concepts are implemented in the preparation and observing tools and give an overview of the end-to-end workflow.
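To make the two ranking ingredients concrete, here is a toy scoring function combining the empirical-probability idea with the three container types; the weights and attribute names are invented for illustration and are not ESO's actual ranking:

    def rank_observable_obs(obs_list, p_conditions):
        """Toy ranking of observable OBs (not ESO's algorithm).
        p_conditions(ob) is the empirical probability that tonight's
        conditions satisfy the OB's constraints: OBs needing rare
        conditions rank higher, and container state adds urgency."""
        def score(ob):
            s = 1.0 - p_conditions(ob)      # scarce conditions first
            if ob.get("group_started"):     # keep a started group together
                s += 0.5
            if ob.get("timelink_due"):      # a due time-link gains urgency
                s += 0.3
            if ob.get("in_concatenation"):  # concatenation members must
                s += 0.4                    # run back to back once begun
            return s
        return sorted(obs_list, key=score, reverse=True)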
Partner time sharing at the Submillimeter Array
Glen Petitpas, Qizhou Zhang, Charles Katz, et al.
The Submillimeter Array (SMA) is an 8-element interferometer located atop Mauna Kea in Hawaii that operates in the 180-700 GHz range. It is a collaborative project between the Smithsonian Astrophysical Observatory (SAO) and the Academia Sinica Institute of Astronomy and Astrophysics (ASIAA), and is funded by the Smithsonian Institution and the Academia Sinica. The University of Hawaii (UH) receives a fixed percentage of all time on the telescopes of Mauna Kea. As such, observing time at the SMA is shared among these partners at SAO:ASIAA:UH levels of 72:15:13. The nature of interferometric observing makes keeping track of these partner shares challenging: since a typical successful interferometric observation must last anywhere from 3-10 hours to achieve sufficient uv-coverage, it does not necessarily make sense to divide the observing time up simply by counting hours. In this talk I summarize the strategy devised at the SMA for keeping track of partner time shares, as well as the tools used to make these numbers transparent to all affiliations.
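One simple way to run this kind of bookkeeping is to award whole tracks to whichever partner has fallen furthest behind its target fraction, letting the 72:15:13 split converge over a semester rather than night by night. A sketch of that idea (our illustration, not the SMA's actual accounting tool):

    TARGET_SHARE = {"SAO": 0.72, "ASIAA": 0.15, "UH": 0.13}

    def next_partner(hours_used):
        """Pick the partner whose achieved fraction of total track
        hours lags its target share the most. Whole 3-10 h tracks are
        awarded, so shares even out over many nights."""
        total = sum(hours_used.values()) or 1.0
        deficit = {p: TARGET_SHARE[p] - hours_used.get(p, 0.0) / total
                   for p in TARGET_SHARE}
        return max(deficit, key=deficit.get)

    # Example: after 72 h SAO, 20 h ASIAA, 8 h UH out of 100 h total,
    # UH (8% achieved vs. 13% target) is furthest behind.
    print(next_partner({"SAO": 72.0, "ASIAA": 20.0, "UH": 8.0}))  # -> UH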
JWST planning and scheduling operations and concepts
Wayne M. Kinzel
The James Webb Space Telescope (JWST) will be a large infrared space observatory in orbit about the Sun-Earth second Lagrange point. This paper provides an overview of the expected operational requirements imposed by the observatory's basic science activities (imaging, spectroscopy, coronagraphy) and the operational issues associated with interleaving periodic engineering activities (Wave Front Sensing & Control activities, Momentum Unloads, and orbit Station Keeping) with the science observations. The planning and scheduling operations must maximize the overall science integration time while meeting the mission and observer-specified constraints. The "Observation," "Visit," and Observation Template constructs are explained in the context of providing an interface to the Observer that provides the ability to specify complex observations, such as mosaics and cluster targets, while also minimizing specification errors and allowing planning and scheduling flexibility of the observations. The expected nominal planning and scheduling process is described, including the creation and maintenance of the Long Range Plan (~1.25 year duration), the Short Term Schedules (~3 weeks), and the on-board Observation Plan (<10 days). The event-driven on-board operations of JWST and how the planning and scheduling process monitors and reacts to the on-board execution of the Observation Plan are described. Finally, the methods employed to allow for robust interfacing of scheduled real-time operations (for example, Station Keeping) with the Observation Plan, and unplanned but expected modifications to the Observation Plan (for example, Targets of Opportunity), are described.
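The Observation/Visit hierarchy can be sketched as a simple containment relationship; the class and field names here are our illustration of the construct, not the actual JWST proposal schema:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Visit:
        """One schedulable unit: a contiguous pointing and its
        exposures (illustrative field names)."""
        target: str
        exposures: List[str]

    @dataclass
    class Observation:
        """Observer-facing construct built from an Observation
        Template; complex cases such as mosaics expand into many
        Visits that the scheduler can place independently."""
        template: str
        visits: List[Visit] = field(default_factory=list)

    # A 2x1 mosaic expands into two Visits, each schedulable within
    # the observation's overall constraints.
    mosaic = Observation("imaging-template",
                         [Visit("tile-1", ["exp-001"]),
                          Visit("tile-2", ["exp-002"])])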
Simulation of autonomous observing with a ground-based telescope: the LSST experience
Stephen Ridgway, Kem Cook, Michelle Miller, et al.
A survey program with multiple science goals will be driven by multiple technical requirements. On a ground-based telescope, the variability of conditions introduces yet greater complexity. For a program that must run largely autonomously, with minimal dwell time for efficiency, it may be quite difficult to foresee the achievable performance. Furthermore, scheduling will likely involve self-referential constraints, and appropriate optimization tools may not be available. The LSST project faces these issues, and has designed and implemented an approach to performance analysis in its Operations Simulator and associated post-processing packages. The Simulator has allowed the project to present detailed performance predictions with a strong basis in the engineering design and measured site conditions. At present, the Simulator is in regular use for engineering studies and science evaluation, and planning is underway for evolution to an operations scheduling tool. We describe the LSST experience, emphasizing the objectives, the accomplishments, and the lessons learned.
Remote Robotic and Service
Switching the Liverpool Telescope from a full-service operating model to self-service
R. J. Smith, Neil R. Clay, Stephen N. Fraser, et al.
The Liverpool Telescope has undergone a major revision of its operations model, improving the facility's flexibility and rapid response to targets of opportunity. We switched from a "full service" model, where observers submitted requests to the Support Astronomer for checking and uploading into the scheduler database, to a direct access model where observers personally load sequences directly into the database at any time, including during the night. A new data model describing the observing specifications has been developed over two years for the back-end operations infrastructure and remained invisible to users until early 2010, when the new graphical user interface was deployed to all observers. The development project has been a success, defined as providing new flexible operating modes to users without incurring any downtime at the changeover or interruption to the ongoing monitoring projects in which the observatory specializes. Devolving responsibility for data entry to users does not necessarily simplify the role of observatory staff. Ceding that absolute hands-on control by experienced staff complicates the support task, because staff no longer have advance personal knowledge of everything the telescope is doing. In certain cases software utilities and controls can be developed to simplify tasks for both observers and operations staff.
A shared approach to supporting remote observing for multiple observatories
The University of California (UC) began operating the Lick Observatory on Mount Hamilton, California in 1888. Nearly a century later, UC became a founding partner in the establishment of the W. M. Keck Observatory (WMKO) in Hawaii, and it is now a founding partner in the Thirty Meter Telescope (TMT) project. Currently, most UC-affiliated observers conduct the majority of their ground-based observations using either the Keck 10-meter Telescopes on Mauna Kea or one or more of the six Lick telescopes now in operation on Mount Hamilton; some use both the Keck and Lick Telescopes. Within the next decade, these observers should also have the option of observing with the TMT if construction proceeds on schedule. During the current decade, a growing fraction of the observations on both the Keck and Lick Telescopes have been conducted from remote observing facilities located at the observer's home institution; we anticipate that TMT observers will expect the same. Such facilities are now operational at 8 of the 10 UC campuses and at the UC-operated Lawrence Berkeley National Laboratory (LBNL); similar facilities are also operational at several other Keck-affiliated institutions. All of the UC-operated remote observing facilities are currently dual-use, supporting remote observations with either the Keck or Lick Telescopes. We report on our first three years of operating such dual-use facilities and describe the similarities and differences between the Keck and Lick remote observing procedures. We also examine scheduling issues and explore the possibility of extending these facilities to support TMT observations.
Mopra remote observing: a story of innovation and success
The Mopra Radio Telescope is a 22m single-dish radio telescope located near Siding Spring Observatory in New South Wales, Australia. Its receiver systems cover the 3mm, 7mm and 12mm bands for single-dish observing, as well as the 6/3cm and 20/13cm bands used for Very Long Baseline Interferometry (VLBI). The remote location of the telescope, a good day's drive from Sydney, made it a good candidate for remote observing capabilities that would no longer require observers to travel to the telescope, but instead bring the telescope to them. In a first step, this was implemented in a controlled environment three years ago: it enabled remote observing from a dedicated workstation at the Australia Telescope Compact Array (ATCA) control building, some 160km away from the observatory. In a second step, two years ago, remote observing was extended to allow observing from any location in the world for qualifying observers. A number of challenges needed to be addressed, from telescope safety to internet and data link reliability, computer security, and providing the observers with adequate situational awareness tools. The uptake by observers has been very good, with over 40% of the observing in 2009 executed remotely. Further, many small and unallocated time slices could be used productively, as they would not have warranted a trip to the observatory on their own merit but were usable thanks to remote observing. This helped push the productivity of the Mopra telescope in 2009 to the highest figure in its 17-year history.
Archive Operations and Legacy
Spitzer Heritage Archive
Xiuqin Wu, Trey Roby, Loi Ly
The Spitzer Heritage Archive will host all the raw and final reprocessed science and calibration data products from the observations made by the Spitzer Space Telescope. The interactive web interface will give users the tools to search the database and explore their search results interactively. We reuse existing software and services and pay close attention to the reusability of the newly developed system, making it easy to expand and adopt new technology in the future. This paper discusses our design principles, system architecture, reuse of existing software, and reusable components of the system.
Science data production at ESO: strategy, plans, and lessons learned
Martino Romaniello, Wolfram Freudling, Alain Smette, et al.
ESO aims at supporting the production of science-grade data products for all of its Paranal instruments. This serves the dual purpose of facilitating the immediate exploitation of the data by the respective PIs, as well as the longer-term exploitation by the community at large through the ESO Science Archive Facility. The production of science-grade data products requires an integrated approach to science and calibration observations and the development of software to process and calibrate the raw data. Here we present ESO's strategy to complement the in-house generation of data products with contributions returned by our users. The most relevant lessons we have learned in the process are also discussed.
User Support
User support: new ways forward after 10 years of successful VLT operations
User support and operations of a large observatory rely on a well-defined infrastructure, which is based on different policies, procedures, and tools. April 1, 2009 marked the 10th anniversary of VLT operations. Our VLT operations and data-flow schemas have proven to be reliable and efficient, and user feedback continues to be positive. Thanks to eleven years of day-to-day experience and user feedback, we have evaluated possible new ways forward to make operations even smoother and more efficient. Here, I review recent developments and new services offered to our VLT user community.
ALMA science operations and user support: software
Mark G. Rawlings, Lars-Åke Nyman, Baltasar Vila Vilaro
An overview will be presented of the various software subsystems currently in development for the support of ALMA Early and Full Science Operations. This will include a description of the software subsystems currently being devised to address the following: Proposal preparation and submission system (ObsPrep); Software systems for tracking the proposal review process, post-acceptance project tracking, plus other miscellaneous components (ObOps); Observation Scheduling (Scheduler); Data Archive (Archive); Data Reduction Pipelines (QuickLook, Pipeline); Quality Assurance and Trend Analysis (AQUA). Additional user support systems (Science Operations Web Pages, User Portal, etc.) will also be outlined.
Handling observation proposals for SALT
Christian Hettlage, David A. H. Buckley, Anne C. Charles, et al.
SALT uses the Principal Investigator Proposal Tool (PIPT) for generating, checking, submitting, and editing proposals. The PIPT maps XML into Java classes with immediate error and consistency checking, and thus prevents non-feasible observation requests. Various tools allow the user to simulate SALT observations. These include standard source spectra (e.g. black body, power law, Kurucz model atmospheres), and allow users to add their own library spectra. The PIPT is complemented by the Web Manager for administering submitted proposals. We discuss how the code of these tools can easily be extended for future instruments and used for other projects.
SMARTS revealed
John P. Subasavage, Charles D. Bailyn, R. Christopher Smith, et al.
The Small and Moderate Aperture Research Telescope System (SMARTS) consists of four telescopes atop Cerro Tololo Inter-American Observatory (CTIO): the 0.9m, 1.0m, 1.3m, and 1.5m. A consortium of twelve institutions and universities began funding operations in February 2003. Time allocation for these facilities is as follows: ~65% to consortium members, ~25% to the general community, and 10% to Chilean researchers. Thus, resources remain available to the community while providing a unique opportunity for consortium members: the possibility of high temporal cadence monitoring coupled with long time baseline monitoring. Indeed, a number of member programs have benefited from such a scheme. Furthermore, two of the four telescopes are scheduled in a queue mode in which observations are collected by service observers. Queue mode investigators have access to spectroscopic observations (both RC and echelle) as well as direct imaging (both optical and near-IR simultaneously). Of the remaining two telescopes, the 1.0m is almost exclusively operated in user mode and contains a 20'×20' FOV optical imager, and the 0.9m is operated both in user and service mode in equal allotments and also has a dedicated optical imager. The latter facilities are frequently used for hands-on student training under the superb sky conditions afforded at CTIO. Currently, three of the partner universities are responsible for managing telescope scheduling and data handling, while one additional university is responsible for some of the instruments. In return, these universities receive additional telescope time. Operations are largely run by a handful of people, with six personnel from the four support universities and seven dedicated personnel in Chile (five observers, one observer support engineer, and one postdoctoral appointee). Thus far, this model has proven to be both an efficient and an effective method for operating the small telescopes at CTIO.
Operations and Data Quality Control
Calibration of the LSST instrumental and atmospheric photometric passbands
David L. Burke, T. Axelrod, Aurélien Barrau, et al.
The Large Synoptic Survey Telescope (LSST) will continuously image the entire sky visible from Cerro Pachon in northern Chile every 3-4 nights throughout the year. The LSST will provide data for a broad range of science investigations that require better than 1% photometric precision across the sky (repeatability and uniformity) and a similar accuracy of measured broadband color. The fast and persistent cadence of the LSST survey will significantly improve the temporal sampling rate with which celestial events and motions are tracked. To achieve these goals, and to optimally utilize the observing calendar, it will be necessary to obtain excellent photometric calibration of data taken over a wide range of observing conditions - even those not normally considered "photometric". To achieve this it will be necessary to routinely and accurately measure the full optical passband that includes the atmosphere as well as the instrumental telescope and camera system. The LSST mountain facility will include a new monochromatic dome illumination projector system to measure the detailed wavelength dependence of the instrumental passband for each channel in the system. The facility will also include an auxiliary spectroscopic telescope dedicated to measurement of atmospheric transparency at all locations in the sky during LSST observing. In this paper, we describe these systems and present laboratory and observational data that illustrate their performance.
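The requirement can be stated compactly: what the camera records is the source spectrum weighted by the full observational passband, so the instrumental and atmospheric factors must both be measured. A standard formulation, reconstructed here from the description above (take the exact notation as illustrative):

    S^{obs}_b(\lambda) = S^{atm}(\lambda)\, S^{inst}_b(\lambda), \qquad
    C_b \propto \int_0^{\infty} F_\nu(\lambda)\, S^{obs}_b(\lambda)\, \frac{d\lambda}{\lambda}

where the per-channel instrumental throughput S^{inst}_b is measured with the monochromatic dome projector and the atmospheric transmission S^{atm} with the auxiliary spectroscopic telescope.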
Solving the global photometric self-calibration problem in LSST
R. Lynne Jones, Nikhil Padmanabhan, Zeljko Ivezic, et al.
We present an innovative method for photometric calibration of massive survey data that will be applied to the Large Synoptic Survey Telescope (LSST). LSST will be a wide-field ground-based system designed to obtain imaging data in six broad photometric bands (ugrizy, 320-1050 nm). Each sky position will be observed multiple times, with about a hundred or more observations per band collected over the main survey area (20,000 sq.deg.) during the anticipated 10 years of operations. Photometric zeropoints are required to be stable in time to 0.5% (rms), and uniform across the survey area to better than 1% (rms). The large number of measurements of each object taken during the survey allows identification of isolated non-variable sources, and forms the basis for LSST's global self-calibration method. Inspired by SDSS's uber-calibration procedure, the self-calibration determines zeropoints by requiring that repeated measurements of non-variable stars must be self-consistent when corrected for variations in atmospheric and instrumental bandpass shapes. This requirement constrains both the instrument throughput and atmospheric extinction. The atmospheric and instrumental bandpass shapes will be explicitly measured using auxiliary instrumentation. We describe the algorithm used, with special emphasis both on the challenges of controlling systematic errors, and how such an approach interacts with the design of the survey, and discuss ongoing simulations of its performance.
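The self-calibration condition reduces to a large sparse least-squares problem: writing m^{instr}_{ij} for the bandpass-corrected instrumental magnitude of non-variable star i in exposure j, the method solves simultaneously for exposure zeropoints z_j and star magnitudes m_i. A standard statement of the uber-calibration idea, reconstructed here rather than quoted from the paper:

    \chi^2 = \sum_{i,j} \left( \frac{m^{instr}_{ij} + z_j - m_i}{\sigma_{ij}} \right)^2

Minimizing over all {z_j, m_i} forces repeated measurements of the same star to agree, which pins down the relative zeropoints across the sky to the percent-level uniformity quoted above.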
The physical model in action: quality control for X-Shooter
Sabine Moehler, Paul Bristow, Florian Kerber, et al.
The data reduction pipeline for the VLT 2nd generation instrument X-Shooter uses a physical model to determine the optical distortion and derive the wavelength calibration. The parameters of this model describe the positions, orientations, and other physical properties of the optical components in the spectrograph. They are updated by an optimisation process that ensures the best possible fit to arc lamp line positions. ESO Quality Control monitors these parameters along with all of the usual diagnostics. This enables us to look for correlations between inferred physical changes in the instrument and, for example, instrument temperature sensor readings.
Quality control and data flow operations of the survey instrument VIRCAM
Wolfgang Hummel, Reinhard Hanuschik, Lander de Bilbao, et al.
VIRCAM is the wide field infrared camera of the VISTA survey telescope on Paranal. VIRCAM, operated by ESO since Oct. 2009, is equipped with 16 detectors and produces on average 150 Gigabytes of data per night. In the following article we describe the back-end data flow operations and in particular the quality control procedures which are applied to ESO VIRCAM data.
Handling heterogeneous arrays: calibrations and data reduction
Stuartt A. Corder, Melvyn C. H. Wright, Jin Koda
We present the steps taken at the Combined Array for Research in Millimeter-wave Astronomy (CARMA) to handle the heterogeneous nature of the array, from a priori calibrations to data reduction. We outline the steps needed to track relevant variable quantities over time. We discuss methods for combining interferometric visibilities and single-dish data in the context of single systems designed to obtain all the necessary data, potentially at the same time. Such an observing approach is available at CARMA, and it is the intention of the Atacama Large Millimeter-submillimeter Array (ALMA) to offer this capability as a standard observing mode.
The APEX calibration plan: goals, implementation, and achievements
Michael Dumke, Felipe Mac-Auliffe
The quality of scientific data depends on the accuracy of the absolute intensity calibration. This absolute calibration is especially difficult in ground-based sub-mm astronomy. At the Atacama Pathfinder Experiment (APEX), we take various measures in order to ensure a proper calibration of the final science product, including real-time efforts (e.g. pointing models) and dedicated measurements whose results are applied afterwards (e.g. opacity or efficiencies). In this presentation we give an overview of the various steps taken at APEX to overcome most calibration challenges. We explain their implementation as a calibration plan, present an analysis of the results obtained, and discuss those results in view of the reliability of the released science product.
Poster Session
Preventive maintenance optimization at Paranal Observatory
Observatories are important for the evolution of astronomical research, and equally important is their maintainability. The management of our fixed budget, as well as assuring reliability, availability, and system efficiency, is directly related to the maintainability of this center of observation. Can we manage this situation and maintain reliability, availability, and efficiency? The answer is yes: new maintenance techniques allow us to deal with these requirements. Preventive Maintenance Optimization (PMO) is one of the new techniques that has recently grown in popularity, and it is structured as follows:
  • Prepare PMO
  • Define system or equipment according to reliability requirements
  • Review existing PM
  • Screen tasks for removal
  • Optimize remaining tasks
  • Fill gaps in PM
  • Review manufacturer recommendations
  • Optimize PM work orders
  • Implement changes
  • Evaluate improvement
The implementation of PMO is a process that will allow the Observatory to increase the efficiency of its maintenance plans. The results of this new process will not be evident immediately and will be evaluated in the future.
Reliability culture at La Silla Paranal Observatory
Sergio Gonzalez
The Maintenance Department at the La Silla - Paranal Observatory has been an important base for keeping the operations of the observatory at a good level of reliability and availability. Several strategies have been implemented and improved in order to cover these requirements and keep the systems and equipment working properly when required. One of the latest improvements has been the introduction of a reliability culture, which involves much more than simply speaking about reliability concepts: it involves the use of technologies, data collection, data analysis, decision making, committees concentrating on the analysis of failure modes and how they can be eliminated, aligning the results with the requirements of our internal partners, and establishing steps to achieve success. Some of these steps have already been implemented: data collection, use of technologies, analysis of data, development of priority tools, committees dedicated to analyzing data, and people dedicated to reliability analysis. This has permitted us to optimize our processes, analyze where we can improve, avoid functional failures, and reduce failures in several systems and subsystems; all this has had a positive impact in terms of results for our Observatory. All these tools are part of the reliability culture that allows our system to operate with a high level of reliability and availability.
Data management subsystem software architecture for JWST
Daryl A. Swade
At the Space Telescope Science Institute, the Data Management Subsystem (DMS) is responsible for data reformatting from telemetry to FITS, pipeline calibration, and providing the data archive. A DMS has previously been developed for two astronomical telescopes in space, the Hubble Space Telescope and the Kepler Mission. DMS software analysis and design has begun for the James Webb Space Telescope (JWST), which is scheduled for launch in 2014 by the National Aeronautics and Space Administration. Although there will be a great deal of software reuse from the previous missions, differences in the operations concept for JWST will have implications for the DMS software system architecture. A number of the design challenges for the DMS software system architecture that result from the JWST operations concept will be considered. Event-driven operations, which mean the detailed observation schedule cannot be predicted ahead of execution time, will require extensive changes to the data flow at the beginning of DMS science data processing. A scheme for priority processing of exposures must be implemented to ensure rapid turnaround on time-critical wavefront sensing data. JWST science data product design will reflect infrared detectors utilizing up-the-ramp processing. The concept of an observation in program planning will result in a new model to associate exposures and form higher-level data products. In addition, the JWST DMS will introduce a new paradigm for reprocessing to meet data user demands and be compatible with Virtual Observatory protocols.
Queue observing at the Observatoire du Mont-Mégantic 1.6-m telescope
Étienne Artigau, Robert Lamontagne, René Doyon, et al.
Queue planning of observations and service observing are generally seen as specific to large, world-class astronomical observatories that draw proposals from a large community. One of the common grievances, justified or not, against queue planning and service observing is the fear of training a generation of astronomers without hands-on observing experience. At the Observatoire du Mont-Mégantic (OMM) 1.6-m telescope, we are developing a student-run service observing program. Queue planning and service observing are used as training tools to expose students to a variety of scientific projects and instruments beyond what they would normally use for their own research. The queue mode at the OMM specifically targets relatively shallow observations that can be completed in less than a few hours and are too short to justify a multi-night classical observing run.
Recent developments for the SINFONI pipeline
Konstantin Mirny, Andrea Modigliani, Mark J. Neeser, et al.
The SINFONI data reduction pipeline, as part of the ESO-VLT Data Flow System, includes recipes for Paranal Science Operations and for Data Flow Operations at the Garching headquarters. At Paranal, it is used for quick-look data evaluation. The pipeline is available to the science community for reprocessing data with personalised reduction strategies and parameters. The recipes are implemented with the ESO Common Pipeline Library (CPL). SINFONI is the Spectrograph for INtegral Field Observations in the Near Infrared (1.1-2.45 μm) at the ESO-VLT, and was developed and built by ESO and MPE in collaboration with NOVA. It consists of the SPIFFI (SPectrometer for Infrared Faint Field Imaging) integral field spectrograph and an adaptive optics module which allows point spread functions very close to the diffraction limit. The image slicer of SPIFFI chops the SINFONI field of view on the sky into 32 slices which are re-arranged into a pseudo-slit. The latter is then dispersed by one of four possible gratings (J, H, K, and H+K). The instrument thus produces a two-dimensional (2D) raw image that contains both spatial (along the pseudo-slit) and spectral information. The ultimate task of the SINFONI pipeline is to reconstruct this frame into a three-dimensional (3D) data cube, with the x and y axes representing the spatial, on-sky dimensions, and the corresponding spectrum of each spatial pixel along the z-axis. In the present article we describe two major improvements to the SINFONI pipeline. The first is a development to monitor instrument efficiency and stellar zero-points using telluric standard stars. The second involves the implementation of a semi-empirical algorithm to calibrate and remove the effects of atmospheric refraction, sometimes visible in the 3D cube reconstruction. The latter improves the positional offsets through the wavelength cube to an r.m.s. shift of better than 0.25 pixels.
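The slicer-to-cube step lends itself to a compact illustration. The toy function below rearranges a raw pseudo-slit frame into a (y, x, λ) cube; the layout is a simplifying assumption and the real pipeline additionally applies distortion and wavelength calibrations, so this is not the actual SINFONI detector format:

    import numpy as np

    def reconstruct_cube(raw2d, n_slices=32):
        """Toy cube reconstruction: cut the pseudo-slit frame into its
        slitlets and stack them. Detector rows carry the spectrum of
        each spatial pixel in this simplified layout."""
        ny, nx = raw2d.shape
        width = nx // n_slices
        # Each slitlet becomes one on-sky image row.
        cube = np.stack([raw2d[:, i * width:(i + 1) * width]
                         for i in range(n_slices)])
        return cube.transpose(0, 2, 1)  # -> (y slice, x, wavelength)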
The GPS water vapor monitor and thermal astronomy at Gemini South
James Radomski, Gelys Trancho, Lucas Fuhrman, et al.
We discuss the implementation and calibration of a new GPS-based water vapor monitor installed at Cerro Pachón for the Gemini Observatory in Chile. The primary goal of this system is the use of GPS signals to monitor the Precipitable Water Vapor (PWV) in the atmosphere in near-real time. This is vital for maximizing the efficiency of queue observations in the thermal infrared, in which atmospheric transmission and sensitivity are highly dependent on PWV. The GPS WV system was calibrated using near-IR spectroscopy of known water lines based on atmospheric models, and by imaging the thermal mid-IR background. Observations were conducted using the near-IR imager/spectrometer Phoenix for K, L, and M-band spectroscopy (2.2 μm, 3.5 μm, 4.5 μm) and the mid-infrared imager/spectrometer T-ReCS imaging between 8-20 μm.
DTS: the NOAO Data Transport System
The Data Transport System (DTS) provides automated, reliable, high-throughput data transfer between the telescopes, archives, and pipeline processing systems used by the NOAO centers in the Northern and Southern hemispheres. DTS is implemented using an XML-RPC architecture to eliminate the need for persistent network connections between the sites, allowing each site to provide or consume services within the network only as needed. This architecture also permits remote control and monitoring of each site, and allows for language-independent client applications (e.g. a web interface to display transfer status, or a compiled task to queue data for transport that is more tightly coupled with the acquisition system being used). The resulting system is a highly multi-threaded distributed application able to span a wide range of network environments and operational uses.
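The stateless, language-independent character of XML-RPC is easy to see from the client side. A minimal sketch using Python's standard xmlrpc.client; the endpoint and method names are invented for illustration, since the abstract does not publish the DTS API:

    import xmlrpc.client

    # Each call is a self-contained HTTP request, so no persistent
    # connection between sites is needed; any language with an
    # XML-RPC library can implement the same client.
    dts = xmlrpc.client.ServerProxy("http://dts.example.org:7000/RPC2")

    status = dts.transferStatus("ct4m")             # hypothetical method
    dts.queueForTransport("/data/ct4m/n0715.fits")  # hypothetical method
    print(status)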
The Gemini Recipe System: a dynamic workflow for automated data reduction
Kathleen Labrie, Craig Allen, Paul Hirst, et al.
Gemini's next generation data reduction software suite aims to offer greater automation of the data reduction process without compromising the flexibility required by science programs using advanced or unusual observing strategies. The Recipe System is central to our new data reduction software. Developed in Python, it facilitates near-real time processing for data quality assessment, and both on- and off-line science quality processing. The Recipe System can be run as a standalone application or as the data processing core of an automatic pipeline. The data reduction process is defined in a Recipe written in a science (as opposed to computer) oriented language, and consists of a sequence of data reduction steps, called Primitives, which are written in Python and can be launched from the PyRAF user interface by users wishing to use them interactively for more hands-on optimization of the data reduction process. The fact that the same processing Primitives can be run within both the pipeline context and interactively in a PyRAF session is an important strength of the Recipe System. The Recipe System offers dynamic flow control allowing for decisions regarding processing and calibration to be made automatically, based on the pixel and the metadata properties of the dataset at the stage in processing where the decision is being made, and the context in which the processing is being carried out. Processing history and provenance recording are provided by the AstroData middleware, which also offers header abstraction and data type recognition to facilitate the development of instrument-agnostic processing routines.
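Because both recipes and primitives are plain Python, the same primitive can run inside the pipeline or interactively. A minimal sketch of the pattern; the function and attribute names are our illustration, not the actual Recipe System API:

    class DataSet:
        """Stand-in for a dataset carrying pixels plus metadata."""
        def __init__(self, pixels=None):
            self.pixels = pixels
            self.meta = {}

    def prepare(dataset):
        # Primitive: validate headers and attach instrument metadata.
        dataset.meta["prepared"] = True
        return dataset

    def subtract_overscan(dataset):
        # Primitive: remove the bias level estimated from overscan.
        dataset.meta["overscan_subtracted"] = True
        return dataset

    # A recipe is an ordered sequence of primitives; dynamic flow
    # control means the sequence chosen can depend on the dataset's
    # metadata and the processing context.
    QUICKLOOK_RECIPE = [prepare, subtract_overscan]

    def run_recipe(recipe, dataset):
        for primitive in recipe:
            dataset = primitive(dataset)
        return dataset

    reduced = run_recipe(QUICKLOOK_RECIPE, DataSet())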
The Spitzer Bibliography Database: bibliographic statistics
Elena Scire, Ben Hiu Pan Chan, Nancy Silbermann, et al.
The Spitzer Science Center maintains a database of peer-refereed publications utilizing observations made by the Spitzer Space Telescope. Originally intended as a way to easily track these publications with limited resources, the database has grown in scope to provide more services for investigators. The design and population of the system, and some interesting insights into the use of Spitzer data, are presented.
Spitzer warm mission transition and operations
William A. Mahoney, Lisa J. Garcia, Joseph Hunt Jr., et al.
Following the successful dynamic planning and implementation of IRAC Warm Instrument Characterization activities, the transition to Spitzer Warm Mission operations has gone smoothly. Operations team procedures and processes required minimal adaptation, and the overall composition of the Mission Operations System retained the same functionality it had during the Cryogenic Mission. While warm mission scheduling has been simplified because all observations are now being made with a single instrument, several other differences have increased the complexity. The bulk of the observations executed to date have been from ten large Exploration Science programs that, combined, have more complex constraints, more observing requests, and more exoplanet observations, with durations of up to 145 hours. Communication with the observatory is also becoming more challenging, as the Spitzer DSN antenna allocations have been reduced from two tracking passes per day to a single pass, impacting both uplink and downlink activities. While IRAC is now operating with only two channels, the data collection rate is roughly 60% of the four-channel rate, leaving a somewhat higher average volume collected between the less frequent passes. Also, the maximum downlink data rate is decreasing as the distance to Spitzer increases, requiring longer passes. Nevertheless, with well over 90% of the time spent on science observations, efficiency has equaled or exceeded that achieved during the cryogenic mission.
Toward a green observatory
Ueli Weilenmann, Christian Ramírez, Pierre Vanderheyden
Many modern observatories are located at remote sites, far from larger cities and away from infrastructure such as power grids, water supplies and roads. On-site power generation in island mode is often the only way to provide electricity to an observatory. During the 2008 oil price surge, conventional power generation received special attention, and alternatives are now being studied in many organisations to keep energy costs at bay. This paper outlines the power generation at the ESO VLT/VLTI observatory at Paranal as it is now, together with a plan for a possible way out of the dependency on fossil fuels in the near future. Several alternatives, including wind energy, solar energy and heat recovery from a conventional power plant, are analysed and compared. Finally, a project is proposed to equip the VLT/VLTI with a modern alternative energy supply based on a novel concept: solar cooling.
First year of ALMA site software deployment: where everything comes together
Víctor González, Matias Mora, Rodrigo Araya, et al.
Starting in 2009, the ALMA project entered one of the most exciting phases of construction: the first antenna from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself on the front line of the project's software deployment and integration effort. Among the group's main responsibilities are the deployment, configuration and support of the observation systems, in addition to infrastructure administration, all of which must be done in close coordination with the development groups in Europe, North America and Japan. Software support has been the primary point of interaction with the current users (mainly scientists, operators and hardware engineers), as the software is normally the most visible part of the system. During this first year of work with the production hardware, three consecutive software releases have been deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at 5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the experience of this 15-person group as part of the construction team at the ALMA site, working together with the Computing IPT, covering the achievements and problems overcome during this period. It explores the excellent results of teamwork, as well as some of the troubles that such a complex and geographically distributed project can run into. Finally, it addresses the challenges still to come with the transition to the ALMA operations plan.
Software operations support at Gemini Observatory
Angelic W. Ebbers, Cristian Urrutia, Tom Cumming, et al.
Operating a modern telescope requires many software systems working together to maintain and monitor optomechanical positions, sequence and control individual instruments and wavefront sensors, and manage data transfer and quality monitoring. Supporting these complex, interconnected systems can be a daunting task, especially when a single failure can cause a cascade effect that tends to hide the original problem. At Gemini, we have several indispensable tools that allow us to track the behavior and performance of all running systems and enable us to accurately investigate and isolate any problems that occur at night. These tools include the Gemini Engineering Archive (GEA), VME console logs, operational software logging, circular buffers, and high-level tools such as the observation logs. We'll go into some detail about each of these tools and how they enable us to accurately investigate problems and performance issues. Finally, the most important ingredient for successful operations support is the dedicated people from electronics, systems, mechanical, optical and software specializations who work together, sharing their varied expertise and utilizing the tools and data collected in order to solve issues as they occur.
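Of the diagnostic tools listed, the circular buffer is the simplest to sketch: a fixed-size store that always holds the most recent messages, so the moments leading up to a night-time fault are preserved without unbounded disk growth. The few lines below are a generic illustration of the idea, not Gemini's implementation:

    # Generic circular (ring) buffer for recent log messages; not the
    # actual Gemini implementation, just the underlying idea.
    from collections import deque

    class CircularLog:
        def __init__(self, capacity=1000):
            # A deque with maxlen silently discards the oldest entry
            # once capacity is reached.
            self._buf = deque(maxlen=capacity)

        def record(self, message):
            self._buf.append(message)

        def dump(self):
            """Return the most recent messages, oldest first, e.g. for
            inspection after a fault."""
            return list(self._buf)

    log = CircularLog(capacity=3)
    for i in range(5):
        log.record(f"status update {i}")
    print(log.dump())   # the three most recent messages only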
The ESO Extremely Large Telescope dome: system engineering strategies for electrical power management
G. Marchiori, L. Giacomel, C. Manfrin
The isolated location and the electrical power demanded by EELT Dome operations pose several engineering challenges from a system point of view: power generation and distribution, the required electromagnetic compatibility between different users connected to the same network, and the management of the electrical power peaks associated with accelerating large masses. Moreover, initial costs and life-cycle costs must also be considered in the electrical network configuration and in the final trade-offs. The systemic approach is presented, together with the strategies proposed for generation, distribution and user connection, in order to optimise the configuration and ensure the required Dome performance.
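As a feel for why mass accelerations drive power peaks, the peak power needed to spin up a rotating enclosure can be estimated from its moment of inertia. All numbers below are purely illustrative assumptions, not E-ELT figures:

    # Back-of-the-envelope peak power for accelerating a rotating dome.
    # All numbers below are illustrative assumptions, not E-ELT figures.
    import math

    mass = 3.0e6                      # kg, rotating dome mass (assumed)
    radius = 40.0                     # m, effective radius (assumed)
    omega_max = math.radians(2.0)     # rad/s, top slew rate of 2 deg/s (assumed)
    t_accel = 20.0                    # s, time to reach top speed (assumed)

    inertia = 0.5 * mass * radius**2  # thin-disk approximation, kg m^2
    alpha = omega_max / t_accel       # angular acceleration, rad/s^2
    torque = inertia * alpha          # N m
    peak_power = torque * omega_max   # W, highest at the end of the ramp-up
    print(f"peak mechanical power ~ {peak_power/1e3:.0f} kW")   # ~146 kW

Even under these modest assumptions the drive demand arrives as a short burst of order a hundred kilowatts, which is exactly the kind of transient an island-mode generation network must be sized and managed for.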
A portable observatory for persistent monitoring of the night sky
We describe the design and operation of a small, transportable, robotic observatory that has been developed at Los Alamos National Laboratory. This small observatory, called RQD2 (Raptor-Q Design 2), is the prototype for nodes in a global network capable of continuous persistent monitoring of the night sky. The observatory employs five wide-field imagers that together view about 90% of the sky above 12 degrees elevation, with a sensitivity of R=10 magnitude in 10 seconds. Operating robotically, the RQD2 system acquires a nearly full-sky image every 20 seconds, taking more than 10,000 individual images per night. It also runs real-time astrometric and photometric pipelines that provide both a capability to autonomously search for bright astronomical transients and a means to monitor the variability of optical extinction across the full sky. The first RQD2 observatory began operation in March 2009 and is currently operating at the Fenton Hill site located near Los Alamos, NM. We present a detailed description of the RQD2 system and the data taken during the first several months of operation.
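The quoted image rate is easy to sanity-check: five cameras and one full-sky snapshot every 20 seconds over a long night yields roughly the stated count. The 11-hour night length below is an assumption:

    # Sanity check on the quoted nightly image count (night length assumed).
    cameras = 5
    cadence_s = 20          # one full-sky image every 20 seconds
    night_hours = 11        # assumed usable dark time

    exposures = night_hours * 3600 // cadence_s   # full-sky snapshots
    images = exposures * cameras                  # individual camera frames
    print(images)           # 9900, i.e. ~10,000 frames per night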
From Chile to Europe in minutes: handling the data stream from ESO's Paranal Observatory
Martino Romaniello, Stefano Zampieri, Cecilia Cerón, et al.
The ESO telescopes in Chile are operated in a geographically distributed scheme, in which some of the essential steps in the end-to-end observing chain take place in Europe. Most notably, the health status of the instruments, as derived from the data themselves, is monitored in Europe and the results are fed back to the observatory within the hour. The flexibility of this scheme depends strongly on the speed with which the data stream produced by the telescopes can be sent to Europe for analysis and storage. The main challenge to fast intercontinental data transfer is the data volume itself, which currently averages 25 GB/night (compressed) for the four VLT Unit Telescopes. Since late 2008, this stream has been transferred entirely through the internet via a 4.56 Mbit/s channel guaranteed by a Quality of Service policy, which has sufficed to transfer an average night of data within a few hours. A very recent enlargement of this capacity to 9.12 Mbit/s will soon allow the calibration data for VISTA, the new infrared survey telescope on Paranal, to be added to the data stream transferred through the internet. Ultimately, the average data volume produced on Paranal, once the visible-light VLT Survey Telescope (VST) and the full complement of second-generation VLT instruments become available, is expected to exceed 200 GB/night. Transferring it over the internet will require a new fiber-based infrastructure currently under construction, as well as the use of additional high-bandwidth channels. This infrastructure, provided by the European Union co-funded project EVALSO, should provide a data transfer capacity exceeding 1 Gbit/s, allowing the entire Paranal data stream, as well as that of the nearby Observatory of Cerro Armazones and of the future European Extremely Large Telescope, to be transferred to Europe with a delay of at most minutes after the data were taken.
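The bandwidth requirement can be checked with back-of-the-envelope arithmetic: for a "minutes at most" delay, what matters is that the link capacity comfortably exceeds the average data production rate, so each file can be shipped as soon as it is written. The 12-hour night below is an assumed round number; the volume is from the text:

    # Rough bandwidth check for the projected Paranal data stream.
    # Night length is an assumed round number; the volume is from the text.
    nightly_volume_gb = 200          # projected GB/night once VST et al. arrive
    night_hours = 12                 # assumed length of an observing night

    avg_rate_mbit = nightly_volume_gb * 8e3 / (night_hours * 3600.0)
    print(f"average production rate ~ {avg_rate_mbit:.0f} Mbit/s")   # ~37 Mbit/s

    # A 1 Gbit/s link exceeds the average production rate ~27-fold, so
    # each file can reach Europe within minutes of being written.
    print(f"headroom on a 1 Gbit/s link: {1000 / avg_rate_mbit:.0f}x")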
PySALT: the SALT science pipeline
Steven M. Crawford, Martin Still, Pim Schellart, et al.
PySALT is the Python/PyRAF-based data reduction and analysis pipeline for the Southern African Large Telescope (SALT), a modern 10 m class telescope with a large user community consisting of 13 partner institutions. The two first-generation instruments on SALT are SALTICAM, a wide-field imager, and the Robert Stobie Spectrograph (RSS). Along with traditional imaging and spectroscopy modes, these instruments provide a wide range of observing modes, including Fabry-Perot imaging, polarimetric observations, and high-speed observations. Given the large user community, the resources available, and the unique observational modes of SALT, the development of reduction and analysis software is key to maximizing the scientific return of the telescope. PySALT is developed in the Python/PyRAF environment and takes advantage of a large library of open-source astronomical software. The goals in the development of PySALT are to: (1) provide science-quality reductions for the major operational modes of SALT, (2) create analysis tools for the unique modes of SALT, and (3) create a framework for the archiving and distribution of SALT data. The data reduction software currently supports the reduction and analysis of regular imaging, high-speed imaging, and long-slit spectroscopy, with planned support for multi-object spectroscopy, high-speed spectroscopy, Fabry-Perot imaging, and polarimetric data sets. We describe the development and current status of PySALT and highlight its benefits through early scientific results from SALT.
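As a flavor of the kind of operation such a pipeline automates, the snippet below extracts a 1D spectrum from a simulated long-slit frame by summing an object aperture and subtracting a per-column sky estimate. It is a generic illustration, not PySALT's actual interface:

    # Generic long-slit extraction step (illustration only; not the
    # PySALT API). Rows = spatial direction, columns = dispersion.
    import numpy as np

    rng = np.random.default_rng(1)
    frame = rng.normal(100.0, 5.0, size=(50, 200))    # sky + read noise
    frame[23:27, :] += 500.0                          # fake object trace

    obj_rows = slice(23, 27)                # aperture around the trace
    sky_rows = np.r_[5:15, 35:45]           # sky regions on either side

    sky_per_pixel = np.median(frame[sky_rows, :], axis=0)      # per column
    n_aperture = frame[obj_rows, :].shape[0]
    spectrum = frame[obj_rows, :].sum(axis=0) - n_aperture * sky_per_pixel
    print(spectrum.mean())   # ~2000 counts/column, the injected signal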
Autonomous operations in extreme environments: the AMICA case
Gianluca Di Rico, Maurizio Ragni, Mauro Dolci, et al.
An autonomous observatory is being installed at Dome C in Antarctica. It will consist of the International Robotic Antarctic Infrared Telescope (IRAIT) and the Antarctic Multiband Infrared CAmera (AMICA). Because of the extreme environment, the whole system has been developed to operate robotically, with particular attention paid to monitoring the environmental conditions and subsystem activity. A detailed description of the IRAIT/AMICA data acquisition process and its management is given, focusing on the automated procedures and the solutions adopted against safety risks.
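Autonomous operation in such an environment usually reduces to a watchdog pattern: poll the housekeeping sensors, and drive the system to a safe state whenever a reading leaves its allowed band. The loop below is a generic sketch with assumed limits and sensor names, not the actual IRAIT/AMICA control code:

    # Generic environmental watchdog sketch (assumed limits and sensor
    # names; not the actual IRAIT/AMICA control software).
    import random, time

    LIMITS = {"detector_temp_K": (20.0, 40.0),   # assumed safe band
              "wind_speed_ms":  (0.0, 15.0)}

    def read_sensors():
        # Stand-in for real housekeeping telemetry.
        return {"detector_temp_K": random.uniform(25, 45),
                "wind_speed_ms":  random.uniform(0, 10)}

    def safe_shutdown(reason):
        print(f"closing down: {reason}")

    for _ in range(3):                     # in reality: while True
        values = read_sensors()
        for name, value in values.items():
            low, high = LIMITS[name]
            if not (low <= value <= high):
                safe_shutdown(f"{name} = {value:.1f} outside [{low}, {high}]")
                break
        time.sleep(0.1)                    # polling cadence (assumed)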
The X-shooter pipeline
Andrea Modigliani, Paolo Goldoni, Frédéric Royer, et al.
The X-shooter data reduction pipeline, part of the ESO-VLT Data Flow System, provides recipes for Paranal Science Operations and for Data Product and Quality Control Operations at the Garching headquarters. At Paranal, it is used for quick-look data evaluation. The pipeline recipes can be executed either with EsoRex at the command line or through the Gasgano graphical user interface, and are implemented with the ESO Common Pipeline Library (CPL). X-shooter is the first of the second generation of VLT instruments. It makes it possible to collect, in one shot, the full spectrum of a target from 300 to 2500 nm, subdivided into three arms optimised for the UVB, VIS and NIR ranges, with an efficiency between 15% and 35% (including the telescope and the atmosphere) and a spectral resolution varying between 3000 and 17,000. It allows observations in stare and offset modes, using the slit or an IFU, and observing sequences that nod the target along the slit. Data reduction can be performed either with a classical approach, determining the spectral format via 2D-polynomial transformations, or with the help of a dedicated physical model of the instrument, which gives insight into the instrument and allows a constrained solution that depends on a few physically meaningful parameters. In this paper we describe the data reduction steps necessary to fully reduce science observations in the different modes, with examples of typical calibration and observation sequences.
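The "classical" spectral-format determination mentioned above boils down to fitting low-order 2D polynomials that map, for example, wavelength and slit position onto detector coordinates, using measured arc-lamp line positions. A minimal version of such a fit, on synthetic data, looks like this (illustrative only, not the pipeline's code):

    # Minimal 2D-polynomial spectral-format fit on synthetic arc-line
    # data (illustration of the classical approach, not pipeline code).
    import numpy as np

    rng = np.random.default_rng(0)
    wave = rng.uniform(300.0, 2500.0, 200)    # line wavelengths, nm
    slit = rng.uniform(-5.0, 5.0, 200)        # slit positions, arbitrary units

    # "True" detector x-position: a smooth 2D polynomial plus noise.
    x_true = 50 + 0.8 * wave + 3.0 * slit + 1e-4 * wave**2 + 0.05 * wave * slit
    x_meas = x_true + rng.normal(0.0, 0.1, wave.size)

    # Design matrix for a degree-2 bivariate polynomial, fit by least squares.
    A = np.column_stack([np.ones_like(wave), wave, slit,
                         wave**2, wave * slit, slit**2])
    coeffs, *_ = np.linalg.lstsq(A, x_meas, rcond=None)
    residuals = x_meas - A @ coeffs
    print(f"rms residual: {residuals.std():.3f} pixels")   # ~0.1, the noise level

The physical-model alternative replaces the free polynomial coefficients with a handful of parameters (grating angles, focal lengths, and so on) that constrain the same mapping, which is why it remains robust with fewer calibration lines.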
The new FORS pipeline
C. Izzo, L. de Bilbao, J. Larsen, et al.
Over the last decade of successful science operations with the VLT at Paranal, the instrument pipelines have played a critical role in ensuring the quality control of the instruments. In the last few years, the instrument pipelines have gradually evolved into a tool suite capable of providing science-grade data products for all major modes available on each instrument. In this paper we present the major enhancements recently brought into the body of the FORS pipeline. The algorithms for wavelength and photometric calibration have been thoroughly revised and improved with innovative ideas, and the FORS instrument is now almost fully supported in all of its modes: spectroscopy, imaging and spectro-polarimetry. Furthermore, the satisfactory results obtained with the FORS pipeline have prompted synergies with other instrument pipelines: EFOSC2 at the NTT of the La Silla Observatory already shares the imaging and spectroscopic data reduction code with the FORS pipeline, and the spectroscopic part of the VIMOS pipeline is being re-engineered along the same lines.
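Photometric calibration of the kind the pipeline performs can be illustrated by the standard two-parameter fit of zero point and atmospheric extinction against airmass. The numbers below are synthetic, not FORS data:

    # Zero-point and extinction fit from standard-star measurements
    # (synthetic data; illustrates the photometric calibration step).
    import numpy as np

    rng = np.random.default_rng(2)
    airmass = rng.uniform(1.0, 2.2, 30)
    zp_true, k_true = 27.8, 0.12      # assumed zero point, extinction (mag)
    m_catalog = rng.uniform(14.0, 18.0, 30)

    # Instrumental magnitude model: m_inst = m_cat - ZP + k * airmass
    m_inst = m_catalog - zp_true + k_true * airmass + rng.normal(0, 0.02, 30)

    # Linear least squares for ZP and k.
    A = np.column_stack([-np.ones_like(airmass), airmass])
    (zp_fit, k_fit), *_ = np.linalg.lstsq(A, m_inst - m_catalog, rcond=None)
    print(f"ZP = {zp_fit:.2f} mag, k = {k_fit:.3f} mag/airmass")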
Spectroradiometric calibration of telescopes using laser illumination of flat field screens
It is standard practice at many telescopes to take a series of flat field images prior to an observation run. Typically, the flat field source consists of a screen mounted inside the telescope dome that is uniformly illuminated with a broadband light source. These flat field images are useful for characterizing the relative response of CCD pixels to light passing through the telescope optics and filters, but they carry limited spectral information and are not calibrated for absolute flux. We present the results of performing in situ spectroradiometric calibrations of a 1.2 m telescope at the Fred Lawrence Whipple Observatory, Mt. Hopkins, AZ. To perform a spectroradiometric calibration, a laser, tunable from the visible through the near infrared, was coupled into an optical fiber and used to illuminate the flat field screen in situ at the telescope facility. A NIST-traceable, calibrated photodiode was mounted on the telescope to measure the spectral flux reaching the aperture. For a particular filter, images of the screen were then captured for each laser wavelength as the wavelength was tuned over the filter bandpass. Knowledge of the incident flux then allows the relative responsivity of each CCD pixel at each wavelength to be calculated.
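The final responsivity calculation is a per-pixel division: for each laser wavelength, the screen image (in counts) is divided by the incident energy measured via the NIST-traceable photodiode. A schematic version, with made-up array shapes and values:

    # Schematic per-pixel responsivity calculation (made-up shapes and
    # values; illustrates the final step described in the abstract).
    import numpy as np

    wavelengths_nm = np.array([500.0, 550.0, 600.0])
    images = np.abs(np.random.default_rng(3).normal(
        1000.0, 30.0, size=(3, 64, 64)))             # counts per wavelength
    diode_flux = np.array([2.0e-9, 2.4e-9, 1.9e-9])  # W reaching the aperture
    exposure_s = 5.0

    # Relative responsivity: counts per joule of incident energy, per pixel.
    responsivity = images / (diode_flux[:, None, None] * exposure_s)

    # Normalizing each wavelength slice to its mean gives the relative
    # pixel-to-pixel response, i.e. a flux-calibrated flat field.
    relative = responsivity / responsivity.mean(axis=(1, 2), keepdims=True)
    print(relative.shape, relative.mean())           # (3, 64, 64) ~1.0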
Changes and improvements to the Gemini North Aircraft Avoidance Program at the Gemini North Laser Guide Star facility on Mauna Kea
Jon Archambeau, Richard Oram, Michael Sheehan
Since March 2005, Gemini North Observatory has routinely propagated a 12 W solid-state sodium laser into the night sky as part of adaptive optics imaging of dimmer portions of the celestial sphere. Gemini, along with the Keck and Subaru telescopes, has created aircraft spotting programs to meet the FAA's rules on aircraft avoidance for outdoor laser propagation. This paper reviews the Gemini North laser safety protocol for the outdoor use of lasers and the assessment of the risks considered as part of outdoor laser propagation. We show the results of Gemini's aircraft spotter program and its continuous development over the past five years. As part of a continuous improvement activity, Gemini, in conjunction with the other laser-equipped Mauna Kea observatories, Keck and Subaru, is currently testing the use of an all-sky camera (ASCAM) to monitor the night sky and shutter the laser for air traffic over the Mauna Kea summit. Use of the ASCAM is expected to increase the efficiency and accuracy of the aircraft spotting program. Gemini not only complies with, but strives to exceed, the strict FAA rules on aircraft avoidance for outdoor laser propagation. The creation and implementation of the ASCAM is reviewed in this paper.
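The shutter decision at the core of such a system can be sketched as an angular-separation test between the beam direction and a detected aircraft. The 25-degree exclusion radius below is an assumed value for illustration, not an FAA or Gemini figure:

    # Sketch of an automatic shutter decision: close the laser when a
    # detected aircraft comes within an exclusion radius of the beam.
    # The 25-degree exclusion zone is an assumed illustrative value.
    import math

    def ang_sep_deg(az1, el1, az2, el2):
        """Angular separation between two (azimuth, elevation) directions."""
        a1, e1, a2, e2 = map(math.radians, (az1, el1, az2, el2))
        cos_sep = (math.sin(e1) * math.sin(e2) +
                   math.cos(e1) * math.cos(e2) * math.cos(a1 - a2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

    EXCLUSION_DEG = 25.0                      # assumed keep-out radius
    laser_az, laser_el = 180.0, 60.0          # current beam direction

    def shutter_needed(aircraft_az, aircraft_el):
        return ang_sep_deg(laser_az, laser_el,
                           aircraft_az, aircraft_el) < EXCLUSION_DEG

    print(shutter_needed(170.0, 55.0))   # True: too close, close the shutter
    print(shutter_needed(10.0, 20.0))    # False: safely far from the beam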
High-precision photometry with WIRCam at the CFHT
Daniel Devost, Loïc Albert, Douglas Teeple, et al.
We present a new observing mode using WIRCam on the Canada-France-Hawaii Telescope (CFHT). The staring mode with WIRCam can observe a target for several hours on the same pixels of the array. This allows the photometric variations of the target to be characterized to better than 0.02%, i.e. at a signal-to-noise ratio ≥ 5000. The technical challenges encountered in implementing this mode are described, as well as a simple model to estimate the idealized performance of the observing mode. Early results are also presented and compared to the models.
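The 0.02% figure and the SNR of 5000 are two statements of the same requirement, since the fractional precision equals 1/SNR. In the photon-noise limit this sets a floor on the number of photons that must be collected:

    # Photon-noise floor implied by 0.02% photometric precision.
    precision = 0.0002                  # fractional precision from the text
    snr = 1.0 / precision               # = 5000, as quoted
    n_photons = snr**2                  # Poisson limit: SNR = sqrt(N)
    print(f"SNR = {snr:.0f}, requiring N >= {n_photons:.1e} detected photons")
    # SNR = 5000, requiring N >= 2.5e+07 detected photons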
Characterization of the mid-IR image quality at Gemini South
Dan Li, Charles M. Telesco, Frank Varosi
To help prospective observers take full advantage of the mid-IR capability of Gemini South, we characterize a key aspect of the mid-IR performance of the 8-meter telescope, namely the appearance and stability of its delivered mid-IR image profiles, with the goal of demonstrating that it can be used with a level of precision not achieved before. About 2000 images obtained with T-ReCS (a facility mid-IR camera at Gemini South) between late 2003 and early 2009 were used for our image quality analysis. All targets are flux standards recorded in one or more of the four bands Si-2 (8.74 μm), N (10.36 μm), Si-5 (11.66 μm), and Qa (18.3 μm). A non-linear least-squares fit of three profile models (Lorentzian, Gaussian, and Moffat) was performed on each image, and key parameters such as FWHM, ellipticity, position angle and Strehl ratio were measured from the fitted profile. We find that the long-time-scale image quality is quite stable in terms of profile width and ellipticity, though short-time-scale variation is evident. We also examined the correlation between image quality and a number of ambient parameters, and confirmed the dependence of the Qa-band image quality on the ambient humidity. The ellipticity of the profiles was analyzed statistically as well. The average profiles for the different filters can serve as important references when a high-quality profile reference is not available during an observation.
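As an indication of the fitting step involved, a radially symmetric Moffat profile can be fit to a stellar image with standard least-squares tools; the snippet below does this on a synthetic star (an illustration, not the authors' analysis code):

    # Least-squares Moffat fit to a synthetic stellar image
    # (illustrative; not the analysis code used in the paper).
    import numpy as np
    from scipy.optimize import curve_fit

    def moffat(coords, amp, x0, y0, alpha, beta):
        x, y = coords
        r2 = (x - x0)**2 + (y - y0)**2
        return (amp * (1.0 + r2 / alpha**2)**(-beta)).ravel()

    y, x = np.mgrid[0:41, 0:41]
    truth = (1000.0, 20.0, 20.0, 3.0, 2.5)    # amp, x0, y0, alpha, beta
    image = moffat((x, y), *truth).reshape(41, 41)
    image += np.random.default_rng(4).normal(0.0, 5.0, image.shape)

    p0 = (image.max(), 20.0, 20.0, 2.0, 2.0)  # initial guess
    popt, _ = curve_fit(moffat, (x, y), image.ravel(), p0=p0)
    amp, x0, y0, alpha, beta = popt

    # The FWHM of a Moffat profile follows from alpha and beta.
    fwhm = 2.0 * alpha * np.sqrt(2.0**(1.0 / beta) - 1.0)
    print(f"FWHM = {fwhm:.2f} px, beta = {beta:.2f}")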