The pipeline software is designed to run in two different IDL sessions: one session runs the data reduction pipeline itself, while the other runs the graphical user interfaces.
Splitting these tasks between two processes enables the GUIs to remain responsive even while long computations are running.
Exactly how you start up those two IDL sessions varies with operating system, and with whether you have installed from source or compiled code.
Mac OS and Linux
On Linux or Mac, a script is provided in pipeline/scripts that starts two xterms, each with an IDL session, and runs the appropriate command in each:
shell> gpi-pipeline
You should see two xterms appear, both launch IDL sessions, and various commands run. If in the second xterm you see a line reading “Now polling for data in such-and-such directory” at the bottom, and the GPI Status Console and Launcher windows are displayed, then the pipeline has launched successfully.
Warning
In order for the gpi-pipeline script to work, your system must be set up such that IDL can be launched from the command line by running idl. The script will not execute correctly if you use an alias to start IDL rather than having the IDL executable in your path. In that case you will probably get an error in the xterms along the lines of: 'xterm: Can't execvp idl: No such file or directory'. To check how you start IDL, run:
shell> which idl
A blank output (or an output that says 'aliased') means that idl is not in your path. To add it, choose a user-writeable directory that is in your path (you can check which directories are in your path by running echo $PATH), then create a symbolic link to the IDL executable there by running:
shell> ln -s /path/to/idl idl
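For concreteness, the steps above can be sketched as follows. All paths here are hypothetical; substitute the actual location of your IDL binary and a directory that really is on your $PATH:

```shell
# Sketch only -- paths below are examples, not the real locations.
IDL_BIN=/usr/local/itt/idl/bin/idl   # hypothetical IDL install location
LINK_DIR="$HOME/bin"                 # any user-writeable directory on $PATH

mkdir -p "$LINK_DIR"                 # make sure the directory exists
ln -sf "$IDL_BIN" "$LINK_DIR/idl"    # create (or replace) the symlink
ls -l "$LINK_DIR/idl"                # verify; 'which idl' should now find it
```

If $HOME/bin is not already in your $PATH, you would also need to add it in your shell startup file before the gpi-pipeline script can find idl.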
Note that you can always start the pipeline using two separate IDL sessions, as in the Windows instructions below. You can also edit the gpi-pipeline script to use the full path of your IDL binary.
Windows
On Windows, there is a batch script in the pipeline/scripts directory called gpi-pipeline-windows.bat. Double click it to start the GPI pipeline.
For convenience, you can create a shortcut of gpi-pipeline-windows.bat by right clicking on the file and selecting the option to create a shortcut. You can then place this on your desktop, start menu, or start screen to launch the pipeline from where it is convenient for you.
Note
Alternatively, on any OS you can use the following to start up the pipeline manually:
Start an IDL session. Run
IDL> gpi_launch_pipeline
Start a second IDL session. Run
IDL> gpi_launch_guis
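The two-session startup above can also be scripted. A rough sketch of what such a launcher might look like on Linux or Mac, assuming idl and xterm are on your PATH (the actual gpi-pipeline script may differ in its details):

```shell
# Sketch of an automated two-session launch; not the real gpi-pipeline script.
# Assumes 'idl' supports the -e flag to execute a statement at startup.
PIPELINE_CMD="idl -e gpi_launch_pipeline"   # first session: reduction pipeline
GUI_CMD="idl -e gpi_launch_guis"            # second session: the GUIs

if command -v idl >/dev/null 2>&1; then
    xterm -T "GPI Pipeline" -e $PIPELINE_CMD &
    xterm -T "GPI GUIs"     -e $GUI_CMD &
else
    echo "idl not found on PATH; add the IDL executable to your PATH first"
fi
```

Running each command in its own xterm keeps the two IDL sessions independent, which is what allows the GUIs to stay responsive during long reductions.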
The compiled binary versions of DRP applications that can be started with the IDL Virtual Machine are:
How to run a .sav file in the IDL Virtual Machine depends on your operating system.
Mac OS and Linux
Just like for the source code install, a script is provided in pipeline/scripts that launches two IDL sessions and starts the pipeline code. While the under-the-hood implementation is slightly different, the script name and effective functionality are identical:
shell> gpi-pipeline
You should see two xterms appear, both launch IDL sessions, and various commands run. If in the second xterm you see a line reading “Now polling for data in such-and-such directory” at the bottom, and the GPI Status Console and Launcher windows are displayed, then the pipeline has launched successfully.
Alternatives
You may also choose to start the IDL runtime sessions manually, as follows. The first step is to launch the IDL Virtual Machine from the command line and point it at a .sav file:
Enter the following at the UNIX command line:
shell> idl -vm=<path><filename>
where <path> is the complete path to the .sav file and <filename> is the name of the .sav file. The IDL Virtual Machine window is displayed.
Click anywhere in the IDL Virtual Machine window to close the window and run the .sav file.
You may also launch the IDL Virtual Machine and use the file selection menu to locate the .sav file to run:
Enter the following at the UNIX command line:
shell> idl -vm
The IDL Virtual Machine window is displayed.
Click anywhere in the IDL Virtual Machine window to display the file selection menu.
Locate and select the .sav file and click OK.
Windows
Windows users can drag and drop the .sav file onto the IDL Virtual Machine desktop icon, launch the IDL Virtual Machine and open the .sav file, or launch the .sav file in the IDL Virtual Machine from the command line.
To use drag and drop:
To open a .sav file from the IDL Virtual Machine icon:
To run a .sav file from the command line prompt:
Open a command line prompt. Select Run from the Start menu, and enter cmd.
Change directory (cd) to the IDL_DIR\bin\bin.platform directory, where platform is the platform-specific bin directory.
Enter the following at the command line prompt:
shell> idlrt -vm=<path><filename>
where <path> is the path to the .sav file, and <filename> is the name of the .sav file.
Mac OS
Macintosh users can also drag and drop the .sav file onto the IDL Virtual Machine desktop icon, launch the IDL Virtual Machine and open the .sav file, or launch the .sav file in the IDL Virtual Machine from the command line.
To use drag and drop:
To open a .sav file from the IDL Virtual Machine icon:
The IDL session running the pipeline should immediately begin to look for new recipes in the queue directory. A status window will be displayed on screen (see below). On startup, the pipeline will display status text that looks like:
% Compiled module: [Lots of startup messages]
[...]
01:26:22.484 Now polling and waiting for Recipe files in /Users/mperrin/data/GPI/queue/
*****************************************************
*                                                   *
*            GPI DATA REDUCTION PIPELINE            *
*                                                   *
*                    VERSION 1.0                    *
*                                                   *
*           By the GPI Data Analysis Team           *
*                                                   *
*  Perrin, Maire, Ingraham, Savransky, Doyon,       *
*  Marois, Chilcote, Draper, Fitzgerald, Greenbaum  *
*  Konopacky, Marchis, Millar-Blanchaer, Pueyo,     *
*  Ruffio, Sadakuni, Wang, Wolff, & Wiktorowicz     *
*                                                   *
*       For documentation & full credits, see       *
*      http://docs.planetimager.org/pipeline/       *
*                                                   *
*****************************************************
Now polling for Recipe files in /Users/mperrin/data/GPI/queue/ at 1 Hz
If you see the “Now polling” line at the bottom, then the pipeline has launched successfully.
The pipeline will create a status display console window (see screen shot below). This window provides progress bar indicators for ongoing actions, a summary of the most recently completed recipes, and a view of log messages. It also has a button for exiting the DRP (though you can always just press Control-C in, or quit, the IDL window instead). This is currently the only one of the graphical tools that runs in the same IDL session as the main reduction process.
Above: Snapshot of the administration console.
Several GUIs are available to select your data to be processed and to decide which processes and primitives will be applied to the data.
The gpi_launch_guis command starts the GUI Launcher window:
These are described in detail in the GPI Data Pipeline User’s Guide.
To ensure scientific reproducibility and aid in comparisons of results, the GPI data pipeline carefully logs its actions.
Log files: The GPI data pipeline writes a log of all activities to text files in the $GPI_DRP_LOG_DIR directory. A new file is created for each date, with filenames following the format gpi_drp_YYMMDD.log, where YYMMDD gives the current two-digit year, month, and day in standard Gemini fashion. Log messages are also viewable on screen in the Status Console GUI, and are printed to the console in the pipeline IDL session.
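For example, assuming $GPI_DRP_LOG_DIR is set as described above, today's log file can be located from the shell (a sketch; tail -f is just one convenient way to watch it):

```shell
# Build today's log filename in the gpi_drp_YYMMDD.log format described above.
TODAY_LOG="gpi_drp_$(date +%y%m%d).log"

# Print the full path; $GPI_DRP_LOG_DIR must be set in your environment.
echo "${GPI_DRP_LOG_DIR}/${TODAY_LOG}"

# e.g.  tail -f "${GPI_DRP_LOG_DIR}/${TODAY_LOG}"  to follow messages live
```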
FITS header history: Provenance information is also written to FITS headers of all output files, in several forms.
- A copy of the entire reduction recipe used to reduce a given file is pasted into the header, in a block of COMMENT lines. This block also includes comments giving the values of any environment variables used in that recipe. If an output file from one recipe is then used as input to a subsequent recipe, then both recipes will be recorded in the headers cumulatively.
- HISTORY lines in the FITS headers record actions as each recipe is processed, including which primitives are run and what the results are of various calculations. For each Primitive used in the recipe, a HISTORY line states the specific revision id of that primitive. HISTORY keywords also record the date and time of reduction, the computer hostname, and the username of the pipeline user.
- Some values of particular interest such as the names of calibration files used to reduce a given data set are also written as additional header keywords. For instance the keyword DRPWVCLF (DRP Wavecal File) records the name of the wavelength calibration file used when reducing a given observation.
Of particular note, the keyword QUIKLOOK = T indicates that a given file is the result of a “quicklook” quality reduction, typically in real time at the telescope. These may not have made use of optimal calibration files, are not likely to be as good as more careful re-analyses, and should generally not be used directly for publications.