Äîêóìåíò âçÿò èç êýøà ïîèñêîâîé ìàøèíû. Àäðåñ îðèãèíàëüíîãî äîêóìåíòà : http://www.atnf.csiro.au/people/Tasso.Tzioumis/psss-mod8-corr.pdf
Äàòà èçìåíåíèÿ: Tue Apr 3 19:19:41 2012
Äàòà èíäåêñèðîâàíèÿ: Fri Feb 28 02:49:24 2014
Êîäèðîâêà:

Ïîèñêîâûå ñëîâà: http astrokuban.info astrokuban
Techniques of Radio Astronomy
T. L. Wilson1
Code 7210, Naval Research Laboratory, 4555 Overlook Ave., SW, Washington DC 20375-5320 tom.wilson@nrl.navy.mil

Abstract
This chapter provides an overview of the techniques of radio astronomy. This study began in 1931 with Jansky's discovery of emission from the cosmos, but the period of rapid progress began fifteen years later. From then to the present, the wavelength range expanded from a few meters to the sub-millimeters, the angular resolution increased from degrees to finer than milli arc seconds and the receiver sensitivities have improved by large factors. Today, the technique of aperture synthesis produces images comparable to or exceeding those obtained with the best optical facilities. In addition to technical advances, the scientific discoveries made in the radio range have contributed much to opening new visions of our universe. There are numerous national radio facilities spread over the world. In the near future, a new era of truly global radio observatories will begin. This chapter contains a short history of the development of the field, details of calibration procedures, coherent/heterodyne and incoherent/bolometer receiver systems, observing methods for single apertures and interferometers, and an overview of aperture synthesis. keywords: Radio Astronomy­Coherent Receivers­Heterodyne Receivers­Incoherent Receivers­Bolometers­Polarimeters­Spectrometers­High Angluar Resolution­ Imaging­Aperture Synthesis

1 Intro duction
Following a short introduction, the basics of simple radiative transfer, propagation through the interstellar medium, polarization, receivers, antennas, interferometry and aperture synthesis are presented. References are given mostly to more recent publications, where citations to earlier work can be found; no internal reports or web sites are cited. The units follow the usage in the astronomy literature. For more details, see Thompson et al. (2001), Gurvits et al. (2005), Wilson et al. (2008), and Burke & Graham-Smith (2009).


2

T. L. Wilson

The origins of optical astronomy are lost in pre-history. In contrast radio astronomy began recently, in 1931, when K. Jansky showed that the source of excess radiation at =20.5 MHz ( =14.6 m) arose from outside the solar system. G. Reber followed up and extended Jansky's work, but the most rapid progress occurred after 1945, when the field developed quickly. The studies included broadband radio emission from the Sun, as well as emission from extended regions in our galaxy, and later other galaxies. In wavelength, the studies began at a few meters where the emission was rather intense and more easily measured (see Sullivan 2005, 2009). Later, this was expanded to include centimeter, millimeter and then sub-mm wavelengths. In Fig. 1 a plot of transmission through the atmosphere as a function of frequency and wavelength, is presented. The extreme limits of the earth-bound radio window extend roughly from a lower frequency of 10 MHz ( 30 m) = = where the ionosphere sets a limit, to a highest frequency of 1.5 THz = ( 0.2 mm), where molecular transitions of atmospheric H2 O and N2 absorb = astronomical signals. There is also a prominent atmospheric feature at 55 GHz, or 6 mm, from O2 . The limits shown in Fig. 1 are not sharp since there are variations both with altitude, geographic position and time. Reliable measurements at the shortest wavelengths require remarkable sites on earth. Measurements at wavelengths shorter than =0.2 mm require the use of high flying aircraft, balloons or satellites. The curve in Fig. 1 allows an estimate of the height above sea level needed to carry out astronomical measurements. The broadband emission mechanism that dominates at meter wavelengths has been associated with the synchrotron process. Thus although the photons have energies in the micro electron volt range, this emission is caused by highly relativistic electrons (with factors of more than 103 ) moving in microgauss fields. In the centimeter and millimeter wavelength ranges, some broadband emission is produced by the synchrotron process, but additional emission arises from free-free Bremsstrahlung from ionized gas near high mass stars and quasi-thermal broadband emission from dust grains. In the mm/sub-mm range, emission from dust grains dominates, although freefree and synchrotron emission may also contribute. Spectral lines of molecules become more prominent at mm/sub-mm wavelengths (see Rybicki & Lightman 1979, Lequeux 2004, Tielens 2005). Radio astronomy measurements are carried out at wavelengths vastly longer than those used in the optical range (see Fig. 1), so extinction of radio waves by dust is not an important effect. However, the longer wavelengths lead to lower angular resolution, , since this is proportional to /D where D is the size of the aperture (see Jenkins & White 2001). In the 1940's, the angular resolutions of radio telescopes were on scales of many arc minutes, at best. In time, interferometric techniques were applied to radio astronomy, following the method first used by Michelson. This was further developed, resulting in Aperture Synthesis, mainly by M. Ryle and associates at Cambridge University (for a history, see Kellermann & Moran 2001). Aperture


Techniques of Radio Astronomy

3

Fig. 1. A plot of transmission through the atmosphere versus wavelength, in metric units and frequency, , in Hertz. The thick curve gives the fraction of the atmosphere (left vertical axis) and the altitude (right axis) needed to reach a transmission of 0.5. The fine scale variations in the thick curve are caused by molecular transitions (see Townes & Schawlow 1975). The thin vertical line on the left ( 10MHz) marks the boundary where ionospheric effects impede astronomical measurements. The labels above indicate the types of facilities needed to measure at the frequencies and wavelengths shown. For example, from the thick curve, at =100 µm, one half of the astronomical signal penetrates to an altitude of 45 km. In contrast, at =10 cm, all of this signal is present at the earth's surface. The arrows at the bottom of the figure indicate the type of atomic or nuclear process that gives rise to the radiation at the frequencies and wavelengths shown above (from Wilson et al. 2008).


4

T. L. Wilson

synthesis has allowed imaging with angular resolutions finer than milli arc seconds with facilities such as the Very Long Baseline Array (VLBA). Ground based measurements in the sub-mm wavelength range have been made possible by the erection of facilities on extreme sites such as Mauna Kea, the South Pole and the 5 km high site of the Atacama Large Millimeter/submm Array (ALMA). Recently there has been renewed interest in high resolution imaging at meter wavelengths. This is due to the use of corrections for smearing by fluctuations in the electron content of the ionosphere and advances that facilitate imaging over wide angles (see, e.g., Venkata 2010). With time, the general trend has been toward higher sensitivity, shorter wavelength, and higher angular resolution. Improvements in angular resolution have been accompanied by improvements in receiver sensitivity. Jansky used the highest quality receiver system then available. Reber had access to excellent systems. At the longest wavelengths, emission from astronomical sources dominates. At mm/sub-mm wavelengths, the transparency of the earth's atmosphere is an important factor, adding both noise and attenuating the astronomical signal, so both lowering receiver noise and measuring from high, dry sites are important. At meter and cm wavelengths, the sky is more transparent and radio sources are weaker. The history of radio astronomy is replete with ma jor discoveries. The first was implicit in the data taken by Jansky. In this, the intensity of the extended radiation from the Milky Way exceeded that of the quiet Sun. This remarkable fact shows that radio and optical measurements sample fundamentally different phenomena. The radiation measured by Jansky was caused by the synchrotron mechanism; this interpretation was made more than 15 years later (see Rybicki & Lightman 1979). The next discovery, in the 1940's, showed that the active Sun caused disturbances seen in radar receivers. In Australia, a unique instrument was used to associate this variable emission with sunspots (see Dulk 1985, Gary & Keller 2004). Among later discoveries have been: (1) discrete cosmic radio sources, at first, supernova remnants and radio galaxies (in 1948, see Kirshner 2004), (2) the 21 cm line of atomic hydrogen (in 1951, see Sparke & Gallagher 2007, Kalberla et al. 2005), (3) Quasi Stellar Ob jects (in 1963, see Begelman & Rees 2009), (4) the Cosmic Microwave Background (in 1965, see Silk 2008), (5) Interstellar molecules (see Herbst & Dishoeck 2009) and the connection with Star Formation, later including circumstellar and protoplanetary disks (in 1968, see Stahler & Palla 2005, Reipurth et al. 2007), (6) Pulsars (in 1968, see Lyne & Graham-Smith 2006), (7) distance determinations using source proper motions determined from Very Long Baseline Interferometry (see Reid 1993) and (8) molecules in high redshift sources (see Solomon & Vanden Bout 2005). These areas of research have led to investigations such as the dynamics of galaxies, dark matter, tests of general relativity, Black Holes, the early universe and gravitational radiation (for overviews see Longair 2006, Harwit 2006). Radio as-


Techniques of Radio Astronomy

5

tronomy has been recognized by the physics community in that four Nobel Prizes (1974, 1978, 1993 and 2006) were awarded for work in this field. In chemistry, the community has been made aware of the importance of a more general chemistry involving ions and molecules (see Herbst 2001). Two Nobel Prizes for chemistry were awarded to persons actively engaged in molecular line astronomy. Over time, the trend has been away from small groups of researchers constructing special purpose instruments toward the establishment of large facilities where users propose pro jects carried out by specialized staffs. These large facilities are in the process of becoming global. Similarly, the evolution of data reduction has been toward standardized packages developed by large teams. In addition, the demands of the interpretation of astronomical phenomena have led to multi-wavelength analyses interpreted with the use of detailed models. Outside the norm are pro jects designed to measure a particular phenomenon. A prime example is the study of the cosmic microwave background (CMB) emission from the early universe. CMB data were taken with the COBE and WMAP satellites. These results showed that the CMB is is a Black Body (see Eq. 6) with a temperature of 2.73 K. Aside from a dipole moment caused by our motion, there is angular structure in the CMB at a very low level; this is being studied with the PLANCK satellite. Much effort continues to be devoted to measurements of the polarization of the CMB with ground-based experiments such as BICEP, CBI, DASI and QUIET. For details and references to other CMB experiments, see their websites. In spectroscopy, there have been extensive surveys of the 21 cm line of atomic hydrogen, H I (see Kalberla et al. 2005) and the rotational J = 1 - 0 line from the ground state of carbon monoxide (see Dame et al. 1987). These surveys have been extended to external galaxies (see Giovanelli & Haynes 1991). During the Era of Reionization (redshift z 10 to 15), the H I line is shifted to meter wavelengths. The detection of such a feature is the goal of a number of individual groups, under the name HERA (Hydrogen Epoch of Reionization Arrays). 1.1 A Selected List of Radio Astronomy Facilities There are a large number of existing facilities; a selection is listed here. General purpose instruments include the largest single dishes: the Parkes 64-m, the Robert C. Byrd Green Bank Telescope, hereafter GBT, the Effelsberg 100 meter, the 15-m James Clerk Maxwell Telescope (JCMT), the IRAM 30-m millimeter telescope and the 305-m Arecibo instrument. All of these have been in operation for a number of years. Interferometers form another category of instruments. The Expanded Very Large Array, the EVLA, is now in the test phase with shared risk observing. Other large interferometer systems are the VLBA, the Westerbork Synthesis Radio Telescope in the Netherlands, the Australia Telescope, the Giant Meter Wave Telescope in


6

T. L. Wilson

India, the MERLIN array a number of arrays at Cambridge University in the UK and the MOST facility in Australia. In the mm range, CARMA in California and Plateau de Bure in France are in full operation, as is the SubMillimeter Array of the Harvard-Smithsonian CfA and ASIAA on Mauna Kea, Hawaii. At longer wavelengths, the Low Frequency Array, LOFAR, has started the first measurements and will expand by adding stations throughout Europe. The Square Kilometer Array, the SKA, is in the planning phase as is the FASR solar facility, while the Australian SKA Precursor (ASKAP), the South African SKA precuror, (MeerKAT), the Murchison Widefield Array in Western Australia and Long Wavelength Array in New Mexico are under construction. A portion of the Allen Telescope Array, ATA, is in operation. A number of facilities are under construction, being commissioned or have recently become operational. At sub-mm wavelengths, the Herschel Satellite Observatory has been delivering data. The Five Hundred Meter Aperture Spherical Telescope, FAST, a design based on the Arecibo instrument, is being planned in China. This will be the world's largest single aperture. The Large Millimeter Telescope, LMT, a joint Mexican-US pro ject, will soon begin science operations as will the Stratospheric Far-Infrared Observatory (SOFIA) operated by NASA and the German DLR organization. Descriptions of these instruments are to be found in the internet. Finally, the most ambitious ground based astronomy pro ject to date is ALMA which will start early science operations in late 2011 (for an account of the variety of ALMA science goals, see Bachiller & Cernicharo 2008).

2 Radiative Transfer and Black Bo dy Radiation
The total flux of a source is obtained by integrating Intensity (in Watts m- Hz-1 steradian-1 ) over the total solid angle s subtended by the source S =

s

2

I (, ) cos d .

(1)

The flux density of astronomical sources is given in units of the Jansky (hereafter Jy), that is, 1 Jy = 10-26 W m-2 Hz-1 . The equation of transfer is useful in interpreting the behavior of astronomcial sources, receiver systems, the effect of the earth's atmosphere on measurements. Much of this analysis is based on a one dimensional version of the general expression as (see Lequeux 2004 or Tielens 2005): dI = - I + ds . (2)



The linear absorption coefficient and the emissivity are independent of the intensity I . From the optical depth definition d = - ds, the Kirch-


Techniques of Radio Astronomy

7

hoff relation / = B (see (Eq. 6)) and the assumption of an isothermal medium, the result is: I (s) = I (0) e-


(s)

+ B (T ) (1 - e

- (s)

)

.

(3)

For a large optical depth, that is for (0) , (Eq. 3) approaches the limit I = B ( T ) . (4)

This is case for planets and the 2.73 K CMB. From (Eq, 3), the difference between I (s) and I (0) gives I (s) = I (s) - I (0) = (B (T ) - I (0))(1 - e
-

).

(5)

this represents the result of an on-source minus off-source measurement, which is relevant for discrete sources. The spectral distribution of the radiation of a black body in thermodynamic equilibrium is given by the Planck law B (T ) = If h 1 2h 3 c2 eh /kT - 1 . (6)

k T , the Rayleigh-Jeans Law is obtained: B ( , T ) = 2 2 kT c2 . (7)

RJ

In the Rayleigh-Jeans relation, the brightness and the thermodynamic temperatures of Black Body emitters are strictly proportional (Eq. 7). This feature is useful, so the normal expression of brightness of an extended source is brightness temperature TB : TB = c2 1 2 I = I . 2k 2 2k (8)

If I is emitted by a black body and h k T then (Eq. 8) gives the thermodynamic temperature of the source, a value that is independent of . If other processes are responsible for the emission of the radiation (e.g., synchrotron, free-free or broadband dust emission), TB will depend on the frequency; however (Eq. 8) is still used. If the condition (GHz) 20.84 (T(K)) is not valid, (Eq. 8) can still be applied, but TB will differ from the thermodynamic temperature of a black body. However, corrections are simple to obtain. If (Eq. 8) is combined with (Eq. 5), the result is an expression for brightness temperature: J (T ) = c2 (B (T ) - I (0))(1 - e 2k 2
- (s)

).


8

T. L. Wilson

The expression J (T ) can be expressed as a temperature in most cases. This quantity is referred to as TR , the radiation temperature in the mm/submm range, or the brightness temperature, TB for longer wavelengths. In the Rayleigh-Jeans approximation the equation of transfer is: dTB (s) = Tbk (0) - T (s) d , (9)

where TB is the measured quantity, Tbk (s) is the background source temperature and T (s) is the temperature of the intervening medium If the medium is isothermal, the general (one dimensional) solution becomes TB = Tbk (0) e-


(s)

+ T (1 - e

- (s)

)

.

(10)

2.1 The Nyquist Theorem and Noise Temp erature This theorem relates the thermodynamic quantity temperature to the electrical quantities voltage and power. This is essential for the analysis of noise in receiver systems. The average power per unit bandwidth, P (also referred to as Power Spectral Density, PSD), produced by a resistor R is P = iv = v2 1 = v 2R 4R
2 N

,

(11)

where v (t) is the voltage that is produced by i across R, and · · · indicates a time average. The first factor 1 arises from the condition for the transfer of 2 maximum power from R over a broad range of frequencies. The second factor 1 2 2 arises from the time average of v . Then v
2 N

= 4R k T .

(12)

When inserted into (Eq. 11), the result is P = k T . (13)

(Eq. 13) can also be obtained by a reformulation of the Planck law for one dimension in the Rayleigh-Jeans limit. Thus, the available noise power of a resistor is proportional to its temperature, the noise temperature TN , independent of the value of R and of frequency. Not all circuit elements can be characterized by thermal noise. For example a microwave oscillator can deliver 1 µW, the equivalent of more than 1016 K, although the physical temperature is 300 K. This is an example of a very nonthermal process, so temperature is not a useful concept in this case.


Techniques of Radio Astronomy

9

2.2 Overview of Intensity, Flux Density and Main Beam Brightness Temp erature Temperatures in radio astronomy have given rise to some confusion. A short summary with references to later sections is given here. Power is measured by an instrument consisting of an antenna and receiver. The power input can be calibrated and expressed as Flux Density or Intensity. For very extended sources, Intensity (see (Eq. 8)) can be expressed as a temperature, the main beam brightness temperature, TMB . To obtain TMB , the measurements must be calibrated (Section 5.3) and corrected using the appropriate efficiencies (see Eq. 37 and following). For discrete sources, the combination of (Eq. 1) with (Eq. 8) gives: S = 2 k 2 TB c2 . (14)

For a source with a Gaussian spatial distribution, this relation is S = 0.0736 TB Jy arc seconds
2

mm

-2

(15)

if the flux density S and the actual (or true ) source size are known, then the true brightness temperature, TB , of the source can be determined. For Local Thermodynamic Equilibrium (LTE), TB represents the physical temperature of the source. If the apparent source size, that is, the source angular size as measured with an antenna is known, (Eq. 15) allows a calculation of TMB . For discrete sources, TMB depends on the angular resolution. If the antenna beam size (see Fig. 3 and discussion) has a Gaussian shape b , the relation of actual s and apparent size o is:
2 2 2 o = s + b .

(16)

then from (Eq. 14), the relation of T

MB 2 b

and TB is: = TB
2 s

2 TMB s +

(17)

Finally, the PSD entering the receiver (Eq. 13) is antenna temperature, TA ; this is relevant for estimating signal to noise ratios (see (Eq. 39) and (Eq. 42)). To establish temperature scales and relate received power to source parameters for filled apertures, see Section 5.3. For interferometry and Aperture Synthesis, see Section 6. 2.3 Interstellar Disp ersion and Polarization Pulsars emit radiation in a short time interval (see Lorimer & Kramer 2004, Lyne & Graham-Smith 2006). If all frequencies are emitted at the same instant, the arrival time delay of different frequencies is caused by the ionized


10

T. L. Wilson
L

Interstellar Medium (ISM). This is characterized by the quantity 0 N (l) dl, which is the column density of the electrons to a distance L. Since distances in astronomy are measured in parsecs it has become customary to express the dispersion measure as:
L

DM =
0

N cm-

3

d

l pc

(18)

The lower frequencies are delayed more in the ISM, so the relative time delay is: 1 1 (19) - 1 2 2 2 MHz MHz Since both time delay D and observing frequencies 1 < 2 can be measured with high precision, a very accurate value of DM for a given pulsar can be determined. Provided the distance to the pulsar, L, is known, a good estimate of the average electron density between observer and pulsar can be found. However since L is usually known with moderate accuracy, only approximate values for N can be obtained. Often the opposite procedure is used: From reasonable values for N , a measured DM provides information on the unknown distance L to the pulsar. Broadband linear polarization is caused by non-thermal processes (see Rybicki & Lightman 1979) including Pulsar radiation, quasi-thermal emission from aligned, non-spherical dust grains (see Hildebrand 1983) and scattering from free electrons. Faraday rotation will change the position angle of linear polarization as the radiation passes through an ionized medium; this varies as 2 , so this effect is larger for longer wavelengths. It is usual to characterize polarization by the four Stokes Parameters, which are the sum or difference of measured quantities. The total intensity of a wave is given by the parameter I . The amount and angle of linear polarization by the parameters Q and U , while the amount and sense of circular polarization is given by the parameter V . Hertz dipoles are sensitive to a single linear polarization. By rotating the dipole over an angle perpendicular to the direction of the radiation, it is possible to determine the amount and angle of linearly polarized radiation. Helical antennas or arrangements of two Hertz dipoles are sensitive to circular polarization. Generally, polarized radiation is a combination of linear and circular, and is usually less than 100% polarized, so four Stokes parameters must be specified. The definition of the sense of circular polarization in radio astronomy is the same as in Electrical Engineering but opposite to that used in the optical range; see Born & Wolf (1965) for a complete analysis of polarization, using the optical definition of circular polarization. Poincar´ e introduced a representation that permits an easy visualization of all the different states of polarization of a vector wave. See Thompson et al. (2001), Crutcher (2008), Thum et al. (2008) or Wilson et al. (2008) for more details. D = 1.34 â 10- µs
9

DM cm-2


Techniques of Radio Astronomy

11

3 Receiver Systems
3.1 Coherent and Incoherent Receivers Receivers are assumed to be linear power measuring devices, i. e. any non-linearity is a small quantity. There are two types of receivers: coherent and incoherent. Coherent receivers are those which preserve the phase of the input radiation while incoherent do not. Heterodyne (technically superheterodyne ) receivers are those which those which shift the frequency of the input but preserve phase. The most commonly used coherent receivers employ heterodyning, that is, frequency shifting (see Section 4.2.1). The most commonly used incoherent receivers are bolometers (Section 4.1); these are direct detection receivers, that is, operate at sky frequency. Both coherent and incoherent receivers add noise to the astronomical input signal; it is assumed that the noise of both the input signal and the receiver follow Gaussian distributions. The noise contribution of coherent receivers is expressed in Kelvins. Bolometer noise is characterized by the Noise Equivalent Power, or NEP, in units of Watts Hz-1/2 (see Section 3.1.1 and Section 5.3.3). NEP is the input power level which doubles the output power. More extensive discussions of receiver properties are given in Rieke (2002) or Wilson et al. (2008). To analyze the performance of a receiver, the commonly accepted model is an ideal receiver with no internal noise, but connected to two noise sources, one for the external noise (including the astronomical signal) and a second for the receiver noise. To be useful, receiver systems must increase the input power level. The noise contribution is characterized by the Noise Factor, F . If the signal-to-noise ratio at the input is expressed as (S1 /N1 ) and at the output as (S2 /N2 ), the noise factor is: S1 /N1 . (20) S2 /N2 A further step is to assume that the signal is amplified by a gain factor G but otherwise unchanged. Then S2 = G S1 and: F= F= N2 . G N1 (21)

For a direct detection system such as a bolometer, G = 1. For coherent receivers, there must be a minimum noise contribution (see Section 4.2.4), so F > 1. For coherent receivers F is expressed in temperature units as TR using the relation TR = (F - 1) · 290K . (22) 3.1.1 Receiver Calibration Heterodyne receiver noise performance is usually expressed in degrees Kelvin. In the calibration process, a power scale (the PSD) is established at the receiver input. This is measured in terms of the noise temperature. To calibrate


12

T. L. Wilson

a receiver, the noise temperature increment T at the receiver input must be related to a given measured receiver output increment z (this applies to coherent receivers which have a wide dynamic range and a total power or DC response). Usually resistive loads at two known (thermodynamic) temperatures TL and TH are used. The receiver outputs are zL and zH , while TL and TH are the resistive loads at two temperatures. The relations are: zL = (TL + TR ) G , zH = (TH + TR ) G , taking y = zH /zL . the result is: Trx = TH - TL y y-1 , (24) (23)

This is known as the y-factor ; the procedure is a hot-cold measurement. The determination of the y factor is calculated in the Rayleigh-Jeans limit. Absorbers at temperatures of TH and TL are used to produce the inputs. Often these are chosen to be at the ambient temperature (TH 293 K or 20 C) = and at the temperature of liquid nitrogen (TL 78 K or -195 C). When = receivers are installed on antennas, such hot-cold calibrations are done only infrequently. As will be discussed in Section 5.3.2, in the cm and meter range, calibration signals are provided by noise diodes; from measurements of sources with known flux densities intensity scales are established. Any atmospheric corrections are assumed to be small at these wavelengths. As will be discussed in Section 5.3.3, in the mm/sub-mm wavelength range, from measurements of an ambient load (or two loads at different temperatures), combined with measurements of emission from the atmosphere and models of the atmosphere, estimates of atmospheric transmission are made. Bolometer performance is characterized by the Noise Equivalent Power, or NEP, given in units of Watts Hz-1/2 . The expression for NEP can be related to a receiver noise temperature. For ground based bolometer systems, background noise dominates. For these, the background noise is given as TBG : NEP = 2 k TBG . (25)

here is the emissivity of the background and is the bandwidth. Typical values for ground-based mm/sub-mm bolometers are = 0.5, TBG = 300 K and = 100 GHz. For these values, NEP= 1.3 â 10-15 Watts Hz-1/2 . With the collecting area of the IRAM 30 m or the JCMT telescopes, sources in the milli-Jansky (mJy) range can be measured. Usually bolometers are A. C. coupled, that is, the output responds to differences in the input power, so hot-cold measurements are not useful for


Techniques of Radio Astronomy

13

characterizing bolometers. The response of bolometers is usually determined by measurements of sources with known flux densities, followed by measurements at, for example, elevations of 20o , 30o , 60o and 90o to determine the atmospheric transmission (see Section 5.3.4). 3.1.2 Noise Uncertainties due to Random Pro cesses The noise contributions from source, atmosphere, ground, telescope surface and receiver are always additive: Tsys = Ti (26)

From Gaussian statistics, the Root Mean Square, RMS, noise is given by the mean value divided by the square root of the number of samples. From the estimate that the number of samples is given by the product of receiver bandwidth multiplied by the integration time, the result is: Tsys TRMS = . (27)

A much more elaborate derivation is to be found in Chapter 4 of Rohlfs & Wilson (2004), while a somewhat simpler account is in Wilson et al. (2008). The calibration process in (Section 3.1.1) allows the receiver noise to be expressed in degrees Kelvin. The relation of Tsys to Trx is Tsys = TA + Trx , where TA represents the power entering the receiver; at some wavelengths TA will dominate Trx . In the mm/sub-mm range, use is made of T , the system sys noise outside the atmosphere, since the attenuation of astronomical radiation is large. This will be presented in Section 5.3.1 and following. 3.1.3 Receiver Stability Sensitive receivers are designed to achieve a low value for Trx . Since the signals received are of exceedingly low power, receivers must also provide large receiver gains, G (of order 1012 ), for sufficient output power. Thus even very small gain instabilities can dominate the thermal receiver noise. Since receiver stability considerations are of prime importance, comparison switching was necessary for early receivers (Dicke 1946). Great advances have been made in improving receiver stability since the 1960's so the need for rapid switching is lessened. In the meter and cm wavelength range, the time between reference measurements has increased. However in the mm/sub-mm range, instabilities of the atmosphere play an important role; to insure that noise decreases following (Eq. 27), the effects of atmospheric and/or receiver instabilities must be eliminated. For single dish measurements, atmospheric changes can be compensated for by rapidly differencing a measurement of the target source


14

T. L. Wilson

and a reference. Such comparison or Dicke switched measurements are necessary for ground-based observations. If a typical procedure consists of using a total power receiver to measure on-source for 1/2 of the total time, then an off-source comparison for 1/2 of the time and taking the difference of onsource minus off-source measurements, the TRMS will be a factor of 2 larger than the value given by (Eq. 27).

4 Practical Asp ects of Receivers
This section concentrates on receivers that are currently in use. For more details see Goldsmith (1988), Rieke (2002), or Wilson et al. (2008). 4.1 Bolometer Radiometers Bolometers operate by use of the effect that the resistance, R, of a material varies with the temperature. In the 1970's, the most sensitive bolometers were semiconductor devices pioneered by F. Low. This is achieved when the bolometer element is cooled to very low temperatures. When radiation is incident, the characteristics change, so this is a measure of the intensity of the incident radiation. Because this is a thermal effect, it is independent of the frequency and polarization of the radiation absorbed. Thus bolometers are intrinsically broadband devices. It is possible to mount a polarization-sensitive device before the bolometer and thereby measure the direction and degree of linear polarization. Also, it is possible to carry out spectroscopy, if frequency sensitive elements, either filters, Michelson or Fabry-Perot interferometers, are placed before the bolometer element. Since these spectrometers operate at the sky frequency, the fractional resolution ( / ) is at best 10-4 . The data from each bolometer detector element (pixel) must be read out and then amplified. For single dish (i. e. filled apertures) broadband continuum measurements at < 2 mm, multi-beam bolometers are the most common systems and such systems can have a large number of beams. A promising new development in bolometer receivers is Transition Edge Sensors referred to as TES bolometers. These superconducting devices may allow more than an order of magnitude increase in sensitivity, if the bolometer is not background limited. For bolometers used on earth-bound telescopes, the improvement with TES systems may be only 2­3 times more sensitive than the semiconductor bolometers, but TES's will allow readouts from a much larger number of pixels. A number of large bolometer arrays have produced numerous publications: (1) MAMBO2 (MAx-Planck-Millimeter Bolometer) used on the IRAM 30-m telescope at 1.3 mm, (2) SCUBA (Submillimeter Common User Bolometer Array; Holland et al. 1999) on the JCMT, (3) the LABOCA (LArge Bolometer CAmera) array on the APEX 12 meter telescope, (4) SHARC


Techniques of Radio Astronomy

15

(Sub-mm High Angular Resolution Camera) on the Caltech Sub-mm Observatory 10-m telescope and (5) MUSTANG (MUtiplexed Squid TES Array) on the GBT. SCUBA will be replaced with SCUBA-2 now being constructed at the U. K. Astronomy Technology Center, and there are plans to replace the MUSTANG array by MUSTANG-2, which is a larger TES system. 4.2 Coherent Receivers Usually, coherent receivers make use of heterodyning to shift the signal input frequency without changing other properties of the input signal; in practice, this is carried out by the use of mixers (Section 4.2.2). The heterodyne process is used in all branches of communications technology; use of heterodyning allows measurements with unlimited spectral resolution. Although heterodyne receivers have a number of components, these systems have more flexibility than bolometers. 4.2.1 Noise Contributions in Coherent Receivers The noise generated in the first element dominates the system noise. The mathematical expression is given by the Friis relation which accounts for the effect of cascaded amplifiers on the noise performance of a receiver: TS = TS1 + 1 1 1 TS2 + TS3 + . . . + G1 G1 G2 G1 G2 . . . Gn TSn
-1

.

(28)

Where G1 is the gain of the first element, and TS1 is the noise temperature of this element. For >3 mm ( < 115GHz), the best cooled first elements, High Electron Mobility Transistors (HEMTs), typically have G1 = 103 and TS1 = 50K; for <0.8 mm, the best cooled first elements, superconducting mixers, typically have G1 1, that is, a small loss, and TS1 500K. The stage following the mixer should have the lowest noise temperature and high gain. 4.2.2 Mixers Mixers have been used in heterodyne receivers since Jansky's time. At first these were metal-oxide-semiconductor or Schottky mixers. Mixers allow the signal frequency to be changed without altering the characteristics of the signal. In the mixing process, the input signal is multiplied by an intense monochromatic signal from a local oscil lator, LO. The frequency stability of the LO signal is maintained by a stabilization device in which the LO signal is compared with a stable input, in recent times, an atomic standard. These phaselock loop systems produce a pure, highly stable, monochromatic signal. The mixer can be operated in the Double Sideband (DSB) mode, in which


16

T. L. Wilson

two sky frequencies, signal and image at equal separations from the LO frequency (equal to the IF frequency) are shifted into intermediate (IF) frequency band. For spectral line measurements, usually one sideband is wanted, but the other not. DSB operation adds both noise and (usually) unwanted spectral lines; for spectral line measurements, single sideband (SSB) operation is preferred. In SSB operation, the unwanted sideband is suppressed, at the cost of more complexity. In the sub-mm wavelength ranges, DSB mixers are still commonly used as the first stage of a receiver; in the mm and cm ranges, SSB operation is now the rule. A significant improvement can be obtained if the mixer junction is operated in the superconducting mode. The noise temperatures and LO power requirements of superconducting mixers are much lower than Schottky mixers. Finally, the physical layout of such devices is simpler since the mixer is a planar device, deposited on a substrate by lithographic techniques. SIS mixers consist of a superconducting layer, a thin insulating layer and another superconducting layer (see Phillips & Woody 1982). Superconducting Hot Electron Bolometer-mixers (HEB) are heterodyne devices, in spite of the name. These mixers make use of superconducting thin films which have sub-micron sizes (see Kawamura et al. 2002). A number of multi-beam heterodyne cameras are in operation in the cm range, but only a few in the mm/sub-mm range. The first mm multi-beam system was the SEQUOIA array receiver pioneered by S. Weinreb; such devices are becoming more common. In contrast, multibeam systems that use SIS front ends are rare. Examples are a 9 beam Heterodyne Receiver Array of SIS mixers at 1.3 mm, HERA, on the IRAM 30-m millimeter telescope, HARP-B, a 16 beam SIS system in operation at the JCMT for 0.8 mm and the CHAMP+ receiver at the Max-Planck-Inst. fur Radioastronomy on the ¨ APEX 12-m telescope. 4.2.3 Square Law Detectors For heterodyne receivers the input is normally amplified (for < 115GHz), translated in frequency and then detected in a device that produces an output signal y (t) which is proportional to the square of v (t): y (t) = a v 2 (t) (29)

Once detected, phase information is lost. For interferometers, the output of each antenna is a voltage, shifted in frequency and then digitized. This output is brought to a central location for correlation. 4.2.4 The Minimum Noise in a Coherent System The ultimate limit for coherent receivers or amplifiers is obtained by an application of the Heisenberg uncertainty principle involving phase and number


Techniques of Radio Astronomy

17

Fig. 2. Receiver noise temperatures for coherent amplifier systems compared to the temperatures from the Milky Way galaxy (at long wavelengths, on left part of figure) and the atmosphere (at mm/sub-mm wavelengths on the right side). The atmospheric emission is based on a model of zenith emission for 0.4 mm of water vapor, that is, excellent weather (plot from B. Nicolic (Cambridge Univ.) using the AM program of S. Paine (Harvard-Smithsonian Center for Astrophysics)). This does not take into account the absorption of the astronomical signal. In the 1 to 26 GHz range, the two horizontal lines represent the noise temperatures of the best HEMT amplifiers, while the solid line represents the noise temperatures of maser receivers. The shaded region between 85 and 115.6 GHz is the receiver noise for the SEQUOIA array (Five College Radio Astronomy Observatory) which consists of monolithic millimeter integrated circuits (MMIC). The meaning of the other symbols is given in the upper left of the diagram (SIS's are SuperconductorInsulator-Superconductor mixers, HEB's are Hot Electron Bolometer mixers). The double sideband (DSB) mixer noise temperatures were converted to single sideband (SSB) noise temperatures by doubling the receiver noise. The ALMA mixer noise temperatures are SSB, as are the HEMT values. The line marked 10 h /kT refers to the limit described in (Eq. 30). Some data used in this diagram are taken from Rieke (2002). The figure is from Wilson et al. (2008)


18

T. L. Wilson

of photons. From this, the minimum noise of a coherent amplifier results in a receiver noise temperature of Trx (minimum) = h k . (30)

For incoherent detectors, such as bolometers, phase is not preserved, so this limit does not exist. In the mm wavelength region, this noise temperature limit is quite small; at =2.6 mm ( =115 GHz), this limit is 5.5 K. The value for the ALMA receiver in this range is about 5 to 6 times the minimum. A significant difference between radio and optical regimes is that the minimum noise in the radio range is small, so that the power from a single receiver can be amplified and then divided. For example, for the EVLA, the voltage output of all 351 antenna pairs are combined with little or no loss in the signal-to-noise ratio. Another example is given in Section 4.3.1, where a radio polarimeter can produce all four Stokes parameters from two inputs without a loss of the signal-to-noise ratio. 4.3 Back Ends: Polarimeters & Sp ectrometers The term Back End is used to specify the devices following the IF amplifiers. Many different back ends have been designed for specialized purposes such as continuum, spectral or polarization measurements. For a single dish continuum correlation receiver, the (identical) receiver input is divided, amplified in two identical systems and then the outputs are multiplied. The gain fluctuations are uncorrelated but the signals are, so the effect on the output is the same as with a Dicke switched system, but with no time spent on a reference. 4.3.1 Polarimeters A typical heterodyne dual polarization receiver consists of two identical systems, each sensitive to one of the two orthogonal polarizations, linear or circular. Both systems must be connected to the same local oscillator to insure that the phases have a definite relation. Given this arrangement, a polarimeter can provide values of all four Stokes parameters simultaneously. All Stokes parameters can also be measured using a single receiver whose input is switched from one sense of polarization to the other, but then the integration time for each polarization will be halved. 4.3.2 Sp ectrometers Spectrometers analyze the spectral information contained in the radiation field. To accomplish this, the spectrometer must be SSB and the frequency


Techniques of Radio Astronomy

19

resolution is usually very good, sometimes in the kHz range. In addition, the time stability must be high. If a resolution of is to be achieved for the spectrometer, all those parts of the system that enter critically into the frequency response have to be maintained to better than 0.1 . For an overview of the current state of spectrometers, see Baker et al. (2007). Conceptually, the simplest spectrometer is composed of a set of n adjacent filters, each with a bandwidth . Following each filter is a square-law detector and integrator. For a finer resolution, another set of n filters must be constructed. Another approach to spectral analysis is to Fourier Transform (FT) the input, v (t), to obtain v ( ) and then square v ( ) to obtain the Power Spectral Density. The maximum bandwidth is limited by the sampling rate. From (another!) Nyquist theorem, it is necessary to sample at a rate equal to twice the bandwidth. In the simplest scheme, for a bandwidth of 1 GHz, the sampling must occur at a rate of 2 GHz. After sampling and Fourier Transform, the output is squared to produce power in an FX autocorrelator. For 103 samples, each channel will have a 1 MHz resolution. For XF systems, the input v (t) is multiplied (the X ) with a delayed signal v (t - ) to obtain the autocorrelation function R( ). This is then Fourier Transformed to obtain the spectrum. For 103 samples, there will be 103 frequency channels. For an XF system the time delays are performed in a set of serial digital shift registers with a sample delayed by a time . Autocorrelation can also be carried out with the help of analog devices using a series of cable delay lines; these can provide very large bandwidths. The first XF system for astronomy was a digital autocorrelator built by S. Weinreb in 1963. The two significant advantages of digital spectrometers are: (1) flexibility and (2) a noise behavior that follows 1/ t after many hours of integration. The flexibility allows the choice of many different frequency resolutions and bandwidths or even to employ a number of different spectrometers, each with different bandwidths, simultaneously. A serious drawback of digital auto and cross correlation spectrometers had been limited bandwidths. However, advances in digital technology in recent years have allowed the construction of autocorrelation spectrometers with several 103 channels covering instantaneous bandwidths of several GHz. Autocorrelation systems are used in single antennas. The calculation of spectra makes use of the symmetric nature of the autocorrelation function, ACF, so the number of delays gives the number of spectral channels. Cross-correlators are used in interferometers and in some single dish applications. When used in an interferometer, the cross-correlation is between different inputs so will not necessarily be symmetric. Thus, the zero delay of the cross-correlator is placed in channel N/2. The number of delays, N , allows the determination of N/2 spectral intensities, and N/2 phases. The


20

T. L. Wilson

cross-correlation hardware can employ either an XF or a FX correlator. For more details about the use of cross-correlation, see Section 6. Until recently, spectrometers with bandwidths of several GHz often made use of Acoustic Optical analog techniques. The Acoustic Optical Spectrometer (AOS) makes use of the diffraction of light by ultrasonic waves: these cause periodic density variations in the crystal through which it passes. These density variations in turn cause variations in the bulk constants of the crystal, so that a plane light wave passing through this medium will be modulated by the interaction with the crystal. The modulated light is detected in a charge coupled device. Typical AOS's have an instantaneous bandwidth of 2 GHz and 2000 spectral channels. In all cases, the spectra of the individual channels of a spectrometer are expressed in terms of temperature with the relation: Ti = [(Si - Ri ) /Ri ] · Tsys (31)

where Si is the normalized spectrum of channel i for the on-source measurement and Ri is the corresponding reference for this channel. For mm/sub-mm spectra, Tsys is replaced by Tsys (corrected for atmospheric losses; see Section 5.3.3). For cross-correlators, as used in interferometers, the signals from two antennas are multiplied. In this case, the value of Tsys is the square root of the product of the system noise temperatures of the two systems.

5 Antennas
The antenna serves to focus power into the feed, a device that efficiently transfers power in the electromagnetic wave to the receiver. According to the principle of reciprocity, the properties of antennas such as beam sizes, efficiencies etc. are the same whether these are used for receiving or transmitting. Reciprocity holds in astronomy, so it is usual to interchangeably use expressions that involve either transmission or reception when discussing antenna properties. All of the following applies to the far-field radiation. 5.1 The Hertz Dip ole The total power radiated from a Hertz dipole carrying an oscillating current I at a wavelength is P= 2c 3 I l 2
2

.

(32)

For the Hertz dipole, the radiation is linearly polarized with the electric field along the direction of the dipole. The radiation pattern has a donut shape, with the cylindrically symmetric maximum perpendicular to the axis


Techniques of Radio Astronomy

21

of the dipole. Along the direction of the dipole, the radiation field is zero. To improve directivity, reflecting screens have been placed behind a dipole, and in addition, collections of dipoles, driven in phase, are used. Hertz dipole radiators have the best efficiency when the size of the dipole is 1/2 . 5.2 Filled Ap ertures This Section is a simplified description of antenna properties needed for the interpretation of astronomical measurements. For more detail, see Baars (2007). At cm and shorter wavelengths, flared waveguides ( feed horns ) or dipoles are used to convey power focussed by the antenna (i. e., electromagnetic waves in free space) to the receiver (voltage). At the longest wavelengths, dipoles are used as the antennas. Details are to be found in Love (1976) and Goldsmith (1988, 1994). 5.2.1 Angular Resolution and Efficiencies From diffraction theory (see Jenkins & White 2001), the angular resolution of a reflector of diameter D at a wavelength is =k D . (33)

where k is of order unity. This universal result gives a value for (here in radians when D and have the same units). Diffraction theory also predicts the unavoidable presence of sidelobes, i. e. secondary maxima. The sidelobes can be reduced by tapering the antenna illumination. Tapering lowers the response to very compact sources and increases the value of , i. e. widens the beam. The reciprocity concept provides a method to measure the power pattern (response pattern or Point Spread Function, PSF) using transmitters. However, the distance from a large antenna A (diameter D ) to a transmitter B (small in size) must be so large that B produces plane waves across the aperture D of antenna A, that is, so B is in the far field of A. This is the Rayleigh distance; it requires that the curvature of a wavefront emitted by B is much less than /16 across the geometric dimensions of antenna A. By definition, at the Rayleigh distance D, the curvature must be D2 /8 for an antenna of diameter D. Often, the normalized power pattern is measured: Pn (, ) = 1 P
max

P (, )

.

(34)

For larger apertures, the transmitter is usually replaced by a small diameter radio source of known flux density (see Baars et al. 1977, Ott et al. 1994).


22

T. L. Wilson

The flux densities of a few primary calibration sources are determined by measurements using horn antennas at centimeter and millimeter wavelengths. At mm/sub-mm wavelengths, it is usual to employ planets, or moons of planets, whose surface temperatures are known (see Altenhoff 1985, Sandell 1994).

Fig. 3. A polar power pattern showing the main beam, and near and far sidelobes. The weaker far sidelobes have been combined to form the stray pattern

The beam solid angle A of an antenna is given by
2

A =
4

Pn (, ) d =
00

Pn (, ) sin d d

(35)

this is measured in steradians (sr). The integration is extended over all angles, so A is the solid angle of an ideal antenna having Pn = 1 for A and Pn = 0 everywhere else. For most antennas the (normalized) power pattern has much larger values for a limited range of both and than for the remainder; the range where A is large is the main beam of the antenna; the remainder are the sidelobes or backlobes (Fig. 3). In analogy to (Eq. 35) the main beam solid angle MB is defined as MB =
main lob e

Pn (, ) d

.

(36)

The quality of a single antenna depends on how well the power pattern is concentrated in the main beam. The definition of main beam efficiency or beam efficiency, B , is:


Techniques of Radio Astronomy

23

B =

MB A

.

(37)

B is the fraction of the power is concentrated in the main beam. The main beam efficiency can be modified (within limits) for parabolic antennas by changing the illumination of the main reflector. An underilluminated antenna has a wider main beam but lower sidelobes. The angular extent of the main beam is usually described by the ful l width to half power width (FWHP), the angle between points of the main beam where the normalized power pattern falls to 1/2 of the maximum. For elliptically shaped main beams, values for widths in orthogonal directions are needed. The beamwidth, is given by (Eq. 33). If the FWHP beamwidth is well defined, the location of an isolated source is determined to the accuracy given by the FWHP divided by the S/N ratio. Thus, it is possible to determine positions to small fractions of the FWHP beamwidth, if the signal-to-noise ratio is high and noise is the only limit. If a plane wave with the power density | S | in Watts m-2 is intercepted by an antenna, a certain amount of power is extracted from this wave. This power is Pe and the fraction is: A e = Pe / | S | (38)

the effective aperture of the antenna. Ae has the dimension of m2 . Compared to the geometric aperture Ag an aperture efficiency A can be defined by: Ae = A A
g

.

(39)

If an antenna with a normalized power pattern Pn (, ) is used to receive radiation from a brightness distribution B (, ) in the sky, at the output terminals of the antenna the power per unit bandwidth (PSD), in Watts Hz-1 , P is: P =
1 2

A

e

B (, ) Pn (, ) d .

(40)

By definition, this operates in the Rayleigh-Jeans limit, so the equivalent distribution of brightness temperature can be replaced by an equivalent antenna temperature TA (Eq. 13): P = k T A . (41) This definition of antenna temperature relates the output of the antenna to the power from a matched resistor. When these two power levels are equal, then the antenna temperature is given by the temperature of the resistor. The effective aperture Ae can be replaced by the the beam solid angle A · 2 . Then (Eq. 40) becomes


24

T. L. Wilson

TA (0 , 0 ) =

TB (, )Pn ( - 0 , - 0 ) sin d d Pn (, ) d

(42)

From (Eq. 42), TA < TB in all cases. The numerator is the convolution of the brightness temperature with the beam pattern of the telescope (Fourier methods are of great value in this analysis; see Bracewell 1986). The brightness temperature Tb (, ) corresponds to the thermodynamic temperature of the radiating material only for thermal radiation in the Rayleigh-Jeans limit from an optically thick source; in all other cases TB is a convenient quantity that represents source intensity at a given frequency. The quantity TA in (Eq. 42) was obtained for an antenna in which ohmic losses and absorption in the earth's atmosphere were neglected. These losses can be corrected in the calibration process. Since TA is the quantity measured while TB is desired, (Eq. 42) must be inverted. (Eq. 42) can be solved only if TA (, ) and Pn (, ) are known exactly over the full range of angles. In practice this inversion is possible only approximately, since both TA (, ) and Pn (, ) are known only for a limited range of and values, and the measured data are affected by noise. Therefore only an approximate deconvolution can be performed. If the source distribution TB (, ) has a small extent compared to the telescope beam, the best estimate for the upper limit to the actual FWHP source size is 1/2 of the FWHP of the telescope beam. 5.2.2 Efficiencies for Compact Sources For a source small compared to the beam (Eq. 40) and (Eq. 41) give:
1 P = 2 Ae S = k T A

(43)

TA is the antenna temperature at the receiver, while TA is this quantity corrected for effect of the earth's atmosphere. In the meter and cm range TA = TA , so in the following, TA will be used: TA = S (44)

where is the sensitivity of the telescope measured in K Jy-1 . Introducing the aperture efficiency A according to (Eq. 39) we find = D 8k
2

A

.

(45)

Thus or A can be measured with the help of a calibrating source provided that the diameter D and the noise power scale in the receiving system are known. When (Eq. 44) is solved for S , the result is:


Techniques of Radio Astronomy

25

S = 3520

TA [K] . A [D/m]2

(46)

The brightness temperature is defined as the Rayleigh-Jeans temperature of an equivalent black body which will give the same power per unit area per unit frequency interval per unit solid angle as the celestial source. Both TA and TMB are defined in the Rayleigh-Jeans limit, but the brightness temperature scale has to be corrected for antenna efficiency. The conversion from source flux density to source brightness temperature for sources with sizes small compared to the telescope beam is given by (Eq. 15). For sources small compared to the beam, the antenna and main beam brightness temperatures are related by the main beam efficiency, B : B = TA . TMB (47)

This is valid for sources where sidelobe structure is not important (see the discussion after (Eq. 42)). Although a source may not have a Gaussian shape, fits of multiple Gaussians can be used to obtain an accurate representation. What remains is a calibration of the temperature scales and a correction for absorption in the earth's atmosphere. This is dealt with in Section 5.3 5.2.3 Fo ci, Blo ckage and Surface Accuracy If the size of a radio telescope is more than a few hundred wavelengths, designs are similar to those of optical telescopes. Cassegrain, Gregorian and Nasmyth systems have been used. See Fig. 4 for a sketch of these focal systems. In a Cassegrain system, a convex hyperbolic reflector is introduced into the converging beam immediately in front of the prime focus. This reflector transfers the converging rays to a secondary focus which, in most practical systems is situated close to the apex of the main dish. A Gregorian system makes use of a concave reflector with an elliptical profile. This must be positioned behind the prime focus in the diverging beam. In the Nasmyth system this secondary focus is situated in the elevation axis of the telescope by introducing another, usually flat, mirror. The advantage of a Nasmyth system is that the receiver front ends remain horizontal while when the telescope is pointed toward different elevations. This is an advantage for receivers cooled with liquid helium, which may become unstable when tipped. Cassegrain and Nasmyth foci are commonly used in the mm/sub-mm wavelength ranges. In a secondary reflector system, feed illumination beyond the edge receives radiation from the sky, which has a temperature of only a few K. For low-noise systems, this results in only a small overall system noise temperature. This is significantly less than for prime focus systems. This is quantified in the socalled G/T value , that is, the ratio of antenna gain of to system noise. Any telescope design must aim to minimize the excess noise at the receiver input



Fig. 4. The geometry of parabolic apertures: (a) Cassegrain, (b) Gregorian, (c) Nasmyth and (d) offset Cassegrain systems (from Wilson et al. 2008).

while maximizing gain. For a specific antenna, this maximization involves the design of feeds and the choice of foci. The secondary reflector and its supports block the central parts of the main dish from reflecting the incoming radiation, causing some significant differences between the actual beam pattern and that of an unobstructed antenna. Modern designs seek to minimize blockage due to the support legs and subreflector. The beam pattern differs from that of a uniformly illuminated unblocked aperture for three reasons: (1) the illumination of the reflector will not be uniform but is tapered by 10 dB, that is, a factor of 10 or more at the edge of the reflector. This is in contrast to optical telescopes, which have no taper. (2) The side-lobe level is strongly influenced by this taper: a larger taper lowers the sidelobe level. (3) The secondary reflector must be supported by three or four support legs, which will produce aperture blocking and thus affect the shape of the beam pattern. Feed leg blockage will cause deviations from circular symmetry. For altitude-azimuth telescopes these sidelobes will change position on the sky with hour angle (see Reich et al. 1978). This may be a serious defect, since these effects will be significant for maps of low intensity regions near an intense source. The sidelobe response may depend on the polarization of the incoming radiation (see Section 5.3.6). A disadvantage of on-axis systems, regardless of focus, is that they are often more susceptible to instrumental frequency baseline structure, so-called baseline ripples, across the receiver band than are prime focus systems (see Morris 1978). Part of this ripple is caused by reflections of noise from source or receiver in the antenna structure. Ripples from the receiver can be removed if the amplitude and phase are constant in time. Baseline ripples caused by the
source, sky or ground radiation are more difficult to eliminate since these will change over short times. It is known that large amounts of blockage and larger feed sizes lead to large baseline ripples. The influence of baseline ripples on measurements can be reduced to a limited extent by appropriate observing procedures. A possible solution is an off-axis system such as the GBT of the National Radio Astronomy Observatory. In contrast to the GBT, the Effelsberg 100-m has a large amount of blocking from massive feed support legs and, as a result, shows large instrumental frequency baseline ripples. These ripples might be mitigated by the use of scattering cones in the reflector. The gain of a filled aperture antenna with small scale surface irregularities cannot increase indefinitely with increasing frequency but reaches a maximum at λ_m = 4πε, where ε is the RMS surface irregularity; this gain is a factor of 2.7 below that of an error-free antenna of identical dimensions. The usual rule-of-thumb is that the irregularities should be no more than about 1/16 of the shortest wavelength used. Larger filled aperture radio telescopes are made up of panels. For these, the irregularities are of two types: (1) roughness of the individual panels, and (2) misadjustment of panels. The second irregularity gives rise to an error beam. The FWHP of the error beam is given approximately by the ratio of wavelength to panel size. In addition, if the surface material is not a perfect conductor, there will be some loss and consequently additional noise.

5.3 Single Dish Observational Techniques

5.3.1 The Earth's Atmosphere

For ground-based facilities, the amplitudes of astronomical signals have been attenuated and the phases have been altered by the earth's atmosphere. In addition to attenuation, the receiver noise is increased by atmospheric emission, the signal is refracted and there are changes in the path length. These effects may change slowly with time, but there can also be rapid changes such as scintillation and anomalous refraction. Thus propagation properties must be taken into account if the astronomical measurements are to be correctly interpreted. At meter wavelengths, these effects are caused by the ionosphere. In the mm/sub-mm range, tropospheric effects are especially important. The various constituents of the atmosphere absorb by different amounts. Because the atmosphere can be considered to be in LTE, these constituents also emit radiation. The total amount of precipitable water (usually measured in mm) is an integral along the line-of-sight to a source. Frequently, the amount of H2O is determined by measurements of the continuum emission of the atmosphere with a small dish at 225 GHz. For a set of measurements at elevations of 20°, 30°, 60° and 90°, combined with models, rather accurate values of the atmospheric opacity can be obtained. For extremely dry mm/sub-mm sites, measurements of the 183 GHz spectral line of water vapor can be used to estimate the total amount of H2O in the atmosphere. For sea level sites, the 22.235
GHz line of water vapor has been used for this purpose. The scale height H_{H2O} ≈ 2 km is considerably less than H_{air} ≈ 8 km for dry air. For this reason, sites for submillimeter radio telescopes are usually mountain sites with elevations above 3000 m. For ionospheric effects, even the highest sites on earth provide no improvement. The effect on the intensity of a radio source due to propagation through the atmosphere is given by the standard relation for radiative transfer (from (Eq. 10)):

T_B(s) = T_B(0)\, e^{-\tau_\nu(s)} + T_{atm} \left( 1 - e^{-\tau_\nu(s)} \right)    (48)

Here s is the (geometric) path length along the line-of-sight with s = 0 at the upper edge of the atmosphere and s = s_0 at the antenna, τ_ν(s) is the optical depth, T_atm is the temperature of the atmosphere and T_B(0) is the temperature of the astronomical source above the atmosphere. Both the (volume) absorption coefficient κ_ν and the gas temperature T_atm will vary with s. Introducing the mass absorption coefficient k_ν by

\kappa_\nu = k_\nu \cdot \rho    (49)

where ρ is the gas density, this variation of κ_ν can mainly be related to that of ρ as long as the gas mixture remains constant along the line-of-sight. This is a simplified relation. For more detailed calculations, a multi-layer model is needed. Models can provide corrections for average effects; fluctuations and detailed corrections needed for astronomy must be determined from real-time measurements.

5.3.2 Meter and Centimeter Calibration Procedures

Calibration involves a three step procedure: (1) the measurements must be corrected for atmospheric effects, (2) relative calibrations are made using secondary standards and (3) if needed, gain versus elevation curves for the antenna must be established. In the cm wavelength range, atmospheric effects are usually small. For steps (2) and (3) the calibration is carried out with the use of a pulsed signal injected before the receiver. This pulsed signal is added to the receiver input. The calibration signal must be stable, broadband and of reasonable size. Often noise diodes are used as pulsed broadband calibration sources. These are secondary standards that provide broadband radiation with effective temperatures > 10^5 K. With a pulsed calibration, the receiver outputs are recorded separately as: (1) receiver only, (2) receiver plus calibration and (3) repeat of this cycle. If the calibration signal has a known value and the zero point of the receiver system is measured, the receiver noise is determined (see Eq. 24). Most often the calibration value in either Jy/beam or T_MB units is determined by a continuum scan through a non-time-variable compact discrete source of known flux density. Lists of calibration sources are to be found in Baars et al. (1977), Altenhoff (1985), Ott et al. (1994) and Sandell (1994).
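Before turning to the mm/sub-mm procedures, the following Python sketch illustrates the elevation-scan ("sky dip") determination of the zenith opacity mentioned in Section 5.3.1, using the single-layer form of (Eq. 48) for a source-free sky. The opacity, atmospheric temperature and measurement noise are invented values; real reductions use multi-layer atmospheric models.

import numpy as np

# Single-layer model of the emission of a source-free sky (cf. Eq. 48 with T_B(0) = 0):
#   T_sky(El) = T_atm * (1 - exp(-tau0 / sin(El)))
def t_sky(elev_deg, tau0, t_atm):
    airmass = 1.0 / np.sin(np.radians(elev_deg))
    return t_atm * (1.0 - np.exp(-tau0 * airmass))

# Simulated sky-dip data at the elevations quoted in Section 5.3.1.
# The zenith opacity, T_atm and the 1 K measurement noise are assumed values.
rng = np.random.default_rng(1)
elev = np.array([20.0, 30.0, 60.0, 90.0])
tau_true, t_atm = 0.15, 260.0
t_meas = t_sky(elev, tau_true, t_atm) + rng.normal(0.0, 1.0, elev.size)

# Recover the zenith opacity with a brute-force least-squares search.
tau_grid = np.linspace(0.01, 1.0, 2000)
chi2 = [np.sum((t_meas - t_sky(elev, t, t_atm))**2) for t in tau_grid]
tau_fit = tau_grid[int(np.argmin(chi2))]
print("zenith opacity: true %.3f, fitted %.3f" % (tau_true, tau_fit))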



5.3.3 Millimeter and Sub-mm Calibration Procedures

In the mm/sub-mm wavelength range, the atmosphere has a larger influence and can change on timescales of seconds, so more complex corrections are needed. Also, large telescopes may operate close to the limits caused by their surface accuracy, so that the power received in the error beam may be comparable to that received in the main beam. In addition, many sources such as molecular clouds are rather extended. Thus, relevant values of telescope efficiencies must be used (see Downes 1989). The calibration procedure used in the mm/sub-mm range is referred to as the chopper wheel method (Penzias & Burrus 1973). This consists of two steps: (1) the measurement of the receiver noise (the method is very similar to that in Section 3.1.1), and (2) the measurement of the receiver response when directed toward cold sky at a certain elevation. In the following it is assumed that the receiver is operated in the SSB mode. For (1), the output of the receiver while measuring an ambient load, T_amb, is denoted by V_amb:

V_{amb} = G\, (T_{amb} + T_{rx})    (50)

where G is the system gain. This is sometimes repeated with a second load at a different temperature. The result is a determination of the receiver noise as in Section 3.1.1. For step (2), the load is removed; then the output refers to noise from a source-free sky (T_sky), ground (T_gr = T_amb) and receiver:

V_{sky} = G \left[ F_{eff}\, T_{sky} + (1 - F_{eff})\, T_{gr} + T_{rx} \right]    (51)

where F_eff is the forward efficiency. This is the fraction of power in the forward beam of the feed. This can be interpreted as the response to a source with the angular size of the Moon (it is assumed that F_eff is appropriate for an extended molecular cloud). Taking the difference between V_amb and V_sky:

\Delta V_{cal} = V_{amb} - V_{sky} = G\, F_{eff}\, T_{amb}\, e^{-\tau_\nu}    (52)

where τ_ν is the atmospheric absorption at the frequency of interest. If it is assumed that T_sky(s) = T_atm (1 - e^{-τ_ν}) describes the emission of the atmosphere and, as in (Eq. 48), τ_ν is the same for emission and absorption, emission measurements can provide the value of τ_ν. If T_atm = T_amb, the correction is simplified. For more complex situations, models of the atmosphere are needed (see e.g., Pardo et al. 2009). Once τ_ν is known, the signal from the radio source, T_A', after passing through the earth's atmosphere, is

\Delta V_{sig} = G\, T_A'\, e^{-\tau_\nu}

or

T_A' = \frac{\Delta V_{sig}}{\Delta V_{cal}}\, F_{eff}\, T_{amb}

where T_A' is the antenna temperature of the source outside the earth's atmosphere. We define

T_A^* = \frac{T_A'}{F_{eff}} = \frac{\Delta V_{sig}}{\Delta V_{cal}}\, T_{amb}    (53)
The right side involves only measured quantities. T_A^* is commonly referred to as the corrected antenna temperature, but it is really a forward beam brightness temperature. An analogous temperature is T_sys^*, the system noise corrected for all atmospheric effects:

T_{sys}^* = \frac{T_{rx} + T_{sky}}{F_{eff}}\, e^{\tau_\nu}    (54)

This result is used to determine continuum or line temperature scales (Eq. 31). A typical set of values at λ = 3 mm is: T_rx = 40 K, T_sky = 50 K, τ_ν = 0.3. Using these, with F_eff = 0.9, the result is T_sys^* = 135 K. For sources ≲ 30'', there is an additional correction for the telescope beam efficiency, which is commonly referred to as B_eff. Then

T_{MB} = \frac{F_{eff}}{B_{eff}}\, T_A^*

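The following Python fragment checks these numbers and shows how (Eq. 53) and (Eq. 54) are applied in practice; the ambient temperature of 290 K and the voltage ratio ΔV_sig/ΔV_cal are assumed values used only for illustration.

import math

def t_sys_star(t_rx, t_sky, f_eff, tau):
    """System temperature corrected for the atmosphere and forward efficiency (Eq. 54)."""
    return (t_rx + t_sky) / f_eff * math.exp(tau)

def t_a_star(v_sig, v_cal, t_amb):
    """Corrected antenna temperature from the chopper wheel method (Eq. 53)."""
    return (v_sig / v_cal) * t_amb

t_rx, t_sky, tau, f_eff, b_eff = 40.0, 50.0, 0.3, 0.9, 0.6
t_amb = 290.0                         # assumed ambient (chopper) temperature
print("T_sys* = %.0f K" % t_sys_star(t_rx, t_sky, f_eff, tau))   # about 135 K, as quoted

ratio = 0.01                          # assumed Delta V_sig / Delta V_cal
ta = t_a_star(ratio, 1.0, t_amb)
print("T_A*   = %.2f K" % ta)
print("T_MB   = %.2f K for an extended source" % (f_eff / b_eff * ta))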
Typical values of F_eff are ≈ 0.9, and at the shortest wavelengths used for a given telescope, B_eff ≈ 0.6. In general, for extended sources, the brightness temperature corrected for absorption by the earth's atmosphere, T_A^*, should be used.

5.3.4 Bolometer Calibrations

Most bolometers are A.C. coupled (i.e. respond to differences), so the D.C. response (i.e. the response to total power) used in hot-cold or chopper wheel calibration methods cannot be used. Instead, astronomical data are calibrated in two steps: (1) measurements of atmospheric emission at a number of elevations to determine the opacities at the azimuth of the target source, and (2) the measurement of the response of a nearby source with a known flux density; immediately after this, a measurement of the target source is carried out.

5.3.5 Continuum Observing Strategies

1) Position Switching and Wobbler Switching. Switching against a load or absorber is used only in exceptional circumstances, such as studies of the 2.73 K cosmic microwave background. For the CMB, Penzias & Wilson
(1965) used a helium cooled load with a precisely known temperature. For compact regions, compensation of transmission variations of the atmosphere is possible if double beam systems can be used. At higher frequencies, in the mm/sub-mm range, rapid movement of the telescope beam (by small movements of the sub-reflector or a mirror in the path from antenna to receiver) over small angles is referred to as beam switching, wobbling or wobbler switching. This is used to produce two beams on the sky for a single pixel receiver. The individual telescope beams should be spaced by a distance of 3 FWHP beam widths.

2) Mapping of Extended Regions and On the Fly Mapping. Multibeam bolometer systems are preferred for continuum measurements at frequencies > 100 GHz. Usually, a wobbler system is needed for such arrays. With these, it is possible to measure a fairly large region and to better cancel sky noise due to weather. Some details of more recent data taking and reduction methods are given in, e.g., Johnstone et al. (2000) or Motte et al. (2006). If extended areas are to be mapped, scans are made along one direction (e.g., Azimuth or Right Ascension). Then the antenna is offset in the orthogonal direction by 1/2 to 1/3 of a beamwidth, and the scanning is repeated until the region is completely mapped. This is referred to as a raster scan. There should be reference positions free of sources at the beginning and the end of each scan to allow the determination of zero levels, and calibrations should be made before the scans are begun. For more secure results, the map is then repeated by scanning in the orthogonal direction (e.g., Elevation or Declination). Then both sets of results are placed on a common grid and averaged; this is referred to as basket weaving. Extended emission regions can also be mapped using a double beam system, with the receiver input periodically switched between the first and second beam. In this procedure, there is some suppression of very extended emission. A summation of the beam switched data along the scan direction has been used to reconstruct infrared images. More sophisticated schemes can recover most, but not all, of the information (Emerson et al. 1979; EKH). Most mm/sub-mm antennas employ wobbler switching in azimuth to cancel ground radiation. By measuring a source using scans in azimuth at different hour angles, then transforming the positions to an astronomical coordinate frame and combining the maps, it is possible to reduce the effect of sidelobes caused by feed legs and suppress sky noise (Johnstone et al. 2000).

5.3.6 Additional Requirements for Spectral Line Observations

In addition to the requirements placed on continuum receivers, there are three additional requirements for spectral line receiver systems. If the observed frequency of a line is compared to the known rest frequency, the relative radial velocity of the source and the receiving system can be determined. But this velocity contains the motion of the source as well
as that of the receiving system, so the velocity measurements are referred to some standard of rest. This velocity can be separated into several independent components: (1) the rotation of the Earth, with a maximum velocity v = 0.46 km s^{-1}, and (2) the motion of the center of the Earth relative to the barycenter of the Solar System; data corrected for this are said to be reduced to the heliocentric system. Correction algorithms are available for the motion of the earth relative to the center of mass of the solar system. The standard solar motion is the motion relative to the mode of the velocity of the stars in the solar neighborhood. Data where the standard solar motion has been taken into account are said to refer to the local standard of rest (LSR). Most extragalactic spectral line data do not include the LSR correction but are referred to the heliocentric velocity. For high redshift sources, special relativity corrections must be included. For larger bandwidths, there is an instrumental spectrum and a baseline must be subtracted from the (on-off)/off spectrum. Often a linear fit to the spectrum is sufficient, but if curvature is present, polynomials of second or higher order must be subtracted. At high galactic latitudes, the more intense 21 cm line radiation from the galactic plane can give rise to artifacts in spectra from scattering of radiation within the antenna (see Kalberla et al. 2010). This is apparently less of a problem in surveys of galactic carbon monoxide (see Dame et al. 1987).

5.3.7 Spectral Line Observing Strategies

Astronomical radiation is often only a small fraction of the total power received. To avoid stability problems, the signal of interest must be compared with another that contains approximately the same total power and differs only in that it contains no source. The receiver must be stable so that any gain or bandpass changes occur over time scales long compared to the time needed for a position change. To detect an astronomical source, three observing modes are used to produce a suitable comparison.

1) Position Switching and Wobbler Switching. The signal on source is compared with a measurement obtained at a nearby position in the sky. For spectral lines, there must be little line radiation at the comparison region. This is referred to as the total power observing mode. A variant of this method is wobbler switching. This is very useful for compact sources, especially in the mm/sub-mm range (a minimal numerical sketch of this mode is given after this list).

2) On the Fly Mapping. This very important observing method is an extension of method (1). In this procedure, spectral line data are taken at a rate of perhaps one spectrum or more per second.

3) Frequency Switching. For many sources, spectral line radiation at ν_0 is restricted to a narrow band, that is, present only over a small frequency interval, Δν, for example Δν/ν_0 ≈ 10^{-5}. If all other effects vary very little over Δν, changing the frequency of a receiver on a short time scale by up to 10 Δν
produces a comparison signal with the line well shifted. The line is measured all of the time, so this is an efficient observing mode.
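The sketch below is a minimal numerical illustration of the total power (position switching) mode described in item (1): the calibrated spectrum is formed as T_A = T_sys (on - off)/off and a low-order baseline is removed. The synthetic bandpass shape, system temperature, line parameters and noise level are all invented for the example.

import numpy as np

rng = np.random.default_rng(2)
nchan = 512
chan = np.arange(nchan)

# Invented instrumental bandpass, system temperature and line profile.
bandpass = 1.0 + 0.2 * np.sin(chan / 60.0)
t_sys = 150.0                                          # K
line = 2.0 * np.exp(-0.5 * ((chan - 256) / 8.0)**2)    # K, Gaussian line

on = bandpass * (1.0 + line / t_sys) + rng.normal(0.0, 0.001, nchan)   # ON position
off = bandpass + rng.normal(0.0, 0.001, nchan)                         # OFF (reference)

# Calibrated spectrum: T_A = T_sys * (on - off) / off
t_a = t_sys * (on - off) / off

# Subtract a linear baseline fitted to the line-free channels.
mask = (chan < 200) | (chan > 312)
coeff = np.polyfit(chan[mask], t_a[mask], 1)
t_a = t_a - np.polyval(coeff, chan)

print("peak of the calibrated line: %.2f K (2.0 K was injected)" % t_a.max())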

6 Interferometers and Aperture Synthesis
From diffraction theory, the angular resolution is given by (Eq. 33). However, as shown by Michelson (see Jenkins & White 2001), a much higher resolving power can be obtained by coherently combining the output of two reflectors of diameter d ≪ B separated by a distance B, yielding an angular resolution λ/B. In the radio/mm/sub-mm range, from (Eq. 30), the outputs can be amplified without seriously degrading the signal-to-noise ratio. This amplified signal can be divided and used to produce a large number of cross-correlations. Aperture synthesis is a further development. This is the procedure to produce high quality images of sources by combining a number of measurements for different antenna spacings up to the maximum B. The longest spacing gives the angular resolution of an equivalent large aperture. This has become the method to obtain high quality, high angular resolution images. The first practical demonstration of aperture synthesis in radio astronomy was made by M. Ryle and his associates (see Section 3 in Kellermann & Moran 2001). Aperture synthesis allows the reproduction of the imaging properties of a large aperture by sampling the radiation field at individual positions within the aperture. Using this approach, a remarkable improvement of radio astronomical imaging was made possible. More detailed accounts are to be found in Taylor et al. (1999), Thompson et al. (2001) or Dutrey (2001). The simplest case is a two element system in which electromagnetic waves are received by two antennas. These induce the voltage V_1 at A_1:

V_1 \propto E\, e^{i \omega t}    (55)

while at A_2:

V_2 \propto E\, e^{i \omega (t - \tau)}    (56)

where E is the amplitude of the incoming electromagnetic plane wave and τ is the geometric delay caused by the relative orientation of the interferometer baseline B and the direction of the wave propagation. For simplicity, receiver noise and instrumental phase were neglected in (Eq. 55) and (Eq. 56). The outputs will be correlated. Today all radio interferometers use direct correlation followed by an integrator. The output is proportional to:

R(\tau) \propto \frac{E^2}{T} \int_0^T e^{i \omega t}\, e^{-i \omega (t - \tau)}\, \mathrm{d}t .

If T is a time much longer than the time of a single full oscillation, i.e., T ≫ 2π/ω, then the average over time T will not differ much from the average over a single full period, resulting in



Fig. 5. A schematic diagram of a two element correlation interferometer. The antenna output voltages are V_1 and V_2; the instrumental delay is τ_i and the geometric delay is τ_g. s is the direction to the source. Perpendicular to s is the projection of the baseline B. The signal is digitized after conversion to an intermediate frequency. Time delays are introduced using digital shift registers (from Wilson et al. 2008).

R(\tau) \propto \frac{1}{2} E^2\, e^{i \omega \tau}    (57)
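A quick numerical check of (Eq. 57), using real-valued voltages E cos(ωt) and E cos(ω(t - τ)) whose time-averaged product is the real part of (1/2) E² e^{iωτ}; the frequency, delay and averaging time are arbitrary choices.

import numpy as np

E, freq, tau = 1.0, 1.0e9, 0.3e-9            # amplitude, 1 GHz, 0.3 ns delay (arbitrary)
omega = 2.0 * np.pi * freq

t = np.linspace(0.0, 1.0e-6, 2_000_001)      # average over about 1000 periods
v1 = E * np.cos(omega * t)                   # cf. (Eq. 55), real part
v2 = E * np.cos(omega * (t - tau))           # cf. (Eq. 56), real part

r_num = np.mean(v1 * v2)                     # correlator followed by an integrator
r_ana = 0.5 * E**2 * np.cos(omega * tau)     # real part of (Eq. 57)
print("numerical %.6f   analytic %.6f" % (r_num, r_ana))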

The output of the correlator + integrator varies periodically with τ, the delay. Since s is slowly changing due to the rotation of the earth, τ will vary, producing interference fringes as a function of time. The basic components of a two element system are shown in Fig. 5. If the radio brightness distribution is given by I_ν(s), the power received per bandwidth dν from the source element dΩ is A(s) I_ν(s) dΩ dν, where A(s) is the effective collecting area in the direction s; the same A(s) is assumed for each of the antennas. The amplifiers are assumed to have constant gain and phase factors (neglected here for simplicity). The output of the correlator for radiation from the direction s (Fig. 5) is

r_{12} = A(s)\, I_\nu(s)\, e^{i \omega \tau}\, \mathrm{d}\Omega\, \mathrm{d}\nu    (58)

where τ is the difference between the geometrical delay τ_g and the instrumental delay τ_i. If B is the baseline vector between the two antennas,

\tau = \tau_g - \tau_i = \frac{1}{c}\, B \cdot s - \tau_i    (59)

and the total response is obtained by integrating over the source Ω_S:

R(B) = \int\!\!\int A(s)\, I_\nu(s)\, e^{\,2 \pi i \nu \left( \frac{1}{c} B \cdot s - \tau_i \right)}\, \mathrm{d}\Omega\, \mathrm{d}\nu    (60)



The function R(B), the Visibility Function, is closely related to the mutual coherence function (see Born & Wolf 1965, Thompson et al. 2001, Wilson et al. 2008) of the source. For parabolic antennas, it is usually assumed that A(s) = 0 outside the main beam area, so that (Eq. 60) is integrated only over this region. A one dimensional version of (Eq. 60), for a baseline B, frequency ν = ν_0 and instrumental time delay τ_i = 0, is

R(B) = \int A(\theta)\, I_\nu(\theta)\, e^{\,2 \pi i \nu_0 \left( \frac{1}{c} B \theta \right)}\, \mathrm{d}\theta    (61)

With θ = x and B_x/λ = u, this is

R(B) = \int A(x)\, I_\nu(x)\, e^{\,2 \pi i u x}\, \mathrm{d}x    (62)

This form of (Eq. 60) illustrates more clearly the Fourier Transform relation of u and x. This simplified version will be used to provide illustrations of interferometer responses (see Section 6.2). In two dimensions, (Eq. 60) takes on a similar form with the additional variables y and B_y/λ = v. The image can be obtained from the inverse Fourier transform of Visibilities; see (Eq. 65).

6.1 Calibration

Amplitude and phase must be calibrated for all interferometer measurements. In addition, the instrumental passband must be calibrated for spectral line measurements. The amplitude scale is calibrated by a determination of the system noise at each antenna using methods presented for single dish measurements (see Section 5.3.2 and following). In the centimeter range, the atmosphere plays a small role, while in the mm and sub-mm wavelength ranges the atmospheric effects must be accounted for. For phase measurements, a suitable point-like source with an accurately known position is required to determine the instrumental phase term 2πν τ_i in (Eq. 60). For interferometers, the best calibration sources are usually unresolved or point-like sources. Most often these are extragalactic time variable sources. To calibrate the response in units of flux density or brightness temperature, these amplitude measurements must be referenced to primary calibrators (see a list of non-variable sources of known flux densities in Ott et al. 1994 or Sandell 1994). The calibration of the instrumental passband is carried out by a longer integration on an intense source to determine the channel-to-channel gains and offsets. The amplitude, phase and passband calibrations are carried out before the source measurements. The passband calibration is usually carried out every few hours or once per observing session. The amplitude and phase calibrations are made more often; the time between such calibrations depends on the stability of the electronics and weather. If weather conditions require frequent measurements of calibrators (perhaps less than once per minute for fast switching), integration time is reduced. In case of even more rapid
weather changes, the ALMA project will make use of water vapor radiometers mounted on each antenna (see Section 5.3.1). These will be used to determine the total amount of H2O vapor above each antenna, and use this to make corrections to phase.

6.2 Responses of Interferometers

6.2.1 Time Delays and Bandwidth

The instrumental response is reduced if the bandwidth at the correlator is large compared to the delay caused by the separation of the antennas. For large bandwidths, the loss of correlation can be minimized by adjusting the phase delay so that the difference of arrival time between antennas is negligible. In practice, this is done by inserting a delay between the antennas so that (1/c) B · s equals τ_i. This is equivalent to centering the response on the central, or white light, fringe. Similarly, the reduction of the response caused by finite bandwidth can be estimated by an integration of (Eq. 60) over frequency, taking A(s) and I_ν(s) as constants. The result is a factor sin(π Δν τ)/(π Δν τ) in (Eq. 60). This will reduce the interferometer response if Δν τ ≈ 1. For typical bandwidths of 100 MHz, the offset from the zero delay must be ≪ 10^{-8} s. This adjustment of delays is referred to as fringe stopping. The exponent in (Eq. 60) has both sine and cosine components, but digital cross-correlators record both components, so that the entire response can be recovered.

6.2.2 Beam Narrowing

The white light fringe, that is, the delay compensation, must be set with a high accuracy to prevent a reduction in the interferometer response. For a finite primary antenna beamwidth, θ_b, this cannot be the case over the entire beam. For a bandwidth Δν there will be a phase difference. Converting the wavelengths to frequencies and using sin θ ≈ θ, the result is

\Delta\phi = 2 \pi\, \frac{\Delta\nu}{\nu}\, \frac{\theta_{offset}}{\theta_b}    (63)
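The following sketch, with invented numbers, illustrates the two bandwidth effects discussed in this section: the sin(πΔντ)/(πΔντ) loss of correlation for a residual delay error, and the phase error of (Eq. 63) for a source offset from the delay tracking center.

import numpy as np

def bandwidth_loss(delta_nu, tau_err):
    """Fractional response sin(pi x)/(pi x) with x = delta_nu * tau_err."""
    return np.sinc(delta_nu * tau_err)       # numpy's sinc already includes the factor pi

def beam_narrowing_phase(delta_nu, nu, theta_offset, theta_b):
    """Phase error of (Eq. 63), in radians, for an offset within the primary beam."""
    return 2.0 * np.pi * (delta_nu / nu) * (theta_offset / theta_b)

delta_nu, nu = 100.0e6, 100.0e9              # assumed 100 MHz band at 100 GHz
print("correlation for a 1 ns delay error : %.3f" % bandwidth_loss(delta_nu, 1.0e-9))
print("correlation for a 10 ns delay error: %.3f" % bandwidth_loss(delta_nu, 1.0e-8))
print("phase error half-way out in the primary beam: %.1e rad"
      % beam_narrowing_phase(delta_nu, nu, 0.5, 1.0))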

This effect can be important for continuum measurements made with large bandwidths, but can be reduced if the cross correlation is carried out using a series of narrow contiguous IF sections. For each of these IF sections, an extra delay is introduced to center the response at the value which is appropriate for that wavelength before correlation.

6.2.3 Source Size

From an idealized source of shape I(θ) = I_0 for θ < θ_0 and I(θ) = 0 for θ > θ_0, we take the primary beam size of each antenna to be much larger, and define the fringe width for a baseline B ≫ b to be θ_b. The result is



R(B) = A\, I_0\, \theta_0\, e^{\,i \pi \theta_0 / \theta_b} \left[ \frac{\sin( \pi \theta_0 / \theta_b )}{( \pi \theta_0 / \theta_b )} \right]    (64)

The first terms are normalization and phase factors. The important term is in brackets. If θ_0 ≫ θ_b, the interferometer response is reduced. This is sometimes referred to as the problem of 'missing short spacings'. To correct for the loss of source flux density, the interferometer data must be supplemented by single dish measurements. The diameter of the single dish antenna should be larger than the shortest interferometer spacing. This single dish image must extend to the FWHP of the smallest of the interferometer antennas. When Fourier transformed and appropriately combined with the interferometer response, the resulting data set has no missing flux density.

6.3 Aperture Synthesis

To produce an image, the integral equation (Eq. 60) must be inverted. A number of approximations may have to be applied to produce high quality images. In addition, the data are affected by noise. The most important steps of this development will be presented. For imaging over a limited region of the sky, rectangular coordinates are adequate, so relation (Eq. 60) can be rewritten with coordinates (x, y) in the image plane and coordinates (u, v) in the Fourier plane. The coordinate w, corresponding to the difference in height, is set to zero. Then the relevant relation is:


I'(x, y) = A(x, y)\, I(x, y) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} V(u, v, 0)\, e^{-2 \pi i (u x + v y)}\, \mathrm{d}u\, \mathrm{d}v    (65)

where I'(x, y) is the intensity I(x, y) modified by the primary beam shape A(x, y). It is easy to correct I'(x, y) by dividing by A(x, y). Usually data beyond the half power point are excluded. The most important definitions are: (1) Dynamic Range: the ratio of the maximum to the minimum intensity in an image. In images made with an interferometer array, it is assumed that corrections for primary beam taper have been applied. If the minimum intensity is determined by the random noise in an image, the dynamic range is defined by the signal-to-noise ratio of the brightest feature in the image. The dynamic range is an indication of the ability to recognize low intensity features in the presence of intense features. If the minimum noise is determined by artifacts, i.e., noise in excess of the theoretical value, image improvement techniques should be applied. (2) Image Fidelity: this is defined by the agreement between the measured results and the actual ('true') source structure. A quantitative assessment of fidelity is:



F = |S - R| / S

where F is the fidelity, R is the resulting image obtained from the measurement, and S is the actual source structure. The highest fidelity is F = 0. Usually errors can only be estimated using a priori knowledge of the correct source structure. In many cases, S is a source model, while R is obtained by processing S with a model of the instrumental response. This relation can only be applied when the value of R is more than 5 times the RMS noise.

6.3.1 Interferometric Observations

Usually measurements are carried out in one of four ways.

1. Measurements of a single target source. This is similar to the case of single telescope position switching. Two significant differences with single dish measurements are that the interferometer measurement may have to extend over a wide range of hour angles to provide a better coverage of the (u, v) or Fourier plane, and that the instrumental phase must be determined also. After the measurement of a calibration source or reference source, which has a known position and size, the effect of instrumental phases in the instrument and atmosphere is removed and a calibration of the amplitudes of the source is made. Target sources and calibrators are usually observed alternately; the calibrator should be close to the target source. The time variations caused by instrumental and weather effects must be slower than the time between measurements of source and calibrator. If, as is the case for mm/sub-mm wavelength measurements, weather is an important influence, target and calibration source must be measured often. For ALMA (see Fig. 6), observing will follow a two part scheme. For fast switching there will be integrations of perhaps 10 seconds on a nearby calibrator, then a few minutes on-source. This method will reduce the amount of phase fluctuations, at the cost of on-source observing time. For more rapid changes in the earth's atmosphere, phases will be corrected using measurements of atmospheric water vapor from measurements of the 183 GHz line.

2. Snapshot Mode. A series of short observations (at different hour angles) of one source after another, after which the measurements are repeated. For sensitivity reasons, snapshots are usually made in the radio continuum or in more intense spectral lines. As in observing method (1), measurements of source and calibrator are interspersed to remove the effects of instrumental phase drifts and to calibrate the amplitudes of the sources in question. The images will be affected by the shape of the synthesized beam, since there is sparse coverage in the (u, v) plane. If the size of the source to be imaged is comparable to the primary beam of the individual antennas, there should be a correction for the power pattern.

3. Multi-Configuration Imaging. Here the goal is an image of a source with either high dynamic range or high sensitivity. Measurements with a number of different interferometer configurations better fill the (u, v) plane.



Fig. 6. An artist's sketch of ALMA. To date, this is the most ambitious construction project in ground based astronomy. ALMA is now being built in northern Chile on a 5 km high site. It will consist of fifty-four 12-m and twelve 7-m antennas, operating in 10 bands between wavelengths of 1 cm and 0.3 mm. In Early Science, four receiver bands at 3, 1.3, 0.8 and 0.6 mm will be available. The high ALMA sensitivity is due to the extremely low noise receivers, the highly accurate antennas, and the high altitude site. At the largest antenna spacing and shortest wavelength, the angular resolution will be 5 milliarcseconds (courtesy ESO/NRAO/NAOJ).

These measurements are taken at different epochs and, after calibration, the visibilities are entered into a common data set.

4. Mosaicing. An extension of procedure (1) can be used for sources with an extent much larger than the primary antenna beam. These images require measurements at adjacent pointings. This is spoken of as mosaicing. In a mosaic, the antennas are pointed at nearby positions. These positions should overlap at the half power point. The images can be formed separately
and then combined to produce an image of the larger region. Another method is to combine the data in the (u, v) plane and then form the image.

6.4 Interferometer Sensitivity

The random noise limit to an interferometer system can be calculated following the method used for a single telescope (Eq. 27). The use of (Eq. 43) provides a conversion from T_RMS to S_RMS, using the collecting area of a single antenna. For an array of n identical antennas, there are N = n(n - 1)/2 simultaneous pairwise correlations, so the RMS variation in flux density is:

\Delta S = \frac{2 M k T_{sys}}{A_e \sqrt{2 N \Delta\nu\, t}}    (66)

with M ≈ 1, A_e the effective area of each antenna and T_sys given by (Eq. 54). This relation can be recast in the form of brightness temperature fluctuations using the Rayleigh-Jeans relation; then the RMS noise in brightness temperature units is:

\Delta T_B = \frac{\lambda^2}{2 k \Omega_b}\, \Delta S = \frac{M\, \lambda^2\, T_{sys}}{A_e\, \Omega_b \sqrt{2 N \Delta\nu\, t}}    (67)
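A hedged numerical example of (Eq. 66), followed by the Rayleigh-Jeans conversion to brightness temperature for a Gaussian synthesized beam; the array size, dish diameter, efficiency, system temperature, bandwidth, integration time and beam size are all assumed values, not those of any particular instrument.

import math

k_B = 1.380649e-23    # J/K
Jy = 1.0e-26          # W m^-2 Hz^-1

def delta_s(n_ant, dish_m, eta_a, t_sys, dnu, t_int, m=1.0):
    """RMS flux density noise of an n-antenna array, (Eq. 66), in Jy."""
    a_e = eta_a * math.pi * (dish_m / 2.0)**2        # effective area of one antenna
    n_pairs = n_ant * (n_ant - 1) / 2.0              # N simultaneous correlations
    return 2.0 * m * k_B * t_sys / (a_e * math.sqrt(2.0 * n_pairs * dnu * t_int)) / Jy

# Assumed array: 8 antennas of 12 m, eta_A = 0.7, T_sys = 120 K, 8 GHz band, 1 hour.
ds = delta_s(8, 12.0, 0.7, 120.0, 8.0e9, 3600.0)
print("Delta S   = %.1f microJy" % (ds * 1.0e6))

# Rayleigh-Jeans conversion to brightness temperature (Eq. 67) for a 1 arcsec
# FWHP Gaussian synthesized beam at 3 mm wavelength.
theta = 1.0 / 206265.0                               # 1 arcsec in radians
omega_b = 1.133 * theta**2                           # Gaussian beam solid angle
lam = 0.003                                          # m
dT = ds * Jy * lam**2 / (2.0 * k_B * omega_b)
print("Delta T_B = %.1f mK" % (dT * 1.0e3))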

For a Gaussian beam, Ω_mb = 1.133 θ², where θ is the FWHP beam size, so the RMS temperature fluctuations can be related to observed properties of a synthesis image. Aperture synthesis is based on discrete samples of the visibility function V(u, v), with the goal of the densest possible coverage of the (u, v) or Fourier plane. It has been observed that the RMS noise in a synthesis image obtained by Fourier transforming the (u, v) data is often higher than given by (Eq. 66) or (Eq. 67). Possible causes are: (1) phase fluctuations caused by atmospheric or instrumental instabilities, (2) incomplete sampling of the (u, v) plane, which gives rise to artifacts such as stripe-like features in the images, or (3) grating rings around more intense sources; these are analogous to high sidelobes in single dish diffraction patterns.

6.5 Corrections of Visibility Functions

6.5.1 Amplitude and Phase Closure

The relation between the measured visibility V'_ik and the actual visibility V_ik is considered to be linear:

V'_{ik}(t) = g_i(t)\, g_k(t)\, V_{ik} + \epsilon_{ik}(t)    (68)

Values for the complex antenna gain factors g_k and the noise term ε_ik(t) are determined by measuring calibration sources as frequently as possible. Actual values for g_k are then computed by linear interpolation. The (complex) gain of the array is obtained by the multiplication of the gains of the individual antennas. If the array consists of n such antennas, n(n - 1)/2 visibilities
can be measured simultaneously, but only (n - 1) independent gains g_k are needed, since one antenna in the array can be taken as a reference. So in an array with many antennas, the number of antenna pairs greatly exceeds the number of antennas. For phase, one must determine n phases. Often these conditions can be introduced into the solution in the form of closure errors. Defining the phases φ, θ and ψ by

V_{ik} = |V_{ik}|\, e^{i \phi_{ik}}, \quad G_{ik} = |g_i|\,|g_k|\, e^{i \theta_i}\, e^{-i \theta_k}, \quad V'_{ik} = |V'_{ik}|\, e^{i \psi_{ik}}    (69)

From (Eq. 68), the visibility phase φ_ik on the baseline ik will be related to the observed phase ψ_ik by

\psi_{ik} = \phi_{ik} + \theta_i - \theta_k + \epsilon_{ik}    (70)

where ε_ik is the phase noise. Then the closure phase Ψ_ikl around a closed triangle of baselines ik, kl, li,

\Psi_{ikl} = \psi_{ik} + \psi_{kl} + \psi_{li} = \phi_{ik} + \phi_{kl} + \phi_{li} + \epsilon_{ik} + \epsilon_{kl} + \epsilon_{li}    (71)

will be independent of the phase shifts introduced by the individual antennas and the time variations. With this procedure, phase errors can be minimized. If four or more antennas are used simultaneously, then closure amplitudes can be formed. These are independent of the antenna gain factors:

A_{klmn} = \frac{|V_{kl}|\,|V_{mn}|}{|V_{km}|\,|V_{ln}|}    (72)
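A small numerical demonstration of (Eq. 70) and (Eq. 71): antenna-based phase errors added to the true visibility phases cancel in the closure phase around a triangle. The two-component source model, the baselines and the antenna phase errors are invented for the illustration.

import numpy as np

rng = np.random.default_rng(3)

def true_visibility(u):
    """Visibility of an invented two-component source (fluxes 1.0 and 0.6)."""
    x0 = 2.0e-5                       # offset of the second component in radians
    return 1.0 + 0.6 * np.exp(-2j * np.pi * u * x0)

# Baselines ik, kl, li of a closed triangle (in wavelengths); they sum to zero.
u_ik, u_kl = 3.0e4, 4.5e4
u_li = -(u_ik + u_kl)
phi = np.array([np.angle(true_visibility(u)) for u in (u_ik, u_kl, u_li)])

# Antenna-based phase errors theta_i, theta_k, theta_l enter as in (Eq. 70).
theta = rng.uniform(-np.pi, np.pi, 3)
psi = phi + np.array([theta[0] - theta[1], theta[1] - theta[2], theta[2] - theta[0]])

# The closure phase (Eq. 71) is unchanged by the antenna-based errors.
closure_obs = np.angle(np.exp(1j * psi.sum()))
closure_true = np.angle(np.exp(1j * phi.sum()))
print("closure phase: with errors %.6f rad, without %.6f rad" % (closure_obs, closure_true))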

Both closure phases and closure amplitudes can be used to improve the quality of the complex visibility function. At each antenna there is an unknown complex gain factor g with amplitude and phase, so the total number of unknowns can be reduced significantly by measuring closure phases and amplitudes. If four antennas are available, 50% of the phase information and 33% of the amplitude information can thus be recovered; in a 10 antenna configuration, these ratios are 80% and 78%, respectively.

6.5.2 Calibrations, Gridding, FFTs, Weighting and Self Calibration

For two antenna interferometers, phase calibration can only be made pairwise. This is referred to as a baseline based solution for the calibration. For a multi-antenna system, antenna based solutions are preferred. These are determined by applying phase and amplitude closure for subsets of antennas and then solving for the best fit for each.



Normally the Cooley-Tukey fast Fourier transform algorithm is used to invert (Eq. 65). To apply the simplest version of the FFT, the visibilities must be placed on a regular grid with sizes that are powers of two of the sampling interval. Since the data seldom lie on such regular grids, an interpolation scheme must be used. From the gridded (u, v) data, an image with a resolution corresponding to λ/D, where D is the array size, is obtained. However, this may still contain artifacts caused by the observing procedure, especially the limited coverage of the (u, v) plane. Therefore the dynamic range of such so-called dirty maps is rather small. This can be improved by further analysis. If the calibrated visibility function V(u, v) is known for the full (u, v) plane both in amplitude and in phase, it can be used to determine the modified intensity distribution I'(x, y) (modified in the sense that structure on angular scales finer than λ/D is lost) by performing the Fourier transformation (Eq. 65). However, in a realistic situation V(u, v) is only sampled at discrete points, and in some regions of the (u, v) plane V(u, v) is not measured at all. The visibilities can be weighted by a grading function, g. For a discrete number of visibilities, a version of (Eq. 65) involving a summation, not an integral, is used to obtain an image with the use of a discrete Fourier transform (DFT):

I_D(x, y) = \sum_k g(u_k, v_k)\, V(u_k, v_k)\, e^{-2 \pi i (u_k x + v_k y)}    (73)

where g(u, v) is a weighting function referred to as the grading or apodisation. g(u, v) can be used to change the effective beam shape and side lobe level. There are two widely used weighting functions: uniform and natural. Uniform weighting uses g(u_k, v_k) = 1, while natural weighting uses g(u_k, v_k) = 1/N_s(k), where N_s(k) is the number of data points within a symmetric region of the (u, v) plane. Naturally weighted data result in lower angular resolution but give a better signal-to-noise ratio than uniformly weighted data. But these are only extreme cases. Intermediate weighting schemes are referred to as robust weighting. Often the reconstructed image I_D may not be a particularly good representation of I', but the two are related by:

I_D(x, y) = P_D(x, y) \otimes I'(x, y)    (74)

where I'(x, y) is the best representation of the source intensity modified by the primary beam shape; it contains only those spatial frequencies (u_k, v_k) where the visibility function has been measured (see (Eq. 65)). The expression for P_D is:

P_D(x, y) = \sum_k g(u_k, v_k)\, e^{-2 \pi i (u_k x + v_k y)}    (75)

This is the response to a point source, or the point spread function (PSF), for the dirty beam. Thus P_D is a transfer function that distorts the image; P_D is produced assuming an amplitude of unity and phase zero at each point
sampled; it is the response of the interferometer system to a point source. The sum in (Eq. 75) extends over the same positions (u_k, v_k) as in (Eq. 73); the sidelobe structure of the beam depends on the distribution of these points. Amplitude and phase errors scatter power across the image, giving the appearance of enhanced noise. This problem can be alleviated to an impressive extent by the method of self-calibration. This process can be applied if there is a sufficiently intense compact feature in the field contained within the primary beam of the interferometer system. If self-calibration is applied, the positional information is usually lost. Self-calibration can be restricted to an improvement of phase alone or applied to both phase and amplitude. Normally, self-calibration is carried out in the (u, v) plane. If this method is used on objects with low signal-to-noise ratios, it may lead to a concentration of random noise into one part of the interferometer image (see Cornwell & Fomalont 1989). For measurements of weak spectral lines, self-calibration is carried out using a continuum source in the field. The corrections are then applied to the spectral line data. In the case of intense lines, one of the frequency channels containing the emission is used.

6.5.3 More Elaborate Improvements of Visibility Functions: The CLEANing Procedure

CLEANing is the most commonly used technique to improve single radio interferometer images (Högbom 1974). In addition to its inherent low dynamic range, the dirty map often contains features such as negative intensity artifacts that cannot be real. Another unsatisfactory aspect is that the solution is quite often rather unstable, in that it can change drastically when more visibility data are added. The CLEAN method approximates the intensity distribution that represents the best image of the source (subject to angular resolution, noise, etc.), I'(x, y), by the superposition of a finite number of point sources with positive intensity A_i placed at positions (x_i, y_i). The goal of CLEAN is to determine the A_i(x_i, y_i) such that

I'(x, y) = \sum_i A_i\, P_D(x - x_i, y - y_i) + I_\epsilon(x, y)    (76)

where I' is the dirty map obtained from the inversion of the visibility function and P_D is the dirty beam (Eq. 75). I_ε(x, y) is the residual brightness distribution after the decomposition. Approximation (Eq. 76) is considered successful if I_ε is of the order of the noise in the measured intensities. This decomposition must be carried out iteratively. The CLEAN algorithm is most commonly applied in the image plane. It is an iterative method which functions in the following fashion: (1) find the peak intensity of the dirty image, then subtract a fraction γ (the so-called loop gain) having the shape of the dirty beam from the image, and (2) repeat this n times.
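A minimal Python sketch of this image-plane loop, in the spirit of Högbom (1974); the dirty beam, the loop gain and the stopping threshold are invented, and a complete implementation would also convolve the components with a Gaussian clean beam and add back the residuals.

import numpy as np

def hogbom_clean(dirty_map, dirty_beam, gain=0.1, niter=10000, threshold=0.0):
    """Image-plane CLEAN: repeatedly subtract a scaled, shifted dirty beam (cf. Eq. 76)."""
    residual = dirty_map.copy()
    ny, nx = dirty_map.shape
    cy, cx = dirty_beam.shape[0] // 2, dirty_beam.shape[1] // 2   # beam peak position
    components = np.zeros_like(dirty_map)
    for _ in range(niter):
        iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[iy, ix]
        if abs(peak) <= threshold:
            break
        components[iy, ix] += gain * peak
        for y in range(ny):                     # subtract the beam centred on (iy, ix)
            for x in range(nx):
                by, bx = y - iy + cy, x - ix + cx
                if 0 <= by < dirty_beam.shape[0] and 0 <= bx < dirty_beam.shape[1]:
                    residual[y, x] -= gain * peak * dirty_beam[by, bx]
    return components, residual

# Toy test: a point source of flux 5 at pixel (40, 20); its dirty map is the
# dirty beam (peak plus two invented sidelobes) shifted to that position.
beam = np.zeros((65, 65))
beam[32, 32] = 1.0
beam[32, 28] = beam[32, 36] = 0.3
dirty = 5.0 * np.roll(np.roll(beam, 40 - 32, axis=0), 20 - 32, axis=1)
comps, resid = hogbom_clean(dirty, beam, gain=0.2, threshold=0.01)
print("recovered flux at the source pixel: %.2f (input 5.0)" % comps[40, 20])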



This loop gain has values 0 < γ < 1, while n is often taken to be 10^4. The goal is that the intensities of the residuals are comparable to the noise limit. Finally, the resulting model is convolved with a clean beam of Gaussian shape with a FWHP given by the angular resolution expected from λ/D, where D is the maximum baseline length. Whether this algorithm produces a realistic image depends on the quality of the data and other variables.

6.5.4 More Elaborate Improvements of Visibility Functions: The Maximum Entropy Procedure

The Maximum Entropy Deconvolution Method (MEM) is commonly used to produce a single optimal image from a set of separate but contiguous images (Gull & Daniell 1978). The problem of how to select the best image from the many possible images which all agree with the measured visibilities is solved by MEM. Using MEM, those values of the interpolated visibilities are selected so that the resulting image is consistent with all previous relevant data. In addition, the MEM image has maximum smoothness. This is obtained by maximizing the entropy of the image. One definition of entropy is given by

H = - \sum_i I_i \left[ \ln\!\left( \frac{I_i}{M_i} \right) - 1 \right]    (77)

where I_i is the deconvolved intensity and M_i is a reference image incorporating all a priori knowledge. In the simplest case M_i is the empty field, M_i = const > 0, or perhaps a lower angular resolution image. Additional constraints might require that all measured visibilities should be reproduced exactly, but in the presence of noise such constraints are often incompatible with I_i > 0 everywhere. Therefore the MEM image is usually constrained to fit the data such that

\chi^2 = \sum_i \frac{|V_i - V'_i|^2}{\sigma_i^2}    (78)

has the expected value, where V_i is the measured visibility, V'_i is the visibility corresponding to the MEM image and σ_i is the error of the measurement.

Acknowledgement: K. Weiler made a thorough review of the text and H. Bond suggested a number of improvements.