
Shortridge, K. 2001, in ASP Conf. Ser., Vol. 238, Astronomical Data Analysis Software and Systems X, eds. F. R. Harnden, Jr., F. A. Primini, & H. E. Payne (San Francisco: ASP), 343

Astronomical Software--A Review

Keith Shortridge
Anglo-Australian Observatory, P.O. Box 296, Epping, NSW 1710, Australia

Abstract:

It is now impossible to imagine `doing astronomy' without using software. Sometimes it is hard to remember that it has not always been like this.

Over a timescale now measured in decades, the art (or science) of astronomical programming has evolved. Once it involved the squeezing of hand-crafted assembler routines into insufficient memory. Now it includes the design of ambitiously large frameworks for data acquisition and reduction. The organisation required for the production of such software has had to grow to match these new ambitions.

This review looks back on the path taken by this fascinating evolutionary process, in the hope that it can provide a background that may let us imagine where the next years will lead.

1. Introduction

What follows is essentially a verbatim transcript of the rather informal talk given at ADASS. A more formal treatment of some of the ideas presented here can be found in Shortridge (2001).

2. Early History

I'm going to look back into the murky history of astronomical computing. There are usually two reasons for looking back like this. One is that it's fun. This is what memories are made of--and once upon a time memories were made of little bits of core with wires threaded through them... The other reason is that history provides a context for understanding the present and guessing the future.

This talk has been billed as being about astronomical software since the `60s, but it wouldn't hurt to remember that computation in astronomy goes back much earlier than that. I don't want to go into the speculation that structures such as Stonehenge were computers... But debugging them must have been fun. `Comment out this stone and we'll try again next mid-summer's day...' And some smart Alec will say, `Of course, when we go from 1000 BC to 999 BC you'll wish you'd used more than two stones for the year...'

Even without going that far back, astronomy--or at least, astrometry--has always depended on calculations. I was once told of a report written by an early Astronomer Royal who complained that `the computers were a disorderly, drunken, rabble.' These computers were, of course, what computers are now: a resource astronomers use to do tedious mathematical calculations. Except that these were people--some of them a bit the worse for drink.

You still find the term `computer' being used to mean a person into the early 20th century. But technology advances all the time.

In 1951 the then Astronomer Royal was still having trouble with `computers'. His report included: `The model 602A calculator, which is the only machine that can multiply, is the key machine as far as general computing work is concerned. Unfortunately, the calculator was delivered from the USA without the relays for division, so there is still a considerable lack of flexibility for complicated calculations.'

You know just how he felt, don't you?

The next year, it still couldn't divide. But the year after that, 1953, they finally delivered the division relays. Let's hear it for field service!

Now a 602A calculating punch was--well, it's a computer, Jim, but not as we know it. It could be programmed. It had a pluggable patch panel with up to 60 program steps. Not only could it (eventually) divide, it could loop. But it didn't store the program in memory. It didn't treat program and data interchangeably; a program couldn't really be the output from another program.

But it needed programmers.

I'd like to emphasise, by the way, that all this was before my time. I'm not doing this review because I was there through it all. I'm not that old. I was 30 this year. Unfortunately, I was twenty-F last year. Come on, how many people here can still work that out in their heads?

3. Mainframes and Mini-computers

Let's move on. Round about 1958, the term `software' was coined. In 1959 came the IBM 7090: a recognisable mainframe computer with--gasp--a FORTRAN compiler. You could do ephemeris calculations, you could run model atmosphere codes. You had card readers, you had paper tape punches...

Back then, you knew how big a program module had to be. There was a rule of thumb. If you couldn't hold all the cards in one hand without dropping them, it was too big. About 400 lines. Or you'd be scrambling around on the floor muttering, ``Next time, I'll leave out the comments.''

Have you noticed how programmers go on about how it used to be? ``When I first used PCs, they couldn't address more than 640K.'' ``640K?--I once got an operating system, a compiler, an editor, an assembler and useful applications into 2K!'' ``2K? We had to make do with single bits--we had to sign forms for each one we used!'' ``Real memory--luxury! We used graduate students; they stood up for one, sat down for zero; the floating point unit took up a whole football field.''

Come on. Who here's played that game in the last few days?

A problem with mainframes is that you can't control instruments with them. Obviously, one huge change came in when mainframes appeared in computer centers. But--for us--an equally big change was when mini-computers came into the labs. (And then, of course, microprocessors into the instruments. And networks, which is a different kind of change.) Mini-computers allowed the use of computers for control as well as calculation. Most of the programs I write don't actually `compute' very much--they control. And that happened in the late `60s and early `70s.

That was when the PDP-8 came out, followed a bit later by the 16-bit machines that were the mainstay of control systems for over ten years. The DEC PDP-11 was probably the most ubiquitous, but there were also the Data General Nova and Eclipse, and the machine that introduced me to all this: the Interdata 70.

These were the machines that controlled the 4-metre class optical telescopes of the `70s. When the Anglo-Australian Telescope went into operation in 1974 it had a marvelous control system (which was nothing to do with me--the credit goes to Pat Wallace et al.). This modeled the deformation of the telescope and allowed it to set to better than 1.5 arc-seconds, which was astonishing. Just to reminisce for a moment, these machines used at the AAT had 64 kilobytes of memory, a 4MHz system clock--although the current infatuation with Megahertz was unknown then--and 5MB of disk space.

I gave a talk a while ago in Sydney where I passed around an 8KB core memory board from one of these machines. It was this big (14 inches square) and held 8KB (read my lips, kilobytes) of memory. I'd like to do that now, but I wasn't allowed to bring it with me. You see, the current AAT control computer is... yes, it's that same Interdata 70, and that board is one of the vital set of spares we keep for it. You shouldn't assume your control software won't still be in use in twenty-five years' time.

Let's really talk about software. These machines were programmed in assembler. You add two numbers by `add register one to register two'. Or in FORTRAN, where you do it by A = A + B. (In capital letters, because card punches don't do lower case.) You knew--and cared--exactly how long each instruction took, and they had a good real-time operating system. And you didn't have to worry about whether your driver was at a lower priority than the Ethernet driver, because there wasn't one.

The main programming methodology was `whatever works fast enough and will fit in memory will do,' and the only thing even resembling a standard library was the built-in FORTRAN I/O package--and I usually didn't use that because it used up all of 8KB.

4. Memory Constraints

Back then memory usage was one of the biggest constraints. Generally, a 16-bit machine can address 64KB. If you like splitting time up into eras, then the predominant number of address bits is one criterion. This was the 16-bit era. We're now arguably at the end of the 32-bit era. Once you can physically buy all the memory you can address, you clearly need more address bits. And you can now afford to buy memory in Gigabytes.

Partly because of these memory considerations, the FORTH language enjoyed a vogue then. FORTH defined a FORTH machine, with an extensible set of operations (words). Words were defined in terms of other words, and because they were very lightweight you reused them a lot and got very dense code. I really did once get an operating system, compiler, editor and applications into 2KB, and it was done in FORTH. It sacrificed clarity for conciseness: A = A + B is now A @ B @ + A ! which is only arguably an improvement.
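To see what `A @ B @ + A !' actually does, here is a minimal sketch (in Python, purely for illustration--a real FORTH system is far richer than this) of the stack machine that FORTH defines: `@' fetches a variable onto the stack, `+' adds the top two items, and `!' stores the top of the stack back into a variable.

    # Minimal illustrative sketch of a FORTH-style stack machine.
    # '@' fetches a variable's value onto the stack, '+' adds the top two
    # stack items, '!' stores a value back into a variable. Anything else
    # is treated as a variable name and pushed as-is.

    stack = []
    memory = {"A": 2, "B": 3}   # two hypothetical variables

    def execute(word):
        if word == "@":                    # fetch: ( name -- value )
            stack.append(memory[stack.pop()])
        elif word == "!":                  # store: ( value name -- )
            name = stack.pop()
            memory[name] = stack.pop()
        elif word == "+":                  # add:   ( a b -- a+b )
            stack.append(stack.pop() + stack.pop())
        else:
            stack.append(word)             # push the variable name itself

    # A = A + B, written the FORTH way:
    for word in "A @ B @ + A !".split():
        execute(word)

    print(memory["A"])   # prints 5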

FORTH originated in astronomy, was used a lot, and then fell out of favour. Eventually its compactness was not so important and its disadvantages became more apparent. One thing it missed, and I think this is still important though I've never heard it discussed much, was `locality of code'. You couldn't look at just one part of a program listing and understand it. You were never familiar with the words used, and they were defined elsewhere--in terms of other words defined elsewhere. This loss of locality is an aspect of complex code that modularity introduces. You can reduce it by designing your components as intuitively as possible, and by encapsulation, but it's still a big issue for code maintainability.

What removed the memory constraints?--More memory. 32-bit machines.

Perkin-Elmer released a 32-bit version of the Interdata 70 they called the `Megamini'. You could order one with a megabyte of memory. I remember a meeting that ended up specifying one for RGO with 512KB, because nobody could think of any possible reason a control computer could need a Megabyte of memory. And the next year I think they ordered the other 512KB...

But the machine most will remember from the `80s was the VAX. Out in 1977, the VAX 11/780 had 32-bit addressing and virtual memory. VAX--Virtual Address Extension, because we all know `Extend' is spelled with an `X'. It looked like a wardrobe, but it was a wardrobe that could address 4 gigabytes.

5. Software Frameworks

Moore's Law has been given a good airing at this meeting, but nobody has actually stated it. It says that the processor power needed to run Microsoft Word doubles roughly every 18 months, but fortunately the hardware keeps up. But it isn't just processor speed that drives the changes we've seen. Speed lets you do things faster, but memory--disk as well as core--lets you do more complex things.

With unimaginable amounts of memory now available, programmers could start to build the sort of new systems they now realised they'd always wanted to write. This is where we start to see the emergence of the big systems. You know these, you use them now. IRAF, ADAM, AIPS, MIDAS.

Looking back, these are component software frameworks. You define the way a program gets run, the way it gets its parameters, how it handles disk files, and an application becomes an easily-written component that fits into the framework, providing a facility that wasn't previously available under that framework.
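Roughly speaking--and this is only a toy sketch in Python with invented names, not how IRAF, ADAM, AIPS or MIDAS actually do it--the division of labour looks something like this: the framework owns invocation, parameters and file handling, and the application author supplies just the processing step.

    # Toy sketch of a component framework (invented names; not a real system).
    # The framework decides how a task is run, how it gets its parameters and
    # how it reads and writes data files; the application is reduced to a
    # small function plugged into that machinery.

    import json

    class Framework:
        def __init__(self, param_file):
            # The framework owns parameter handling.
            with open(param_file) as f:
                self.params = json.load(f)

        def run(self, task):
            data = self.read(self.params["input"])    # framework owns file I/O
            result = task(data, self.params)          # application-supplied step
            self.write(self.params["output"], result)

        def read(self, name):
            with open(name) as f:
                return [float(x) for x in f.read().split()]

        def write(self, name, values):
            with open(name, "w") as f:
                f.write(" ".join(str(v) for v in values))

    # The 'application' is now just a component that fits the framework:
    def scale(data, params):
        return [v * params["factor"] for v in data]

    # Framework("scale.par").run(scale)   # hypothetical parameter file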

Persuade people that such components are easy to write (once you've mastered the framework) and more components will be written and the framework becomes richer and richer. The people who write the programs don't get richer and richer, because we don't work in that sort of environment. Fame has to be the spur. And it is. Don't you get a kick from getting bug reports from all around the world?

AIPS came out around 1978, IRAF in the early eighties. ESO's early IHAP system, running on 16-bit HPs, was replaced by MIDAS. These are data reduction systems. Data acquisition frameworks are harder, because they're real-time systems, but you can do it if you're rash enough. ADAM emerged as a data acquisition framework, originally for that Perkin-Elmer `Megamini', and was used a lot, particularly by places that had UK connections.

There was another, more or less parallel trend. The emergence of standards. What format do you use for data interchange? FITS. FITS has been around a long time. Who uses 2880 as a PIN number?--it's one of those numbers that rolls trippingly off the tongue. FITS has been a great triumph. So have standard subroutine libraries--SLALIB for astrometry--and standard components like SAOimage that can be massaged into a number of frameworks.
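For anyone who hasn't had 2880 burnt into their memory: FITS files are organised as 2880-byte logical records, each holding thirty-six 80-character header cards, with the header terminated by an END card. A minimal sketch of reading a primary header, using nothing but the Python standard library (the file name is made up, and for real work you'd use a proper FITS library), looks roughly like this:

    # Minimal sketch of reading a FITS primary header, just to show where the
    # number 2880 comes from: the file is organised in 2880-byte blocks, each
    # containing 36 cards of 80 characters, and the header ends at an END card.
    # For real work, use a proper FITS library rather than this.

    def read_primary_header(filename):
        cards = []
        with open(filename, "rb") as f:
            while True:
                block = f.read(2880)                 # one FITS logical record
                if len(block) < 2880:
                    raise IOError("truncated FITS header")
                for i in range(0, 2880, 80):         # 36 cards per block
                    card = block[i:i + 80].decode("ascii")
                    if card.startswith("END") and card[3:].strip() == "":
                        return cards
                    cards.append(card.rstrip())

    # for card in read_primary_header("image.fits"):   # hypothetical file
    #     print(card)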

Well, things change. The VAX lasted a long time, but the combination of RISC chips and UNIX was unstoppable. Most of the frameworks moved over to UNIX. Some had been there all the time. With UNIX came C, and A = A + B is now in lower case and has a semicolon after it--so you can squeeze a lot of statements onto a line in the interests of readability...

Then C++, and you don't just add numbers together anymore; now you need to know what sort of thing they represent so you can encapsulate them into a class and define how the `plus' operator works on them. And the productivity gains are amazing!

It's easy to poke fun, but actually I've found I enjoy writing C++ and Java. I think that's because there's no obvious real-life metaphor for procedural programming, but we're all used to working with things--particularly people--with different skills and specialities, and getting them to work together is something we understand. And Java is a framework all to itself...And then there's the Web, and the Grid...

Back a step. As UNIX started to dominate the data reduction world, it also took over the top levels of data acquisition systems. But it generally doesn't go all the way down to the sharp end--the instrumentation hardware. You find the same RISC processors there now--SPARCs, PowerPCs. (PowerPCs--you have to love a chip with an instruction called EIEIO. Enforce In-order Execution of I/O. It does something to the cache, but I'm glad to say I don't know exactly what. You should always take home one fact from any talk--but maybe not that one.) These hardware control processors can have as much memory as the workstations, or more, and Ethernet connections, but they aren't running the same software, generally.

The data acquisition frameworks are now controlling highly complex networked systems. Both ESO and Gemini, for example, have systems that use a database paradigm, where hardware components map to database entries and changing the database entry is supposed to have a direct effect on the hardware. In both cases there's a distinct boundary between the real-time parts and the top levels.

In Gemini, the low levels are supposed to look like EPICS databases. In the ESO VLT system you have to think carefully about whether you put software items into the workstations or the low level LCU systems. Both systems also have elements of the more conventional `send a command, wait for a response' systems. The AAO's DRAMA system--an ADAM descendant--has the same API at all levels, but is a pure command/response system and misses out on some advantages of the database approach.
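The difference between the two styles is easier to see in a toy example than in prose. In the database style you write a value into a named record and the control system makes the hardware follow; in the command/response style you send an explicit command and wait for the reply. This Python sketch is purely illustrative--the class and record names are invented and bear no relation to the real EPICS, VLT or DRAMA interfaces.

    # Purely illustrative contrast between the two control styles; the names
    # are invented and the real EPICS, VLT and DRAMA interfaces look nothing
    # like this.

    class DatabaseStyle:
        """Hardware state mirrored in named records; writing a record is
        what drives the hardware."""
        def __init__(self):
            self.records = {"filter.position": 0}

        def put(self, record, value):
            self.records[record] = value
            self.drive_hardware(record, value)    # side effect of the write

        def drive_hardware(self, record, value):
            print("moving", record, "to", value)

    class CommandResponseStyle:
        """Explicit commands; the caller waits for the response."""
        def send(self, command, argument):
            print("executing", command, argument)
            return "DONE"                         # reply sent back to the caller

    # Database style: change the entry and the hardware follows.
    DatabaseStyle().put("filter.position", 3)

    # Command/response style: send a command, wait for the answer.
    reply = CommandResponseStyle().send("MOVE_FILTER", 3)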

Interestingly, all these use the VxWorks real-time kernel at the low level, but they all hide it so much you'd not know it was there. Knowing VxWorks--or UNIX--doesn't help much when it comes to learning these systems.

And learning matters.

6. Summary

Looking back, there's been a steady progression, not just in the speed of systems, but also in the increasing amounts of memory available, which allows not just the handling of more data but also the production of more complex software systems. And complexity has many ramifications. One can tackle complexity through the use of packages, components, objects, but these increase the learning curve and tend to reduce the code `localisation'--the ability to understand a piece of code just by looking at a page on a screen and knowing the language used.

And, perversely, as we move to more complex astronomical frameworks, some of our organisations are moving to structures where the components for those frameworks are written by outside contractors. Both ESO and Gemini outsource their instrumentation. But you can't just advertise in the IT section of the local paper for someone with expertise in the VLT software environment.

These were thoughts that came to me as I tried to look back a bit. Everything you could possibly know about FORTRAN IV fitted into a large typeface IBM manual maybe a third of an inch thick. Who here has `Java in a Nutshell' on their desks? It's a series of three thick volumes in a specially condensed font.

One last thing. A point that was made earlier in the SETI talks, and which I heard first from Ron Ekers at ATNF. Some time back, costs crossed over. Computer hardware is now a consumable that the software uses. The software represents the capital investment.

So, next time the organisation's bean counters want to stick an asset number on your workstation, tell them it's just a consumable, like a box of printer paper. Tell them to stick their asset number--wait for it--tell them to stick it on your program code. They won't, but it might give you a warm inner glow, and that's what writing software should be all about.

References

Shortridge, K. 2001, Software in Astronomy, in The Encyclopedia of Astronomy and Astrophysics, ed. Paul Murdin (London: Institute of Physics Publishing)


© Copyright 2001 Astronomical Society of the Pacific, 390 Ashton Avenue, San Francisco, California 94112, USA