Newsgroups: comp.parallel,comp.sys.super
From: eugene@sally.nas.nasa.gov (Eugene N. Miya)
Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)
Subject: [l/m 10/22/97] group history/glossary comp.parallel (4/28) FAQ
Organization: NASA Ames Research Center, Moffett Field, CA
Date: 4 Mar 1998 13:03:14 GMT
Message-ID: <6djjei$1kt$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq
Last-modified: 22 Oct 1997

4 Comp.parallel news group history, glossary, etc.
6 parlib
8 comp.parallel group dynamics
10 Related news groups, archives and references
12
14
16
18 Supercomputing and Crayisms
20 IBM and Amdahl
22 Grand challenges and HPCC
24 Suggested (required) readings
26 Dead computer architecture society
28 Dedications
2 Introduction and Table of Contents and justification



News group history
==================

Comp.parallel began in the late 1980s as a mailing list, run by
"Steve" Stevenson at Clemson University, specifically for
Floating Point Systems T-series hypercubes. The mailing list was later
gatewayed to Usenet, originally as comp.hypercube. About six months in,
someone suggested that the news group cover all parallel computing.
That's when it was changed (by democratic vote, to be sure) to the
moderated Usenet group comp.parallel.

Comp.parallel distinguished itself as one of the better Usenet groups,
with a high "signal to noise" posting ratio.
Prior to comp.parallel, parallel computing and supercomputing
[aka high performance computing] were discussed in the unmoderated
Usenet group comp.arch (poor signal to noise ratio).

I forget (personally) the discussion which went along with the creation of
comp.sys.super and comp.unix.cray. It is enough to say that "it happened."

Comp.sys.super started as part of the "Great Usenet Reorganization"
(circa 1986/7).
C.s.s. was just seen as part of the existing sliding scale of
computer performance (from micros to supers).
Minicomputers (16-bit LSI machines) started disappearing about this time.


Where's the charter?
====================

The charter will be inserted here in a future revision.


What's okay to post here?
=========================

Most anything relating to parallel computing (comp.parallel) or
supercomputing (comp.sys.super, which is unmoderated). Additionally, one
typically posts opinions about policy relating to running the news group
(i.e., news group maintenance). Largely, it is up to the moderator of
comp.parallel to decide what ultimately propagates (in addition to the
usual propagation problems [What? You expect news to be propagated reliably?
I have a bridge to sell and some land in Florida which is occasionally
above water.]).

We are not here to hold your hand. Read and understand the netiquette posts
in groups such as news.announce.newusers (or de.newusers or similar groups).
Netiquette != etiquette.
Netiquette ~= etiquette.
Netiquette not = etiquette.
NETIQUETTE .NE. ETIQUETTE.
Avoid second- and third-degree flames: no pyramid posts or sympathy card calls.
Sure, someone might be dying, but that's more appropriate in other groups.
We have posted obits and funeral notices (e.g., Sid Fernbach, Dan Slotnick).
No spam. We will stop spam, especially cross-posted spam.

Current (1996) spam count (comp.parallel): growing.
Current (1996) spam count (comp.sys.super): more than c.p.

The spam count is the number of attempts to spam the group which get
blocked by moderation.

One more note:
Good jokes are always appreciated. Is it Monday?
GOOD JOKES.


Old joke (net.arch: 1984) with many variants:

In the 21st Century, we will have more than Cray-1 power
with massive memories and huge disks, easily carried under one arm
and costing less than $3000, and the first thing the user will ask is:
"Is it PC compatible?"


Guidance on advertising:
------------------------
Keep it short and small. This means: post-docs, employment, products, etc.
Don't post them too frequently.


What's okay to cross-post here?
-------------------------------

Your moderators are in communication with other moderators.
Currently, if you cross-post to two or more moderated news groups,
a single moderator can approve or cancel such an article.
Mutual agreements for automatic cross-post approval have been
negotiated with:
comp.compilers
comp.os.research
comp.research.japan
news.announce.conferences (moderator must email announcement to n.a.c.
moderator)
Pending:
comp.doc.techreports

You are free to separately dual post (this isn't a cross-post) to
those moderated news groups.


Group Specific Glossary
=======================

Q: What does PRAM stand for? (See "Less volatile acronyms" below.)

Confused by acronyms?
---------------------
http://www.ucc.ie/info/net/acronyms/acro.html


The following are noted but not endorsed (other name collisions possible):
Frequent acronyms:
ICPP: International Conference on Parallel Processing
ICDCS IDCS DCS: International Conference on Distributed Computing Systems
ISCA: International Symposium on Computer Architecture
MIN: Multistage Interconnection Network
ACM && IEEE/CS: two professional computer societies
ACM: the one with the SIGs, IEEE: the one with the technical committees
CCC: Cray Computer Corporation (defunct)
CRI: Cray Research Inc. (SGI div.)
CDC: Centers for Disease Control and Prevention
Control Data Corporation (defunct)
CDS: Control Data Services
DMM:
DMP:
DMMP DMC: Distributed Memory Multi-Processor/Computer
DMMC: Distributed Memory Multiprocessor Conference (aka Hypercube Conference)
ERA: Engineering Research Associates
ETA: nothing or Engineering Technology Associates (depending on who you talk to)
ASC: Texas Instruments Advanced Scientific Computer (real old)
ASCI: Accelerated Strategic Computing Initiative
ASPLOS: Architectural Support for Programming Languages and Operating Systems

IPPS: International Parallel Processing Symposium
JPDC: Journal of Parallel and Distributed Computing
MIDAS: Don't use. Too many MIDASes in the world.
MIP(S): Meaningless Indicators of Performance; also MFLOPS, GFLOPS, TFLOPS,
PFLOPS (also substitute IPS and LIPS (logical inferences) for FLOPS)
NDA: Non-disclosure Agreement
POPL: Principles of Programming Languages
POPP PPOPP PPoPP: Principles and Practice of Parallel Programming
HPF: High Performance Fortran (a parallel Fortran dialect)
MPI: Message Passing Interface (also see PVM; a minimal sketch follows this list)
PVM: Parallel Virtual Machine (clusters/networks of workstations);
also see MPI
Parallel "shared" Virtual Memory [not the same as the other PVM]
SC'xx: Supercomputing'xx (a conference, not to be confused with the journal)
SGI: Silicon Graphics, Inc.
SUN: Stanford University Network
SOSP: Symposium on Operating Systems Principles
SPDC: Symposium on Principles of Distributed Computing
SPAA: Symposium on Parallel Algorithms and Architectures
TOC/ToC: IEEE Transactions on Computers
Table of Contents
TOCS: ACM Transactions on Computer Systems
TPDS/PDS: Transactions on Parallel and Distributed Systems,
Partitioned Data Set
TSE: Transactions on Software Engineering
Pascal && Unix: They aren't acronyms.

You can suggest others.....
We have dozens of others; we are not encouraging their use.
This is a list of last resort.
While people use these macros in processors like BibTeX, many interdisciplinary
applications people reading these groups are clueless about them. USE THE
COMPLETE expansion when possible, or include the macro with the citation.
Leave it out, and you will appear as
"one of those arrogant computer scientists..." to quote a friend.

Less volatile acronyms (accepted in the community):

SISD: [Flynn's terminology] Single-Instruction stream, Single-Data stream
SIMD: [Flynn's terminology] Single-Instruction stream, Multiple-Data stream
MISD: [Flynn's terminology] Multiple-Instruction stream, Single-Data stream
MIMD: [Flynn's terminology] Multiple-Instruction stream, Multiple-Data stream

PRAM: Parallel Random Access Machine (the theoretical shared-memory model)
QRQW: Queued-Read, Queued-Write (PRAM variant)
EREW: Exclusive Read, Exclusive Write (PRAM variant)
CREW: Concurrent Read, Exclusive Write (PRAM variant)
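
To make the read/write disciplines concrete, here is a rough CREW-style
summation sketch in C, using OpenMP threads as stand-ins for PRAM
processors (our assumption for illustration; a true PRAM is a synchronous,
idealized model). All threads read the shared array concurrently, but
each writes only its own partial-sum cell, so writes are exclusive.

    #include <stdio.h>
    #include <omp.h>

    #define N 1024   /* problem size */
    #define P 8      /* number of "PRAM processors" (threads) */

    int main(void)
    {
        static int a[N];
        int partial[P];              /* one result cell per processor */
        int sum = 0, i, p;

        for (i = 0; i < N; i++)
            a[i] = 1;                /* dummy data; the sum should be N */

        /* CREW step: reads of a[] may overlap freely (concurrent read),
           but thread p writes only partial[p] (exclusive write). */
        #pragma omp parallel for num_threads(P)
        for (p = 0; p < P; p++) {
            int s = 0, j;
            for (j = p * (N / P); j < (p + 1) * (N / P); j++)
                s += a[j];
            partial[p] = s;
        }

        /* Sequential combine; a real PRAM algorithm would tree-reduce
           in O(log P) steps. */
        for (p = 0; p < P; p++)
            sum += partial[p];

        printf("sum = %d (expected %d)\n", sum, N);
        return 0;
    }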

ASCI = Accelerated Strategic Computing Initiative
(i.e. simulating nuclear bombs, so we don't feel
compelled to blow them up in order to test them.)

ASCI Red = the Intel machine at Sandia National Labs,
consisting of >9000 200 MHz Pentium Pro cpus
in a 2-D mesh configuration.

ASCI Blue = Two systems, both targeted at 3 TFLOPS peak,
1 TFLOPS sustained:

1. A future IBM machine to be installed at Lawrence
Livermore National Labs. By the end of 1998 or
early 1999, it should be 512 SMP machines in
a message-passing cluster. Each machine is based
on 8 PowerPC 630 processors. For starters, IBM
has installed an SP-2 machine.

2. A future SGI/Cray machine to be installed at
Los Alamos National Labs. By the end of 1998 or
early 1999, it should be a 3072-cpu distributed
shared memory system, based on a future SGI/MIPS
processor. For starters, SGI has installed a
moderately large number of 32-cpu Origin 2000 systems.


Shared Memory

1. A glossary of terms in parallel computing can be found at:

http://www.npac.syr.edu:80/nse/hpccgloss/hpccgloss.html

(Most of this was taken from my IEEE P&DT article w/o my
permission, and without proper credit; the credit thing has
apparently now been fixed.)

2. My history of parallel computing is available as technical
report CSRI-TR-312 from the Computer Systems Research Institute,
University of Toronto, at:

http://www.cdf.toronto.edu/DCS/CSRI/CSRI-OverView.html

%A Gregory V. Wilson
%T A Chronology of Major Events in Parallel Computing
%R CSRI-312
%I U. of Toronto, DCS
%D December 1994
%X ftp.csri.toronto.edu cd csri-technical-reports


Remember:
http://www.ucc.ie/info/net/acronyms/acro.html


URLs
----
http://www.cray.com/ # this might change
http://www.convex.com/ # this might change
http://www.ibm.com/
Got the pattern?

http://spud-web.tc.cornell.edu/HyperNews/get/SPUserGroupNT.html
http://www.umiacs.umd.edu/~dbader/sites.html
http://www.cnct.com/~gunter
http://parallel.rz.uni-mannheim.de/top500/top500.html

Brazil Parallel Processing Homepage
http://www.dcc.ufmg.br/~kitajima/sbac-eng.html

Also: HPCC (see the "Grand challenges and HPCC" panel of this FAQ).


Other mailing lists
-------------------
pario
sp2


Where can I find "references?"
------------------------------

BEWARE: The Law of Least Effort! (*if you need this reference, mail me.)

The references provided herein are, for the most part, not intended to be
comprehensive. That's the purview of a bibliography.

The major biblios I am aware of:
Mine; I will attempt to integrate the following as well:
Cherri Pancake's parallel debugging biblio
David Kotz's parallel I/O biblio
H.T. Kung's Systolic array biblio
http://liinwww.ira.uka.de/bibliography/Parallel/index.html

NCSTRL Project: (from ARPA: CSTR)
http://www.ncstrl.org
and
the Unified CS TR index:
http://www.cs.indiana.edu:800/cstr/search


If you ask a query, and I know the answer, I might give you a quick
search off the top of the biblio, but I'm not your librarian.
I am a Journal Associate Editor for John Wiley & Sons, Inc.
If I don't answer, I don't have the time or don't know you well enough.
Knowledgeable people have up-to-date copies of my biblio
(and the other biblios).

If you are a student or a prof, and you assemble a biblio on some topic:
1) If you use one of these biblios: ACKnowledge that fact.
2) If you post it, separate the new entries and submit them directly to me.
If you don't, you make busy work for those of us maintaining the biblios,
because we have to resolve entry collisions (and that's not as simple as
you might think: name differences (full vs. abbreviated), BibTeX macros
without the expansion [do you have any appreciation of how irksome that
is to some people?], and so on).

Assembling a biblio is a fine student exercise, BUT
it should build on existing information. It should also minimize the
propagation of typos and other errors (we are all still finding them in
the existing biblios).

Notorious (frequently posted) biblio topics:

MINs (multistage interconnection networks).
Load balancing.
Checkpointing.

While clearly important, these are topics which bore and upset some people
(ignore them; they can hit 'n' on their news readers). You are supposed to
kill-file this FAQ after reading it (subject to last-modified dates,
of course).




Some very telling personal favorite quotes
from the literature of parallel processing
-------------------------------------------

[Wulf81] describes the plight of the multiprocessor researcher:

    We want to learn about the consequences of different designs on
    the useability and performance of multiprocessors.
    Unfortunately, each decision we make precludes us from exploring its
    alternatives. This is unfortunate, but probably inevitable for hardware.
    Perhaps, however, it is not inevitable for the software....
    and especially for the facilities provided by the operating system.


[Wulf81, p. 276]:

    In general, we believe that it's possible to make two major mistakes at
    the outset of a project like C.mmp. One is to design one's own processor;
    doing so is guaranteed to add two years to the length of the project and,
    quite possibly, sap the energy of the project staff to the point that
    nothing beyond the processor ever gets done. The second mistake is to use
    someone else's processor. Doing so forecloses a number of critical
    decisions, and thus sufficiently muddies the water that crisp evaluations
    of the results are difficult. We can offer no advice. We have now made
    the second mistake [*] -- for variety, next time we'd like to make the
    first! Given the chance, our processor would:

      - Be both inherently more reliable and go to extremes not to propagate
        errors; once an error is detected, it would report that error without
        further effect on the machine state.

      - Provide rapid domain changing; we see no inherent reason that this
        should require more than, say, a dozen instruction times.

      - Provide an adequate address space; actually, rather than a larger
        number of address bits, we would prefer true capability-based
        addressing [Fabry74] at the instruction level since this leads to a
        logically infinite address space.

    [*] Twice, in fact. The second multiprocessor project at C-MU, Cm*,
        also uses the PDP-11.

"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other reason -- including blind
stupidity." -- Wm. A. Wulf

Make it work first before you make it work fast.
--Bruce Whiteside in J. L. Bentley, More Programming Pearls


Articles to parallel@ctc.com (Administrative: bigrigg@ctc.com)
Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel