Newsgroups: comp.parallel,comp.sys.super
From: eugene@sally.nas.nasa.gov (Eugene N. Miya)
Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)
Subject: [l/m 7/23/97] Suggested readings comp.par/comp.sys.super (24/28) FAQ
Keywords: REQ,
Organization: NASA Ames Research Center, Moffett Field, CA
Date: 24 May 1998 12:03:07 GMT
Message-ID: <6k929r$7ja$1@sun500.nas.nasa.gov>

Archive-Name: superpar-faq
Last-modified: 23 Jul 1997

24 Suggested (required) readings < * this panel * >
26 Dead computer architecture society
28 Dedications
2 Introduction and Table of Contents and justification
4 Comp.parallel news group history
6 parlib
8 comp.parallel group dynamics
10 Related news groups, archives and references
12
14
16
18 Supercomputing and Crayisms
20 IBM and Amdahl
22 Grand challenges and HPCC

So you didn't search TM-86000? (panel 14).


Here's the context: this is more parallel (rather than super) computing
oriented.

Every calendar year, I ask in comp.parallel for everyone's opinion
on what people should be reading. I couch this with the proviso that
the reader be at least a first- or second-year grad student in computer science
or a related technical field. This presumes some basic ACM core curriculum
knowledge like:
basic computer architecture,
compilers,
operating systems, and some numerical analysis
(some would argue: not enough, but that's a separate argument).

For better or worse, it's done numerically (a mid-1980s experiment).
Every suggester gets "10 votes."
You will see the 10 perceived "REQUIRED" readings in parallel computing
as judged by your colleagues: and they are very good colleagues, like JH and DP, DH, etc.

Disadvantages:
1) sometimes 10 votes is not enough (I made the rules, I can make
exceptions).
2) new unfamiliar books tend to take time to make it to "the top-10."
Yes, some references might be old, so vote for newer references
and encourage your colleagues to "vote" for those references, too.
3) for those we have a RECOMMENDED 100 (for recommended class
reading lists). Search (panel 14 in TM-86000) and find them.
I might make a separate FAQ panel later. Ten is enough for now.
Some people will claim "anti-votes." Sorry, I have no provision for anti-votes
except to note them in annotations. Watch for them!

And if you have voted in the past and wish to change your "vote,"
just ask.

We are not doing this to sell textbooks. This is merely a yearly opinion
survey. You can suggest 10 at just about any time (especially if you want to
N an existing endorsement, or anti, or whatever).



COME ON, COME ON! You are long winded.
-------------


Here:

REQUIRED

%A George S. Almasi
%A Allan Gottlieb
%T Highly Parallel Computing, 2nd ed.
%I Benjamin/Cummings division of Addison Wesley Inc.
%D 1994
%K ISBN 0-8053-0443-6
%K ISBN # 0-8053-0177-1, book, text, Ultracomputer, grequired96, 91,
%d 1st edition, 1989
%K enm, cb@uk, ag, jlh, dp, gl, dar, dfk, a(umn),
%$ $36.95
%X This is a kinda neat book. There are special net anecdotes
which make this interesting.
%X Oh, there are a few significant typos: LINPAK is really LINPACK. Etc.
These were fixed in the second edition.
%X It's cheesy in places and the typography is
pitiful, but it's still the best survey of parallel processing. We really
need a Hennessy and Patterson for parallel processing.
(The typography was much improved in the second edition, so much of
the cheesy flavor is gone --ag.)
%X (JLH & DP) The authors discuss the basic foundations, applications,
programming models, language and operating system issues and a wide
variety of architectural approaches. The discussions of parallel
architectures include a section that describes the key concepts within
a particular approach.
%X Very broad coverage of architecture, languages, background theory,
software, etc. Not really a book on programming, of course, but
certainly a good book otherwise.
%X Top-10 required reading in computer architecture to Dave Patterson.
%X It is hardware oriented, but makes some useful comments on programming.

%A Michael Wolfe
%T Optimizing Supercompilers for Supercomputers
%S Pitman Research Monographs in Parallel and Distributed Computing
%I MIT
%C Cambridge, MA
%D 1989
%d October 1982
%r Ph. D. Dissertation
%K parallelization, compiler, summary,
%K book, text,
%K grequired91/3,
%K cbuk, dmp, lls, +6 c.compilers,
%K Recursion removal and parallel code
%X Good technical intro to dependence analysis, based on Wolfe's PhD Thesis.
%X This dissertation was re-issued in 1989 by MIT Press under its Pitman
parallel processing series.
%X ...synchronization and locking instructions when compiling the
parallel procedures and those called by them. This is a bit like
the 'random synchronization' method described by Wolfe but
works with pointer-based datastructures rather than array elements.
%X Cited Chapters:
Data Dependence 11-57
Structure of a Supercompiler 214-218
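The core question in the dependence analysis Wolfe treats is whether loop iterations can safely run in parallel. A minimal illustrative sketch of that distinction (hypothetical example code, not Wolfe's algorithm):

```python
# Illustrative sketch of loop-carried dependence, the question at the
# heart of dependence analysis (example code, not Wolfe's algorithm).

def seq_recurrence(b):
    # a[i] = a[i-1] + b[i]: iteration i reads what iteration i-1 wrote,
    # a loop-carried flow dependence, so iterations must run in order.
    a = [0] * len(b)
    for i in range(1, len(b)):
        a[i] = a[i - 1] + b[i]
    return a

def independent(b, c):
    # a[i] = b[i] + c[i]: no iteration touches another iteration's data,
    # so all iterations may execute in parallel, in any order.
    return [bi + ci for bi, ci in zip(b, c)]
```

A parallelizing compiler proves the second form dependence-free and leaves the first sequential (or restructures it, e.g. into a parallel prefix).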

%A W. Daniel Hillis
%A Guy L. Steele, Jr.
%Z Thinking Machines Corp.
%T Data Parallel Algorithms
%J Communications of the ACM
%V 29
%N 12
%D December 1986
%P 1170-1183
%r DP86-2
%K Special issue on parallel processing,
grequired97: enm, hcc, dmp, jlh, dp, jwvz, sm,
CR Categories and Subject Descriptors: B.2.1 [Arithmetic and Logic Structures]:
Design Styles - parallel; C.1.2 [Processor Architectures]:
Multiple Data Stream Architectures (Multiprocessors) - parallel processors;
D.1.3 [Programming Techniques] Concurrent Programming;
D.3.3 [Programming Languages] Language Constructs -
concurrent programming structures: E.2 [Data Storage Representations]:
linked representations; F.1.2 [Computation by Abstract Devices]:
Modes of Computation - parallelism; G.1.0 [Numerical Analysis]
General- parallel algorithms,
General Terms: Algorithms
Additional Key Words and Phrases: Combinator reduction, combinators,
Connection Machine computer system, log-linked lists, parallel prefix,
SIMD, sorting, Ultracomputer
%K Rhighnam, algorithms, analysis, Connection Machine, programming, SIMD, CM,
%X (JLH & DP) Discusses the challenges and approaches for programming a SIMD
machine like the Connection Machine.
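The data-parallel style the paper advocates rests on primitives like parallel prefix (scan): every element combines with a neighbor a power-of-two away, so n elements are summed in O(log n) parallel steps. A minimal sequential simulation of that scan (a sketch of the idea, not Thinking Machines code):

```python
def scan_add(xs):
    """Inclusive prefix sum in the data-parallel scan style: at step
    d = 1, 2, 4, ..., every element (conceptually all at once) adds the
    element d positions to its left. O(log n) parallel steps, simulated
    sequentially here with a list comprehension per step."""
    xs = list(xs)
    d = 1
    while d < len(xs):
        xs = [xs[i] + xs[i - d] if i >= d else xs[i] for i in range(len(xs))]
        d *= 2
    return xs
```

For example, `scan_add([1, 2, 3, 4])` takes two parallel steps to reach the running sums `[1, 3, 6, 10]`.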

%A C. L. Seitz
%T The Cosmic Cube
%J Communications of the ACM
%V 28
%N 1
%D January 1985
%P 22-33
%r Hm83
%d June 1984
%K grequired91: enm, dmp, jlh, dp, j-lb, jwvz,
Rcccp, Rhighnam,
%K CR Categories and Subject Descriptors: C.1.2 [Processor Architectures]:
Multiple Data Stream Architectures (Multiprocessors);
C.5.4 [Computer System Implementation]: VLSI Systems;
D.1.2 [Programming Techniques]: Concurrent Programming;
D.4.1 [Operating Systems]: Process Management
General terms: Algorithms, Design, Experimentation
Additional Key Words and Phrases: highly concurrent computing,
message-passing architectures, message-based operating systems,
process programming, object-oriented programming, VLSI systems,
homogeneous machine, hypercube, C^3P,
%X Excellent survey of this project.
Reproduced in "Parallel Computing: Theory and Comparisons,"
by G. Jack Lipovski and Miroslaw Malek,
Wiley-Interscience, New York, 1987, pp. 295-311, appendix E.
%X * Brief survey of the cosmic cube, and its hardware
%X (JLH & DP) This is a good discussion of the Caltech approach, which
embodies the ideas behind several of these machines (often called hypercubes).
The work at Caltech is the basis for the machines at JPL and the Intel iPSC,
as well as closely related to the NCUBE design. Another paper by Seitz
on this same topic appears in the Dec. 1984 issue of IEEE Trans.
on Computers.
%X One of my top-10 papers to Dave Patterson (on computer architecture).
%X Literature search yielded:
1450906 C85023854
The Cosmic Cube (Concurrent Computing)
Seitz, C.L.
Author Affil: Dept. of Comput. Sci., California Inst. of Technol.,
Pasadena, CA, USA
Source: Commun. ACM (USA) Vol.28, No.1, Pp.: 22-33
Publication Year: Jan. 1985
Coden: CACMA2 ISSN: 0001-0782
U. S. Copyright Clearance Center Code: 0001-0782/85/0100-002275c
Treatment: Practical;
Document Type: Journal Paper
Languages: English
(14 Refs)
Abstract: Sixty-four small computers are connected by a network of
point-to-point communication channels in the plan of a binary 6-cube. This
Cosmic Cube computer is a hardware simulation of a future VLSI
implementation that will consist of single-chip nodes. The machine offers
high degrees of concurrency in applications and suggests that future
machines with thousands of nodes are both feasible and attractive. It uses
message switching instead of shared variables for communicating between
concurrent processes.
Descriptors: multiprocessing systems; message switching
Identifiers: message-passing architectures; process programming; VLSI
systems; point-to-point communication channels; binary 6-cube; Cosmic Cube;
hardware simulation; VLSI implementation; single-chip nodes; concurrency
Class codes: C5440; C5620
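The binary 6-cube wiring the abstract describes has a very compact addressing rule: node addresses are d-bit numbers, and two nodes are connected exactly when their addresses differ in one bit. A small sketch of the consequences (illustrative code, not Caltech software):

```python
def cube_neighbors(node, dim):
    # In a binary d-cube, flipping each of the d address bits in turn
    # yields the d directly wired neighbors of a node.
    return [node ^ (1 << k) for k in range(dim)]

def cube_distance(a, b):
    # A message needs one hop per differing address bit, so the routing
    # distance is the Hamming distance between the two addresses.
    return bin(a ^ b).count("1")
```

So in the 64-node (d = 6) Cosmic Cube, every node has 6 channels and no message needs more than 6 hops.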

%A Edward Gehringer
%A Daniel P. Siewiorek
%A Zary Segall
%Z CMU
%T Parallel Processing: The Cm* Experience
%I Digital Press
%C Boston, MA
%D 1987
%K book, text, multiprocessor,
%K grequired91: enm, ag, jlh, dp, dar,
%O ISBN 0-932376-91-6
%O $42
%X Looks okay!
%X [Extract from inside front cover]
... a comprehensive report of the important parallel-processing
research carried out on Cm* at Carnegie-Mellon University. Cm* is a
multiprocessing system consisting of 50 tightly coupled processors and
has been in operation since the mid-1970s. Two operating
systems, StarOS and Medusa, are part of its development, along with a
vast number of applications.
%X (JLH & DP) This book reviews the Cm* experience. The book
discusses hardware issues, operating system strategies,
programming systems, and includes an extensive discussion of the
experience with over 20 applications on Cm*.
%X (DAR) a must read to avoid re-inventing the wheel.

%A John Hennessy
%A David Patterson
%T Computer Architecture: A Quantitative Approach, 2nd ed.
%I Morgan Kaufmann Publishers Inc.
%C Palo Alto, CA 94303
%D 1995
%O ISBN 1-55860-069-8
%K books, text, textbook, basic concepts, multiprocessors,
computer architecture, textbook, pario bib,
%K grequired97,
%K rgs, dn, a(umn), dab, sm,
%X http://Literary.com/mkp/new/hp2e/hp2e_index.shtml
%X This is an excellent book, and I would guess it is suitable for
second- or final-year undergraduate use.
%X The book emphasises quantitative measurement of various architectures, as
hinted at in the title. Thus, benchmarking, using real applications, is
heavily emphasised. Naturally, considering the authors, the benefits of the
class of processors generically referred to as 'RISC' are highlighted.
%X The book costs £25 Sterling here in England (hard-back).
%X Chapter titles are:
1. Fundamentals of Computer Design
2. Performance and Cost
3. Instruction Set Design: Alternatives and Principles
4. Instruction Set Examples and Measurements of Use
5. Basic Processor Implementation Strategies
6. Pipelining
7. Vector Processors
8. Memory-Hierarchy Design
9. Input/Output
10. Future Directions
Appendix A: Computer Arithmetic
Appendix B: Complete Instruction Set Tables
Appendix C: Detailed Instruction Set Measurements
Appendix D: Time Versus Frequency Measurements
Appendix E: Survey of RISC Architectures
%X Looks like a great coverage of architecture. Of course a chapter on I/O!
[David.Kotz@Dartmouth.edu]
%X Watch for printing or edition number in paper copies
(The "V. Pratt" Warning).

%A M. Ben-Ari
%T Principles of Concurrent and Distributed Programming
%I Prentice Hall International, Inc.
%C Englewood Cliffs, NJ
%D 1989
%O ISBN 0-13-711821-X
%K conditional grequired91 (1986 version was the suggested version, see VRP),
parallel processing (electronic computers),
%K sc, +3 votes posted from c.e. discussion.
%X Sound familiar?
%X I (VRP) ran into a problem with Prentice-Hall over Ben-Ari: they do not
regard his rewrite as a 2nd edition but as a completely new book. If
you order it under the title you give in your bibliography THEY WILL
SHIP YOU THE OLD BOOK. The Stanford bookstore even called them to ask
whether they'd be receiving the new edition and P-H told them that if
the instructor ordered it under the old title that was what he must want.
%X Why a publishing company would not only create a situation with such an
obvious built-in pitfall but then proceed to firmly and insistently
push their customers into this pit is utterly beyond me. God and
publishers move in mysterious ways.
%X Moral: Change your title to "Principles of Concurrent and Distributed
Computing" and don't refer to it as "the second edition" since it isn't.

%K fox:cubix,
%A Geoffrey C. Fox
%A Mark A. Johnson
%A Gregory Lyzenga
%A Steve W. Otto
%A John Salmon
%A David Walker
%Z Caltech
%T Solving Problems on Concurrent Processors
%V 1, General Techniques and Regular Problems
%I Prentice-Hall
%C Englewood Cliffs, New Jersey
%D 1988
%K book, text, hypercubes, CCCP, MIMD, parallel programming,
communication, applications, physics, pario bib,
parallel processing, supercomputers,
%K grequired91,
%K bb, jlh, dp, dfk,
%K suggested supplemental ref by jh and dp
%K Barnes-Hut N-body problem,
%K parallel programming distributed memory
%K parallel scheduling bib,
%O ISBN 13-823022-6 (HB), 13-823469-8 (PB) $66.00
%X Interesting book. Given out for free at Supercomputing'89.
%X My Bible of Distributed Parallel Computing; even if you are not using
Express it is a wonderful book to have!
%X "It is a good introduction to loosely synchronous
concurrent problems on hypercube topologies."
%X See fox:cubix for parallel I/O.
%P chapters 6 and 15
%K parallel file system, hypercube, pario bib,
%X Parallel I/O control, called CUBIX. Interesting method.
Depends a lot on ``loose synchronization'', which is sort of SIMD-like.
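The ``loosely synchronous'' style the book is built around alternates independent compute phases with global synchronization points: MIMD within a phase, SIMD-like at phase boundaries. A minimal sketch of the pattern using Python threads (illustrative names and phases, not the book's Express library):

```python
import threading

def loosely_synchronous(phases, n_workers):
    """Run each phase on all workers; a barrier at the end of every
    phase keeps the workers 'loosely synchronized': free to compute
    independently inside a phase, aligned at phase boundaries."""
    barrier = threading.Barrier(n_workers)
    results = [[] for _ in range(n_workers)]

    def worker(rank):
        for phase, fn in enumerate(phases):
            results[rank].append(fn(rank, phase))  # independent compute
            barrier.wait()  # everyone finishes the phase before moving on

    threads = [threading.Thread(target=worker, args=(r,))
               for r in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In the book's setting the barrier is implicit in the communication step (exchanging boundary data with hypercube neighbors) rather than an explicit global wait.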

%A John L. Gustafson
%A Gary R. Montry
%A Robert E. Benner
%Z Sandia National Labs.
%T Development of Parallel Methods for a 1024-Processor Hypercube
%J SIAM Journal on Scientific and Statistical Computing
%V 9
%N 4
%D July 1988
%K fluid dynamics, hypercubes, MIMD machines, multiprocessor performance,
parallel computing, structural analysis, supercomputing, wave mechanics,
%K grequired91,
%K jlh, dp, hds, dar,
%X Introduces concept of operation efficiency, scaled speed-up.
Also covers communication cost, beam strain analysis, and a bit on
benchmarking. Winner of 1988 Bell and Karp Prizes.
%X (JLH & DP) This paper reports interesting results in using a
large-scale NCUBE. The authors won the Gordon Bell Prize with their work.
They also suggest the idea of problem scaling to overcome the limitations of
sequential portions of an application.
%X (DAR) some application flavor mixed with performance analysis.
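The scaled speed-up idea fits in two formulas: with serial fraction s and p processors, fixed-size (Amdahl) speedup is 1/(s + (1-s)/p), while scaled (Gustafson) speedup grows the problem with the machine and gives s + p(1-s). A small sketch (the formulas as commonly stated; the sample numbers are illustrative, not the paper's measurements):

```python
def amdahl_speedup(s, p):
    # Fixed problem size: the serial fraction s caps speedup at 1/s
    # no matter how many processors p are added.
    return 1.0 / (s + (1.0 - s) / p)

def scaled_speedup(s, p):
    # Gustafson scaling: enlarge the problem with p so the parallel
    # part stays a constant fraction of the (now larger) run.
    return s + p * (1.0 - s)
```

For example, with s = 0.01 on p = 1024 processors, the fixed-size bound is roughly 91x, while the scaled figure is roughly 1014x, which is why problem scaling mattered for a 1024-node hypercube.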

%A W. Daniel Hillis
%T The Connection Machine
%S Series in Artificial Intelligence
%I MIT Press
%C Cambridge, MA
%D 1985
%K book, text, PhD thesis,
%K grequired96, 91
%K JLb, dar, jwvz, dn,
%O ISBN #: 0262580977 $15.95 [1989 printing?]
%X Has a chapter on why computer science is no good.
%X Patent 4,709,327, Connection Machine, 24 Nov 87 (individuals)
"Parallel Processor / Memory Circuit", W. Daniel Hillis et al.
This looks like the meat of the connection machine design.
It probably has lots of stuff that up until the patent was considered
proprietary.
%X another dissertation rehash and woefully lacking in details
(a personal gripe about MIT theses) but otherwise a CM introduction.
%X Top-10 required reading in computer architecture to Dave Patterson.



Articles to parallel@ctc.com (Administrative: bigrigg@ctc.com)
Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel