From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cray-1 Anniversary Event - September 21st
Newsgroups: alt.folklore.computers
Date: Sat, 23 Sep 2006 09:22:16 -0600

Morten Reistad <first@last.name> writes:
other posts mentioning the same talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
https://www.garlic.com/~lynn/2006o.html#61 Health Care
https://www.garlic.com/~lynn/2006p.html#17 Health Care
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Greatest Software Ever Written?
Newsgroups: alt.folklore.computers
Date: Sat, 23 Sep 2006 10:03:07 -0600

Morten Reistad <first@last.name> writes:
these can be purely memory with no on-chip intelligence or processing.
old post mentioning iso 14443 (and other stuff)
https://www.garlic.com/~lynn/2004h.html#30 ECC Encryption
i had a weird experience with a wash dc metro card a couple years ago ... where I had left a metro station with something like (positive) $10 still on the card ... and the next time I tried to use the card, the reader claimed there was a negative $5 balance (while outside the transit system, the card had lost $15 and gone $5 negative w/o being used)
a lot of RFID started out being next generation barcode; just read the number ... a lot more digits allowing unique chip identification down to the individual item level (rather than just vendor and product) and being able to inventory w/o having to manually count each individual item. a big driver recently has been walmart mandating them from suppliers. they would like to get these chips/technology into the penny range (or even less; along with new & less expensive methods of producing the RFID signal w/o necessarily using a traditional chip fabrication process)
with (pure barcode) RFID technology becoming more prevalent, there are other applications trying to leverage it.
a post with a lot of news URLs regarding RFID and passports
https://www.garlic.com/~lynn/aadsm25.htm#11 And another cloning tale
one of the objectives for the aads chip strawman was to be able to
do ecdsa processing within transit gate iso 14443 requirements
https://www.garlic.com/~lynn/x959.html#aadsstraw
other references to aads technology and patents
https://www.garlic.com/~lynn/x959.html#aads
some other posts mentioning contactless/proximity
https://www.garlic.com/~lynn/aadsm22.htm#40 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#45 Court rules email addresses are not signatures, and signs death warrant for Digital Signatures
https://www.garlic.com/~lynn/aadsm23.htm#56 UK Detects Chip-And-PIN Security Flaw
https://www.garlic.com/~lynn/aadsm24.htm#1 UK Detects Chip-And-PIN Security Flaw
https://www.garlic.com/~lynn/aadsm24.htm#2 UK Banks Expected To Move To DDA EMV Cards
https://www.garlic.com/~lynn/aadsm24.htm#5 New ISO standard aims to ensure the security of financial transactions on the Internet
https://www.garlic.com/~lynn/aadsm24.htm#7 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#8 Microsoft - will they bungle the security game?
https://www.garlic.com/~lynn/aadsm24.htm#27 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#28 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#30 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm25.htm#1 Crypto to defend chip IP: snake oil or good idea?
https://www.garlic.com/~lynn/aadsm25.htm#8 smart cards with displays - at last!
https://www.garlic.com/~lynn/aadsm25.htm#24 DDA cards may address the UK Chip&Pin woes
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sat, 23 Sep 2006 10:43:30 -0600

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
also as the chip sizes remained somewhat the same ... while the circuit sizes shrank ... you also had significantly more circuits per chip. you could use the additional circuits for multiple cores ... but you could also use the circuits for on-chip caches. you could have dedicated on-chip "L1" caches per cpu core ... and shared on-chip "L2" caches for all cpu cores on the same chip. That means that any off-chip cache becomes "L3".
modern out-of-order execution is at least the equivalent of anything that the 370/195 (supercomputer) had ... and there is also branch prediction, speculative execution (down the predicted branch path) and instruction nullification/abrogation (when the prediction is wrong) ... which the 370/195 didn't have.
the out-of-order execution helps with latency compensation (i.e. when one instruction is stalled on some fetch operation ... execution of other instructions may proceed somewhat independently). multi-threaded operation was also a form of latency compensation ... trying to keep the execution units filled with independent work/instructions.
370/195 did allow concurrent execution of instructions in the pipeline ... but branches would drain/stall processing. i had gotten involved in a program to add multi-threading to a 370/195, i.e. dual instruction stream; registers and instructions in the pipeline having a one bit tag identifying which instruction stream they belong to (but not otherwise increasing the hardware or execution units). however, this project never shipped a product.
this was based on the peak thruput of the 370/195 being around ten mips ... but that required careful management of branches ... most codes ran at five mips (because of the frequent branches that drained the pipeline). dual i-streams (running at five mips each) had a chance of keeping the "ten mip" peak execution units busy.
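a rough back-of-envelope sketch (in python) of that rationale ... the mips numbers are just the approximate figures mentioned above, and the dual i-stream case is idealized:

# rough back-of-envelope sketch of the dual i-stream rationale; the mips
# numbers are the approximate figures from the discussion, not measurements
peak_mips = 10.0     # 370/195 execution units with the pipeline kept full
typical_mips = 5.0   # what most codes achieved (branches drained the pipeline)

utilization_single = typical_mips / peak_mips                      # ~50% busy
utilization_dual = min(2 * typical_mips, peak_mips) / peak_mips    # two independent i-streams

print(f"single i-stream utilization: {utilization_single:.0%}")    # 50%
print(f"dual i-stream utilization:   {utilization_dual:.0%}")      # ~100% (idealized)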
misc. past post mentioning 370/195 dual i-stream effort:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#19 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#29 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#10 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006m.html#51 The System/360 Model 20 Wasn't As Bad As All That
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Trying to design low level hard disk manipulation program
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 23 Sep 2006 13:13:37 -0600

Bill Todd <billtodd@metrocast.net> writes:
the original mid-60s implementation supported sparse files ... so there were various null pointers for indirect hyperblocks and datablocks that didn't actually exist.
one of the unofficial early 70s incremental improvements to the cms filesystem was having the directory file block pointer point directly at the data block ... instead of at an indirect hyperblock ... for files that had only one data block (for small files, instead of having a minimum of two blocks, one indirect hyperblock and one data block, it would just have the single data block). another unofficial early/mid 70s incremental improvement was various kinds of data compression. I think both of these were originally done by perkin-elmer and made available on the share waterloo tape. there were some performance measurements for the p/e compression changes ... showing that the filesystem overhead to compress/decompress the data in the file was frequently more than offset by the reduction in cpu overhead for reading/writing the physical blocks to/from disk.
one of the things that the mid-70s EDF extensions brought to the cms filesystem was multiple logical block sizes (1k, 2k, & 4k) and more levels of indirect hyperblocks ... supporting up to five levels of indirection for large files ... i.e. in a 4k filesystem a single hyperblock held up to 1024 four byte data block pointers. a two level hyperblock arrangement had the first level pointing to up to 1024 second-level hyperblocks, which each then would point to up to 1024 4k data blocks. as a file grew, the filesystem could transition to higher levels of hyperblock indirection.
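some illustrative arithmetic (in python) for those indirection levels ... assuming 4k logical blocks and four byte pointers throughout; this is just a sketch of the arithmetic, not the actual EDF code:

# illustrative arithmetic for EDF-style indirect hyperblocks:
# 4k logical blocks, four-byte data block pointers
def human(nbytes):
    for unit in ("bytes", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if nbytes < 1024:
            return f"{nbytes:g} {unit}"
        nbytes /= 1024
    return f"{nbytes:g} EiB"

BLOCK = 4096
PTRS_PER_HYPERBLOCK = BLOCK // 4      # 1024 pointers fit in a 4k hyperblock

for levels in range(1, 6):            # up to five levels of indirection
    max_blocks = PTRS_PER_HYPERBLOCK ** levels
    print(f"{levels} level(s): up to {max_blocks:,} data blocks = {human(max_blocks * BLOCK)}")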
in the early 70s, i had done a page-mapped layer for the original
(cp67) cms filesystem ... and then later upgraded the EDF filesystem
(by that time cp67 had morphed into vm370) to also support the
page-mapped layer construct
https://www.garlic.com/~lynn/submain.html#mmap
there is some folklore that various pieces of ibm/pc and os2 filesystem characteristics were taken from cms. note also that both unix and cms trace some common heritage back to ctss.
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 10:27:31 -0600

krw <krw@att.bizzzz> writes:
part of the issue was that the official/strategic communication product was SNA ... which effectively had a large master/slave paradigm in support of a mainframe controlling tens of thousands of (dumb) terminals (there were jokes about sna not being a system, not being a network, and not being an architecture).
the internal network was not SNA ...
https://www.garlic.com/~lynn/subnetwork.html#internalnet
misc. recent threads discussing the announcement of the 1000th
node on the internal network
https://www.garlic.com/~lynn/2006e.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#3 Arpa address
https://www.garlic.com/~lynn/2006k.html#8 Arpa address
https://www.garlic.com/~lynn/2006k.html#43 Arpa address
and a reference to the approx size of the internet/arpanet in the same
timeframe (possibly as low as 100 and possibly as high as 250)
https://www.garlic.com/~lynn/2006k.html#40 Arpa address
in the very early sna days, my wife had co-authored a (competitive)
peer-to-peer architecture (AWP39). she then went on to do a stint in
POK responsible for loosely-coupled architecture (aka mainframe
cluster) where she created the Peer-Coupled Shared Data architecture
... which, except for IMS hot-standby, didn't see a lot of uptake until
parallel sysplex
https://www.garlic.com/~lynn/submain.html#shareddata
there were some number of battles with the communication group
attempting to enforce the "strategic" communication solution for all
environments (even as things started to move away from the traditional
tens of thousands of dumb terminals controlled by a single mainframe).
san jose research had an eight-way 4341 cluster project using
trotter/3088 (effectively eight channel processor-to-processor switch)
that they wanted to release. in the research version using non-sna
protocol ... to do a full cluster synchronization function took
something under a second elapsed time. they were forced to migrate to
sna (vtam) based implementation which inflated the elapsed time to
over half a minute. recent reference to early days of the project
https://www.garlic.com/~lynn/2006p.html#39 "25th Anniversary of the Personal Computer"
another situation was that terminal emulation contributed to early
heavy uptake of PCs in the business environment. you could get a PC
with dumb terminal emulation AND some local computing capability in a
single desktop footprint and for about the same price as a 327x
terminal that it would replace. later as PC programming became more
sophisticated, there were numerous efforts to significantly improve
the protocol paradigm between the desktop and the glasshouse. however,
all of these bypassed the communication sna infrastructure and
installed terminal controller product base.
https://www.garlic.com/~lynn/subnetwork.html#emulation
the limitations of terminal emulation later contributed heavily to data from the glasshouse being copied out to local harddisks (either on local servers or on the desktop itself). this continued leakage was the basis of some significant infighting between the disk product group and the communication product group. the disk product group had come up with a number of products that went a long way toward correcting the terminal emulation limitations ... but the communication product group continually blocked their introduction (claiming that they had strategic product responsibility for anything that crossed the boundary between the glasshouse and the external world).
at one point (in the late 80s) a senior person from the disk product
group got a talk accepted at the communication product group's
worldwide, annual internal conference. his opening deviated from what
was listed for the talk by starting out stating that the head of the
communication product group was going to be responsible for the demise
of the (mainframe) disk product group. somewhat unrelated topic drift,
misc. collected posts mentioning work with bldg. 14 (disk engineering)
and bldg. 15 (disk product test)
https://www.garlic.com/~lynn/subtopic.html#disk
we were also doing a high-speed data transport project (starting
in the early 80s)
https://www.garlic.com/~lynn/subnetwork.html#hsdt
a recent posting somewhat contrasting hsdt and sna
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
the late 80s was also the period when we had started pitching
3-tier to customer executives.
https://www.garlic.com/~lynn/subnetwork.html#3tier
we had sort of melded work that had been going on for 2-tier
mainframe/PC and for 2-tier glasshouse/departmental computing (4341).
a few recent postings
https://www.garlic.com/~lynn/2006p.html#34 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#35 Metroliner telephone article
https://www.garlic.com/~lynn/2006p.html#36 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#39 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#40 "25th Anniversary of the Personal Computer"
however, this was also during the period that the communication product group was attempting to stem the tide away from terminal emulation with SAA (and we would take some amount of heat from the SAA forces).
part of our 3-tier effort was then forked off into ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
and oft repeated specific posting
https://www.garlic.com/~lynn/95.html#13
for other drift ... a side effort for hsdt in the mid-80s ... was
attempting to take some technology that had originally been developed
at one of the baby bells and ship it as an official product. this had
a lot of SNA emulation stuff at boundaries talking to mainframes. SNA
had evolved something called cross-domain ... where a mainframe that
didn't directly control a specific terminal ... still could interact
with a terminal ("owned" by some other mainframe). the technology
would tell all the boundary mainframes that (all) the terminals were
owned by some other mainframe. in actuality, the internal infrastructure
implemented a highly redundant peer-to-peer infrastructure ... and
then just regressed to SNA emulation talking to boundary mainframes.
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 11:11:00 -0600

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
it was worse than that ... NJE grew up out of HASP networking, some
amount of which had been done at TUCC. HASP had a one byte index into a
table of 255 pseudo (spooled) devices with which it implemented local
spooling. the original networking support scavenged unused entries
from that table to define networking nodes. a typical HASP node might
have 60-80 psuedo devices defined ... leaving a maximum of 170-190
entries for defining networking nodes. hasp/jes also would trash any
traffic where either the originating node or the destination node
wasn't defined in the local table. the internal network fairly quickly
exceeded 255 nodes
https://www.garlic.com/~lynn/subnetwork.html#internalnet
limiting hasp/jes to use as a boundary node (it was pretty useless as an intermediate node, since it would trash some percentage of the traffic flowing through). at some point, NJE increased the maximum network size to 999 ... but that was after the internal network was over 1000 nodes (again creating network operational problems if JES was used for anything other than purely boundary nodes).
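a toy sketch (in python) of that table arrangement ... a single 255-entry table shared between local pseudo devices and scavenged network node definitions, with traffic trashed when either end isn't locally defined. the sizes and node names are illustrative, not from the actual hasp/jes code:

# toy sketch of the HASP-style one-byte table described above
TABLE_SIZE = 255
local_pseudo_devices = 70                        # typically 60-80
node_slots = TABLE_SIZE - local_pseudo_devices   # entries left over for network nodes

known_nodes = {f"NODE{i:03}" for i in range(node_slots)}   # whatever fits; the rest are unknown

def forward(origin, destination):
    # hasp/jes trashed traffic when either end wasn't defined in the local table
    if origin not in known_nodes or destination not in known_nodes:
        return "discarded"
    return "forwarded"

print(node_slots, forward("NODE001", "NODE002"), forward("NODE001", "NODE500"))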
the other problem was that the NJE protocol confused the header fields ... intermingling networking stuff with purely local stuff. not only would misconfigured hasp/jes systems crash other hasp/jes systems ... but it was possible for two different (properly configured) systems at slightly different release levels (with slightly different header formats) to crash each other. there was an infamous scenario where a system in san jose was causing systems in hursley to crash.
as a result, there was a body of technology that grew up in VM networking nodes for simulating NJE. there was a whole library of NJE drivers for the various versions and releases of hasp/jes. a VM simulated NJE driver would be started for the specific boundary hasp/jes that it was talking to.
incoming traffic from a boundary NJE node would be taken and effectively translated into a generalized canonical format. outgoing traffic to a boundary NJE node would have the header formatted for the specific hasp/jes release/version. all of this was a countermeasure to keep the wide variety of different hasp/jes systems around the world from crashing each other.
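a minimal sketch (in python) of that gateway idea ... canonicalize on the way in, re-format per boundary release on the way out. the field names and layouts are invented for illustration; the real NJE headers were considerably more involved:

# minimal sketch of the canonical-format gateway idea described above
RELEASE_LAYOUTS = {
    # per (hypothetical) release/version: how header fields are ordered on the wire
    "jes2-r4": ("origin", "destination", "record_count"),
    "jes2-r3": ("destination", "origin", "record_count"),
}

def ingest(release, raw_fields):
    """boundary driver -> canonical dict"""
    return dict(zip(RELEASE_LAYOUTS[release], raw_fields))

def emit(release, canonical):
    """canonical dict -> fields ordered for the specific boundary release"""
    return [canonical[name] for name in RELEASE_LAYOUTS[release]]

msg = ingest("jes2-r3", ["SJEVM1", "HURSLEY", 42])
print(emit("jes2-r4", msg))   # same traffic, re-formatted for a different release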
misc. other hasp/jes related posts
https://www.garlic.com/~lynn/submain.html#hasp
another characteristic was that the native VM drivers tended to have much higher thruput and efficiency than the NJE protocol. however, at some point (possibly for strategic corporate compatibility purposes) they stopped shipping the native VM drivers ... and only shipped NJE drivers for VM networking.
at some point, i believe, the bitnet/earn network was also larger than
arpanet/internet
https://www.garlic.com/~lynn/subnetwork.html#bitnet
bitnet was a US educational network using the vm networking technology (however, as mentioned, eventually only NJE drivers were shipped in the vm product). while the internal network and bitnet used similar technologies ... the sizes of the respective networks were totally independent.
earn was the european flavor of bitnet. for some drift, old post
mentioning founding/running earn
https://www.garlic.com/~lynn/2001h.html#65 UUCP email
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 11:28:42 -0600

re:
as mentioned before, we put up a HSDT high-speed backbone
https://www.garlic.com/~lynn/subnetwork.html#hsdt
and i had done mainframe tcp/ip drivers supporting RFC1044.
https://www.garlic.com/~lynn/subnetwork.html#1044
at the time, the standard mainframe tcp/ip driver achieved about 44kbytes/sec aggregate thruput while burning approx. a full 3090 processor. in some rfc 1044 tuning testing at cray research, we were seeing 1mbyte/sec sustained thruput between a cray and a 4341-clone ... using only a modest amount of the 4341-clone processor (nearly two orders of magnitude improvement in bytes per cpu second).
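roughly the arithmetic (in python) behind the "two orders of magnitude" remark, in bytes per cpu second ... the fraction of the 4341-clone processor used is my illustrative assumption, not a measured number:

# rough sketch of the bytes-per-cpu-second comparison above
base_bytes_per_sec = 44 * 1024            # ~44kbytes/sec, standard driver
base_cpu_fraction = 1.0                   # approx. a full 3090 processor

rfc1044_bytes_per_sec = 1 * 1024 * 1024   # ~1mbyte/sec sustained, cray <-> 4341-clone
rfc1044_cpu_fraction = 0.25               # "modest amount" -- assumed for illustration

base = base_bytes_per_sec / base_cpu_fraction
rfc1044 = rfc1044_bytes_per_sec / rfc1044_cpu_fraction
print(f"improvement: ~{rfc1044 / base:.0f}x in bytes per cpu second")   # ~93x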
also for the original NSFNET backbone RFP (effectively the operational networking precursor to the modern internet), we weren't allowed to bid. However, my wife went to the director of NSF and got a technical audit of what we were running. one of the conclusions was effectively that what we already had running was at least five years ahead of all bid submissions (to build something new).
random past reference
https://www.garlic.com/~lynn/internet.htm#0
reference to the nsfnet backbone rfp
https://www.garlic.com/~lynn/internet.htm#nsfnet
copy of NSFNET backbone RFP announcement
https://www.garlic.com/~lynn/2002k.html#12
reference to award announcement
https://www.garlic.com/~lynn/2000e.html#10
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 14:31:09 -0600

vjp2.at writes:
recent post
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
with respect to the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
repeat from the recent post:
one of the rex historical references (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20050309184016/http://www.computinghistorymuseum.org/ieee/af_forum/read.cfm?forum=10&id=21&thread=7
from above:
By far the most important influence on the development of Rexx was the
availability of the IBM electronic network, called VNET. In 1979, more
than three hundred of IBM's mainframe computers, mostly running the
Virtual Machine/370 (VM) operating system, were linked by VNET. This
store-and-forward network allowed very rapid exchange of messages
(chat) and e-mail, and reliable distribution of software. It made it
possible to design, develop, and distribute Rexx and its first
implementation from one country (the UK) even though most of its users
were five to eight time zones distant, in the USA.
... snip ...
and from earlier post:
https://www.garlic.com/~lynn/2006k.html#40 Arpa address
repeat from the above post ... in mid-1980, arpanet was hoping to have 100 nodes by 1983 (the year that the internal network hit the 1000th node mark):
ARPANET newsletter
ftp://ftp.rfc-editor.org/in-notes/museum/ARPANET_News.mail

from above:

NEWS-1                                         DCA Code 531
1 July 1980                                    (DCACODE535@ISI)
                                               (202) 692-6175

                        ARPANET NEWSLETTER
---------------------------------------------------------------------

Over the past eleven years, the ARPANET has grown considerably and has
become the major U. S. Government research and development
communications network. The ARPANET liaisons have made significant
contributions to the network's success. Your efforts are voluntary, but
are critical to successful operation of each Host, IMP, and TIP. Your
continued support of the ARPANET is greatly appreciated and will
facilitate continued smooth ARPANET operation. To aid you in
performance of your duties, DCA will attempt to provide you with the
latest information in network improvements. This information is grouped
into two major areas: management and technical improvements. However, a
brief discussion of where we are going with the ARPANET is in order.
The ARPANET is still a rapidly growing network. It provides a service
which is both cost and operationally effective. We predict the ARPANET
will grow to approximately 100 nodes by 1983, when we will begin
transferring some of the subscribers to DOD's AUTODIN II network.

... snip ...
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: should program call stack grow upward or downwards?
Newsgroups: comp.arch,comp.lang.c,alt.folklore.computers
Date: Sun, 24 Sep 2006 15:24:41 -0600

gordonb.6hiy2@burditt.org (Gordon Burditt) writes:
STM   14,12,12(13)

i.e. "store multiple" registers 14,15,0,...,12 ... starting at (decimal) offset 12 from the location pointed to by register 13.
for more detail ... i've done a q&d conversion of the old
ios3270 green card to html ... a more detailed discussion of
call/save/return conventions can be found at:
https://www.garlic.com/~lynn/gcard.html#50
the called program only needed a new save area if it would, in turn call some other program. non-reentrant programs (that called other programs) could allocate a single static savearea. only when you had reentrant programs that also called other programs ... was there an issue regarding dynamic save area allocations.
the original cp67 kernel had a convention that was somewhat more like a stack. it had a contiguous subpool of 100 save areas. all module call/return linkages were via supervisor call. it was the responsibility of the supervisor call routine to allocate/deallocate a savearea for the call.
an aside, cp67 and unix can trace somewhat common heritage back to
ctss, i.e. cp67 work was done at the science center on the 4th flr
of 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech
including some people that had worked on ctss. multics was on the 5th flr of 545 tech sq ... and also included some people that had worked on ctss.
as i was doing various performance and scale-up work on cp67 ... i made a number of changes to the cp67 calling conventions.
for some number of high-use non-reentrant routines (that didn't call any other routines), i changed the calling sequence from supervisor call to a simple "branch and link register" ... and then used a static area for saving registers. for some number of high-use common library routines ... the supervisor call linkage had a higher pathlength than the function being called ... so the switch to the BALR call convention for these routines significantly improved performance.
the other problem found with increasing load ... was that it became more and more frequent that the system would exhaust the pool of 100 kernel save areas (which caused it to abort). i redid the logic so that it could dynamically increase and decrease the pool of save areas ... significantly reducing system failures under heavy load.
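a minimal sketch (in python) of that idea ... a save area pool that grows under load instead of aborting, and trims back when load drops. purely an illustration of the concept; the cp67 code managed real storage subpools, not python objects:

# sketch of a dynamically growing/shrinking save area pool
class SaveAreaPool:
    def __init__(self, initial=100, grow_by=25):
        # a standard save area is 18 words (72 bytes)
        self.free = [bytearray(72) for _ in range(initial)]
        self.grow_by = grow_by

    def allocate(self):
        if not self.free:                      # old behavior: system abort here
            self.free = [bytearray(72) for _ in range(self.grow_by)]
        return self.free.pop()

    def release(self, area, high_water=200):
        self.free.append(area)
        if len(self.free) > high_water:        # trim the pool back under light load
            del self.free[high_water:]

pool = SaveAreaPool()
areas = [pool.allocate() for _ in range(150)]  # exceeds the original 100 without failing
for a in areas:
    pool.release(a)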
there was a subsequent generalized subpool enhancement for cp67 kernel dynamic storage management ... which also significantly contributed to decreasing kernel overhead.
article from that work
Analysis of Free-storage Algorithms, B. Margolin, et al., IBM Systems
Journal v10n4, 283-304, 1971
and from the citation site:
http://citeseer.ist.psu.edu/context/418230/0
misc. past postings mentioning cp67 kernel generalized subpool work:
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000d.html#47 Charging for time-share CPU time
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002h.html#87 Atomic operations redux
https://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004h.html#0 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2006e.html#40 transputers again was: The demise of Commodore
https://www.garlic.com/~lynn/2006j.html#21 virtual memory
https://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 17:11:08 -0600

Anne & Lynn Wheeler <lynn@garlic.com> writes:
oh and some of the dumb terminals weren't necessarily so dumb ... there were also things like huge numbers of ATM (automatic teller, aka cash) machines
for other drift ... recent post mentioning early work at los gatos lab
on cash machines
https://www.garlic.com/~lynn/2006q.html#5 Materiel and graft
for sna dumb terminal drift
http://www.enterasys.com/solutions/success/commercial/unitedairlines.pdf
from above:
The original United network environment consisted of approximately
20,000 dumb terminals connected to three separate networks: an
SNA-based network connecting into IBM mainframes for business
applications; a Unisys-based network whose processors did all of the
operational types of programs for the airline such as crew and flight
schedules and aircraft weights and balance; and the Apollo network,
which connected users to the airline's reservation system for all
passenger information, seat assignments, etc. That meant that for
every airport that United flew into, it had to have three separate
telephone circuits--one for each network. According to Ken Cieszynski,
United's senior engineer in Networking Services, it was a very costly,
cumbersome and labor-intensive system for operating and maintaining a
business.
... snip ...
my wife was in conflict with the SNA group from early on ... having
co-authored (competitive) AWP39 peer-to-peer networking architecture
during the early days of SNA, did battle with them when she was
in POK responsible for loosely-coupled (cluster mainframe) architecture,
https://www.garlic.com/~lynn/submain.html#shareddata
and then later when we were out pushing 3-tier architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier
along the way she also did a short stint as chief architect for amadeus ... where she got into trouble backing an x.25 based network design as an alternative to an SNA based network implementation.
misc. past post mentioning amadeus
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2006o.html#4 How Many 360/195s and 370/195s were shipped?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 18:35:20 -0600

Anne & Lynn Wheeler <lynn@garlic.com> writes:
and of course the apollo system was also an ibm mainframe ... acp (airline
control program) that morphed into TPF (transaction processing facility)
a few references
http://www.blackbeard.com/tpf/tpfscoop.htm
https://en.wikipedia.org/wiki/Computer_reservations_system
http://www.eastmangroup.com/otwc/otwc~jun2006.html
http://www.prnewswire.com/cgi-bin/micro_stories.pl?ACCT=121034&TICK=GAL&STORY=/www/story/04-04-2000/0001181634&EDATE=Apr+4,+2000
http://www.answers.com/topic/sabre-computer-system
http://www.everything2.com/index.pl?node=GRS
http://www.cwhonors.org/laureates/Business/20055186.pdf
http://www.intervistas.com/4/presentations/orbitzfinalbook1.pdf
and
http://www.computerworld.com/managementtopics/outsourcing/story/0,10801,63472,00.html
from above:
IBM helped build the transaction processing facility (TPF) for
American Airlines Inc. in the late 1950s and early 1960s that would
become the Sabre global distribution system (GDS). IBM built a similar
TPF system for Chicago-based United Air Lines Inc. That system later
became the Apollo GDS.
... snip ...
galileo/apollo history
http://www.galileo.com/galileo/en-gb/about/History/
for other drift about airline systems
https://www.garlic.com/~lynn/2006j.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#7 Impossible Database Design?
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006n.html#16 On the 370/165 and the 360/85
https://www.garlic.com/~lynn/2006o.html#4 How Many 360/195s and 370/195s were shipped?
https://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
https://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006q.html#23 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006q.html#29 3 value logic. Why is SQL so special?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 09:25:18 -0600

KR Williams <krw@att.bizzzz> writes:
was shipping chip designs off to LSM (los gatos state machine, or logic simulation machine for publication; san jose bldg. 29) and EVE (endicott validation engine; there was one in san jose bldg. 86, the offsite location where disk engineering had been moved while bldg. 14 was getting its seismic retrofit) for logic verification. there was a claim that this helped contribute to bringing in the RIOS chipset (power) a year early.
i got blamed for some of that early conferencing ... doing a lot of
the stuff semi-automated. there was even an article in datamation.
there were then some number of internal corporate task forces to
investigate the phenomena. hiltz and turoff (network nation,
addison-wesley, 1978) were brought in as consultants for at least one
of the task force investigations. then a consultant was paid to sit in
the back of my office for nine months, taking notes on how i
communicated ... also had access to all my incoming and outgoing email
as well as logs of all my instant messaging activity. besides an
internal research report, (with some sanitizing) it also turned into a
stanford phd thesis (joint between language and computer ai) ... some
number of past posts mentioning computer mediated conversation (and/or
the stanford phd thesis on how i communicate)
https://www.garlic.com/~lynn/subnetwork.html#cmc
the ibmvm conferencing "disk" opened first ... followed by the ibmpc conferencing "disk". the facility (TOOLSRUN) was somewhat a cross between usenet and listserv (a recipient could specify a configuration that worked either way). you could specify recipient options that worked like listserv. however, you could also install a copy of TOOLSRUN on your local machine ... and set up an environment that operated more like usenet (with a local repository).
these discussions somewhat mirrored the (purely) online conferencing
that tymshare was providing to the IBM SHARE user group organization
with online vmshare (and later) pcshare (typical access via tymshare's
tymnet). ... misc. posts about (vm based) commercial timesharing
services (including tymshare)
https://www.garlic.com/~lynn/submain.html#timeshare
vmshare archive:
http://vm.marist.edu/~vmshare/
misc. past references to "tandem memos" (the referenced early computer
conferencing incident)
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#38 ibm time machine in new york times?
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2006h.html#9 It's official: "nuke" infected Windows PCs instead of fixing them
https://www.garlic.com/~lynn/2006l.html#24 Google Architecture
https://www.garlic.com/~lynn/2006l.html#51 the new math: old battle of the sexes was: PDP-1
...
misc. past posts mentioning "TOOLSRUN":
https://www.garlic.com/~lynn/2001c.html#5 what makes a cpu fast
https://www.garlic.com/~lynn/2002d.html#33 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2003i.html#18 MVS 3.8
https://www.garlic.com/~lynn/2004o.html#48 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005q.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#22 z/VM Listserv?
https://www.garlic.com/~lynn/2006h.html#9 It's official: "nuke" infected Windows PCs instead of fixing them
...
misc. past posts mentioning LSM, EVE (and/or YSE)
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2003.html#31 asynchronous CPUs
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003o.html#38 When nerds were nerds
https://www.garlic.com/~lynn/2004j.html#16 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#65 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#17 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2006q.html#42 Was FORTRAN buggy?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Trying to design low level hard disk manipulation program
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 25 Sep 2006 09:52:55 -0600

Bill Todd <billtodd@metrocast.net> writes:
we had somewhat stumbled across file bundles (based on use, not necessarily any filesystem structure organization) in the work that started out doing traces of all record accesses for i/o cache simulation (circa 1980).
the strict cache simulation work showed that partitioned caches (aka "local LRU") always had lower performance than a global cache (aka "global LRU"). for a fixed amount of electronic storage, a single global system i/o cache always had better thruput than partitioning the same amount of electronic storage between i/o channels, disk controllers, and/or individual disks (modulo a track cache for rotational delay compensation).
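a toy illustration (in python) of the global vs. partitioned point ... the same total number of cache slots managed as one global LRU vs. split evenly across devices, replayed against a synthetic, skewed reference trace. the trace and sizes are invented for illustration:

# toy comparison of global LRU vs partitioned (per-device) LRU
from collections import OrderedDict
import random

def lru_hits(trace, slots):
    cache, hits = OrderedDict(), 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)
        else:
            cache[key] = True
            if len(cache) > slots:
                cache.popitem(last=False)
    return hits

random.seed(1)
devices = 4
trace = [(random.randrange(devices), random.randrange(200)) for _ in range(20000)]
trace += [(0, random.randrange(200)) for _ in range(20000)]   # skew: one hot device
random.shuffle(trace)

total_slots = 400
global_hits = lru_hits(trace, total_slots)
partitioned_hits = sum(
    lru_hits([r for r in trace if r[0] == d], total_slots // devices)
    for d in range(devices))
print("global:", global_hits, "partitioned:", partitioned_hits)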
further work on the full record access traces started to turn up repeated patterns that tended to access the same collection of files. for this collection of data access patterns, rather than disk arm motion following various kinds of distributions ... there was very strong bursty locality. this led down the path of maintaining more detailed information about files and their usage for optimizing thruput (and layout).
earlier at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
we had done detailed page reference traces and cluster analysis in support of semi-automated program reorganization ... which was eventually released as the VS/REPACK product. the disk record i/o traces started down the path of doing something similar for filesystem organization/optimization.
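a much-simplified sketch (in python) of that kind of trace analysis ... just counting which items tend to be referenced close together in a trace, as a first step toward grouping them; not the actual vs/repack cluster analysis:

# crude affinity count over a reference trace
from collections import Counter
from itertools import combinations

def affinity(trace, window=8):
    pairs = Counter()
    for i in range(len(trace) - window):
        for a, b in combinations(sorted(set(trace[i:i + window])), 2):
            pairs[(a, b)] += 1
    return pairs

trace = list("ABCABCABCXYZXYZXYZABCXYZ")
for pair, count in affinity(trace).most_common(4):
    print(pair, count)       # A/B/C and X/Y/Z cluster together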
i had done a backup/archive system that was used internally at a
number of locations. this eventually morphed into a product called
workstation datasave facility and then adsm. it was later renamed tsm
(tivoli storage manager). this now supports bundles/containers for
file storage management (i.e. collections of files that tend to have
bursty locality of reference patterns)
https://www.garlic.com/~lynn/submain.html#backup
some number of other backup/archive and/or (hierarchical) storage management systems now also have similar constructs.
some recent posts that mention that i/o cache simulation work
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#18 how much swap size did you take?
https://www.garlic.com/~lynn/2006i.html#36 virtual memory
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006j.html#7 virtual memory
https://www.garlic.com/~lynn/2006j.html#14 virtual memory
https://www.garlic.com/~lynn/2006j.html#27 virtual memory
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006o.html#27 oops
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2006p.html#0 DASD Response Time (on antique 3390?)
some recent posts mentioning vs/repack activity
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006j.html#18 virtual memory
https://www.garlic.com/~lynn/2006j.html#22 virtual memory
https://www.garlic.com/~lynn/2006j.html#24 virtual memory
https://www.garlic.com/~lynn/2006l.html#11 virtual memory
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
https://www.garlic.com/~lynn/2006o.html#26 Cache-Size vs Performance
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 10:44:35 -0600

vjp2.at writes:
the mid-range then got hit in the mid-80s as that market segment started moving to workstations and larger PCs for servers and departmental computing.
to some extent the popular press seemed to focus on the high-end mainframe iron doing commercial batch operations as compared to some of the other vendors' offerings in the mid-range market segment (even tho boxes like the 4341 and 4331 were also extremely popular in that midrange market in the late 70s and early 80s).
a few old posts giving domestic and world-wide vax shipments:
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
https://www.garlic.com/~lynn/2006k.html#31 PDP-1
various recent posts mentioning the 2-tier/3-tier evolution in the
mid-range market segment.
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006c.html#11 Mainframe Jobs Going Away
https://www.garlic.com/~lynn/2006c.html#26 Change in computers as a hobbiest
https://www.garlic.com/~lynn/2006i.html#21 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006j.html#31 virtual memory
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006k.html#25 Can anythink kill x86-64?
https://www.garlic.com/~lynn/2006k.html#42 Arpa address
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#7 Google Architecture
https://www.garlic.com/~lynn/2006l.html#35 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006l.html#38 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#34 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#35 Metroliner telephone article
https://www.garlic.com/~lynn/2006p.html#36 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#39 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#40 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006q.html#4 Another BIG Mainframe Bites the Dust
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 11:11:12 -0600

hancock4 writes:
can you imagine holding big festivities on the plant site that no longer belongs to you?
misc. posts mentioning san jose plant site is now hitachi
https://www.garlic.com/~lynn/2003d.html#9 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003i.html#25 TGV in the USA?
https://www.garlic.com/~lynn/2003n.html#39 DASD history
https://www.garlic.com/~lynn/2004e.html#12 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
not 50 years ago ... but some amount of postings related to activity
on the plant site 25-30 years ago
https://www.garlic.com/~lynn/subtopic.html#disk
during the early 80s there was some amount of friendly competition between the san jose storage business and the pok large mainframe business over which location was contributing the most to the bottom line (which had traditionally been pok, but there was a period where they were neck & neck ... and even quarters where san jose passed pok).
a lot of that has since all gone by the wayside ... recent post
mentioning a couple of the issues
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 16:52:31 -0600

Anne & Lynn Wheeler <lynn@garlic.com> writes:
you might find the marketing department from a line of business possibly taking a small part of their budget ... say several million to drop on a gala and press releases ... but since the original line of business has been sold off to somebody else ... it is hard to imagine who is likely to drop even a couple million on such an activity.
how many remember the "last great dataprocessing IT party" (article in usatoday)? ... ibm had taken over the san jose coliseum ... brought in jefferson starship and all sorts of other stuff (a gala for the rsa show). between the time of the contracting/funding for the event and the actual event ... the responsible executive got totally different responsibilities ... but they allowed him to play greeter (all dressed up in a tux) at the front door as you went in.
this has copy (scroll to the right quite a bit, past the 2002 program,
to the "RSA Conference 2000 IBM Gala Program") of the program for that
gala event (if i still have mine someplace, maybe i can scan it) ...
http://www.joemonica.com/pages/print.html
https://web.archive.org/web/20040807023913/http://www.joemonica.com:80/pages/print.html
somebody's trip report
http://seclists.org/politech/2000/Jan/0058.html
other reference to the Gala
http://seclists.org/politech/2000/Jan/0054.html
IBM's gala at rsa '99 wasn't even remotely as extravagant (and only
$250k) ... somebody's pictures:
http://pix.paip.net/Party/IBM99/
not sure whose budget you could get to drop even a measly $250k on a 50th disk anniversary.
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 18:27:58 -0600

William Hamblen <william.hamblen@earthlink.net> writes:
a reference to some old "y2k"-like problems from the early 80s that somebody posted
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/99.html#233 Computer of the century
https://www.garlic.com/~lynn/2000.html#0 2000 = millennium?
https://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...
repeat of somebody's email
Date: 7 December 1984, 14:35:02 CST
1.In 1969, Continental Airlines was the first (insisted on being the
first) customer to install PARS. Rushed things a bit, or so I hear. On
February 29, 1972, ALL of the PARS systems canceled certain
reservations automatically, but unintentionally. There were (and still
are) creatures called "coverage programmers" who deal with such
situations.
2.A bit of "cute" code I saw once operated on a year by loading a
byte of packed data into a register (using INSERT CHAR), then used LA
R,1(R) to bump the year. Got into a bit of trouble when the year 196A
followed 1969. I guess the problem is not everyone is aware of the odd
math in calendars. People even set up new religions when they discover
new calendars (sometimes).
3.We have an interesting calendar problem in Houston. The Shuttle
Orbiter carries a box called an MTU (Master Timing Unit). The MTU gives
yyyyddd for the date. That's ok, but it runs out to ddd=400 before it
rolls over. Mainly to keep the ongoing orbit calculations smooth. Our
simulator (hardware part) handles a date out to ddd=999. Our simulator
(software part) handles a date out to ddd=399. What we need to do, I
guess, is not ever have any 5-week long missions that start on New
Year's Eve. I wrote a requirements change once to try to straighten
this out, but chickened out when I started getting odd looks and
snickers (and enormous cost estimates).
... snip ... top of post, old email index
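item 2 in the above is the classic packed-decimal-meets-binary-increment bug; a quick illustration (in python) of the arithmetic:

# the low-order byte of a packed year holds the digits "6" and "9" as x'69';
# LA R,1(R) is a plain binary add of 1 with no decimal correction, giving x'6A'
year_byte = 0x69
bumped = year_byte + 1
print(f"19{bumped:02X}")     # -> 196A instead of 1970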
this was computer conferencing supported with TOOLSRUN technology
mentioned in recent post
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Greatest Software Ever Written?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 20:48:07 -0600

re:
a recent electronic product code (EPC) news item ... aka next generation product barcodes ...
Pfizer to Use RFID to Combat Fake Viagra
http://www.technewsworld.com/story/53218.html
from above ...
Pfizer claims it is the first pharmaceutical company with a program of
this type, focused on EPC authentication as a means of deterring
counterfeiting. However, Wal-Mart now requires its top 300 suppliers
to tag cases and pallets of select goods, and over 24 drug providers
tag bulk containers of Schedule II drugs, prescription painkillers and
drugs of abuse.
... snip ...
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 21:37:08 -0600

et472@FreeNet.Carleton.CA (Michael Black) writes:
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Greatest Software Ever Written?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 23:21:46 -0600

Anne & Lynn Wheeler <lynn@garlic.com> writes:
even more drift ... another recent news item
Identity's biggest guns form Secure ID Coalition to lobby for smart
cards
http://www.secureidnews.com/library/2006/09/25/identitys-biggest-guns-form-secure-id-coalition-to-lobby-for-smart-cards/
some recent related comments
https://www.garlic.com/~lynn/aadsm25.htm#30 On-card displays
and another related recent news item:
The touching story of NFC
http://www.techworld.com/mobility/features/index.cfm?featureID=2828&pagtype=all
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Tue, 26 Sep 2006 12:32:07 -0600

scott@slp53.sl.home (Scott Lurndal) writes:
above has references to several pages at
http://www.ajnordley.com/
with pictures of the site from the air
http://www.ajnordley.com/IBM/Air/SSD/index.html
also as per the earlier posts, bldg. 50 was part of the massive manufacturing facility build-out done in the mid to late 80s ... part of armonk's prediction that world-wide business was going to double (from $60b/annum to $120b/annum). also as mentioned in the previous posts, it probably was a career limiting move to take the opposite position from corporate hdqtrs (that at least the hardware business wasn't going to be doubling).
past posts mentioning conjecture/comments in the 80s about the
possible demise of mainframe disk business
https://www.garlic.com/~lynn/2003p.html#39 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006k.html#25 Can anythink kill x86-64?
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#38 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
earlier posts in this thread:
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#18 50th Anniversary of invention of disk drives
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Wed, 27 Sep 2006 09:28:30 -0600

hancock4 writes:
the comment was specifically about the san jose "plant site" ... the disk division
location where they actually had a manufacturing line ... recent reference to the
plant site's "new" manufacturing bldg. 50 ... also to a site with photos
of the plant site from the air
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives
the earlier references in the above
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006.html#22 IBM up for grabs?
also has URLs for air photos of almaden research site and silicon valley lab site.
the "plant site" had bldg. 14 (disk engineering) and bldg. 15 (disk
product test) ... misc. postings
https://www.garlic.com/~lynn/subtopic.html#disk
san jose research had been in "plant site" bldg. 28 until the new
almaden facility was built up the hill in the mid-80s. bldg. 28 was
where the original relational/sql system/r was done
https://www.garlic.com/~lynn/submain.html#systemr
bldg. 29, "los gatos lab" ... was in san jose on the other
side of almaden valley. misc. past posts mentioning bldg. 29, los
gatos lab
https://www.garlic.com/~lynn/2000b.html#57 South San Jose (was Tysons Corner, Virginia)
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2004f.html#7 The Network Data Model, foundation for Relational Model
https://www.garlic.com/~lynn/2004o.html#17 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2004q.html#31 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005b.html#14 something like a CTC on a PC
https://www.garlic.com/~lynn/2005c.html#1 4shift schedule
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005n.html#17 Communications Computers - Data communications over telegraph
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2006.html#26 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft
https://www.garlic.com/~lynn/2006q.html#5 Materiel and graft
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
bldg. 90, "santa teresa lab" ... was built in mid-70s ... and
originally was going to be called the coyote lab ... more recently
renamed silicon valley lab. misc. past posts mentioning bldg. 90:
https://www.garlic.com/~lynn/2000b.html#57 South San Jose (was Tysons Corner, Virginia)
https://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001e.html#64 Design (Was Re: Server found behind drywall)
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#34 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#29 checking some myths.
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002b.html#15 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002k.html#9 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002o.html#11 Home mainframes
https://www.garlic.com/~lynn/2002o.html#69 So I tried this //vm.marist.edu stuff on a slow Sat. night,
https://www.garlic.com/~lynn/2002q.html#44 System vs. application programming?
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003e.html#9 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003i.html#56 TGV in the USA?
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003o.html#2 Orthographical oddities
https://www.garlic.com/~lynn/2004c.html#31 Moribund TSO/E
https://www.garlic.com/~lynn/2004e.html#22 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004n.html#18 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#17 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2005.html#23 Network databases
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
https://www.garlic.com/~lynn/2005c.html#1 4shift schedule
https://www.garlic.com/~lynn/2005c.html#45 History of performance counters
https://www.garlic.com/~lynn/2005c.html#64 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2005e.html#21 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005n.html#17 Communications Computers - Data communications over telegraph
https://www.garlic.com/~lynn/2005r.html#10 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005t.html#8 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006.html#22 IBM up for grabs?
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006n.html#8 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006n.html#35 The very first text editor
https://www.garlic.com/~lynn/2006o.html#22 Cache-Size vs Performance
https://www.garlic.com/~lynn/2006o.html#52 The Fate of VM - was: Re: Baby MVS???
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Was FORTRAN buggy? Newsgroups: alt.folklore.computers Date: Wed, 27 Sep 2006 09:11:12 -0600KR Williams <krw@att.bizzzz> writes:
... and also as a reaction to the failure of FS
https://www.garlic.com/~lynn/submain.html#futuresys
where technical types had possibly been given too much latitude.
however, the person credited with leading the 3033 thru to its success (the 3031 and 3032 were primarily repackaged 158s & 168s using the channel director ... and even the 3033 started out as the 168 wiring diagram remapped to newer chip technology) ... was then brought in to head up the disk division.
part of all this was that significant resources and time were diverted into FS ... and after it was killed, there was a lot of making up for lost time
we sort of got our hands slapped in the middle of pulling off 3033 success.
i previously had mentioned working on 5-way smp VAMPS
https://www.garlic.com/~lynn/submain.html#bounce
and after that was killed ... there was a 16-way smp project started called "logical machines" ... that had 16 370 (158) engines all ganged together with extremely limited memory/cache consistency. we had diverted the attention of some of the processor engineers that were dedicated to 3033 ... to spending a little time on the "logical machine" effort. when the person driving 3033 eventually found out that we were meddling with some of his people ... there was some amount of attitude readjustment (and a suggestion that maybe certain people shouldn't be seen in pok for awhile). during 3033, there were stories about him being in the admin office running pok during first shift and down on the line with the engineers on second shift
other past posts mentioning "logical machine" effort:
https://www.garlic.com/~lynn/2002i.html#82 HONE
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#26 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros & cons
these activities (and a couple others that I was involved in) were
going on concurrently with turning out my resource manager ... another
one of the reasons, previously mentioned, that the resource manager was
something of a hobby ... as opposed to a full time, dedicated effort:
https://www.garlic.com/~lynn/2006q.html#34 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006q.html#46 Was FORTRAN buggy?
the other part of the late 80s was that some amount of dataprocessing
was shifting out of the glass house ... and the communication group had
their barbed wire around the glass house perimeter. recent references
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#20 Was FORTRAN buggy?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 50th Anniversary of invention of disk drives Newsgroups: alt.folklore.computers Date: Wed, 27 Sep 2006 14:56:08 -0600hancock4 writes:
in the early 70s with 370 came the 3330-1 (100 mbytes) and then the 3330-11 (200 mbytes), and the fixed-head disk 2305 (12mbytes) was the replacement for the 2301/2303 drums.
after that, electronic store was becoming plentiful enough to start doing caching (somewhat mitigating the requirement for fixed-head disks).
when cp67 originally showed up at the univ. its disk i/o strategy was strictly FIFO and paging operations were done with a different/unique i/o operation per 4k page transfer.
one of the performance changes i did as an undergraduate at the univ. was to put in ordered arm seek queueing ... and, where possible, (try to optimally) chain all queued page transfers into a single i/o (for the same device on drums and for the same cylinder on disk).
the ordered arm seek queueing allowed at least 50 percent better thruput under nominal conditions and the system degraded much more gracefully under heavy load.
single page transfer per physical i/o would peak around 80 page transfers per second on a 2301 drum (avg. rotational delay for each page). with chaining, a 2301 would peak around 300 page transfers per second.
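for illustration, a minimal sketch (python, not the original cp67 code, which was 360 assembler) of the two changes just described ... ordering queued requests by arm position instead of FIFO, and chaining queued page transfers for the same device/cylinder into a single i/o. device names and cylinder numbers are made up:

# toy model of the two changes described above: ordering queued disk
# requests by arm position instead of strict FIFO, and chaining all
# queued page requests for the same device/cylinder into one i/o.

def order_seek_queue(cylinders, current, direction=1):
    # elevator-style ordering: finish the sweep in the current direction,
    # then take the stragglers on the way back
    ahead = sorted((c for c in cylinders if (c - current) * direction >= 0),
                   key=lambda c: abs(c - current))
    behind = sorted((c for c in cylinders if (c - current) * direction < 0),
                    key=lambda c: abs(c - current))
    return ahead + behind

def chain_page_requests(requests):
    # group queued page transfers by (device, cylinder) so each group can
    # be chained into a single channel program instead of one i/o per page
    chains = {}
    for device, cylinder, page in requests:
        chains.setdefault((device, cylinder), []).append(page)
    return chains

print(order_seek_queue([40, 5, 180, 42, 90], current=50))
print(chain_page_requests([("2301", 0, 3), ("2314-1", 40, 12),
                           ("2314-1", 40, 77), ("2314-1", 90, 5)]))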
later i did a paged mapped interface for the cms filesystem, in which i
could do all sorts of fancy i/o optimizations (ones that were a lot more
difficult and/or not possible using the standard i/o interface
paradigm). post this year about some old performance stuff with the
paged mapped interface
https://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux
misc. posts mentioning paged mapped interface work
https://www.garlic.com/~lynn/submain.html#mmap
various past postings mentioning 2301s and/or 2305s
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/95.html#8 3330 Disk Drives
https://www.garlic.com/~lynn/95.html#12 slot chaining
https://www.garlic.com/~lynn/98.html#12 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#6 3330 Disk Drives
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2000.html#92 Ux's good points.
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#52 IBM 650 (was: Re: IBM--old computer manuals)
https://www.garlic.com/~lynn/2000d.html#53 IBM 650 (was: Re: IBM--old computer manuals)
https://www.garlic.com/~lynn/2000g.html#42 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#45 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2001.html#17 IBM 1142 reader/punch (Re: First video terminal?)
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001h.html#36 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001h.html#37 Credit Card # encryption
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#57 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#22 index searching
https://www.garlic.com/~lynn/2002.html#31 index searching
https://www.garlic.com/~lynn/2002b.html#8 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002b.html#23 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#31 bzip2 vs gzip (was Re: PDP-10 Archive migration plan)
https://www.garlic.com/~lynn/2002c.html#52 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002i.html#17 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002i.html#42 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#47 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2002l.html#40 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#73 VLSI and "the real world"
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2003.html#70 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#6 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#9 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#10 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#15 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#17 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#18 Card Columns
https://www.garlic.com/~lynn/2003c.html#36 "average" DASD Blocksize
https://www.garlic.com/~lynn/2003c.html#37 "average" DASD Blocksize
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#55 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#19 Disk prefetching
https://www.garlic.com/~lynn/2003m.html#6 The real history of comp arch: the short form
https://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004.html#6 The BASIC Variations
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#54 [HTTP/1.0] Content-Type Header
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004l.html#2 IBM 3090 : Was (and fek that) : Re: new computer kits
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#13 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005c.html#3 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005e.html#5 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005h.html#7 IBM 360 channel assignments
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005o.html#43 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#51 winscape?
https://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction
https://www.garlic.com/~lynn/2005s.html#23 winscape?
https://www.garlic.com/~lynn/2005s.html#41 Random Access Tape?
https://www.garlic.com/~lynn/2005t.html#50 non ECC
https://www.garlic.com/~lynn/2006.html#2 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#41 Is VIO mandatory?
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
https://www.garlic.com/~lynn/2006i.html#27 Really BIG disk platters?
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#57 virtual memory
https://www.garlic.com/~lynn/2006m.html#5 Track capacity?
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft
https://www.garlic.com/~lynn/2006q.html#32 Very slow booting and running and brain-dead OS's?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A Day For Surprises (Astounding Itanium Tricks) Newsgroups: alt.folklore.computers,comp.arch Date: Wed, 27 Sep 2006 15:11:44 -0600jsavard writes:
in the late 70s, early 80s ... there was fort knox. the low-end 360 & 370 processors were typically implemented with "vertical" microcoded processors ... that averaged out to something like 10 micro-instructions per 360/370 instruction. the higher-end 360/370s used horizontal microcode engines (being somewhat more similar to itanium).
fort knox was to replace the vast array of microprocessor engines with
801s. it started out with the follow-on to the 4341 going to be an
801/risc engine. this was eventually killed ... i contributed to one
of the analyses that helped kill it. part of the issue was that silicon
technology was getting to the point where you could start doing 370
almost completely in silicon.
https://www.garlic.com/~lynn/subtopic.html#801
one of the other efforts was 801/romp that was going to be used in the opd displaywriter follow-on. when this was killed, it was retargeted as a unix workstation and became pc/rt. this then spawned 801/rios (power) and then somerset and power/pc.
there was also some work in fort knox on a hybrid 370 simulation
effort using 801 ... that involved some JIT activity. i got dragged
into a little of it because i had written a PLI program in the early
70s that processed 360/370 assembler listings ... analyzed what was
going on in the program and tried to generate a higher level
representation of the program ... a couple recent postings
https://www.garlic.com/~lynn/2006p.html#1 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006p.html#4 Greatest Software Ever Written?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Computer Artifacts Newsgroups: alt.folklore.computers Date: Wed, 27 Sep 2006 17:18:29 -0600Steve O'Hara-Smith <steveo@eircom.net> writes:
IBM Fellow John Cocke passed away on July 16th
http://domino.watson.ibm.com/comm/pr.nsf/pages/news.20020717_cocke.html
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A Day For Surprises (Astounding Itanium Tricks) Newsgroups: alt.folklore.computers,comp.arch Date: Wed, 27 Sep 2006 18:36:25 -0600re:
for a little drift ... somebody that was involved in (among other things):
3033 dual-address space
fort knox/801
pa-risc
and itanium
a few posts this year on the subject:
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006e.html#1 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006o.html#67 How the Pentium Fell Short of a 360/195
https://www.garlic.com/~lynn/2006p.html#42 old hypervisor email
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A Day For Surprises (Astounding Itanium Tricks) Newsgroups: alt.folklore.computers,comp.arch Date: Thu, 28 Sep 2006 08:54:12 -0600re:
.... from long ago and far away
Increasing Circuitry in the 4300s

[ascii diagram, one column per model -- 4331MG1, 4331MG2, 4341MG1,
4341MG2 -- each stacked (top to bottom) as: H.L.L. Progs, Architecture,
Microcode, Circuitry; moving left to right the microcode layer shrinks
and the circuitry layer grows]

The Anton design is a step further than the 4341MG2 implementation. For a significant number of functions the Anton raises the circuitry interface almost to the architected interface.
for other topic drift, originally the 3090 was going to use an embedded 4331 as the service processor, running a highly modified version of vm370 release 6 with all the panels/menus done in ios3270. the 3090 was eventually shipped with a pair of embedded 4361s as dedicated service processors (for redundancy and availability).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Greatest Software Ever Written? Newsgroups: alt.folklore.computers Date: Thu, 28 Sep 2006 09:21:34 -0600re:
continuing the drift with recent news items:
Contactless Cards: Are Privacy Jitters Legit?
http://www.ecommercetimes.com/story/53273.html
recent discussion on the difference between something you have authentication
and something you are authentication.
https://www.garlic.com/~lynn/aadsm25.htm#32 On-card displays
in the yes card vulnerability,
https://www.garlic.com/~lynn/subintegrity.html#yescard
the static data in the chip represents supposedly unique information as something you have authentication. copying/cloning the information was sufficient to enable fraudulent transactions.
however, in the passport case, the "static data" in the chip represents effectively biometric information (picture) about the individual, requiring a further step of matching the data against the person for something you are authentication. any copying/cloning of the information doesn't directly enable fraudulent transactions (as in the yes card scenario involving static data something you have authentication). however, as mentioned in the referenced post, there is other personal information which raises privacy issues.
for rfid/contactless, there is possibly increased ease of copying/cloning of information compared to some other technologies (analogous to how using the internet can increase exposure of information). however, there can be radically different threat models associated with the information that is exposed.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Intel abandons USEnet news Newsgroups: comp.arch Date: Thu, 28 Sep 2006 10:23:16 -0600"comp.arch@patten-glew.net" <AndyGlew@gmail.com> writes:
and the swimming pool attractive nuisance scenario. there was civil litigation claiming several billion, around 30 years ago, involving industrial espionage and theft of trade secrets. the judge effectively made statements that countermeasures & protection have to be proportional to value (otherwise you can't really blame people for doing what comes naturally and stealing).
misc. past posts raising the issue:
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2005f.html#60 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005r.html#7 DDJ Article on "Secure" Dongle
https://www.garlic.com/~lynn/2006g.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006q.html#36 Was FORTRAN buggy?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 50th Anniversary of invention of disk drives Newsgroups: alt.folklore.computers Date: Thu, 28 Sep 2006 11:22:09 -0600hancock4 writes:
there was less of an issue with batch-oriented systems, since the reduced latency (no arm motion) mattered less for batch work than it might in an interactive computing environment.
picture of 2301 drum here:
http://www.columbia.edu/cu/computinghistory/drum.html
360/67 with picture of 2314 and 2301 in upper right background
https://web.archive.org/web/20030820174805/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/29.html
another picture of 360/67
https://web.archive.org/web/20030429150339/www.cs.ncl.ac.uk/old/events/anniversaries/40th/images/ibm360_672/slide07.html
closeup picture of 2301
https://web.archive.org/web/20030820180331/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/slide12.html
the cp67-based (and later vm370-based) commercial timesharing services
https://www.garlic.com/~lynn/submain.html#timeshare
tended to have 2301 drums (and later 2305 fixed-head disks w/vm370) for interactive computing environments where interactive response was an issue.
again ... it was less of an issue in batch-oriented operations
other posts in this thread:
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#18 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#21 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 50th Anniversary of invention of disk drives Newsgroups: alt.folklore.computers Date: Thu, 28 Sep 2006 15:30:50 -0600hancock4 writes:
there was another "compromise/trade-off" between disks and high speed core for 360s. disks (drums, datacells, etc) were referred to as "DASD" (direct access storage device) ... more specifically "CKD" DASD (count-key-data).
the trade-off was extremely scarce real storage vis-a-vis relatively abundant i/o resources. typically, filesystems have an index of where things are on the disk. most systems these days use the relatively abundant real storage to cache these indexes (in addition to caching the data itself). however, on 360, the indexes were kept on disk (saving real storage).
CKD essentially allowed filesystem metadata to be written on disk along with the data itself ... the indexes were kept out on disk as part of that metadata. rather than reading the indexes into real storage (and possibly caching them), CKD DASD i/o programming provided for doing a sequential search of the indexes on disk ... trading off scarce real storage for abundant i/o capacity.
however, by at least the mid-70s, the trade-off was reversing ... with real storage starting to become abundant and disk i/o was becoming more and more of a system bottleneck.
in the late 70s, i was brought in to investigate a severe throughput/performance problem for a large national retail chain. they had a central dataprocessing facility providing support for all stores nationally ... with several clustered mainframes sharing a common application library. it turned out that the CKD/PDS program library dasd/disk search was taking approx. 1/2 second elapsed time (the actual program load took maybe 10-20 milliseconds ... but the on-disk index serial search was taking 500 milliseconds) and all retail store application program loads were serialized through this process.
this trade-off left over from the mid-60s included keeping the argument for the on-disk serial search in processor real storage (further optimizing the real storage constraint) ... however it required a dedicated, exclusive i/o path between the device and processor real storage for the duration of the search. this further exacerbated the throughput problem. typically multiple disks (between 8 and 32) might share a common disk controller and i/o channel/bus. not only was the disk performing the search busy for the duration ... but because of the requirement for the dedicated open channel between the disk and processor storage (for accessing the search argument), it wasn't possible to perform any operations for any of the other disks sharing the same controller and/or i/o channel/bus.
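the arithmetic behind the above (the 500ms and 10-20ms figures are from the description; everything else just follows from them):

# a ~500ms on-disk directory search (device, controller and channel all
# busy) in front of a 10-20ms program load, with every store's
# application load serialized through the one shared program library.

search_ms = 500          # CKD/PDS directory search, elapsed
load_ms = 15             # actual program load, roughly 10-20ms

loads_per_second = 1000.0 / (search_ms + load_ms)
print("application loads/second through the shared library: %.1f" % loads_per_second)
# roughly two program loads per second for the entire national operation,
# regardless of how many processors were in the cluster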
misc. past posts discussing this subject
https://www.garlic.com/~lynn/submain.html#dasd
... note that the above is a different collection of posts from
https://www.garlic.com/~lynn/subtopic.html#disk
which primarily references working with the people in bldg. 14 (disk engineering) and bldg. 15 (disk product test) on the san jose plant site.
in any case, this and other factors prompted my observation that over a period of ten to fifteen years, disk relative system performance had declined by an order of magnitude i.e. other system resources increased by a factor of fifty while disk resources (in terms of operations per second) increased by possibly only a factor of five.
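the "order of magnitude" is just the ratio of the two factors above:

# other system resources grew ~50x over the period, disk accesses/sec
# grew only ~5x, so disk throughput relative to the rest of the system
# declined by roughly 10x
other_resources_growth = 50.0
disk_ops_growth = 5.0
print("relative decline: %.0fx" % (other_resources_growth / disk_ops_growth))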
the initial take was that the disk division assigned their disk performance and modeling group to refute my statements ... however, after several weeks they came back and said that I may have actually slightly understated the issue.
the change in the relative thruput of different system components ... especially
with respect to each other ... results in having to change various
strategies and trade-offs ... which is also somewhat the subject of the
recent thread from comp.arch
https://www.garlic.com/~lynn/2006r.html#3 Trying to design low level hard disk manipulation program
https://www.garlic.com/~lynn/2006r.html#12 Trying to design low level hard disk manipulation program
another series of posts about similar change in disk/memory trade-offs
involves system/r ... original relational/sql
https://www.garlic.com/~lynn/submain.html#systemr
and RDBMS. in the 70s, there was something of a pro/con argument between the people in santa teresa lab (bldg 90) dealing with 60s "physical" databases and the system/r work going on in bldg. 28. the stl people were claiming that system/r indexes doubled the typical physical disk space requirements and significantly increased the search time to find a specific record (potentially requiring reading multiple different indexes). this was compared to the 60s physical databases where physical record pointers were exposed as part of the data paradigm.
the counter argument was that there was significant manual and administrative effort required to manage the exposed physical record pointers ... effort that was eliminated in the RDBMS paradigm.
going into the 80s, the significant increase in disk space (the number of bits per disk arm increased by an order of magnitude, while disk arm accesses/sec showed only slight improvement) and the significant decrease in the price per megabyte of disk space somewhat made the issue of RDBMS index size moot. furthermore, the ever increasing abundance of real storage made it possible to cache a significant portion of an RDBMS index in real storage (eliminating the significant number of additional i/os to process the index ... vis-a-vis the physical databases from the 60s).
the issue during the 80s for RDBMS was that the relative importance of
the "cons" against RDBMS was significantly reduced ... while the "cons"
against the 60s physical databases (manual people time and expertise)
significantly increased. a few past posts on the changing relative
amounts of different resources for RDBMS:
https://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004p.html#38 funny article
https://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2005.html#23 Network databases
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005s.html#9 Flat Query
https://www.garlic.com/~lynn/2005s.html#17 winscape?
misc. other past posts about change in relative system thruput and performance
of various system components over a period of years
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
https://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
https://www.garlic.com/~lynn/2006o.html#27 oops
other posts in this disk thread:
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#18 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#21 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#30 50th Anniversary of invention of disk drives
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: MIPS architecture question - Supervisor mode & who is using it? Newsgroups: comp.arch Date: Thu, 28 Sep 2006 16:41:11 -0600"John Mashey" <old_systems_guy@yahoo.com> writes:
in svs ... things were still somewhat the real-memory mvt paradigm ... except laid out in a somewhat larger (single) virtual address space; this included the kernel ... subsystem applications (that effectively acquired kernel mode), and standard applications.
the problem was that the whole infrastructure used a pointer passing paradigm ... everything required that you access the caller's storage.
the move to mvs ... gave each application its own virtual address space ... but with the MVS kernel appearing in 8mbytes of each one of these application address spaces ... which allowed kernel code to access the application parameters pointed to by a pointer-passing invocation. this was nominally an 8mbyte/8mbyte split for kernel/application out of the 16mbyte virtual address space.
however, this created a big problem for subsystem applications that were also now in their own unique virtual address spaces. it became a lot harder for a subsystem application to be invoked from a "standard" application (running in its own unique address space) via a pointer-passing call ... and still reach over and obtain the relevant parameter information.
dual-address space mode was born with the 3033 ... where a semi-privileged subsystem application could be given specific access to a calling application's virtual address space. part of what prompted dual-address space in the 3033 ... was that the workaround for subsystems accessing parameters had been the establishment of something called the "common segment" ... basically each subsystem got a reserved area in every address space for placing calling parameters that could then be accessed via the passed pointer. larger installations providing a number of services had a five-megabyte common segment (out of every 16mbyte virtual address space, in addition to the 8mbyte kernel) ... leaving only 3mbytes for application use.
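the arithmetic behind that last sentence (the figures are the ones given above):

# the squeeze on the 16mbyte (24-bit) virtual address space
total = 16          # 16mbyte virtual address space
kernel = 8          # MVS kernel mapped into every address space
common = 5          # "common segment" at a large installation
print("left for the application: %dmbyte" % (total - kernel - common))
# -> 3mbytes, part of what motivated dual-address space on 3033 and,
#    later, access registers and program call/return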
there was still a performance problem (even with dual-address space): the transition from standard application to subsystem application required an indirect transition through the kernel via a kernel call. this became more and more of an issue as more system library functions were moved out of the standard application space and into their own virtual address spaces.
dual-address space was expanded with access registers and the program call/return instructions ... basically something close to the performance of a library branch-and-link ... but with control over the semi-privileged state change as well as the switching of virtual address spaces ... while still providing access back to the caller's virtual address space.
misc. reference from esa/390 (not 64bit z/Architecture):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CCONTENTS?SHELF=EZ2HW125&DN=SA22-7201-04&DT=19970613131822
5.4 Authorization Mechanisms
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.4?SHELF=EZ2HW125&DT=19970613131822
... from above ...
The authorization mechanisms which are described in this section
permit the control program to establish the degree of function
which is provided to a particular semiprivileged program. (A
summary of the authorization mechanisms is given in Figure 5-5 in
topic 5.4.8.) The authorization mechanisms are intended for use by
programs considered to be semiprivileged, that is, programs which
are executed in the problem state but which may be authorized to
use additional capabilities. With these authorization controls, a
hierarchy of programs may be established, with programs at a higher
level having a greater degree of privilege or authority than
programs at a lower level. The range of functions available at
each level, and the ability to transfer control from a lower to a
higher level, are specified in tables which are managed by the
control program. When the linkage stack is used, a nonhierarchical
transfer of control also can be specified.
• 5.4.1 Mode Requirements
• 5.4.2 Extraction-Authority Control
• 5.4.3 PSW-Key Mask
• 5.4.4 Secondary-Space Control
• 5.4.5 Subsystem-Linkage Control
• 5.4.6 ASN-Translation Control
• 5.4.7 Authorization Index
• 5.4.8 Access-Register and Linkage-Stack Mechanisms
... snip ...
misc. past posts about common segment and/or dual address space
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces
https://www.garlic.com/~lynn/2006e.html#0 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006j.html#38 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#44 virtual memory
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 50th Anniversary of invention of disk drives Newsgroups: alt.folklore.computers Date: Thu, 28 Sep 2006 19:31:40 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
the characteristic of CKD DASD search i/o operations constantly referencing the search information in processor memory was taken advantage of by ISAM indexed files. ISAM could have multiple levels of indexes out on disk ... and an ISAM channel i/o program could get extremely complex. the channel i/o program could start off with an initial metadata search argument ... searching for the argument based on various criteria (less, greater, equal, etc), then chain to a read operation of the associated data (which could be the next-level metadata search argument) ... and then chain to a new search operation using the just-read data as its search argument. all of this could be going on totally asynchronously to any processor execution.
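a minimal sketch of the flavor of such a self-chaining search ... each level's search argument comes from the data read at the previous level, with no processor involvement in between. this is an illustration of the idea (in python), not real channel-program syntax:

# toy model of a multi-level ISAM search running "in the channel"

def isam_channel_program(index_levels, key):
    # index_levels: one dict per level; each maps a search argument to
    # either the next-level search argument or (at the last level) the data
    search_arg = key
    for index in index_levels:
        # conceptually: search key equal ... read data ... chain to next search
        search_arg = index[search_arg]
    return search_arg

levels = [
    {"K123": "cyl-7-index"},                     # master index on disk
    {"cyl-7-index": "track-3-index"},            # cylinder-level index
    {"track-3-index": "data record for K123"},   # track index -> data
]
print(isam_channel_program(levels, "K123"))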
lots of other CKD DASD related postings
https://www.garlic.com/~lynn/submain.html#dasd
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REAL memory column in SDSF Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 29 Sep 2006 09:38:55 -0600Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
a copy of 360/67 functional characteristics at bitsavers
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/A27-2719-0_360-67_funcChar.pdf
max. storage on a 360/67 uniprocessor was 1mbyte of real storage (and a lot of 360/67s were installed with 512k or 768k real storage). out of that you had to take the fixed storage belonging to the kernel ... so there was never a full 1mbyte of real storage left over for virtual paging.
note that the 360/67 multiprocessor also had a channel director ... which had all sorts of capability ... including allowing all processors in a multiprocessor configuration to address all i/o channels ... while still being able to be partitioned into independently operating uniprocessors, each with their own dedicated channels. a standard 360 multiprocessor only allowed sharing of memory ... a processor could only address its own dedicated i/o channels. the settings of the channel director could be "sensed" via settings in specific control registers (again see the 360/67 functional characteristics).
equivalent capability allowing all processors to address all channels (in multiprocessor environment) and supporting more than 24bit addressing didn't show up again until 3081 and XA.
370 virtual memory had 2k and 4k page size options as well as 64k and 1mbyte segments.
vm370 used 4k pages size and 64k segments as default ... and supported 64k shared segments for cms.
however, when it was supporting guest operating systems with virtual memory ... the vm370 "shadow tables" had to use whatever page size the guest operating system was using (exactly mirroring the guest's tables). dos/vs and vs1 used 2k paging ... os/vs2 (svs & mvs) used 4k paging.
there was an interesting problem at some customers with the doubling of cache size going from the 370/168-1 to the 370/168-3. doubling the cache size needed one more bit from the address to index cache line entries, and the designers took the "2k" bit ... assuming that the machine would nominally be used for os/vs2 (4k pages). however, there were some number of customers running vs1 under vm on 168s. these customers saw performance degrade when they upgraded from a 168-1 to a 168-3 with twice the cache size.
the problem was that the 168-3 ... every time there was a switch between 2k page mode and 4k page mode ... would completely flush the cache ... and when in 2k page mode it would only use half the cache (same size as the 168-1), using all the cache only in 4k page mode. using only half the cache should have shown the same performance on the 168-3 as on the 168-1. however, the constant flushing of the cache, whenever vm moved back & forth between (vs1's shadow-table) 2k page mode and (standard vm) 4k page mode ... resulted in worse performance on the 168-3 than on a straight 168-1.
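a rough model of the behavior just described ... the cache unit sizes are arbitrary placeholders, and the assumption that the 168-1 did not flush on a mode switch is mine, for illustration:

# toy model: the 168-3 cache is twice the size, but only half of it is
# indexed in 2k-page mode, and every 2k<->4k mode switch flushes it.

CACHE_168_1 = 16                 # arbitrary units
CACHE_168_3 = 32                 # doubled

def usable_cache(machine, page_mode, just_switched_mode):
    if machine == "168-1":
        return CACHE_168_1                       # assumed: no flush on switch
    size = CACHE_168_3 if page_mode == "4k" else CACHE_168_3 // 2
    return 0 if just_switched_mode else size     # cold right after a switch

# vm itself runs 4k pages, a vs1 guest's shadow tables use 2k pages, so
# vs1-under-vm switches modes constantly and keeps restarting cold:
for machine in ("168-1", "168-3"):
    steady = usable_cache(machine, "2k", just_switched_mode=False)
    after_switch = usable_cache(machine, "2k", just_switched_mode=True)
    print(machine, "2k-mode cache:", steady, "right after a mode switch:", after_switch)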
for a little drift ... a number of recent postings comparing the
performance/thruput of a 768kbyte 360/67 running cp67 at the cambridge
science center with a 1mbyte 360/67 running cp67 at the grenoble
science center. the machine at cambridge was running a global LRU
replacement algorithm that i had created and grenoble was running a
local LRU replacement algorithm from the academic literature. cambridge,
running effectively twice the workload with 104 4k "available" pages
(after fixed kernel requirements on the 768k machine), had better
performance than grenoble's system (with 155 4k "available" pages after
fixed kernel requirements). a minimal sketch of the global clock-style
approach follows the references below.
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006i.html#36 virtual memory
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006i.html#42 virtual memory
https://www.garlic.com/~lynn/2006j.html#1 virtual memory
https://www.garlic.com/~lynn/2006j.html#17 virtual memory
https://www.garlic.com/~lynn/2006j.html#25 virtual memory
https://www.garlic.com/~lynn/2006l.html#14 virtual memory
https://www.garlic.com/~lynn/2006o.html#11 Article on Painted Post, NY
https://www.garlic.com/~lynn/2006q.html#19 virtual memory
https://www.garlic.com/~lynn/2006q.html#21 virtual memory
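the promised sketch ... a clock-style global LRU approximation (the general technique, not the actual cp67 code): one reference bit per real page frame and a single hand sweeping all frames, regardless of which virtual machine owns them (local LRU, by contrast, only examines the faulting task's own frames):

# minimal global "clock" page replacement sketch, illustrative only

class ClockReplacer:
    def __init__(self, nframes):
        self.frames = [None] * nframes      # which virtual page occupies each frame
        self.refbit = [0] * nframes
        self.hand = 0

    def touch(self, frame):
        self.refbit[frame] = 1              # reference bit, set on use

    def select_victim(self):
        while True:
            if self.refbit[self.hand]:      # recently used: give it another pass
                self.refbit[self.hand] = 0
                self.hand = (self.hand + 1) % len(self.frames)
            else:                           # not referenced since last sweep: replace
                victim = self.hand
                self.hand = (self.hand + 1) % len(self.frames)
                return victim

r = ClockReplacer(4)
for f in (0, 1, 3):
    r.touch(f)
print("replace frame", r.select_victim())   # frame 2, the only unreferenced one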
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REAL memory column in SDSF Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 29 Sep 2006 10:31:36 -0600Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
in the early 80s ... "big pages" were implemented for both VM and MVS. this didn't change the virtual page size ... but changed the unit of moving pages between memory and 3380s ... i.e. "big pages" were ten 4k pages (one 3380 track) that moved to disk and were fetched back from disk as a unit. a page fault for any 4k page in a "big page" ... would result in the whole "big page" being fetched from disk.
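a minimal sketch of the bookkeeping just described ... virtual pages stay 4k, but they are grouped ten at a time (one 3380 track) for movement to/from disk, and a fault on any member brings the whole group back. illustrative only:

# toy "big page" bookkeeping

BIG_PAGE = 10   # 4k pages per 3380 track

def write_out(resident_pages):
    # group pages being paged out into track-sized "big pages"
    groups = [resident_pages[i:i + BIG_PAGE]
              for i in range(0, len(resident_pages), BIG_PAGE)]
    return {min(g): g for g in groups}      # keyed arbitrarily for the demo

def page_fault(big_pages_on_disk, faulting_page):
    # one fault fetches the entire big page containing the faulting 4k page
    for key, group in big_pages_on_disk.items():
        if faulting_page in group:
            return group                    # single i/o brings all ten back
    raise KeyError(faulting_page)

disk = write_out(list(range(30)))           # 30 pages -> three big pages
print(page_fault(disk, 17))                 # pages 10..19 come back together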
note the original expanded store ... wasn't so much an architecture issue, it was a packaging/technology issue. 3090s needed more electronic store than could be packaged within the prescribed latency of cache/memory fetch. the approach was to place the storage that couldn't be packaged for close access ... on a different bus, under software control, with burst transfers in (4k) page-size units ... rather than the smaller cache-line-size units ... and then leverage the programming paradigm already in place for paging to/from disk.
this is somewhat like LCS from 360 days (8mbytes of 8mic storage ... compared to 750ns storage on the 360/67 or 2mic storage on the 360/50). the simple strategy was to just consider it an adjunct of normal, faster storage and tolerate the longer fetch cycle. however, some installations tried to carefully allocate stuff in LCS ... lower-use programs and/or purely cached data (like hasp buffers). some installations actually implemented copying programs out of LCS to faster storage before execution.
misc. past posts mentioning big pages.
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#16 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#22 Code density and performance?
https://www.garlic.com/~lynn/2006j.html#2 virtual memory
https://www.garlic.com/~lynn/2006j.html#3 virtual memory
https://www.garlic.com/~lynn/2006j.html#4 virtual memory
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#13 virtual memory
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REAL memory column in SDSF Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 29 Sep 2006 15:21:31 -0600Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
press release on ecc from 1998
http://www-03.ibm.com/press/us/en/pressrelease/2631.wss
another discussion of memory ecc
http://www.research.ibm.com/journal/rd/435/spainhower.pdf
in response to an off-list comment about 360 model storage sizes ... see
this reference:
http://www.beagle-ears.com/lars/engineer/comphist/model360.htm
note that the IBM 2361 "LCS" was offered in 1mbyte and 2mbyte sizes ... but I remember a number of installations having 8mbyte "Ampex" LCS.
past posts in this thread
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
the claim was that the 3090 expanded store memory chips were effectively the same as regular memory chips ... because ibm had really good memory yield. however, there was a vendor around 1980 that had some problems with its memory chip yield, involving various kinds of failures that made the chips unusable for normal processor fetch/store (memory).
so a bunch of these "failed" memory chips were used to build a 2305
(fixed-head disk) clone ... and a fairly large number of them (maybe
all that the vendor could produce) were obtained for internal use
... using a "model" number of 1655 ... as dedicated paging devices
on internal VM timesharing systems. the claim was that they were able
to engineer compensation (for the various chip problems) at the 4k
block-transfer boundary that wouldn't have been practical for
standard processor fetch/store. some recent posts mentioning the 1655
2305-clone paging devices:
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006k.html#57 virtual memory
for other drift ... there was a lot of modeling for 3090 balanced speeds&feeds ... part of it was having sufficient electronic memory to keep the processor busy (which then led to the expanded store stuff)
part of the issue was using electronic memory to compensate for disk
thruput. starting in the late 70s, i was making statements that disk
relative system thruput had declined by an order of magnitude over a
period of years. the disk division assigned the performance and
modeling group to refute the statement. after a period of several
weeks, they came back and mentioned that i had actually slightly
understated the problem ... the analysis was then turned around into a
SHARE presentation on optimizing disk thruput (i.e. leveraging
strengths and compensating for weaknesses). misc. postings
referencing that share presentation
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
one of the issues that cropped up (somewhat unexpectedly?) was the significant increase in 3880 (disk controller) channel busy. the 3090 channel configuration had somewhat been modeled assuming 3830 control unit channel busy. the 3830 had a high performance horizontal microcode engine. for the 3880, they went to separate processing for the data path (enabling support for 3mbyte/sec and then 4.5mbyte/sec transfers), but a much slower vertical microprogrammed engine for control commands. this slower processor significantly increased channel busy when processing channel controls/commands (compared to the 3830).
a recent post discussing some of the problems that cropped up during
3880 development (these showed up before first customer ship and
allowed some work on improvement)
https://www.garlic.com/~lynn/2006q.html#50 Was FORTRAN buggy?
however, there was still a fundamental issue that the 3880 controller increased channel busy time per operation ... more than had been anticipated. in order to get back to balanced speeds&feeds for the 3090 ... the number of 3090 channels had to be increased (to compensate for the increased 3880 channel busy overhead).
now, it was possible to build a 3090 with relatively few TCMs. the requirement (because of the increased 3880 channel busy) to increase the number of channels meant an additional TCM in the 3090 build (for the additional channels) ... which wasn't an insignificant increase in manufacturing cost. at one point there was a suggestion (from pok) that the cost of the one additional TCM for every 3090 sold ... should be taken out of san jose's bottom line (as opposed to showing up against POK's bottom line).
the overall situation might be attributed to the after effects from
the failure of FS
https://www.garlic.com/~lynn/submain.html#futuresys
a big driving factor in FS was as a countermeasure to clone/plug-compatible
controllers ... some collected postings about having been involved in
creating a plug-compatible controller as an undergraduate
https://www.garlic.com/~lynn/submain.html#360pcm
however, from this article on FS (by one of the ibm executives involved)
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in
a partial narrowing of the price gap between IBM and its rivals.
... snip ...
i.e. the 3880 "box strategy" might be construed as sub-optimal from an overall system perspective.
for other drift ... recent postings about san jose disk
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#18 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#21 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#30 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#33 50th Anniversary of invention of disk drives
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REAL memory column in SDSF Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 30 Sep 2006 06:18:36 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
"big pages" support shipped in VM HPO3.4 ... it was referred to as "swapper" ... however the traditional definition of swapping has been to move all storage associated with a task in single unit ... I've used the term of "big pages" ... since the implementation was more akin to demand paging ... but in 3380 track sized units (10 4k pages).
from vmshare archive ... discussion of hpo3.4
http://vm.marist.edu/~vmshare/browse.cgi?fn=34PERF&ft=MEMO
and mention of hpo3.4 swapper from melinda's vm history
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST05&ft=NOTE&args=swapper#hit
vmshare was online computer conferencing provided by tymshare to SHARE
organization starting in the mid-70s on tymshare's vm370 based
commercial timesharing service ... misc. past posts referencing
various vm370 based commercial timesharing services
https://www.garlic.com/~lynn/submain.html#timeshare
in the original 370, there was support for both 2k and 4k pages ... and the page size, the unit for managing real storage with virtual memory, was also the unit for moving virtual memory between real storage and disk. the smaller page size tended to better optimize constrained real storage (i.e. compared to 4k pages, an application might actually only need the first half or the last half of a specific 4k page; with 2k pages the application could effectively execute in less total real storage).
the issue mentioned in this post
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
and
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
was that systems had shifted from having excess disk i/o resources to
disk i/o resources being a major system bottleneck ... issue also discussed
here about CKD DASD architecture
https://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#33 50th Anniversary of invention of disk drives
with the increasing amounts of real storage ... there was more and more of a tendency to leverage the additional real storage resources to compensate for the declining relative system disk i/o efficiency.
this was seen in mid-70s with the vs1 "hand-shaking" that was somewhat
done in conjunction with the ECPS microcode enhancement for 370
138/148.
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
VS1 was effectively MFT laid out to run in a single 4mbyte virtual address space with 2k paging (somewhat akin to os/vs2 svs mapping MVT to a single 16mbyte virtual address space). In vs1 hand-shaking, vs1 was run in a 4mbyte virtual machine with a one-to-one correspondence between the vs1 4mbyte virtual address space 2k virtual pages and the 4mbyte virtual machine address space.
VS1 hand-shaking effectively turned over paging to the vm virtual machine handler (vm would present a special page fault interrupt to the vs1 supervisor ... and then when vm had finished handling the page fault, present a page complete interrupt to the vs1 supervisor). Part of the increase in efficiency was eliminating duplicate paging when VS1 was running under vm. However, part of the efficiency improvement was that VM was doing demand paging using 4k transfers rather than VS1's 2k transfers. In fact, there were situations where VS1 running on a 1mbyte 370/148 under VM had better thruput than VS1 running stand-alone w/o VM (the other part of this was that my global LRU replacement algorithm and my code pathlength from handling the page fault, through doing the page i/o, to completion were much better than the equivalent VS1 code).
there were two issues with 3380. over the years, disk i/o had become increasingly a significant system bottleneck; more specifically, latency per disk access (arm motion and avg. rotational delay) was significantly lagging behind improvements in other system components. so part of compensating for disk i/o access latency was to significantly increase the amount transferred per operation. the other issue was that 3380 increased the transfer rate by a factor of ten while its access rate only improved by a factor of 3-4. significantly increasing the amount transferred per access also better matched the changes in disk technology over time (note later technologies introduced raid, which did large transfers across multiple disk arms in parallel)
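a back-of-the-envelope sketch of why larger transfers per access pay off when transfer rate improves much faster than access latency ... the device numbers below are assumptions purely for illustration, not actual 3330/3380 specifications:

def effective_rate(access_ms, transfer_mb_per_s, transfer_kb):
    # data moved per access divided by (access time + transfer time), in MB/s
    transfer_ms = (transfer_kb / 1024.0) / transfer_mb_per_s * 1000.0
    return (transfer_kb / 1024.0) / ((access_ms + transfer_ms) / 1000.0)

older = dict(access_ms=38.0, transfer_mb_per_s=0.8)   # assumed "older" drive
newer = dict(access_ms=24.0, transfer_mb_per_s=3.0)   # assumed "newer" drive

for kb in (4, 40):   # single 4k page vs a 10-page "big page" track transfer
    print(f"{kb:3d}KB per access: "
          f"older {effective_rate(transfer_kb=kb, **older):.3f} MB/s, "
          f"newer {effective_rate(transfer_kb=kb, **newer):.3f} MB/s")

# per-access latency dominates at 4KB, so most of the faster transfer rate
# is wasted; the larger transfer recovers much more of it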
full track caching is another approach that attempts to leverage the relative abundance of electronic memory (in the drive or controller) to compensate for the relatively high system cost of doing each disk arm access. part of this is starting transfers (to the cache) as soon as the arm has settled ... even before the head has reached the specified requested record. disk rotation is part of the bottleneck ... so full track caching goes ahead and transfers the full track during the rotation ... on the off chance that the application might have some need for any of the rest of the data on the track (the electronic memory in the cache is relatively free compared to the high system cost of doing each arm access and rotational delay).
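a minimal sketch of the full track caching idea ... the "disk" below is just an invented in-memory stand-in, not any actual drive/controller implementation:

class TrackCache:
    def __init__(self, disk):
        self.disk = disk          # track number -> list of record payloads
        self.cache = {}           # track number -> cached copy of the track

    def read(self, track, record):
        if track not in self.cache:
            # one arm access plus at most one rotation stages the whole track,
            # on the chance that nearby records are wanted soon
            self.cache[track] = list(self.disk[track])
        return self.cache[track][record]

disk = {7: ["rec0", "rec1", "rec2"]}
cache = TrackCache(disk)
cache.read(7, 1)   # miss: whole track staged into electronic memory
cache.read(7, 2)   # hit: no additional arm access or rotational delay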
there is a separate system optimization with respect to increasing the physical page size. making the physical page size smaller allowed for better optimizing relatively scarce real storage sizes. with the shift in system bottleneck from constrained real storage to constrained i/o ... it was possible to increase the amount of data paged per operation w/o actually going to a larger physical page size (by transferring multiple pages at a time ... as in the "big page" scenario).
there is periodic discussion in comp.arch about the advantages of going to much bigger (hardware) page sizes ... 64kbytes, 256kbytes, etc ... as part of increasing TLB (table look-aside buffer) performance. the actual translation of a virtual address to a physical real storage address is implemented in the TLB. A task switch may result in the need to change TLB entries ... where hundreds of TLB entries ... one for each application 4k virtual page ... may be involved. For some loads/configurations, the TLB reload latency may become a significant portion of task switch elapsed time. Going to much larger page sizes ... reduces the number of TLB entries ... and possible TLB entry reloads ... necessary for running an application.
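some quick arithmetic behind the comp.arch argument ... how much of an address space a fixed-size TLB can map at different page sizes; the 128-entry TLB below is an assumption purely for illustration:

TLB_ENTRIES = 128   # assumed TLB size, for illustration only

for page_size in (4 * 1024, 64 * 1024, 256 * 1024):
    coverage = TLB_ENTRIES * page_size
    print(f"{page_size // 1024:4d}KB pages: TLB covers {coverage / (1024*1024):.1f} MB")

# with 4KB pages, 128 entries cover only 0.5MB, so bringing in a new working
# set after a task switch can mean hundreds of TLB misses; larger pages cover
# far more address space with the same number of entries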
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Trying to underdtand 2-factor authentication Newsgroups: comp.security.misc Date: Sat, 30 Sep 2006 06:56:17 -0600"not_here.5.species8350@xoxy.net" <not_here.5.species8350@xoxy.net> writes:
• something you have • something you know • something you are
a hardware token can represent something you have technology and a password can represent something you know technology. typically multi-factor authentication is considered more secure because the different factors have different/independent vulnerabilities (i.e. pin/password considered countermeasure to lost/stolen token, modulo not writing the pin/password on the token).
a couple old posts discussing one-time passwords implementation
and possible vulnerabilities/exploits
https://www.garlic.com/~lynn/2003n.html#1 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#2 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#3 public key vs passwd authentication?
it is also possible to have a common vulnerability for different
factors. misc posts discussing yes cards exploit
https://www.garlic.com/~lynn/subintegrity.html#yescard
where the token validates using static data (effectively a kind of pin/password). the static data can be skimmed and used to create a counterfeit token. the yes card operation involves the infrastructure validating the token ... and then asking the token if the entered pin was correct. the counterfeit yes cards are programmed to always answer YES, regardless of what pin is entered.
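a toy sketch (nothing like actual chip&pin/EMV logic, names invented) of why letting the token render the pin verdict breaks the independence assumption ... a counterfeit card built from skimmed static data can simply answer YES to any pin:

class GenuineCard:
    def __init__(self, static_data, pin):
        self.static_data = static_data      # skimmable authentication data
        self._pin = pin
    def verify_pin(self, entered):
        return entered == self._pin

class YesCard:
    def __init__(self, skimmed_static_data):
        self.static_data = skimmed_static_data
    def verify_pin(self, entered):
        return True                         # always "YES", any pin works

def terminal_accepts(card, entered_pin, known_static_data):
    # terminal checks the (static) card data, then asks the card about the pin
    return card.static_data == known_static_data and card.verify_pin(entered_pin)

real = GenuineCard("ACCT-123-STATIC", pin="4321")
fake = YesCard(real.static_data)            # cloned from skimmed data
print(terminal_accepts(fake, "0000", "ACCT-123-STATIC"))   # True: the exploit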
however, it is possible that the way the token validates itself is via some sort of one-time password technology (as opposed to some purely static data technology). in such a situation, the one-time password isn't independent of the token ... it is equivalent to the token (and therefore doesn't represent multi-factor authentication).
another possible variation is using the token to transport information used for authentication. in the yes card scenario, the token was used for both transporting and verifying the user's PIN ... however there wasn't an independent method of verifying that the user actually knew the PIN ... which in turn invalidated the assumption about multi-factor authentication having different/independent vulnerabilities (and therefore being more secure)
in the following reference discussion about electronic passports, the
token is used to carry personal information that can be used for
something you are authentication (guard checks the photo in the
token against a person's face). the issue here is a question about the
integrity of the information carried in the token (can it be
compromised or altered). however, the token itself doesn't really
represent any kind of something you have authentication (it
purely is used to carry/transport the information for something you
are authentication)
https://www.garlic.com/~lynn/aadsm25.htm#32 On-card displays
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REAL memory column in SDSF Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 30 Sep 2006 13:51:38 -0600edgould1948@ibm-main.lst (Ed Gould) writes:
at the time, cp67 was one of the few relatively successful operating systems that supported virtual memory, paging, etc (at least in the ibm camp). as a result some of the people working on os/vs2 svs were looking at pieces of cp67, for example.
one of the big issues facing transition from real memory mvt to virtual memory environment was what to do about channel programs.
in a virtual machine environment, the guest operating system invokes channel programs ... that have virtual addresses. channel operation runs asynchronously with real addresses. as a result, cp67 had a lot of code (module CCWTRANS) to create an exact replica of the virtual channel program ... but with real addresses (along with fixing the associated virtual pages at real addresses for the duration of the i/o operation). these were "shadow" channel programs.
svs had a comparable problem with channel programs generated in the application space and the address passed to the kernel with EXCP/SVC0. the svs kernel was now faced with also scanning the virtual channel program and creating a replica/shadow version using real addresses. the initial work involved taking CCWTRANS from cp67 and crafting it into the SVS development effort.
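a very simplified sketch of the shadow channel program idea (nothing like the actual CCWTRANS code ... it ignores data chaining, ccws that cross page boundaries, channel status handling, etc; all names and numbers are invented):

PAGE = 4096

def build_shadow(ccws, page_table, pin_page):
    shadow = []
    for op, virt_addr, length in ccws:          # (command, virtual addr, count)
        page, offset = divmod(virt_addr, PAGE)
        real_frame = page_table[page]           # fault the page in if needed
        pin_page(real_frame)                    # keep it fixed while i/o runs
        shadow.append((op, real_frame * PAGE + offset, length))
    return shadow

pinned = set()
page_table = {0: 25, 1: 7}                      # virtual page -> real frame
virtual_program = [("READ", 0x0800, 512), ("READ", 0x1000, 512)]
print(build_shadow(virtual_program, page_table, pinned.add))
# the shadow program (with real addresses) is what actually gets started on
# the channel; the pinned pages are released when the i/o completes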
one of the other issues was that the POK performance modeling group got involved in doing low-level event modeling of os/vs2 paging operations. one of their conclusions ... which I argued with them about ... was that replacing non-changed pages was more efficient than selecting a changed page for replacement. no matter how much arguing, they were adamant that on a page fault ... for a missing page ... the page replacement algorithm should look for a non-changed page to replace (rather than a changed page). their reasoning was that replacing a non-changed page took significantly less effort (there was no writing out required for the current page).
the issue is that in an LRU (least recently used) page replacement strategy ... you are looking to replace pages that have the least likelihood of being used in the near future. the non-changed/changed strategy resulted in less weight being placed on whether the page would be needed in the near future. this strategy went into svs and continued into the very late 70s (with mvs) before it was corrected.
finally it dawned on somebody that the non-changed/changed strategy resulted in replacing relatively high-use, commonly used linkpack executable (non-changed) pages before more lightly referenced, private application data (changed) pages.
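a tiny simulation contrasting plain LRU with the prefer-non-changed policy ... the frame count and reference pattern are invented, but it shows how a high-use, non-changed (linkpack-like) page gets replaced ahead of once-touched changed pages:

from collections import OrderedDict

def run(policy, refs, nframes=3):
    frames = OrderedDict()            # page -> changed? ; dict order = LRU order
    faults = 0
    for page, writes in refs:
        if page in frames:
            frames.move_to_end(page)
            frames[page] = frames[page] or writes
            continue
        faults += 1
        if len(frames) >= nframes:
            if policy == "prefer_unchanged":
                # take the least-recently-used *non-changed* page if one exists,
                # even when a changed page has gone unreferenced far longer
                victim = next((p for p, ch in frames.items() if not ch),
                              next(iter(frames)))
            else:                     # plain LRU
                victim = next(iter(frames))
            del frames[victim]
        frames[page] = writes
    return faults

# "L" is a high-use read-only (linkpack-like) page; D1/D2/D3 are private
# changed pages touched once each; L keeps getting re-referenced
refs = [("L", False), ("D1", True), ("D2", True),
        ("L", False), ("D3", True), ("L", False)]
print("plain LRU faults:        ", run("lru", refs))
print("prefer-unchanged faults: ", run("prefer_unchanged", refs))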
these days there is a lot of trade-off in moving data between memory and disk in really large block transfers .... and using excess electronic memory to compensate for disk i/o bottlenecks. in the vs1 handshaking scenario ... vs1 letting vm do its paging in 4k blocks was frequently significantly more efficient than paging in 2k blocks (it made less efficient use of real storage, but it was a reasonable trade-off since there were effectively more real storage resources than disk i/o access resources).
later "big pages" went to 40k (10 4k page) 3380 track demand page transfers. vm/hpo3.4 would typically do more total 4k transfers than vm/hpo3.2 (for the same workload and thruput) ... however, it could do the transfers with much fewer disk accesses; it made less efficient use of real storage, but more efficient use of disk i/o accesses (again trading off real storage resource efficiency for disk i/o resource efficiency).
... or somewhat reminiscent of a line that I started using as an
undergraduate in connection with dynamic adaptive scheduling;
schedule to the (system thruput) bottleneck. misc. past posts
mentioning past dynamic adaptive scheduling work and/or the resource
manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
previous posts in this thread:
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
misc past posts mentioning os/vs2 starting out using CCWTRANS from cp67
https://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#14 Page Table - per OS/Process
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005p.html#18 address space
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#41 Instruction Set Enhancement Idea
https://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006j.html#5 virtual memory
https://www.garlic.com/~lynn/2006j.html#27 virtual memory
https://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks
https://www.garlic.com/~lynn/2006o.html#27 oops
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REAL memory column in SDSF Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 30 Sep 2006 18:51:32 -0600Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
as already discussed (in some detail) ... 3880 disk controller processed
control commands much slower than the previous 3830 disk controller
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
which meant that it was taking longer elapsed time between commands ... while the disks continued to rotate.
there had been earlier detailed studies regarding the elapsed time to do a head switch on 3330s ... in order to read/write "consecutive" blocks on different tracks (on the same cylinder) w/o unproductive disk rotation. for head switching (3330), the official specs called for a 110-byte dummy spacer record (between 4k page blocks) that allowed time for processing the head switch command ... while the disk continued to rotate. the rotation of the dummy spacer block overlapped with the processing of the head switch command ... allowing the head switch command processing to complete before the next 4k page block had rotated past the r/w head.
the problem was that a 3330 track only had enuf room for three 4k page blocks with 101-byte dummy spacer records rather than the full 110 (i.e. depending on the channel and controller, by the time the head switch command had finished processing, the start of the next 4k record could have already rotated past the r/w head).
it turns out that both channels and disk controllers introduced processing delay/latency. so i put together a test program that would format a 3330 track with different sized dummy spacer blocks and then test whether a head switch was performed fast enuf before the target record had rotated past the r/w head.
i tested the program with 3830 controllers on 4341, 158, 168, 3031, 3033, and 3081. it turns out that with a 3830 in combination with the 4341 and 370/168, the head switch command was processed within the 101-byte rotation latency.
the combination of 3830 and 158 didn't process the head switch command within the 101-byte rotation (resulting in a missed revolution). the 158 had integrated channel microcode sharing the 158 processor engine with the 370 microcode. all the 303x processors had an external "channel director" box. the 303x channel director boxes were a dedicated 158 processing engine with only the integrated channel microcode (w/o the 370 microcode) ... and none of the 303x processors could handle the head switch processing within the 101-byte dummy block rotation latency. the 3081 channels appeared to have similar processing latency as the 158 and 303x channel director (not able to perform the head switch operation within the 101-byte dummy block rotation).
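rough timing of the head switch window ... 3,600 rpm and ~13,030 data bytes per 3330 track are the commonly quoted figures, but treat them (and the derived numbers) as approximate:

RPM = 3600
BYTES_PER_TRACK = 13030

ms_per_rev = 60.0 / RPM * 1000.0                    # ~16.7 ms per revolution
ms_per_byte = ms_per_rev / BYTES_PER_TRACK

for spacer in (101, 110):
    window_ms = spacer * ms_per_byte
    print(f"{spacer}-byte dummy block rotates past in ~{window_ms*1000:.0f} microseconds")

# so the channel plus controller have on the order of 130-140 microseconds to
# finish processing the head switch command; take any longer and the start of
# the next 4k block has already gone by, costing a full ~16.7 ms revolution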
i also got a number of customer installations to run the test with a wide variety of processors and both 3830 controllers and oem clone disk controllers.
misc. past posts discussing the 3330 101/110 dummy block for
head switch latency:
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2004d.html#64 System/360 40 years old today
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
https://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#49 can a program be run withour main memory?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Very slow booting and running and brain-dead OS's? Newsgroups: alt.folklore.computers Date: Sun, 01 Oct 2006 09:48:59 -0600jmfbahciv writes:
started running into this problem as they started acquiring customers all around the world (early to mid 70s) ... and were faced with providing 7x24 service.
one of the increasing problem issues was that the field service people needed to take over a machine once a month (or sometimes more often) for service (and with 7x24 operation ... traditional weekend sat or sun midnight period was becoming less and less acceptable). at least some of the service required a whole system infrastructure .. where they would run various kinds of stand-alone diagnostics.
to compensate, they ran loosely-coupled (cluster) configurations and added software support for process migration across processors in the cluster. they even claimed to be able to migrate a process from a cluster in a datacenter on the east coast to a cluster in a datacenter on the west coast ... modulo the amount of context/data that was required ... back in the days of 56kbit telco links.
much later when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
we coined the terms disaster survivability and geographic
survivability
https://www.garlic.com/~lynn/submain.html#available
now, fast reboot had already been done back in the late 60s for cp67 ... as cp67 systems were starting to move into more and more critical timesharing (and starting to offer 7x24 service). this then carried forward into vm370.
old tale about how fast cp67 rebooted (after a problem, in contrast to
multics)
https://www.multicians.org/thvv/360-67.html
mentioning cp67 crashing (and restarting) 27 times in one day.
cp67 had been done on the 4th flr of 545 tech sq, multics on the 5th flr of 545 tech sq ... and for some reason i believe MIT USL was in one of the other tech sq bldgs (across the courtyard). tech sq had three 10-story bldgs (9 office floors; was there a 10th?) forming a courtyard ... with a two-story Polaroid bldg on the 4th (street) side (i've told before that the 4th floor science center overlooked land's balcony and once watched a demo of the unannounced sx-70 being done on the balcony).
the cause of the multiple cp67 crashes was a local software modification that had been applied to the USL system. I had added ascii/tty support to cp67 when i was an undergraduate at the university ... and played some games with using one byte values. the local USL modification was to increase the maximum tty terminal line size from 80 chars to something like 1200(?) for some sort of new device (some sort of plotter?) over at harvard. the games with one byte values resulted in calculating incorrect lengths if the max. line size was increased past 255 (which then resulted in the system failing).
some more on tech sq:
https://www.multicians.org/tech-square.html
note that in the above description ... the (IBM) boston programming center also shared the 3rd floor of 545 tech sq. when the cp67 group split off from the science center, they moved to the 3rd flr, absorbing the boston programming center. as the group expanded and morphed into the vm370 group ... it outgrew the 3rd floor and moved out to the old sbc bldg in burlington mall (vacated when sbc was sold/transferred to cdc).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REAL memory column in SDSF Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 01 Oct 2006 14:16:40 -0600shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
reference in previous post
http://www.research.ibm.com/journal/rd/435/spainhower.pdf
... from reference above:
When a chip is b bits (b >= 2) wide, an access to a 64-bit data word
may have a b-bit block or byte error. There are codes to variously
correct single b-bit errors and detect double b-bit errors. For G3 and
G4, a code with 4-bit correction capability (S4EC) was implemented.
Because the system design included dynamic on-line repair of chips
with massive failures, it was not necessary to design a (78, 64) code
which could both correct one 4-bit error and detect a second 4-bit
error (D4ED). Such a code would have required an extra chip per
checking block. The (76, 64) S4EC/DED ECC implemented on G3 and G4 is
designed to ensure that all single-bit failures of one chip (and a
very high probability of double- and triple-bit failures) occurring in
the same doubleword as a 4-bit error on a second chip are detected
[15]. G5 returns to single-bit-per-chip ECC and is therefore able to
again use a less costly (72, 64) SEC/DED code and still protect the
system from catastrophic failures caused by a single array-chip
failure.
... snip ...
and detailed 3090 description
http://www.research.ibm.com/journal/sj/251/tucker.pdf
... from above
Both the central and expanded storages have error-correcting codes. The
central storage has a single error-correcting, double-error-detecting
code on each double word of data. The code is designed to detect all
four-bit errors on a single card. The correcting code is passed to the
caches on a fetch operation so that it can cover transmission errors
as well as storage-array errors. The expanded storage is even more
fault-tolerant. Each quad-word of the expanded storage has a
double-error-correcting, triple-error-detecting code. Again, a
four-bit error is always detected if caused by a single-card-level
failure.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REAL memory column in SDSF Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 01 Oct 2006 14:56:57 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
thread from vmshare computer conferencing on how to get
old 2k key based operating systems to run under vm on
3081k having only support for 4k keys.
http://vm.marist.edu/~vmshare/browse.cgi?fn=2K_SCP&ft=MEMO
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Was FORTRAN buggy? Newsgroups: alt.folklore.computers Date: Mon, 02 Oct 2006 07:31:23 -0600jmfbahciv writes:
tod clock was part of original 370 ... even before virtual memory for 370 had been announced.
i have some memory of spending 3 months in a taskforce/effort discussing the tod clock ... one item was discussing the original specification that the clock epoch was the 1st day of the century ... and whether the century started 01jan1900 or 01jan1901 (and for some reason, in a lot of early internal testing, people repeatedly set the epoch to neither, but to 01jan1970). the other topic of interest that went round and round was how to handle leap seconds.
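purely illustrative conversion between the 370 tod clock format (64-bit value, bit 51 ticks once per microsecond, epoch 00:00 1 january 1900) and unix time ... leap seconds ignored here, which is exactly the sort of detail that went round and round:

import time

SECONDS_1900_TO_1970 = 2208988800      # well-known offset between the two epochs

def tod_to_unix_seconds(tod):
    microseconds = tod >> 12           # drop bits 52-63 so bit 51 = 1 microsecond
    return microseconds / 1_000_000 - SECONDS_1900_TO_1970

def unix_seconds_to_tod(unix_seconds):
    microseconds = round((unix_seconds + SECONDS_1900_TO_1970) * 1_000_000)
    return microseconds << 12

now = time.time()
print(hex(unix_seconds_to_tod(now)), tod_to_unix_seconds(unix_seconds_to_tod(now)))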
past postings mentioning leap seconds
https://www.garlic.com/~lynn/2000.html#4 Computer of the century
https://www.garlic.com/~lynn/2000d.html#42 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2001f.html#53 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2004m.html#37 Multi-processor timing issue
https://www.garlic.com/~lynn/2005t.html#15 Best practice for TOD clock
https://www.garlic.com/~lynn/2006g.html#30 TOD Clock the same as the BIOS clock in PCs?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Was FORTRAN buggy? Newsgroups: alt.folklore.computers Date: Mon, 02 Oct 2006 07:38:46 -0600Anne & Lynn Wheeler <lynn@garlic.com> writes:
oops, finger slip, that should be
https://www.garlic.com/~lynn/gcard.html#16
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Trying to design low level hard disk manipulation program Newsgroups: comp.arch Date: Mon, 02 Oct 2006 11:53:40 -0600dgay writes:
it really was a significant change for directories that happened to have a couple thousand filenames (as some number of high-use system directories did).
i recently ran into something similar using sort on filenames and doing something other than linear search ... the sort command's default collating sequence had changed and moved how the period was handled (it showed up between capital H and capital I). i had to explicitly set "LC_ALL=C" to get sort back to working the way i was used to.
a similar, but different problem we handled long ago and far away ... when we did an online telephone book for several hundred thousand corporate employees. for lots of reasons ... the names/numbers were kept in a linear flat file ... but sorted. the search was radix-like ... based on measured first-letter frequency ... taking the size of the file and probing part way into the file based on the first letters of the search argument and the related letter frequencies for names (originally compiled into the search program). it could frequently get to the appropriate physical record within a probe or two (w/o requiring a separate index or other infrastructure).
we had a special collating/sort order assuming that names (and search arguments) had no blanks (even tho any names with embedded blanks were carried in the actual data; the ignore-blanks behavior was a special sort characteristic/option). in the name scenario ... name collisions/duplicates were allowed ... so a search result might present multiple matches.
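a sketch of the probe-by-letter-frequency idea ... the frequencies and names below are invented, and a python list stands in for the sorted flat file and its physical records:

import bisect

# assumed cumulative fraction of names starting before each letter
CUM_FREQ = {"A": 0.00, "B": 0.05, "C": 0.12, "D": 0.20, "H": 0.35,
            "M": 0.55, "S": 0.75, "W": 0.92, "Z": 0.99}

def probe_search(names, target):
    # names: sorted list standing in for the flat file's records
    letters = sorted(CUM_FREQ)
    idx = bisect.bisect_right(letters, target[0]) - 1
    guess = int(CUM_FREQ[letters[max(idx, 0)]] * len(names))
    # start with a small window around the guessed position (like reading the
    # physical record the probe landed in), then widen until the collating
    # order brackets the target
    lo, hi = max(0, guess - 2), min(len(names), guess + 3)
    while lo > 0 and names[lo] >= target:
        lo -= 1
    while hi < len(names) and names[hi - 1] <= target:
        hi += 1
    return [n for n in names[lo:hi] if n == target]   # duplicates allowed

names = sorted(["ADAMS", "BAKER", "MILLER", "MILLER", "SMITH", "WHEELER"])
print(probe_search(names, "MILLER"))   # both duplicate entries returned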
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mickey and friends Newsgroups: alt.folklore.computers Date: Tue, 03 Oct 2006 08:49:21 -0600jmfbahciv writes:
from above:
my wife has just started a set of books that had been awarded her
father at west point ... they are from a series of univ. history
lectures from the (18)70/80s (and the books have some inscription
about being awarded to her father for some excellence by the colonial
daughters of the 17th century).
part of the series covers the religious extremists that colonized new
england and that the people finally got sick of the extreme stuff that
the clerics and leaders were responsible for and eventually migrated
to more moderation. it reads similar to some of lawrence's
description of religious extremism in the seven pillars of wisdom.
there is also some thread that notes that w/o the democratic influence
of virginia and some of the other moderate colonies ... the extreme
views of new england would have resulted in a different country.
somewhat related is a story that my wife had from one her uncles
several years ago. salem had sent out form letters to descendants of
the town's inhabitants asking for contributions for a memorial. the
uncle wrote back saying that since their family had provided the
entertainment at the original event ... that he felt that their family
had already contributed sufficiently.
... snip ... and ...
i was recently reading an old history book (published around 1880)
that claimed that it was extremely fortunate that the declaration of
independence (as well as other founding efforts) were much more
influenced by scottish descendants in the (state of) virginia area
... than any english influence from the (state of) mass. area ...
that the USA would be a markedly different nation if more of the
Massachusetts/English influence had prevailed (as opposed to the
Virginia/Scottish influence).
... snip ...
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: cold war again Newsgroups: alt.folklore.computers Date: Tue, 03 Oct 2006 11:52:37 -0600wclodius writes:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Seeking info on HP FOCUS (HP 9000 Series 500) and IBM ROMP CPUs from early 80's Newsgroups: comp.arch Date: Tue, 03 Oct 2006 12:27:29 -0600guy.larri writes:
in the past couple yrs ... somebody advertised an "aos" rt/pc (machine, software, and documentation) in alt.folklore.computers.
originally ROMP was going to be an austin OPD (office products division) follow-on for the displaywriter. when that got canceled ... the group looked around and decided to try and revive the box as a unix workstation. they got the group that had done the at&t unix port for pc/ix ... to do one for romp ... and you got rt/pc and aix.
the palo alto group had been working on doing a Berkeley port to 370. at some point after the rt/pc first became available, the decision was to retarget the effort from 370 to rt/pc ... and you got "aos".
there was a little discord between austin and palo alto over aos.
the original austin group was using cp.r and pl.8 for the displaywriter work. as part of retargeting romp from displaywriter to a unix workstation ... it was decided that the austin pl.8 group could implement a VRM (virtual resource manager, in pl.8). the group that had done the pc/ix port would then port to an abstract VRM layer ... rather than to the bare metal.
palo alto then did the berkeley port for aos to the bare metal. the problem was that austin had claimed that the total VRM development effort plus the port to the VRM interface was less effort than any straight port to the bare metal. unfortunately(?), palo alto's port to the bare metal was done with very few resources and little effort.
misc. past posts mentioning 801, romp, rios, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801
and doing a q&d, trivial search with a search engine ... the very first reference
http://domino.watson.ibm.com/tchjr/journalindex.nsf/0/f6570ad450831a2485256bfa00685bda?OpenDocument
then two wikipedia references ... and then 4, 5, 6, 7, ....
http://www.research.ibm.com/journal/sj/264/ibmsj2604D.pdf
http://www.research.ibm.com/journal/sj/261/ibmsj2601H.pdf
http://www.landley.net/history/mirror/ibm/Cocke.htm
http://www.thocp.net/timeline/1974.htm
http://www.islandnet.com/~kpolsson/workstat/
http://www.devx.com/ibm/Article/20944
http://www.experiencefestival.com/romp0944
http://www.rootvg.net/column_risc.htm
http://www.informatik.uni-trier.de/~ley/db/journals/ibmsj/ibmsj26.html