List of Archived Posts

2006 Newsgroup Postings (09/23 - 10/03)

Cray-1 Anniversary Event - September 21st
Greatest Software Ever Written?
Was FORTRAN buggy?
Trying to design low level hard disk manipulation program
Was FORTRAN buggy?
Was FORTRAN buggy?
Was FORTRAN buggy?
Was FORTRAN buggy?
should program call stack grow upward or downwards?
Was FORTRAN buggy?
Was FORTRAN buggy?
Was FORTRAN buggy?
Trying to design low level hard disk manipulation program
Was FORTRAN buggy?
50th Anniversary of invention of disk drives
50th Anniversary of invention of disk drives
Was FORTRAN buggy?
Greatest Software Ever Written?
50th Anniversary of invention of disk drives
Greatest Software Ever Written?
50th Anniversary of invention of disk drives
50th Anniversary of invention of disk drives
Was FORTRAN buggy?
50th Anniversary of invention of disk drives
A Day For Surprises (Astounding Itanium Tricks)
Computer Artifacts
A Day For Surprises (Astounding Itanium Tricks)
A Day For Surprises (Astounding Itanium Tricks)
Greatest Software Ever Written?
Intel abandons USEnet news
50th Anniversary of invention of disk drives
50th Anniversary of invention of disk drives
MIPS architecture question - Supervisor mode & who is using it?
50th Anniversary of invention of disk drives
REAL memory column in SDSF
REAL memory column in SDSF
REAL memory column in SDSF
REAL memory column in SDSF
Trying to understand 2-factor authentication
REAL memory column in SDSF
REAL memory column in SDSF
Very slow booting and running and brain-dead OS's?
REAL memory column in SDSF
REAL memory column in SDSF
Was FORTRAN buggy?
Was FORTRAN buggy?
Trying to design low level hard disk manipulation program
Mickey and friends
cold war again
Seeking info on HP FOCUS (HP 9000 Series 500) and IBM ROMP CPUs from early 80's

Cray-1 Anniversary Event - September 21st

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cray-1 Anniversary Event - September 21st
Newsgroups: alt.folklore.computers
Date: Sat, 23 Sep 2006 09:22:16 -0600
Morten Reistad <first@last.name> writes:
This is a welcome development in the "aid industry". They are so used to public monies where accountability for actual results is abysmal. This is a field where mr Gates can contribute a lot just by being his normal self. I wish him all the best.

from thread earlier this year ... reference to similar comment in a talk by the comptroller general
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor

other posts mentioning the same talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
https://www.garlic.com/~lynn/2006o.html#61 Health Care
https://www.garlic.com/~lynn/2006p.html#17 Health Care

Greatest Software Ever Written?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Greatest Software Ever Written?
Newsgroups: alt.folklore.computers
Date: Sat, 23 Sep 2006 10:03:07 -0600
Morten Reistad <first@last.name> writes:
No, not "will have computer chips"; "have had computer chips for a year or two". They are called RFID tags. They are tags that can be read at 2-200 ft range (depending on sophistication of equipment).

there are contactless/proximity cards ... like the wash. dc metro (cubic) ... or the HK octopus card (iso 14443, sony chip, distributed by mitsubishi, erg). these typically have the transit gate reading the contents, decrypting the contents (with a derived symmetric key), updating the contents, re-encrypting the contents, and writing back the updated encrypted contents.
http://www.smartcardalliance.org/newsletter/september_04/feature_0904.html
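
a minimal sketch of that gate-side cycle (python, purely illustrative ... the hmac key diversification and the toy xor "cipher" are stand-ins, not the actual iso 14443 / octopus / cubic scheme, and the function names are made up):

import hmac, hashlib, os

MASTER_KEY = os.urandom(16)                   # issuer/operator master key (illustrative)

def derive_card_key(card_uid):
    # per-card ("derived") symmetric key, diversified from the card's unique id
    return hmac.new(MASTER_KEY, card_uid, hashlib.sha256).digest()

def toy_cipher(key, data):
    # placeholder keystream cipher so the sketch stays self-contained -- NOT a real cipher
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

def gate_cycle(card_uid, stored_blob, fare_cents):
    key = derive_card_key(card_uid)                                             # derive key
    balance = int.from_bytes(toy_cipher(key, stored_blob), "big", signed=True)  # decrypt contents
    balance -= fare_cents                                                       # update contents
    return toy_cipher(key, balance.to_bytes(4, "big", signed=True))             # re-encrypt for write-back

uid = bytes.fromhex("04123456")
blob = toy_cipher(derive_card_key(uid), (1000).to_bytes(4, "big", signed=True))    # card holds $10.00
blob = gate_cycle(uid, blob, 135)                                                  # gate charges $1.35
print(int.from_bytes(toy_cipher(derive_card_key(uid), blob), "big", signed=True))  # 865, i.e. $8.65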

these transit cards can be purely memory with no on-chip intelligence or processing.

old post mentioning iso 14443 (and other stuff)
https://www.garlic.com/~lynn/2004h.html#30 ECC Encryption

i had a weird experience with a wash dc metro card a couple years ago ... i had left a metro station with something like (positive) $10 still on the card ... and the next time I tried to use the card, the reader claimed there was a negative $5 balance (while outside the transit system, the card had lost $15 and actually gone $5 negative w/o being used)

a lot of RFID started out being next generation barcode; just read the number ... a lot more digits allowing unique chip identification down to the individual item level (rather than just vendor and product) and being able to take inventory w/o having to manually count each individual item. a big driver recently has been walmart mandating them from suppliers. they would like to get these chips/technology into the penny range (or even less; along with new & less expensive methods of producing the RFID signal w/o necessarily using a traditional chip fabrication process)
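
a rough illustration of the "more digits" point (python; the field layout is loosely modeled on the gs1 sgtin idea and is not an exact encoding):

def upc_style(vendor, product):
    # barcode-style: every unit of the same product carries the same number
    return vendor + product

def epc_style(vendor, product, serial):
    # rfid/epc-style: the extra serial digits identify the individual item,
    # so inventory can be taken by reading tags rather than counting boxes
    return "%s.%s.%012d" % (vendor, product, serial)

print(upc_style("012345", "67890"))        # same value on every unit of the product
print(epc_style("012345", "67890", 1))     # unit #1
print(epc_style("012345", "67890", 2))     # unit #2 is distinguishable from unit #1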

with (pure barcode) RFID technology becoming more prevalent, there are other applications trying to leverage it.

a post with a lot of news URLs regarding RFID and passports
https://www.garlic.com/~lynn/aadsm25.htm#11 And another cloning tale

one of the objectives for the aads chip strawman was to be able to do ecdsa processing within transit gate iso 14443 requirements
https://www.garlic.com/~lynn/x959.html#aadsstraw

other references to aads technology and patents
https://www.garlic.com/~lynn/x959.html#aads

some other posts mentioning contactless/proximity
https://www.garlic.com/~lynn/aadsm22.htm#40 FraudWatch - Chip&Pin, a new tenner (USD10)
https://www.garlic.com/~lynn/aadsm22.htm#45 Court rules email addresses are not signatures, and signs death warrant for Digital Signatures
https://www.garlic.com/~lynn/aadsm23.htm#56 UK Detects Chip-And-PIN Security Flaw
https://www.garlic.com/~lynn/aadsm24.htm#1 UK Detects Chip-And-PIN Security Flaw
https://www.garlic.com/~lynn/aadsm24.htm#2 UK Banks Expected To Move To DDA EMV Cards
https://www.garlic.com/~lynn/aadsm24.htm#5 New ISO standard aims to ensure the security of financial transactions on the Internet
https://www.garlic.com/~lynn/aadsm24.htm#7 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#8 Microsoft - will they bungle the security game?
https://www.garlic.com/~lynn/aadsm24.htm#27 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#28 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#30 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm25.htm#1 Crypto to defend chip IP: snake oil or good idea?
https://www.garlic.com/~lynn/aadsm25.htm#8 smart cards with displays - at last!
https://www.garlic.com/~lynn/aadsm25.htm#24 DDA cards may address the UK Chip&Pin woes

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sat, 23 Sep 2006 10:43:30 -0600
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
I don't think that applies: modern desktop CPUs have all the features of high end mainframes and some supercomputer features; some instructions execute in 0.25ns, they support GB memory so need 3 level cache, Gb/s networking and I/O; hardware multi-threading because they can't keep the CPUs busy; and multi-CPUs per die because faster clocks aren't giving enough thruput increase.

the other issue with faster clocks is that latency across the chip is becoming significant. with multiple CPUs on the same chip ... you can reduce the distance (and therefore time) that a synchronous signal has to travel.

also, as chip sizes remained somewhat the same ... while the circuit sizes shrank ... you had significantly more circuits per chip. you could use the additional circuits for multiple cores ... but you could also use the circuits for on-chip caches. you could have dedicated on-chip "L1" caches per cpu core ... and a shared on-chip "L2" cache for all cpu cores on the same chip. That means that any off-chip cache becomes "L3".

the modern out-of-order execution is at least the equivalent of anything that the 370/195 (supercomputer) had ... and there is also branch prediction, speculative execution (down the predicted branch path) and instruction nullification/abrogation (when the prediction is wrong) ... which the 370/195 didn't have.

the out-of-order execution helps with latency compensation (i.e. when one instruction is stalled on some fetch operation ... execution of other instructions may proceed somewhat independently). multi-threaded operation was also a form of latency compensation ... trying to keep the execution units filled with independent work/instructions.

the 370/195 did allow concurrent execution of instructions in the pipeline ... but branches would drain/stall processing. i had gotten involved in a program to add multi-threading to a 370/195, i.e. dual instruction streams; registers and instructions in the pipeline had a one-bit tag identifying which instruction stream they belonged to (but not otherwise increasing the hardware or execution units). however, this project never shipped a product.

this was based on the peak thruput of the 370/195 being around ten mips ... but that required careful management of branches ... most codes ran at five mips (because of the frequent branches that drained the pipeline). dual i-streams (running at five mips each) had a chance of keeping the "ten mip" peak execution units busy.
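
a back-of-envelope restatement of that argument (python, just the numbers from above):

peak_mips = 10.0        # 370/195 execution units with carefully managed branches
typical_mips = 5.0      # what most codes actually saw, with branches draining the pipeline
i_streams = 2           # proposed dual instruction streams

print(min(i_streams * typical_mips, peak_mips))   # 10.0 -- two 5-mip streams could saturate the peak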

misc. past posts mentioning the 370/195 dual i-stream effort:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#19 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#29 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#10 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006m.html#51 The System/360 Model 20 Wasn't As Bad As All That

Trying to design low level hard disk manipulation program

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Trying to design low level hard disk manipulation program
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 23 Sep 2006 13:13:37 -0600
Bill Todd <billtodd@metrocast.net> writes:
Many industrial-strength file systems (including the major Unix variants, NTFS, VMS's ODS-2/5...) have a level of indirection between a file's directory entry and the file's data that FAT lacks (the on-disk inode in Unix, the MFT entry in NTFS, the index file entry in ODS2/5). Thus at each stage of a path look-up (directory -> subdirectory -> subdirectory... -> file), there's an extra disk access (unless the inode-like structure happens to be cached) before you can get to the data at the next level.

the MFD (master file directory) dates from the mid-60s cms filesystem ... which went through some number of incremental improvements and then the enhanced/extended (EDF) filesystem in the mid-70s.

the original mid-60s implementation supported sparse files ... so there were various null pointers for indirect hyperblocks and datablocks that didn't actually exist.

one of the unofficial early 70s incremental improvements to the cms filesystem was having the directory file block pointer point directly at the data block ... instead of at an indirect hyperblock ... for a file that had only one data block (for small files, instead of having a minimum of two blocks, one indirect hyperblock and one data block, it would just have the single data block). another unofficial early/mid 70s incremental improvement was various kinds of data compression. I think both of these were originally done by perkin-elmer and made available on the share waterloo tape. there were some performance measurements for the p/e compression changes ... showing that the filesystem overhead to compress/decompress the data in the file was frequently more than offset by the reduction in cpu overhead for reading/writing the physical blocks to/from disk.

one of the things that the mid-70s EDF extensions brought to the cms filesystem was multiple logical block sizes (1k, 2k, & 4k) and more levels of indirect hyperblocks ... supporting up to five levels of indirection for large files ... i.e. in a 4k filesystem, a single hyperblock held up to 1024 four-byte data block pointers. a two-level arrangement had the top hyperblock pointing to up to 1024 first-level hyperblocks, each of which then pointed to up to 1024 4k data blocks. as a file grew, the filesystem could transition to higher levels of hyperblock indirection.
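
a small sketch of how those levels multiply out (python, using the 4k case from above; illustrative only, not the actual EDF code):

BLOCK = 4096
PTRS_PER_HYPERBLOCK = BLOCK // 4          # 1024 four-byte pointers per 4k hyperblock

def max_file_bytes(levels):
    # levels=0 is the small-file case: the directory points straight at the single data block
    return BLOCK * (PTRS_PER_HYPERBLOCK ** levels)

for lvl in range(4):
    print(lvl, max_file_bytes(lvl))       # 4KB, 4MB, 4GB, 4TB

def resolve(logical_block, levels):
    # index taken at each hyperblock level on the way down to the data block
    return [(logical_block // (PTRS_PER_HYPERBLOCK ** lvl)) % PTRS_PER_HYPERBLOCK
            for lvl in reversed(range(levels))]

print(resolve(1500, 2))                   # [1, 476]: slot 1 in the top hyperblock, slot 476 below it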

in the early 70s, i had done a page-mapped layer for the original (cp67) cms filesystem ... and then later upgraded the EDF filesystem (cp67 having by that time morphed into vm370) to also support the page-mapped layer construct
https://www.garlic.com/~lynn/submain.html#mmap

there is some folklore that various pieces of ibm/pc and os2 filesystem characteristics were taken from cms. note also that both unix and cms trace some common heritage back to ctss.

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 10:27:31 -0600
krw <krw@att.bizzzz> writes:
Yeah, right. That must be why IBM's internal network was bigger than the ARPAnet until '85 or so (queue LynnW).

:)

part of the issue was that the official/strategic communication product was SNA ... which effectively had a large master/slave paradigm supporting a mainframe controlling tens of thousands of (dumb) terminals (there were jokes about sna not being a system, not being a network, and not being an architecture).

the internal network was not SNA ...
https://www.garlic.com/~lynn/subnetwork.html#internalnet

misc. recent threads discussing the announcement of the 1000th node on the internal network
https://www.garlic.com/~lynn/2006e.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#3 Arpa address
https://www.garlic.com/~lynn/2006k.html#8 Arpa address
https://www.garlic.com/~lynn/2006k.html#43 Arpa address

and a reference to the approx. size of the internet/arpanet in the same timeframe (possibly as low as 100 and at most around 250)
https://www.garlic.com/~lynn/2006k.html#40 Arpa address

in the very early sna days, my wife had co-authored a (competitive) peer-to-peer architecture (AWP39). she then went on to do a stint in POK responsible for loosely-coupled architecture (aka mainframe cluster) where she created the Peer-Coupled Shared Data architecture ... which, except for IMS hot-standby, didn't see a lot of uptake until parallel sysplex
https://www.garlic.com/~lynn/submain.html#shareddata

there were some number of battles with the communication group attempting to enforce the "strategic" communication solution for all environments (even as things started to move away from the traditional tens of thousands of dumb terminals controlled by a single mainframe). san jose research had an eight-way 4341 cluster project using trotter/3088 (effectively an eight-channel processor-to-processor switch) that they wanted to release. in the research version using a non-sna protocol ... a full cluster synchronization function took something under a second elapsed time. they were forced to migrate to an sna (vtam) based implementation, which inflated the elapsed time to over half a minute. a recent reference to the early days of the project:
https://www.garlic.com/~lynn/2006p.html#39 "25th Anniversary of the Personal Computer"

another situation was that terminal emulation contributed to the early heavy uptake of PCs in the business environment. you could get a PC with dumb terminal emulation AND some local computing capability in a single desktop footprint, for about the same price as the 327x terminal that it would replace. later, as PC programming became more sophisticated, there were numerous efforts to significantly improve the protocol paradigm between the desktop and the glasshouse. however, all of these bypassed the sna communication infrastructure and the installed terminal controller product base.
https://www.garlic.com/~lynn/subnetwork.html#emulation

the limitations of terminal emulation later contributed heavily to data from the glasshouse being copied out to local harddisks (either on local servers or on the desktop itself). this continued leakage was the basis of some significant infighting between the disk product group and the communication product group. the disk product group had come up with a number of products that went a long way toward correcting the terminal emulation limitations ... but the communication product group continually blocked their introduction (claiming that they had strategic product responsibility for anything that crossed the boundary between the glasshouse and the external world).

at one point (in the late 80s) a senior person from the disk product group got a talk accepted at the communication product group's worldwide, annual internal conference. his opening deviated from what was listed for the talk, starting out by stating that the head of the communication product group was going to be responsible for the demise of the (mainframe) disk product group. as somewhat unrelated topic drift, misc. collected posts mentioning work with bldg. 14 (disk engineering) and bldg. 15 (disk product test)
https://www.garlic.com/~lynn/subtopic.html#disk

we were also doing a high-speed data transport project (starting in the early 80s)
https://www.garlic.com/~lynn/subnetwork.html#hsdt

a recent posting somewhat contrasting hsdt and sna
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture

the late 80s was also the period when we had started pitching 3-tier to customer executives.
https://www.garlic.com/~lynn/subnetwork.html#3tier

we had sort of melded work that had been going on for 2-tier mainframe/PC and for 2-tier glasshouse/departmental computing (4341). a few recent postings
https://www.garlic.com/~lynn/2006p.html#34 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#35 Metroliner telephone article
https://www.garlic.com/~lynn/2006p.html#36 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#39 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#40 "25th Anniversary of the Personal Computer"

however, this was also during the period that the communication product group was attempting to stem the tide away from terminal emulation with SAA (and we would take some amount of heat from the SAA forces).

part of our 3-tier effort was then forked off into ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

and an oft-repeated specific posting
https://www.garlic.com/~lynn/95.html#13

for other drift ... a side effort for hsdt in the mid-80s ... was attempting to take some technology that had originally been developed at one of the baby bells and ship it as an official product. this had a lot of SNA emulation stuff at the boundaries talking to mainframes. SNA had evolved something called cross-domain ... where a mainframe that didn't directly control a specific terminal ... could still interact with a terminal "owned" by some other mainframe. the technology would tell all the boundary mainframes that (all) the terminals were owned by some other mainframe. In actuality, the internal infrastructure implemented a highly redundant peer-to-peer infrastructure ... and then just regressed to SNA emulation talking to the boundary mainframes.
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 11:11:00 -0600
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
IBM VM systems never had problems talking to each other, other IBM systems, the ARPAnet, or Internet. AFAIK DEC systems only supported RJE not NJE, and unlikely ever supported NJE over SNA. NJE was the JES-JES (JES2/HASP, JES3/ASP) internode protocol for file transfer which could cause system crashes if either end was improperly configured.

re:
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?

it was worse than that ... NJE grew up out of HASP networking, some amount of it having been done at TUCC. HASP had a one-byte index into a table of 255 pseudo (spooled) devices that it used to implement local spooling. the original networking support scavenged unused entries from that table to define networking nodes. a typical HASP node might have 60-80 pseudo devices defined ... leaving a maximum of 170-190 entries for defining networking nodes. hasp/jes also would trash any traffic where either the originating node or the destination node wasn't defined in the local table. the internal network fairly quickly exceeded 255 nodes
https://www.garlic.com/~lynn/subnetwork.html#internalnet

limiting hasp/jes to use as a boundary node (it was pretty useless as an intermediate node, since it would trash some percentage of the traffic flowing through). at some point, NJE increased the maximum network size to 999 ... but that was after the internal network was already over 1000 nodes (again creating network operational problems if JES was used as anything other than a purely boundary node).
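
a toy model of that table constraint (python; the node names and exact counts are made up for illustration):

TABLE_SLOTS = 255                          # one-byte index
local_pseudo_devices = 70                  # typical 60-80 per the above
max_network_nodes = TABLE_SLOTS - local_pseudo_devices
print(max_network_nodes)                   # ~185 definable network nodes

defined_nodes = {"NODE%03d" % i for i in range(max_network_nodes)}

def forward(origin, destination):
    # an intermediate hasp/jes node trashes traffic whose origin or destination
    # isn't defined in its local table
    return origin in defined_nodes and destination in defined_nodes

print(forward("NODE001", "NODE100"))       # True ... gets through
print(forward("NODE001", "NODE300"))       # False ... silently dropped once the network outgrows the table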

the other problem was that the NJE protocol confused the header fields ... intermingling networking stuff with purely local stuff. not only would misconfigured hasp/jes systems crash other hasp/jes systems ... but it was possible for two different systems (properly configured) at slightly different release levels (with slightly different header formats) to crash each other. there was an infamous scenario where a system in san jose was causing systems in hursley to crash.

as a result, there was a body of technology that grew up in VM networking nodes for simulating NJE. there was a whole library of NJE drivers for various versions and releases of hasp/jes. A VM simulated NJE driver would be started for the specific boundary hasp/JES that it was talking to.

incoming traffic from a boundary NJE node would be taken and effectively translated into a generalized canonical format. outgoing traffic to a boundary NJE node would have its header formatted for the specific hasp/jes release/version. all of this was a countermeasure to keep the wide variety of different hasp/jes systems around the world from crashing each other.
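
a minimal sketch of that canonical-format gateway idea (python; the header field names and release tags are invented for illustration and are not actual NJE fields):

CANONICAL_FIELDS = ("origin", "destination", "spool_class", "record_count")

def canonicalize(raw):
    # incoming: keep only the network-level fields, dropping anything release/local specific
    return {f: raw.get(f) for f in CANONICAL_FIELDS}

def emit_for(canon, peer_release):
    # outgoing: re-shape the canonical header for the specific hasp/jes release at the boundary
    msg = dict(canon)
    if peer_release == "jes2-r4":          # hypothetical older release with a shorter header
        msg.pop("record_count", None)
        msg["local_flags"] = 0
    return msg

canon = canonicalize({"origin": "SJRLVM1", "destination": "HURSLEY1",
                      "spool_class": "A", "record_count": 12,
                      "local_junk": 255})
print(emit_for(canon, "jes2-r4"))          # header re-shaped for the boundary node's release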

misc. other hasp/jes related posts
https://www.garlic.com/~lynn/submain.html#hasp

another characteristic was that the native VM drivers tended to have much higher thruput and efficiency than the NJE protocol. however, at some point (possibly for strategic corporate compatibility purposes) they stopped shipping the native VM drivers ... and only shipped NJE drivers for VM networking.

at some point I believe the bitnet/earn network was also larger than the arpanet/internet
https://www.garlic.com/~lynn/subnetwork.html#bitnet

bitnet was a US educational network using the vm networking technology (however, as mentioned, eventually only NJE drivers were shipped in the vm product). while the internal network and bitnet used similar technologies ... the sizes of the respective networks were totally independent.

earn was the european flavor of bitnet. for some drift, an old post mentioning founding/running earn
https://www.garlic.com/~lynn/2001h.html#65 UUCP email

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 11:28:42 -0600
re:
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#5 Was FORTRAN buggy?

as mentioned before, we put up a HSDT high-speed backbone
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and i had done mainframe tcp/ip drivers supporting RFC1044.
https://www.garlic.com/~lynn/subnetwork.html#1044

at the time, the standard mainframe tcp/ip support got about 44kbytes/sec aggregate thruput while burning approx. a full 3090 processor. in some rfc 1044 tuning tests at cray research, we were seeing 1mbyte/sec sustained thruput between a cray and a 4341-clone ... using only a modest amount of the 4341-clone processor (nearly two orders of magnitude improvement in bytes per cpu second).
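
rough arithmetic behind the "two orders of magnitude" claim (python; the fraction of the 4341-clone actually consumed isn't stated above, so the 25% figure is an assumption ... and a 4341 was a much smaller processor than a 3090 to begin with, which only makes the comparison more lopsided):

base_bytes_per_sec    = 44 * 1024          # ~44kbytes/sec, burning ~1.0 of a 3090 processor
base_cpu_fraction     = 1.0
rfc1044_bytes_per_sec = 1024 * 1024        # ~1mbyte/sec sustained
rfc1044_cpu_fraction  = 0.25               # assumed "modest amount" of the 4341-clone

base_rate    = base_bytes_per_sec / base_cpu_fraction
rfc1044_rate = rfc1044_bytes_per_sec / rfc1044_cpu_fraction
print(round(rfc1044_rate / base_rate))     # ~93x ... close to two orders of magnitude in bytes per cpu second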

also for the original NSFNET backbone RFP (effectively the operational networking precursor to the modern internet), we weren't allowed to bid. However, my wife went to the director of NSF and got a technical audit of what we were running. one of the conclusions was effectively that what we already had running was at least five years ahead of all bid submissions (to build something new).

random past reference
https://www.garlic.com/~lynn/internet.htm#0

reference to the nsfnet backbone rfp
https://www.garlic.com/~lynn/internet.htm#nsfnet

copy of NSFNET backbone RFP announcement
https://www.garlic.com/~lynn/2002k.html#12

reference to award announcement
https://www.garlic.com/~lynn/2000e.html#10

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 14:31:09 -0600
vjp2.at writes:
I didn't see networked IBM big iron before the late 80s. I saw DEC ethernet/telnet as early as 79.

If someone sent me a tape, I sent it back and told them to send me punched cards. Tapes were never compatible. There was always something different about each one.

We had a hard enough time moving stuff between VM and Wylbur on the same IBM mainframe.

Then again the control freaky DP dudes would ban full screen editors on production machines because they slowed them down.


re:
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#5 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#6 Was FORTRAN buggy?

recent post
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"

with respect to the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

repeat from the recent post:

one of the rex historical references (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20050309184016/http://www.computinghistorymuseum.org/ieee/af_forum/read.cfm?forum=10&id=21&thread=7

from above:
By far the most important influence on the development of Rexx was the availability of the IBM electronic network, called VNET. In 1979, more than three hundred of IBM's mainframe computers, mostly running the Virtual Machine/370 (VM) operating system, were linked by VNET. This store-and-forward network allowed very rapid exchange of messages (chat) and e-mail, and reliable distribution of software. It made it possible to design, develop, and distribute Rexx and its first implementation from one country (the UK) even though most of its users were five to eight time zones distant, in the USA.

... snip ...

and from earlier post:
https://www.garlic.com/~lynn/2006k.html#40 Arpa address

repeat from the above post ... in mid-1980, arpanet was hoping to have 100 nodes by 1983 (the year that the internal network hit the 1000th node mark):


ARPANET newsletter
ftp://ftp.rfc-editor.org/in-notes/museum/ARPANET_News.mail

from above:

NEWS-1                                                   DCA Code 531
1 July 1980                                           (DCACODE535@ISI)
                                                        (202) 692-6175

ARPANET NEWSLETTER

---------------------------------------------------------------------

Over the past eleven years, the ARPANET has grown considerably and has
become the major U. S. Government research and development
communications network.  The ARPANET liaisons have made significant
contributions to the network's success.  Your efforts are voluntary,
but are critical to successful operation of each Host, IMP, and TIP.
Your continued support of the ARPANET is greatly appreciated and will
facilitate continued smooth ARPANET operation.

To aid you in performance of your duties, DCA will attempt to provide
you with the latest information in network improvements.  This
information is grouped into two major areas: management and technical
improvements.  However, a brief discussion of where we are going with
the ARPANET is in order.

The ARPANET is still a rapidly growing network.  It provides a service
which is both cost and operationally effective.  We predict the
ARPANET will grow to approximately 100 nodes by 1983, when we
will begin transferring some of the subscribers to DOD's AUTODIN II
network.

... snip ...

should program call stack grow upward or downwards?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: should program call stack grow upward or downwards?
Newsgroups: comp.arch,comp.lang.c,alt.folklore.computers
Date: Sun, 24 Sep 2006 15:24:41 -0600
gordonb.6hiy2@burditt.org (Gordon Burditt) writes:
OS/360 used a linked list of "save areas" containing saved registers, return addresses, and if desired, local variables. (Now, granted, when I was working with it, C didn't exist yet, or at least it wasn't available outside Bell Labs.) Reentrant functions (required in C unless the compiler could prove it wasn't necessary) would allocate a new save area with GETMAIN and free it with FREEMAIN. Non-reentrant functions would allocate a single static save area.

minor note ... the savearea allocation was the responsibility of the calling program ... but the saving of registers was the responsibility of the called program ... i.e. on program entry, the instruction sequence was typically:

STM   14,12,12(13)

i.e. "store multiple" registers 14,15,0,...,12 ... starting at (decimal) 12 offset from location pointed to by register 13.

for more detail ... i've done a q&d conversion of the old ios3270 green card to html ... a more detailed discussion of call/save/return conventions can be found at:
https://www.garlic.com/~lynn/gcard.html#50

the called program only needed a new save area if it would, in turn, call some other program. non-reentrant programs (that called other programs) could allocate a single static savearea. only when you had reentrant programs that also called other programs ... was there an issue regarding dynamic save area allocation.
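
a toy model of the convention (python, purely illustrative ... the real thing is an 18-word save area manipulated by assembler, not objects): the caller supplies the area that R13 points at, the callee's STM saves the caller's registers into it, and a fresh area is obtained and chained only because the routine may itself call out.

class SaveArea:
    def __init__(self, back=None):
        self.back = back            # back-chain pointer to the caller's save area
        self.saved_regs = None      # R14,R15,R0..R12 as stored by STM 14,12,12(13)

def call(routine, callers_area):
    callers_area.saved_regs = "R14,R15,R0..R12"    # callee prologue: what STM 14,12,12(13) does
    # give the routine a fresh area chained back to the caller's, in case it calls out
    # (GETMAIN for reentrant code, a single static area for non-reentrant code)
    return routine(SaveArea(back=callers_area))

def leaf(r13):
    return 42                       # a leaf routine needs no further allocation

def outer(r13):
    return call(leaf, r13)

print(call(outer, SaveArea()))      # 42 ... save areas form a back-chained list, not a push-down stack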

the original cp67 kernel had a convention that was somewhat more like a stack. it had a contiguous subpool of 100 save areas. all module call/return linkages were via supervisor call. it was the responsibility of the supervisor call routine to allocate/deallocate a savearea for the call.

an aside, cp67 and unix can trace somewhat common heritage back to ctss, i.e. cp67 work was done at the science center on the 4th flr of 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

including some people that had worked on ctss. multics was on the 5th flr of 545 tech sq ... and also included some people that had worked on ctss.

as i was doing various performance and scale-up work on cp67 ... i made a number of changes to the cp67 calling conventions.

for some number of high-use non-reentrant routines (that didn't call any other routines), i changed the calling sequence from supervisor call to a simple "branch and link register" ... and then used a static area for saving registers. for some number of high-use common library routines ... the supervisor call linkage had a higher pathlength than the function being called ... so the switch to the BALR call convention for these routines significantly improved performance.

the other problem found with increasing load ... was that it became more and more frequent that the system would exhaust the pool of 100 kernel save areas (which caused it to abort). i redid the logic so that it could dynamically increase and decrease the pool of save areas ... significantly reducing system failures under heavy load.
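
a minimal sketch of that kind of grow/shrink pool (python, illustrative only ... the cp67 code was of course assembler operating on real storage, and the thresholds here are made up):

class Pool:
    def __init__(self, initial=100, low_water=50):
        self.free = [bytearray(72) for _ in range(initial)]   # 18-word (72-byte) save areas
        self.low_water = low_water

    def get(self):
        if not self.free:                        # old behavior: abort; new behavior: grow the pool
            self.free.extend(bytearray(72) for _ in range(16))
        return self.free.pop()

    def release(self, area):
        self.free.append(area)
        if len(self.free) > 4 * self.low_water:  # trim back when clearly over-provisioned
            del self.free[2 * self.low_water:]

pool = Pool()
areas = [pool.get() for _ in range(250)]         # burst well beyond the original 100
for a in areas:
    pool.release(a)                              # pool shrinks back as the load recedes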

there was a subsequent generalized subpool enhancement for cp67 kernel dynamic storage management ... which also contributed significantly to decreasing kernel overhead.

article from that work
Analysis of Free-storage Algorithms, B. Margolin, et al., IBM Systems Journal v10n4, 283-304, 1971

and from the citation site:
http://citeseer.ist.psu.edu/context/418230/0

misc. past postings mentioning cp67 kernel generalized subpool work:
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000d.html#47 Charging for time-share CPU time
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002h.html#87 Atomic operations redux
https://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004h.html#0 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2006e.html#40 transputers again was: The demise of Commodore
https://www.garlic.com/~lynn/2006j.html#21 virtual memory
https://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS?

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 17:11:08 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
part of the issue was that the official/strategic communication product was SNA ... which had effectively large master/slave paradigm in support of mainframe controlling tens of thousands of (dumb) terminals (there were jokes about sna not being a system, not being a network, and not being an architecture).

re:
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?

oh and some of the dumb terminals weren't necessarily so dumb ... there were also things like huge numbers of ATM (automatic teller, aka cash) machines

for other drift ... recent post mentioning early work at los gatos lab on cash machines
https://www.garlic.com/~lynn/2006q.html#5 Materiel and graft

for sna dumb terminal drift
http://www.enterasys.com/solutions/success/commercial/unitedairlines.pdf

from above:
The original United network environment consisted of approximately 20,000 dumb terminals connected to three separate networks: an SNA-based network connecting into IBM mainframes for business applications; a Unisys-based network whose processors did all of the operational types of programs for the airline such as crew and flight schedules and aircraft weights and balance; and the Apollo network, which connected users to the airline's reservation system for all passenger information, seat assignments, etc. That meant that for every airport that United flew into, it had to have three separate telephone circuits--one for each network. According to Ken Cieszynski, United's senior engineer in Networking Services, it was a very costly, cumbersome and labor-intensive system for operating and maintaining a business.

... snip ...

my wife was in conflict with the SNA group from early on ... having co-authored the (competitive) AWP39 peer-to-peer networking architecture during the early days of SNA, then doing battle with them when she was in POK responsible for loosely-coupled (cluster mainframe) architecture,
https://www.garlic.com/~lynn/submain.html#shareddata

and then later when we were out pushing 3-tier architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier

along the way she also did a short stint as chief architect for amadeus ... where she got into trouble backing an x.25 based network design as an alternative to an SNA based network implementation.

misc. past posts mentioning amadeus
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2006o.html#4 How Many 360/195s and 370/195s were shipped?

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Sun, 24 Sep 2006 18:35:20 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
for sna dumb terminal drift
http://www.enterasys.com/solutions/success/commercial/unitedairlines.pdf

from above:

The original United network environment consisted of approximately 20,000 dumb terminals connected to three separate networks: an SNA-based network connecting into IBM mainframes for business applications; a Unisys-based network whose processors did all of the operational types of programs for the airline such as crew and flight schedules and aircraft weights and balance; and the Apollo network, which connected users to the airline's reservation system for all passenger information, seat assignments, etc. That meant that for every airport that United flew into, it had to have three separate telephone circuits--one for each network. According to Ken Cieszynski, United's senior engineer in Networking Services, it was a very costly, cumbersome and labor-intensive system for operating and maintaining a business.


... snip ...

and of course the apollo system was also ibm mainframe ... acp (airline control program) which morphed into TPF (transaction processing facility). a few references
http://www.blackbeard.com/tpf/tpfscoop.htm
https://en.wikipedia.org/wiki/Computer_reservations_system
http://www.eastmangroup.com/otwc/otwc~jun2006.html
http://www.prnewswire.com/cgi-bin/micro_stories.pl?ACCT=121034&TICK=GAL&STORY=/www/story/04-04-2000/0001181634&EDATE=Apr+4,+2000
http://www.answers.com/topic/sabre-computer-system
http://www.everything2.com/index.pl?node=GRS
http://www.cwhonors.org/laureates/Business/20055186.pdf
http://www.intervistas.com/4/presentations/orbitzfinalbook1.pdf

and
http://www.computerworld.com/managementtopics/outsourcing/story/0,10801,63472,00.html

from above:
IBM helped build the transaction processing facility (TPF) for American Airlines Inc. in the late 1950s and early 1960s that would become the Sabre global distribution system (GDS). IBM built a similar TPF system for Chicago-based United Air Lines Inc. That system later became the Apollo GDS.

... snip ...

galileo/apollo history
http://www.galileo.com/galileo/en-gb/about/History/

for other drift about airline systems
https://www.garlic.com/~lynn/2006j.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#7 Impossible Database Design?
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006n.html#16 On the 370/165 and the 360/85
https://www.garlic.com/~lynn/2006o.html#4 How Many 360/195s and 370/195s were shipped?
https://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
https://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006q.html#23 3 value logic. Why is SQL so special?
https://www.garlic.com/~lynn/2006q.html#29 3 value logic. Why is SQL so special?

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 09:25:18 -0600
KR Williams <krw@att.bizzzz> writes:
You were blind. We were "widebanding" tape images for ICs and circuit cards by the time I got there in the mid '70s. They were called RITs (Release Interface Tapes), though they never existed as mag tapes. Lynn talked about VNET before that. By the early '80s conferencing (similar function as the USENET) appeared. The IBMPC conferencing disk opened in '81, IIRC.

one of the uses of the hsdt high-speed backbone
https://www.garlic.com/~lynn/subnetwork.html#hsdt

was shipping chip designs off to the LSM (los gatos state machine ... or, for publication, the logic simulation machine; san jose bldg. 29) and EVE (endicott validation engine; there was one in san jose bldg. 86, where disk engineering had been moved to an offsite location while bldg. 14 was getting its seismic retrofit) for logic verification. there was a claim that this helped contribute to bringing in the RIOS chipset (power) a year early.

i got blamed for some of that early conferencing ... doing a lot of the stuff semi-automated. there was even an article in datamation. there were then some number of internal corporate task forces to investigate the phenomenon. hiltz and turoff (network nation, addison-wesley, 1978) were brought in as consultants for at least one of the task force investigations. then a consultant was paid to sit in the back of my office for nine months, taking notes on how i communicated ... they also had access to all my incoming and outgoing email as well as logs of all my instant messaging activity. besides an internal research report, it also (with some sanitizing) turned into a stanford phd thesis (joint between language and computer ai) ... some number of past posts mentioning computer mediated conversation (and/or the stanford phd thesis on how i communicate)
https://www.garlic.com/~lynn/subnetwork.html#cmc

the ibmvm conferencing "disk" opened first ... followed by the ibmpc conferencing "disk". the facility (TOOLSRUN) was somewhat a cross between usenet and listserv (a recipient could specify a configuration that worked either way). you could specify recipient options that worked like listserv. however, you could also install a copy of TOOLSRUN on your local machine ... and set up an environment that operated more like usenet (with a local repository).

these discussions somewhat mirrored the (purely) online conferencing that tymshare was providing to the IBM SHARE user group organization with online vmshare and (later) pcshare (typical access via tymshare's tymnet). ... misc. posts about (vm based) commercial timesharing services (including tymshare)
https://www.garlic.com/~lynn/submain.html#timeshare

vmshare archive:
http://vm.marist.edu/~vmshare/

misc. past references to "tandem memos" (the referenced early computer conferencing incident)
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#38 ibm time machine in new york times?
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2006h.html#9 It's official: "nuke" infected Windows PCs instead of fixing them
https://www.garlic.com/~lynn/2006l.html#24 Google Architecture
https://www.garlic.com/~lynn/2006l.html#51 the new math: old battle of the sexes was: PDP-1

...

misc. past posts mentioning "TOOLSRUN":
https://www.garlic.com/~lynn/2001c.html#5 what makes a cpu fast
https://www.garlic.com/~lynn/2002d.html#33 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2003i.html#18 MVS 3.8
https://www.garlic.com/~lynn/2004o.html#48 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005q.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#22 z/VM Listserv?
https://www.garlic.com/~lynn/2006h.html#9 It's official: "nuke" infected Windows PCs instead of fixing them

...

misc. past posts mentioning LSM, EVE (and/or YSE)
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2003.html#31 asynchronous CPUs
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003o.html#38 When nerds were nerds
https://www.garlic.com/~lynn/2004j.html#16 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#65 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#17 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2006q.html#42 Was FORTRAN buggy?

Trying to design low level hard disk manipulation program

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Trying to design low level hard disk manipulation program
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 25 Sep 2006 09:52:55 -0600
Bill Todd <billtodd@metrocast.net> writes:
But it is indeed a gray area as soon as one introduces the idea of a CopyFile() operation (that clearly needs to include network copying to be of general use). The recent introduction of 'bundles' ('files' that are actually more like directories in terms of containing a hierarchical multitude of parts - considerably richer IIRC than IBM's old 'partitioned data sets') as a means of handling multi-'fork' and/or attribute-enriched files in a manner that simple file systems can at least store (though applications then need to understand that form of storage to handle it effectively) may be applicable here.

re:
https://www.garlic.com/~lynn/2006r.html#3 Trying to design low level hard disk manipulation program

we had somewhat stumbled across file bundles (based on use, not necessarily any filesystem structure organization) in the work that started out doing traces of all record accesses for i/o cache simulation (circa 1980).

the strict cache simulation work showed that partitioned caches (aka "local LRU") were always lower performance than a global cache (aka "global LRU"). for a fixed amount of electronic storage, a single global system i/o cache always had better thruput than partitioning the same amount of electronic storage between i/o channels, disk controllers, and/or individual disks (modulo a track cache for rotational delay compensation).
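
a tiny trace-driven illustration of that comparison (python; the synthetic trace and slot counts are made up ... the real work used full production record traces):

from collections import OrderedDict

def lru_misses(trace, slots):
    cache, misses = OrderedDict(), 0
    for key in trace:
        if key in cache:
            cache.move_to_end(key)               # hit: mark most recently used
        else:
            misses += 1
            cache[key] = None
            if len(cache) > slots:
                cache.popitem(last=False)        # evict least recently used
    return misses

# records keyed by (device, block); device 0 is much busier than device 1
trace = [(0, b % 30) for b in range(300)] + [(1, b % 5) for b in range(30)]
total_slots = 32
global_misses = lru_misses(trace, total_slots)
split_misses = (lru_misses([r for r in trace if r[0] == 0], total_slots // 2) +
                lru_misses([r for r in trace if r[0] == 1], total_slots // 2))
print(global_misses, split_misses)               # the partitioned version wastes slots on the idle device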

further work on the full record access traces started to show some amount of repeated patterns that tended to access the same collection of files. for this collection of data access patterns, rather than disk arm motion with various kinds of distribution ... there was very strong bursty locality. this led down the path of maintaining more detailed information about files and their usage for optimizing thruput (and layout).

earlier at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

we had done detailed page reference traces and cluster analysis in support of semi-automated program reorganization ... which was eventually released as the VS/REPACK product. the disk record i/o traces started down the path of doing something similar for filesystem organization/optimization.

i had done a backup/archive system that was used internally at a number of locations. this eventually morphed into a product called workstation datasave facility and then adsm. it was later renamed tsm (tivoli storage manager). this now supports bundles/containers for file storage management (i.e. collections of files that tend to have bursty locality of reference patterns)
https://www.garlic.com/~lynn/submain.html#backup

some number of other backup/archive and/or (hierarchical) storage management systems now also have similar constructs.

some recent posts that mention that i/o cache simulation work
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#18 how much swap size did you take?
https://www.garlic.com/~lynn/2006i.html#36 virtual memory
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006j.html#7 virtual memory
https://www.garlic.com/~lynn/2006j.html#14 virtual memory
https://www.garlic.com/~lynn/2006j.html#27 virtual memory
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006o.html#27 oops
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2006p.html#0 DASD Response Time (on antique 3390?)

some recent posts mentioning vs/repack activity
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
https://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006j.html#18 virtual memory
https://www.garlic.com/~lynn/2006j.html#22 virtual memory
https://www.garlic.com/~lynn/2006j.html#24 virtual memory
https://www.garlic.com/~lynn/2006l.html#11 virtual memory
https://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
https://www.garlic.com/~lynn/2006o.html#26 Cache-Size vs Performance

Was FORTRAN buggy?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 10:44:35 -0600
vjp2.at writes:
Maybe the problem was the pretentious creeps got drawn to being "safe" (F.U.D.) on IBM while the innovators were all on DEC.

i've commented before that there were more 4341s sold than vaxes into the same mid-range market segment (besides the large numbers deployed internally). there were numerous cases of customer orders in the hundreds at a time for departmental computing type operations. a posting giving an example:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

the mid-range then got hit in the mid-80s as that market segment started moving to workstations and larger PCs for servers and departmental computing.

to some extent the popular press seemed to focus on high-end mainframe iron doing commercial batch operations compared to some of the other vendors' offerings in the mid-range market segment (even tho boxes like the 4341 and 4331 were also extremely popular in that midrange market in the late 70s and early 80s).

a few old posts giving domestic and world-wide vax shipments:
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
https://www.garlic.com/~lynn/2006k.html#31 PDP-1

various recent posts mentioning the 2-tier/3-tier evolution in the mid-range market segment.
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006c.html#11 Mainframe Jobs Going Away
https://www.garlic.com/~lynn/2006c.html#26 Change in computers as a hobbiest
https://www.garlic.com/~lynn/2006i.html#21 blast from the past on reliable communication
https://www.garlic.com/~lynn/2006j.html#31 virtual memory
https://www.garlic.com/~lynn/2006k.html#9 Arpa address
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006k.html#25 Can anythink kill x86-64?
https://www.garlic.com/~lynn/2006k.html#42 Arpa address
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#7 Google Architecture
https://www.garlic.com/~lynn/2006l.html#35 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006l.html#38 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#34 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#35 Metroliner telephone article
https://www.garlic.com/~lynn/2006p.html#36 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#39 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#40 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006q.html#4 Another BIG Mainframe Bites the Dust
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?

50th Anniversary of invention of disk drives

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 11:11:12 -0600
hancock4 writes:
IBM announced the first disk drive 50 years ago. Modern computing would not exist without the economical random access memory afforded by the disk drive. Could you imagine loading a separate cassette tape every time you wanted to run a program or access a file? All on-line processing wouldn't exist since there'd be no way to locate and store information in real time.

Apparently this anniversary is a yawner. The 40th Anniv of S/360 got attention.


note that the san jose plant site, where all of this was done ... now belongs to hitachi. there used to be all sorts of stuff on various ibm san jose web sites about early activity ... but a lot of that seemed to go missing when the location changed hands.

can you imagine holding big festivities on a plant site that no longer belongs to you?

misc. posts mentioning san jose plant site is now hitachi
https://www.garlic.com/~lynn/2003d.html#9 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003i.html#25 TGV in the USA?
https://www.garlic.com/~lynn/2003n.html#39 DASD history
https://www.garlic.com/~lynn/2004e.html#12 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)

not 50 years ago ... but some amount of postings related to activity on the plant site 25-30 years ago
https://www.garlic.com/~lynn/subtopic.html#disk

during the early 80s there was some amount of friendly competition between the san jose storage business and the pok large mainframe business over which location was contributing the most to the bottom line (which had traditionally been pok, but there was a period where they were neck & neck ... and even quarters where san jose passed pok).

a lot of that has since gone by the wayside ... recent post mentioning a couple of the issues
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?

50th Anniversary of invention of disk drives

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 16:52:31 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
note that the san jose plant site, where all of this was done ... now belongs to hitachi. there used to be all sorts of stuff on various ibm san jose web sites about early activity ... but a lot of that seemed to go missing when the location changed hands.

can you imagine holding big festivities on a plant site that no longer belongs to you?


re:
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives

you might find the marketing department of a line of business taking a small part of its budget ... say several million to drop on a gala and press releases ... but since the original line of business has been sold off to somebody else ... it is hard to imagine who is likely to drop even a couple million on such an activity.

how many remember the "last great dataprocessing IT party" (article in usatoday)? ... ibm had taken the san jose coliseum ... brought in jefferson starship and all sorts of other stuff (gala for the rsa show). between the contracting/funding for the event and the actual event ... the responsible executive got totally different responsibilities ... but they allowed him to play the greeter (all dressed up in a tux) at the front door as you went in.

this has a copy (scroll to the right quite a bit, past the 2002 program, to the "RSA Conference 2000 IBM Gala Program") of the program for that gala event (if i still have mine someplace, maybe i can scan it) ...
http://www.joemonica.com/pages/print.html
https://web.archive.org/web/20040807023913/http://www.joemonica.com:80/pages/print.html

somebody's trip report
http://seclists.org/politech/2000/Jan/0058.html
other reference to the Gala
http://seclists.org/politech/2000/Jan/0054.html

IBM's gala at rsa '99 wasn't even remotely as extravagant (and only $250k) ... somebody's pictures:
http://pix.paip.net/Party/IBM99/

not sure whose budget you could get to drop even a measly $250k on the 50th disk anniversary.

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 18:27:58 -0600
William Hamblen <william.hamblen@earthlink.net> writes:
Banks that issued 30 year mortgages had to think about Y2K 36 years ago.

in the last half of the last century (as computerized dataprocessing proliferated), there were still quite a few people that had been born before 1900.

references to old "y2k"-like problems somebody reported back in the early 80s
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/99.html#233 Computer of the century
https://www.garlic.com/~lynn/2000.html#0 2000 = millennium?
https://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...

repeat of somebody's email

Date: 7 December 1984, 14:35:02 CST

1.In 1969, Continental Airlines was the first (insisted on being the first) customer to install PARS. Rushed things a bit, or so I hear. On February 29, 1972, ALL of the PARS systems canceled certain reservations automatically, but unintentionally. There were (and still are) creatures called "coverage programmers" who deal with such situations.

2.A bit of "cute" code I saw once operated on a year by loading a byte of packed data into a register (using INSERT CHAR), then used LA R,1(R) to bump the year. Got into a bit of trouble when the year 196A followed 1969. I guess the problem is not everyone is aware of the odd math in calendars. People even set up new religions when they discover new calendars (sometimes).

3.We have an interesting calendar problem in Houston. The Shuttle Orbiter carries a box called an MTU (Master Timing Unit). The MTU gives yyyyddd for the date. That's ok, but it runs out to ddd=400 before it rolls over. Mainly to keep the ongoing orbit calculations smooth. Our simulator (hardware part) handles a date out to ddd=999. Our simulator (software part) handles a date out to ddd=399. What we need to do, I guess, is not ever have any 5-week long missions that start on New Year's Eve. I wrote a requirements change once to try to straighten this out, but chickened out when I started getting odd looks and snickers (and enormous cost estimates).


... snip ... top of post, old email index
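
the "196A" oddity in item 2 is just binary arithmetic applied to a decimal-coded byte. a small sketch (python standing in for the INSERT CHARACTER / LA R,1(R) sequence, not the original code) shows the low digit rolling from 9 to hex A instead of carrying into the tens digit:

# sketch of the "196A" bug from item 2 above: the low two digits of the
# year held as a packed-decimal byte (0x69 for '69'), bumped with a plain
# binary add (the LA R,1(R) trick) and no decimal correction
year_byte = 0x69                  # packed decimal digits '6' and '9'
bumped = (year_byte + 1) & 0xFF   # binary add turns 0x69 into 0x6A
print(f"19{bumped >> 4:X}{bumped & 0xF:X}")     # prints 196A

# a decimal-correct bump carries properly: 0x69 + 1 -> 0x70
lo, hi = (year_byte & 0xF) + 1, year_byte >> 4
if lo > 9:
    lo, hi = 0, hi + 1
print(f"19{hi:X}{lo:X}")                        # prints 1970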

this was computer conferencing supported with TOOLSRUN technology mentioned in recent post
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?

Greatest Software Ever Written?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Greatest Software Ever Written?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 20:48:07 -0600
re:
https://www.garlic.com/~lynn/2006r.html#1 Greatest Software Ever Written?

recent electronic product code (EPC) news item ... aka next generation product barcodes ...

Pfizer to Use RFID to Combat Fake Viagra
http://www.technewsworld.com/story/53218.html

from above ...
Pfizer claims it is the first pharmaceutical company with a program of this type, focused on EPC authentication as a means of deterring counterfeiting. However, Wal-Mart now requires its top 300 suppliers to tag cases and pallets of select goods, and over 24 drug providers tag bulk containers of Schedule II drugs, prescription painkillers and drugs of abuse.

... snip ...

50th Anniversary of invention of disk drives

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 21:37:08 -0600
et472@FreeNet.Carleton.CA (Michael Black) writes:
Or, the hard drive would be invented, but later. I'm less certain that would have impacted things that much. I got by without a hard drive until the end of 1993, so while a hard drive likely made things easier before that, they could be lived without.

you are thinking about your personal use ... but it wasn't originally invented for personal use ... it was for large commercial dataprocessing operations. all the real-time, online transaction stuff starting in the 60s was built on hard drives ... electronic point-of-sale credit cards, atm machines, online airline reservation systems, etc. ... the lack of hard drives would have had an enormous impact on the large number of online/realtime things that people were starting to take for granted.

Greatest Software Ever Written?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Greatest Software Ever Written?
Newsgroups: alt.folklore.computers
Date: Mon, 25 Sep 2006 23:21:46 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
one of the objectives for the aads chip strawman was to be able to do ecdsa processing within transit gate iso 14443 requirements
https://www.garlic.com/~lynn/x959.html#aadsstraw


re:
https://www.garlic.com/~lynn/2006r.html#1 Greatest Software Ever Written?

even more drift ... another recent news item

Identity's biggest guns form Secure ID Coalition to lobby for smart cards
http://www.secureidnews.com/library/2006/09/25/identitys-biggest-guns-form-secure-id-coalition-to-lobby-for-smart-cards/

some recent related comments
https://www.garlic.com/~lynn/aadsm25.htm#30 On-card displays

and another related recent news item:

The touching story of NFC
http://www.techworld.com/mobility/features/index.cfm?featureID=2828&pagtype=all

50th Anniversary of invention of disk drives

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Tue, 26 Sep 2006 12:32:07 -0600
scott@slp53.sl.home (Scott Lurndal) writes:
Most of the San Jose plant buildings have been torn down or are being torn down as we speak to make room for 3000 homes and various shopping and big-box stores. Replacing office and manufacturing space with homes is pretty damn stupid when the office/manufacturing space is in a counter-commute area. Those 3000 homes are really going to help traffic suck on 85, 87 and 101.

... a couple posts from earlier this year about san jose plant site, hitachi, etc
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006.html#22 IBM up for grabs?

above has references to several pages at
http://www.ajnordley.com/

with pictures of the site from the air
http://www.ajnordley.com/IBM/Air/SSD/index.html

also as per the earlier posts, bldg. 50 was part of the massive manufacturing facility build-out done in the mid to late 80s ... part of armonk's prediction that world-wide business was going to double (from $60b/annum to $120b/annum). also as mentioned in the previous posts, it probably was a career limiting move to take the opposite position from corporate hdqtrs (that at least the hardware business wasn't going to be doubling).

past posts mentioning conjecture/comments in the 80s about the possible demise of mainframe disk business
https://www.garlic.com/~lynn/2003p.html#39 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006k.html#25 Can anythink kill x86-64?
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006l.html#38 Token-ring vs Ethernet - 10 years later
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?

earlier posts in this thread:
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#18 50th Anniversary of invention of disk drives

50th Anniversary of invention of disk drives

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Wed, 27 Sep 2006 09:28:30 -0600
hancock4 writes:
I thought there were several "sites" in San Jose. That is, wasn't the earliest disk work done in a former supermarket building? The actual nice complex came later?

re:
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives

the comment was specifically about the san jose "plant site" ... the disk division location where they actually had a manufacturing line ... recent reference to the plant site's "new" manufacturing bldg. 50 ... also to a site with photos of the plant site from the air
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives

the earlier references in the above
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006.html#22 IBM up for grabs?

also has URLs for air photos of almaden research site and silicon valley lab site.

the "plant site" had bldg. 14 (disk engineering) and bldg. 15 (disk product test) ... misc. postings
https://www.garlic.com/~lynn/subtopic.html#disk

san jose research had been in "plant site" bldg. 28 until the new almaden facility was built up the hill in the mid-80s. bldg. 28 was where the original relational/sql system/r was done
https://www.garlic.com/~lynn/submain.html#systemr

bldg. 29, "los gatos lab" ... was in san jose on the other side of almaden valley. misc. past posts mentioning bldg. 29, los gatos lab
https://www.garlic.com/~lynn/2000b.html#57 South San Jose (was Tysons Corner, Virginia)
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2004f.html#7 The Network Data Model, foundation for Relational Model
https://www.garlic.com/~lynn/2004o.html#17 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2004q.html#31 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005b.html#14 something like a CTC on a PC
https://www.garlic.com/~lynn/2005c.html#1 4shift schedule
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005n.html#17 Communications Computers - Data communications over telegraph
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2006.html#26 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft
https://www.garlic.com/~lynn/2006q.html#5 Materiel and graft
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?

bldg. 90, "santa teresa lab" ... was built in mid-70s ... and originally was going to be called the coyote lab ... more recently renamed silicon valley lab. misc. past posts mentioning bldg. 90:
https://www.garlic.com/~lynn/2000b.html#57 South San Jose (was Tysons Corner, Virginia)
https://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001e.html#64 Design (Was Re: Server found behind drywall)
https://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#34 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#29 checking some myths.
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002b.html#15 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002k.html#9 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002o.html#11 Home mainframes
https://www.garlic.com/~lynn/2002o.html#69 So I tried this //vm.marist.edu stuff on a slow Sat. night,
https://www.garlic.com/~lynn/2002q.html#44 System vs. application programming?
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003e.html#9 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003i.html#56 TGV in the USA?
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003o.html#2 Orthographical oddities
https://www.garlic.com/~lynn/2004c.html#31 Moribund TSO/E
https://www.garlic.com/~lynn/2004e.html#22 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004n.html#18 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#17 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2005.html#23 Network databases
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
https://www.garlic.com/~lynn/2005c.html#1 4shift schedule
https://www.garlic.com/~lynn/2005c.html#45 History of performance counters
https://www.garlic.com/~lynn/2005c.html#64 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2005e.html#21 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005n.html#17 Communications Computers - Data communications over telegraph
https://www.garlic.com/~lynn/2005r.html#10 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005t.html#8 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006.html#22 IBM up for grabs?
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006n.html#8 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006n.html#35 The very first text editor
https://www.garlic.com/~lynn/2006o.html#22 Cache-Size vs Performance
https://www.garlic.com/~lynn/2006o.html#52 The Fate of VM - was: Re: Baby MVS???

Was FORTRAN buggy?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Wed, 27 Sep 2006 09:11:12 -0600
KR Williams <krw@att.bizzzz> writes:
The only major projects I saw canceled in the '70s were *LOSERS* (e.g. FS) and were replaced my products that made gazillion$ (303x and 308x). Maybe IBM was behind DEC in its slide down the MBA slope. IBM certainly got there, but in the late '80s, not '70s.

some claim that putting in a non-engineering type as head of the disk division in the late 70s was possibly similar. recent post:
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft

... and also as a reaction to the failure of FS
https://www.garlic.com/~lynn/submain.html#futuresys

where technical types had possibly been given too much latitude.

however, the person credited with leading the 3033 thru to its success (3031 and 3032 were primarily repackaged 158s & 168s using the channel director ... and even the 3033 started out as the 168 wiring diagram remapped to newer chips) ... was then brought in as a replacement to head up the disk division.

part of all this was that significant resources and time were diverted into FS ... and after it was killed, there was a lot of making up for lost time

we sort of got our hands slapped in the middle of pulling off 3033 success.

i previously had mentioned working on 5-way smp VAMPS
https://www.garlic.com/~lynn/submain.html#bounce

and after that was killed ... there was a 16-way smp project started called "logical machines" ... that had 16 370 (158) engines all ganged together with extremely limited memory/cache consistency. we had diverted the attention of some of the processor engineers that were dedicated to 3033 ... to spending a little time on the "logical machine" effort. when the person driving 3033 eventually found out that we were meddling with some of his people ... there was some amount of attitude readjustment (and a suggestion that maybe certain people shouldn't be seen in pok for a while). during 3033, there were stories about him being in the admin office running pok during first shift and being down on the line with the engineers second shift.

other past posts mentioning "logical machine" effort:
https://www.garlic.com/~lynn/2002i.html#82 HONE
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#26 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros & cons

these activities (and a couple others that I was involved in) were going on concurrently with turning out my resource manager ... another one of the reasons, previously mentioned, that the resource manager was something of a hobby ... as opposed to a full time, dedicated effort:
https://www.garlic.com/~lynn/2006q.html#34 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006q.html#46 Was FORTRAN buggy?

the other part of the late 80s was that some amount of dataprocessing was shifting out of the glass house ... and the communication group had their barbed wire around the glass house perimeter. recent reference
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006r.html#20 Was FORTRAN buggy?

50th Anniversary of invention of disk drives

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Wed, 27 Sep 2006 14:56:08 -0600
hancock4 writes:
By the standards of the 1950s and 1960s, main memory was measured in thousands while disk space was measured in millions. The first disk drive had [only] 5 meg but that was enormous compared to main memory of those days, maybe 80K. In a few years they got the disk up to 50 Meg. I don't think the drums of that era could get anywhere near that.

360 had the fixed-head 2303 & 2301 drums (the 2301 effectively a 2303 but reading/writing four heads in parallel) with 4mbytes capacity, in the era of 2311 (7mbytes) and 2314 (29mbytes) disks.

in the early 70s with 370 came the 3330-1 (100 mbytes) and then the 3330-11 (200 mbytes), and the fixed-head 2305 disk (12mbytes) was the replacement for the 2301/2303 drums.

after that, electronic store was becoming plentiful enuf to start doing caching (somewhat mitigating the requirement for fixed-head disks).

when cp67 originally showed up at the univ., its disk i/o strategy was strictly FIFO and paging operations were done with a separate i/o operation per 4k page transfer.

one of the performance changes i did as an undergraduate at the univ. was to put in ordered arm seek queueing ... and where possible to (try to optimally) chain all queued page transfers into a single i/o (for the same device on drums and for the same cylinder on disk).

the ordered arm seek queueing allowed at least 50 percent better thruput under nominal conditions and the system degraded much more gracefully under heavy load.
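
not the actual cp67 code, but a minimal python sketch of the ordered seek queueing idea (an elevator-style sweep is one way to order the queue) ... with FIFO the arm travels to wherever requests happened to arrive; ordering the queue cuts total arm travel and keeps service time from blowing up as the queue grows. the cylinder numbers and request stream are made up for illustration:

def arm_travel(start_cyl, service_order):
    # total cylinders the arm moves servicing requests in the given order
    travel, pos = 0, start_cyl
    for cyl in service_order:
        travel += abs(cyl - pos)
        pos = cyl
    return travel

def elevator_order(start_cyl, queued):
    # one sweep up through requests at/above the arm, then back down
    up = sorted(c for c in queued if c >= start_cyl)
    down = sorted((c for c in queued if c < start_cyl), reverse=True)
    return up + down

start = 50
queued = [183, 37, 122, 14, 124, 65, 67]    # arrival (FIFO) order
print("FIFO arm travel:    ", arm_travel(start, queued), "cylinders")
print("elevator arm travel:", arm_travel(start, elevator_order(start, queued)), "cylinders")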

a single page transfer per physical i/o would peak around 80 page transfers per second on a 2301 drum (avg. rotational delay for each page). with chaining, a 2301 would peak around 300 page transfers per second.
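
rough arithmetic behind those peak rates ... the rotation period and pages-per-track below are assumed round numbers (not actual 2301 specifications), but they land in the neighborhood of the ~80 and ~300 figures:

# back-of-envelope sketch of single-transfer vs. chained paging thruput
rotation_ms = 17.0       # assumed time for one revolution
pages_per_rev = 5        # assumed 4k pages recorded around one track

transfer_ms = rotation_ms / pages_per_rev       # time for one page to pass the heads
single_io_ms = rotation_ms / 2 + transfer_ms    # avg rotational delay + transfer
print("one page per i/o : ~%.0f pages/sec" % (1000.0 / single_io_ms))

# with command chaining, queued pages on the track go out back to back,
# so at saturation every revolution moves pages_per_rev pages
print("chained transfers: ~%.0f pages/sec" % (pages_per_rev * 1000.0 / rotation_ms))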

later i did a page mapped interface for the cms filesystem in which i could do all sorts of fancy i/o optimizations (that were a lot more difficult and/or not possible using the standard i/o interface paradigm). post from earlier this year about some old performance stuff with the page mapped interface
https://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux

misc. posts mentioning paged mapped interface work
https://www.garlic.com/~lynn/submain.html#mmap

various past postings mentioning 2301s and/or 2305s
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/95.html#8 3330 Disk Drives
https://www.garlic.com/~lynn/95.html#12 slot chaining
https://www.garlic.com/~lynn/98.html#12 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#6 3330 Disk Drives
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2000.html#92 Ux's good points.
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#52 IBM 650 (was: Re: IBM--old computer manuals)
https://www.garlic.com/~lynn/2000d.html#53 IBM 650 (was: Re: IBM--old computer manuals)
https://www.garlic.com/~lynn/2000g.html#42 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#45 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2001.html#17 IBM 1142 reader/punch (Re: First video terminal?)
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001h.html#36 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001h.html#37 Credit Card # encryption
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#57 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#22 index searching
https://www.garlic.com/~lynn/2002.html#31 index searching
https://www.garlic.com/~lynn/2002b.html#8 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002b.html#23 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#31 bzip2 vs gzip (was Re: PDP-10 Archive migration plan)
https://www.garlic.com/~lynn/2002c.html#52 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002i.html#17 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002i.html#42 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#47 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2002l.html#40 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#73 VLSI and "the real world"
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2003.html#70 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#6 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#9 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#10 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#15 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#17 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#18 Card Columns
https://www.garlic.com/~lynn/2003c.html#36 "average" DASD Blocksize
https://www.garlic.com/~lynn/2003c.html#37 "average" DASD Blocksize
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#55 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#19 Disk prefetching
https://www.garlic.com/~lynn/2003m.html#6 The real history of comp arch: the short form
https://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004.html#6 The BASIC Variations
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#54 [HTTP/1.0] Content-Type Header
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004l.html#2 IBM 3090 : Was (and fek that) : Re: new computer kits
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#13 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005c.html#3 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005e.html#5 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005h.html#7 IBM 360 channel assignments
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005o.html#43 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#51 winscape?
https://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction
https://www.garlic.com/~lynn/2005s.html#23 winscape?
https://www.garlic.com/~lynn/2005s.html#41 Random Access Tape?
https://www.garlic.com/~lynn/2005t.html#50 non ECC
https://www.garlic.com/~lynn/2006.html#2 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#41 Is VIO mandatory?
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
https://www.garlic.com/~lynn/2006i.html#27 Really BIG disk platters?
https://www.garlic.com/~lynn/2006i.html#41 virtual memory
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#57 virtual memory
https://www.garlic.com/~lynn/2006m.html#5 Track capacity?
https://www.garlic.com/~lynn/2006q.html#1 Materiel and graft
https://www.garlic.com/~lynn/2006q.html#32 Very slow booting and running and brain-dead OS's?

A Day For Surprises (Astounding Itanium Tricks)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Day For Surprises (Astounding Itanium Tricks)
Newsgroups: alt.folklore.computers,comp.arch
Date: Wed, 27 Sep 2006 15:11:44 -0600
jsavard writes:
As it happens, the technique of "Just-in-Time compilation", recently discovered, *is* a highly efficient way of emulating other architectures entirely in software. And some Itanium chips were claimed to execute x86 code with what was essentially an independent chip of 486-style design on the die. I'm surprised at that: given that the Itanium shares data types with the x86, it should have been possible to have an Itanium control unit and an x86 control unit share the same ALUs for more equal performance.

there have been various looks at doing 360/370 simulation on itanium (porting existing i86 simulators to itanium) going back to the earliest days of itanium design.

in the late 70s, early 80s ... there was fort knox. the low-end 360 & 370 processors were typically implemented with "vertical" microcoded processors ... that averaged out to something like 10 micro-instructions per 360/370 instruction. the higher end 360/370s used horizontal microcode engines (being somewhat more similar to itanium).
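
a trivial illustration (the engine speeds below are hypothetical) of what that ~10:1 ratio means for delivered 370 performance on a vertical-microcode engine:

# toy numbers: a vertical-microcode engine averaging ~10 micro-instructions
# per 370 instruction has to run an order of magnitude faster than the
# 370 mips it delivers
micro_per_370_insn = 10
for native_mips in (1.0, 5.0, 10.0):          # hypothetical engine speeds
    print("%4.1f native mips -> ~%.1f 370 mips" % (native_mips, native_mips / micro_per_370_insn))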

fort knox was to replace the vast array of microprocessor engines with 801s. it started out with the follow-on to the 4341 going to be an 801/risc engine. this was eventually killed ... i contributed to one of the analyses that helped kill it. part of the issue was that silicon technology was getting to the point that you could start doing 370 almost completely in silicon.
https://www.garlic.com/~lynn/subtopic.html#801

one of the other efforts was 801/romp that was going to be used in the opd displaywriter follow-on. when this was killed, it was retargeted as a unix workstation and became pc/rt. this then spawned 801/rios (power) and then somerset and power/pc.

there was also some work in fort knox on a hybrid 370 simulation effort using 801 ... that involved some JIT activity. i got dragged into a little of it because i had written a PLI program in the early 70s that processed 360/370 assembler listings ... analyzed what was going on in the program and tried to generate a higher level representation of the program ... a couple recent postings
https://www.garlic.com/~lynn/2006p.html#1 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006p.html#4 Greatest Software Ever Written?

Computer Artifacts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Computer Artifacts
Newsgroups: alt.folklore.computers
Date: Wed, 27 Sep 2006 17:18:29 -0600
Steve O'Hara-Smith <steveo@eircom.net> writes:
There were various optical disc arrangements in use. Around 1990 I saw a large read only store based on a jukebox like affair with (IIRC) 12in optical WORM discs holding 2GB a piece. Eventually the CD standardised the physical format of course.

sometime in '76, in san jose there was a lathe-like arrangement with something like 200 spinning floppies. the spinning provided some strength/structure to the floppies, but there was also a problem with the floppy material stretching from the constant spinning. a single head was on an assembly parallel to the rotating "axle" ... it would position itself at the floppy it wanted to read/write, a small blade parted the floppies and then compressed air further parted them, providing enuf room for the head to be inserted. the spinning provided enuf structure for the head/floppy contact for read/write. a single head for all two hundred floppy "platters" is somewhat analogous to early disk assemblies. i remember john cocke referring to it as something like a "tail dragger" (as a contrast to all the bleeding edge stuff that was going on).

IBM Fellow John Cocke passed away on July 16th
http://domino.watson.ibm.com/comm/pr.nsf/pages/news.20020717_cocke.html

A Day For Surprises (Astounding Itanium Tricks)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Day For Surprises (Astounding Itanium Tricks)
Newsgroups: alt.folklore.computers,comp.arch
Date: Wed, 27 Sep 2006 18:36:25 -0600
re:
https://www.garlic.com/~lynn/2006r.html#24 A Day For Surprises (Astounding Itanium Tricks)

for a little drift ... somebody that was involved in (among other things):

3033 dual-address space

fort knox/801

pa-risc

and itanium

a few posts this year on the subject:
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006e.html#1 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006o.html#67 How the Pentium Fell Short of a 360/195
https://www.garlic.com/~lynn/2006p.html#42 old hypervisor email

A Day For Surprises (Astounding Itanium Tricks)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Day For Surprises (Astounding Itanium Tricks)
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 28 Sep 2006 08:54:12 -0600
re:
https://www.garlic.com/~lynn/2006r.html#24 A Day For Surprises (Astounding Itanium Tricks)
https://www.garlic.com/~lynn/2006r.html#26 A Day For Surprises (Astounding Itanium Tricks)

.... from long ago and far away


Increasing Circuitry in the 4300s

     4331MG1          4331MG2          4341MG1          4341MG2
--------------   --------------   --------------   --------------
|              | |              | |              | |              |
| H.L.L. Progs | | H.L.L. Progs | | H.L.L. Progs | | H.L.L. Progs |
 |              | |              | |              | |              |
|              | |              | |              | |              |
 |              | |              | |              | |              |
|              | |              | |              | |              |
|--------------| |--------------| |--------------| |--------------|
| Architecture | | Architecture | | Architecture | | Architecture |
 |--------------| |--------------| |--------------| |--------------|
|              | |              | |              | |              |
 |              | |              | |      ---    -| |--     ---   -|
| Microcode    | |              | |     |   |  | | |  |___|   | | |
|              | |   -----      | |-----     __  | |           _  |
|   --         | |__|     |    -| |              | |              |
 |__|  |___   --| |        |___| | |              | |              |
|         __|  | |              | |              | |              |
 |  Circuitry   | |  Circuitry   | |  Circuitry   | |  Circuitry   |
--------------   --------------   --------------   --------------

The Anton design is a step further than the 4341MG2 implementation. For a significant number of functions the Anton raises the circuitry interface almost to the architected interface.

Note the circuitry interfaces across the 4300s are not identical. The 4331s are significantly different from the 4341s. Differences do exist even within model groups. Some of this increased circuitry expands existing components (a larger cache) and some of it functional alters the circuitry to microcode interface.

On the 4300s, in fact on all S/370 compatible processors, compatibility and portability are accomplished at the artificial, architected machine interface (aka 370 architecture interface)


... snip ...

for other topic drift, originally the 3090 was going to use an embedded 4331 as the service processor, running a highly modified version of vm370 release 6 with all the panels/menus done in ios3270. the 3090 was eventually shipped with a pair of embedded 4361s as dedicated service processors (for redundancy and availability).

Greatest Software Ever Written?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Greatest Software Ever Written?
Newsgroups: alt.folklore.computers
Date: Thu, 28 Sep 2006 09:21:34 -0600
re:
https://www.garlic.com/~lynn/2006r.html#1 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006r.html#17 Greatest Software Ever Written?
https://www.garlic.com/~lynn/2006r.html#19 Greatest Software Ever Written?

continuing the drift with recent news items:

Contactless Cards: Are Privacy Jitters Legit?
http://www.ecommercetimes.com/story/53273.html

recent discussion on the difference between something you have authentication and something you are authentication.
https://www.garlic.com/~lynn/aadsm25.htm#32 On-card displays

in the yes card vulnerability,
https://www.garlic.com/~lynn/subintegrity.html#yescard

the static data in the chip represents supposedly unique information as something you have authentication. copying/cloning the information was sufficient to enable fraudulent transactions.

however, in the passport case, the "static data" in the chip represents effectively biometric information (picture) about the individual, requiring a further step of matching the data against the person for something you are authentication. any copying/cloning of the information doesn't directly enable fraudulent transactions (as in the yes card scenario involving static data something you have authentication). however, as mentioned in the referenced post, there is other personal information which raises privacy issues.

for rfid/contactless, there is possibly increased ease of copying/cloning of information compared to some other technologies (analogous to how using the internet can increase exposure of information). however, there can be radically different threat models associated with the information that is exposed.
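
a hypothetical sketch (python, not any real card's protocol) of why copying static data is enough in the yes card scenario but gets you nowhere against a challenge/response check ... hmac with a chip-resident secret stands in here for the public-key (e.g. ecdsa) signature a real chip would compute:

import hmac, hashlib, os

class Card:
    def __init__(self, static_data, secret):
        self.static_data = static_data     # readable, hence copyable
        self._secret = secret              # never leaves a genuine chip

    def respond(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

genuine = Card(b"acct-1234 exp 12/09", os.urandom(32))
clone = Card(genuine.static_data, os.urandom(32))   # skimmer copies what it can read

# static-data check: the clone passes ... exactly the yes card problem
print(clone.static_data == genuine.static_data)                    # True

# challenge/response check: fresh challenge each time, the clone fails
challenge = os.urandom(16)
print(hmac.compare_digest(clone.respond(challenge),
                          genuine.respond(challenge)))             # False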

Intel abandons USEnet news

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Intel abandons USEnet news
Newsgroups: comp.arch
Date: Thu, 28 Sep 2006 10:23:16 -0600
"comp.arch@patten-glew.net" <AndyGlew@gmail.com> writes:
I started hearing about this 4 years ago from somebody at Microsoft. It appears that some big brokerage company lost a lawsuit, because customer lists which were stored on an incorrectly configured laptop computer owned by an employe, were lost. Hypothesis is/was that, if the computer lost had been company owned and configured, e.g. with hard disk encryption, the company would have been deemed less negligent. My Microsoft acquaintance at that time predicted the demise of "dial in from your own personal computer" telecommuting.

long standing issue ... my oft repeated theme of security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

and the swimming pool attractive nuisance scenario. there was civil litigation claiming several billion around 30 years ago involving industrial espionage and theft of trade secrets. the judge made statements to the effect that countermeasures & protection have to be proportional to value (otherwise you can't really blame people for doing what comes naturally and stealing).

misc. past posts raising the issue:
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2005f.html#60 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005r.html#7 DDJ Article on "Secure" Dongle
https://www.garlic.com/~lynn/2006g.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006q.html#36 Was FORTRAN buggy?

50th Anniversary of invention of disk drives

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Thu, 28 Sep 2006 11:22:09 -0600
hancock4 writes:
I don't think they were very popular. Drums were a "compromise" between the high capacity of disk and the high speed of core. Since drums had fixed heads over each track they were faster, indeed, the IBM 650 and other small machines of the 1950s used drums as the sole main memory. I believe drums were invented by ERA (originally a secret firm* but then part of Rem Rand Univac**) in the late 1940s and quite popular in that era.

most 360/67s had 2301 drums for virtual memory paging .... supporting interactive computing (tss/360 and cp67 ... as well as mts, the michigan terminal system).

it was less of an issue with batch-oriented systems, since the reduced latency (no arm motion) mattered less for batch (than it might in an interactive computing environment).

picture of 2301 drum here:
http://www.columbia.edu/cu/computinghistory/drum.html

360/67 with picture of 2314 and 2301 in upper right background
https://web.archive.org/web/20030820174805/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/29.html

another picture of 360/67
https://web.archive.org/web/20030429150339/www.cs.ncl.ac.uk/old/events/anniversaries/40th/images/ibm360_672/slide07.html

closeup picture of 2301
https://web.archive.org/web/20030820180331/www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/slide12.html

the cp67-based (and later vm370-based) commercial timesharing services
https://www.garlic.com/~lynn/submain.html#timeshare

tended to have 2301 drums (and later 2305 fixed-head disks w/vm370) for interactive computing environments where interactive response was an issue.

again ... it was less of an issue in batch-oriented operations

other posts in this thread:
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#18 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#21 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives

50th Anniversary of invention of disk drives

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Thu, 28 Sep 2006 15:30:50 -0600
hancock4 writes:
I don't think they were very popular. Drums were a "compromise" between the high capacity of disk and the high speed of core. Since drums had fixed heads over each track they were faster, indeed, the IBM 650 and other small machines of the 1950s used drums as the sole main memory. I believe drums were invented by ERA (originally a secret firm* but then part of Rem Rand Univac**) in the late 1940s and quite popular in that era.

there was another "compromise/trade-off" between disks and high speed core for 360s. disks (drums, datacells, etc) were referred to as "DASD" (direct access storage device) ... more specifically "CKD" DASD (count-key-data).

the trade-off was extremely scarce real storage vis-a-vis relatively abundant i/o resources. typically, filesystems have an index of where things are on the disk. most systems these days use the relatively abundant real storage to cache these indexes (in addition to caching the data itself). however, on 360 the indexes were kept on disk (saving real storage).

CKD essentially allowed filesystem metadata to be written along with the data itself, with the indexes kept on disk as filesystem metadata. rather than reading the indexes into real storage (and possibly caching them), CKD DASD i/o programming provided for doing a sequential search of the indexes on disk ... trading scarce real storage for abundant i/o capacity.

however, by at least the mid-70s, the trade-off was reversing ... with real storage starting to become abundant and disk i/o becoming more and more of a system bottleneck.

in the late 70s, i was brought in to investigate a severe throughput/performance problem for a large national retail chain. they had a central dataprocessing facility providing support for all stores nationally ... with several clustered mainframes sharing a common application library. it turned out that the CKD/PDS program library dasd/disk search was taking approx. 1/2 second elapsed time (the actual program load took maybe 10-20 milliseconds ..., but the on-disk index serial search was taking 500 milliseconds) and all retail store application program loads were serialized through this process.

this trade-off left over from the mid-60s included keeping the argument for the on-disk serial search in processor real storage (further optimizing the real storage constraint) ... however it required a dedicated, exclusive i/o path between the device and processor real storage for the duration of the search. this further exacerbated the throughput problem. typically multiple disks (between 8 and 32) might share a common disk controller and i/o channel/bus. not only was the disk performing the search busy for the duration ... but because of the requirement for the dedicated open channel between the disk and processor storage (for accessing the search argument), it wasn't possible to perform any operations for any of the other disks (sharing the same controller and/or i/o channel/bus) for the duration of the search.
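
back-of-envelope numbers for that bottleneck ... the ~500 millisecond search and 10-20 millisecond load come from above; the store demand figures are hypothetical:

# shared program-library bottleneck sketch
search_ms = 500.0        # serialized on-disk pds directory search
load_ms = 15.0           # actual program load once the member is found

per_load_ms = search_ms + load_ms
ceiling = 1000.0 / per_load_ms
print("ceiling: ~%.1f program loads/sec for the whole complex" % ceiling)

# the channel/controller path is held for the search as well, locking out
# the other drives on that string for most of every load
print("shared path busy with searches: %.0f%%" % (100.0 * search_ms / per_load_ms))

# hypothetical demand: 300 stores each needing a program load every
# couple of minutes already exceeds the ceiling
stores, loads_per_store_per_min = 300, 0.5
print("hypothetical demand: ~%.1f loads/sec" % (stores * loads_per_store_per_min / 60.0))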

misc. past posts discussing this subject
https://www.garlic.com/~lynn/submain.html#dasd

... note the above is a different collection of posts than
https://www.garlic.com/~lynn/subtopic.html#disk

which primarily references working with the people in bldg. 14 (disk engineering) and bldg. 15 (disk product test) on the san jose plant site.

in any case, this and other factors prompted my observation that over a period of ten to fifteen years, the relative system performance of disks had declined by an order of magnitude ... i.e. other system resources increased by a factor of fifty while disk resources (in terms of operations per second) increased by possibly only a factor of five.

the initial take was that the disk division assigned their disk performance and modeling group to refute my statements ... however, after several weeks they came back and said that I may have actually slightly understated the issue.

the change in the relative thruput of different system components ... especially with respect to each other ... results in having to change various strategies and trade-offs ... which is also somewhat the subject of the recent thread from comp.arch
https://www.garlic.com/~lynn/2006r.html#3 Trying to design low level hard disk manipulation program
https://www.garlic.com/~lynn/2006r.html#12 Trying to design low level hard disk manipulation program

another series of posts about similar change in disk/memory trade-offs involves system/r ... original relational/sql
https://www.garlic.com/~lynn/submain.html#systemr

and RDBMS. in the 70s, there was something of a pro/con argument between the people in santa teresa lab (bldg 90) dealing with 60s "physical" databases and the system/r work going on in bldg. 28. the stl people were claiming that system/r indexes doubled the typical physical disk space requirements and significantly increased the search time to find a specific record (potentially requiring reads of multiple different indexes). this was compared to the 60s physical databases where physical record pointers were exposed as part of the data paradigm.

the counter argument was that significant manual and administrative effort was required to manage the exposed physical record pointers ... effort that was eliminated in the RDBMS paradigm.

what you saw going into the 80s was a significant increase in disk space (the number of bits per disk arm increased by an order of magnitude while disk arm accesses/sec showed only slight improvement) and a significant decrease in the price per megabyte of disk space ... which somewhat made the issue of the size of the RDBMS indexes moot. furthermore, the ever increasing abundance of real storage made it possible to cache a significant portion of the RDBMS index in real storage (eliminating the significant number of additional I/Os to process the index ... vis-a-vis the physical databases from the 60s).
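
a toy illustration of the index i/o argument (the table size and index fanout are made-up numbers) ... a multi-level index costs a handful of extra disk reads per lookup when nothing is cached, but with the index levels held in real storage the lookup collapses back to roughly one data-page read:

import math

records = 1_000_000      # rows in the table (hypothetical)
fanout = 200             # index entries per 4k index page (hypothetical)
index_levels = math.ceil(math.log(records, fanout))

print("index depth:", index_levels, "levels")
print("lookup, nothing cached  :", index_levels + 1, "disk i/os")   # index levels + data page
print("lookup, index cached    : 1 disk i/o")                       # just the data page
print("exposed physical pointer: 1 disk i/o (+ manual upkeep of the pointer)")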

the issue during the 80s for RDBMS was that the relative importance of the "cons" against RDBMS was significantly reduced ... while the "cons" against the 60s physical databases (manual people time and expertise) significantly increased. a few past posts on the changing relative amounts of different resources for RDBMS:
https://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004p.html#38 funny article
https://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2005.html#23 Network databases
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005s.html#9 Flat Query
https://www.garlic.com/~lynn/2005s.html#17 winscape?

misc. other past posts about change in relative system thruput and performance of various system components over a period of years
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
https://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
https://www.garlic.com/~lynn/2006o.html#27 oops

other posts in this disk thread:
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#18 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#21 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#30 50th Anniversary of invention of disk drives

MIPS architecture question - Supervisor mode & who is using it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MIPS architecture question - Supervisor mode & who is using it?
Newsgroups: comp.arch
Date: Thu, 28 Sep 2006 16:41:11 -0600
"John Mashey" <old_systems_guy@yahoo.com> writes:
I say again: the hardware cost of this was really minimal, or we wouldn't have done it. If an OS used it, there would be more transitions. Nobody wanted the supervisor state code to access the low-level resources.

of course os/vs2 started down the path when it went from svs (single virtual storage) to mvs (multiple virtual storage).

in svs ... things were still somewhat the real-memory mvt paradigm ... except laid out in a somewhat larger (single) virtual address space; this included the kernel ... subsystem applications (that effectively acquired kernel mode), and standard applications.

the problem was that the whole infrastructure used a pointer passing paradigm ... everything required that you access the caller's storage.

the move to mvs ... gave each application its own virtual address space ... but with the MVS kernel appearing in 8mbytes of each of these application address spaces ... which allowed kernel code to access the application parameters pointed to by a pointer-passing invocation. this was nominally an 8mbyte/8mbyte split for kernel/application out of the 16mbyte virtual address space.

however, this created a big problem for subsystem applications that were also now in their own unique virtual address spaces. it became a lot harder for a subsystem application to be invoked from a "standard" application (running in its own unique address space) via a pointer passing call ... and still reach over and obtain the relevant parameter information.

dual-address space mode was born with the 3033 ... where a semi-privileged subsystem application could be given specific access to a calling application's virtual address space. part of what prompted dual-address space in 3033 ... was that the workaround for subsystems accessing parameters had been the establishment of something called the "common segment" ... basically each subsystem got a reserved space in every address space for placing calling parameters that then could be accessed via the passed pointer. larger installations providing a number of services had a five megabyte common segment (out of every 16mbyte virtual address space, in addition to the 8mbyte kernel) ... leaving only 3mbytes for application use.
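
purely as an editorial illustration of the address space arithmetic in the previous paragraph (the figures are the ones quoted above), a small python sketch:

    # mvs 24-bit virtual address space carved up as described above
    MB = 1024 * 1024
    address_space  = 16 * MB     # 16mbyte virtual address space
    kernel_image   = 8 * MB      # mvs kernel mapped into every address space
    common_segment = 5 * MB      # large installation with many subsystems

    application = address_space - kernel_image - common_segment
    print(application // MB, "mbytes left for application use")   # -> 3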

there was still a performance problem (even with dual-address space) in that the transition from a standard application to a subsystem application required an indirect transition through the kernel via a kernel call. this became more and more of an issue as more system library functions were moved out of the standard application space and into their own virtual address spaces.

dual-address space was expanded with access registers and program call/return instructions. basically something close to the performance of a library branch-and-link ... but with control over the semi-privileged state change as well as switching virtual address space ... while still providing access back to the caller's virtual address space.

misc. reference from esa/390 (not 64bit z/Architecture):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CCONTENTS?SHELF=EZ2HW125&DN=SA22-7201-04&DT=19970613131822

5.4 Authorization Mechanisms
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.4?SHELF=EZ2HW125&DT=19970613131822

... from above ...
The authorization mechanisms which are described in this section permit the control program to establish the degree of function which is provided to a particular semiprivileged program. (A summary of the authorization mechanisms is given in Figure 5-5 in topic 5.4.8.) The authorization mechanisms are intended for use by programs considered to be semiprivileged, that is, programs which are executed in the problem state but which may be authorized to use additional capabilities. With these authorization controls, a hierarchy of programs may be established, with programs at a higher level having a greater degree of privilege or authority than programs at a lower level. The range of functions available at each level, and the ability to transfer control from a lower to a higher level, are specified in tables which are managed by the control program. When the linkage stack is used, a nonhierarchical transfer of control also can be specified.

• 5.4.1 Mode Requirements
• 5.4.2 Extraction-Authority Control
• 5.4.3 PSW-Key Mask
• 5.4.4 Secondary-Space Control
• 5.4.5 Subsystem-Linkage Control
• 5.4.6 ASN-Translation Control
• 5.4.7 Authorization Index
• 5.4.8 Access-Register and Linkage-Stack Mechanisms


... snip ...

misc. past posts about common segment and/or dual address space
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces
https://www.garlic.com/~lynn/2006e.html#0 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006j.html#38 The Pankian Metaphor
https://www.garlic.com/~lynn/2006k.html#44 virtual memory
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?

50th Anniversary of invention of disk drives

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50th Anniversary of invention of disk drives
Newsgroups: alt.folklore.computers
Date: Thu, 28 Sep 2006 19:31:40 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
this trade-off left-over from the mid-60s included having the argument for the on-disk serial search kept in processor real storage (further optimizing real storage constraint) ... however it required that there was a dedicated exclusive i/o path between the device and the processor real storage for the duration of the search. this further exacerbated the throughput problem. typically multiple disks (between 8 to 32) might share a common disk controller and i/o channel/bus. not only was the disk performing the search busy for the duration ... but, because of the requirement for a dedicated open channel between the disk and processor storage (for accessing the search argument) that was also busy for the duration of the search ... it wasn't possible to perform any operations for any of the other disks (sharing the same controller and/or i/o channel/bus).

re:
https://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives

the characteristic of CKD DASD search i/o operations constantly re-referencing the search information in processor memory was taken advantage of by ISAM indexed files. ISAM could have multiple levels of indexes out on disk ... and an ISAM channel i/o program could get extremely complex. the channel i/o program could start off with an initial metadata search argument ... which would search for the argument based on various criteria (less, greater, equal, etc), then chain to a read operation of the associated data (which could be the next level metadata search argument) ... and then chain to a new search operation using the just-read data as its search argument. all of this could be going on totally asynchronous to any processor execution.
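
as a rough editorial illustration only (a toy python model, not real CCW format or an actual ISAM channel program), the chained search/read behaviour might be pictured like this:

    # toy model of the multi-level search described above: each "search"
    # re-fetches its argument from processor memory, the chained "read"
    # deposits the next-level search argument back into that same memory,
    # and the chain continues without any processor involvement
    def run_isam_chain(index_levels, memory, arg_cell):
        for level in index_levels:                 # one search/read pair per index level
            search_arg = memory[arg_cell]          # channel fetches argument from memory
            found = level.get(search_arg)          # the on-disk serial search
            if found is None:
                return None
            memory[arg_cell] = found               # chained read overwrites the argument
        return memory[arg_cell]

    # hypothetical two-level index: master index -> cylinder index -> data
    master   = {"K42": "cyl7-trk3"}
    cylinder = {"cyl7-trk3": "record for K42"}
    mem = {"arg": "K42"}
    print(run_isam_chain([master, cylinder], mem, "arg"))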

lots of other CKD DASD related postings
https://www.garlic.com/~lynn/submain.html#dasd

REAL memory column in SDSF

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REAL memory column in SDSF
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 29 Sep 2006 09:38:55 -0600
Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
Have the presenter review ancient history in the S/360 line -- the 360 line generally supported differing page sizes (2K and 4K) and the 360/67 supported 2K, 4K and even 1M page sizes. (I don't recall whether any SCP shipped that dealt with 1M page sizes, especially in the VERY expensive storage era of the S360 line though. That could be why the idea lurked for lo these many years.)

360/67 was the only 360 that supported virtual memory (other than a custom 360/40 with special hardware modifications that cambridge did before it got a 360/67). 360/67 supported only 4k page sizes and 1mbyte segments ... however 360/67 supported both 24-bit and 32-bit virtual addressing

a copy of 360/67 functional characteristics at bitsavers
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/A27-2719-0_360-67_funcChar.pdf

max. storage on a 360/67 uniprocessor was 1mbyte real storage (and a lot of 360/67s were installed with 512k or 768k real storage). out of that you had to take the fixed storage belonging to the kernel ... so there would never be a full 1mbyte of real storage left over for virtual paging (making a 1mbyte page size moot).

note that the 360/67 multiprocessor also had a channel director ... which had all sorts of capability ... including that all processors in a multiprocessor environment could address all i/o channels ... but could still be partitioned into independently operating uniprocessors, each with their own dedicated channels. a standard 360 multiprocessor only allowed sharing of memory ... a processor could only address its own dedicated i/o channels. the settings of the channel director could be "sensed" via specific control registers (again see 360/67 functional characteristics).

equivalent capability allowing all processors to address all channels (in multiprocessor environment) and supporting more than 24bit addressing didn't show up again until 3081 and XA.

370 virtual memory had 2k and 4k page size option as well as 64k and 1mbyte segments.

vm370 used 4k pages size and 64k segments as default ... and supported 64k shared segments for cms.

however, when it was supporting guest operating systems with virtual memory ... the vm370 "shadow tables" had to use whatever page size the guest operating system was using (exactly mirroring the guest's tables). dos/vs and vs1 used 2k paging ... os/vs2 (svs & mvs) used 4k paging.

there was an interesting problem at some customers with the doubling of cache size going from 370/168-1 to 370/168-3. doubling the cache size needed one more address bit to index cache line entries, and the designers took the "2k" bit ... assuming that the machine was nominally for os/vs2 (4k page) use. however, there were some number of customers running vs1 under vm on 168s. these customers saw degradation in performance when they upgraded from 168-1 to 168-3 with twice the cache size.

the problem was that the 168-3 ... every time there was a switch between 2k page mode and 4k page mode ... would completely flush the cache ... and when in 2k page mode it would only use half the cache (same as 168-1) ... and all the cache in 4k page mode. using only half the cache should have shown the same performance on the 168-3 as on the 168-1. however, the constant flushing of the cache, whenever vm moved back & forth between (vs1's shadow table) 2k page mode and (standard vm) 4k page mode ... resulted in worse performance with the 168-3 than with a straight 168-1.
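
a small illustrative python model of the indexing issue (the cache geometry here is invented; the point is only the extra "2k" index bit):

    # only address bits below the page boundary are untranslated and thus
    # available for indexing the cache before translation completes;
    # with 2k pages the extra index bit (the "2k" bit) can't be used
    LINE = 32                   # hypothetical cache line size in bytes
    SETS = 128                  # hypothetical doubled cache: needs 7 index bits

    def cache_index(addr, page_size):
        usable = addr % page_size
        return (usable // LINE) % SETS

    addrs = range(0, 64 * 1024, LINE)
    print(len({cache_index(a, 4096) for a in addrs}))   # 128 -> whole cache usable
    print(len({cache_index(a, 2048) for a in addrs}))   # 64  -> only half the cache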

for a little drift ... a number of recent postings about comparing performance/thruput of the 768kbyte 360/67 running cp67 at the cambridge science center with the 1mbyte 360/67 running cp67 at the grenoble science center. the machine at cambridge was running a global LRU replacement algorithm that i had created and grenoble was running a local LRU replacement algorithm from the academic literature. Cambridge, running effectively twice the workload with 104 4k "available" pages (after fixed kernel requirements on the 768k machine), had better performance than Grenoble's system (with 155 4k "available" pages after fixed kernel requirements).
https://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006i.html#36 virtual memory
https://www.garlic.com/~lynn/2006i.html#37 virtual memory
https://www.garlic.com/~lynn/2006i.html#42 virtual memory
https://www.garlic.com/~lynn/2006j.html#1 virtual memory
https://www.garlic.com/~lynn/2006j.html#17 virtual memory
https://www.garlic.com/~lynn/2006j.html#25 virtual memory
https://www.garlic.com/~lynn/2006l.html#14 virtual memory
https://www.garlic.com/~lynn/2006o.html#11 Article on Painted Post, NY
https://www.garlic.com/~lynn/2006q.html#19 virtual memory
https://www.garlic.com/~lynn/2006q.html#21 virtual memory

REAL memory column in SDSF

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REAL memory column in SDSF
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 29 Sep 2006 10:31:36 -0600
Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
For zSeries to do it you would either be looking at creative use of MIDAW to read/write the 1M pages from/to existing DASD (with less-then-ideal performance) or you would be looking at new DASD (or son-of-DASD maybe). Perhaps it would be a good excuse to resurrect expanded storage (ESTORE) with an also-resurrected Asynchronous Page Mover (of 1M)?

related post in this thread
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF

in the early 80s ... "big pages" were implemented for both VM and MVS. this didn't change the virtual page size ... but changed the unit of moving pages between memory and 3380s ... i.e. "big pages" were 10 4k pages (a full 3380 track) that moved to disk and were fetched back from disk as a unit. a page fault for any 4k page in a "big page" ... would result in the whole "big page" being fetched from disk.
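
a toy python sketch of the fetch-unit behaviour just described (page numbers and contents are made up):

    # a fault on any 4k page brings in the whole 10-page group (one 3380 track)
    PAGES_PER_BIG_PAGE = 10

    def page_fault(vpage, resident, backing_store):
        group = vpage // PAGES_PER_BIG_PAGE
        start = group * PAGES_PER_BIG_PAGE
        for p in range(start, start + PAGES_PER_BIG_PAGE):
            if p in backing_store:                 # one disk access, ten 4k transfers
                resident[p] = backing_store[p]

    resident, disk = {}, {n: "page %d" % n for n in range(40)}
    page_fault(23, resident, disk)
    print(sorted(resident))                        # pages 20..29 all came in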

note the original expanded store ... wasn't so much an architecture issue, it was a packaging/technology issue. 3090s needed more electronic store than could be packaged within the prescribed latency of cache/memory fetch. the approach was to place the storage that couldn't be packaged for close access ... on a different bus under software control that did burst transfers in (4k) page size units ... rather than the smaller cache line size units ... and then leverage the programming paradigm already in place for paging to/from disk.

this is somewhat like LCS from 360 days (8mbytes of 8mic storage ... compared to 750ns storage on 360/67 or 2mic storage on 360/50). the simple strategy was to just consider it an adjunct of normal, faster storage and tolerate the longer fetch cycle. however, some installations tried to carefully allocate stuff in LCS ... lower use programs and/or purely cached data (like hasp buffers). some installations actually implemented copying programs out of LCS to faster storage before execution.

misc. past posts mentioning big pages.
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#16 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#22 Code density and performance?
https://www.garlic.com/~lynn/2006j.html#2 virtual memory
https://www.garlic.com/~lynn/2006j.html#3 virtual memory
https://www.garlic.com/~lynn/2006j.html#4 virtual memory
https://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
https://www.garlic.com/~lynn/2006l.html#13 virtual memory

REAL memory column in SDSF

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REAL memory column in SDSF
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 29 Sep 2006 15:21:31 -0600
Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
When I lived in POK I was told that a part of the reason for extended storage also had to do with its lack of requirement for storage protect key arrays. The expanded storage memory was then allowed to be more like the memory of the competitors in terms of cost, structure & simplicity. (By the early 1990s the competition was considered to be HP, not Amdahl nor HDS).

say 6bits of storage key per 4k bytes is lost in the noise? (2k storage keys as well as 2k virtual pages having been dropped around 3081 xa time-frame) ... if you wanted to worry about something ... there was 16bit ecc for every 64bit double word (or 2bits per 8bit byte ... as opposed to parity bit per 8bit byte) ... optimizations were trying to get failure coverage (better than simple 1bit/byte parity) with less than 80bits (for 64bit of data) ... like 78bits, 72bits, etc ...
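
for illustration, the check-bit overhead arithmetic (using the figures quoted above) as a short python sketch:

    # check bits per 64-bit doubleword for the schemes mentioned above
    for label, total_bits in [("parity, 1 bit/byte", 72),
                              ("(72,64) SEC/DED",     72),
                              ("(78,64)",             78),
                              ("16-bit ecc, 2 bits/byte", 80)]:
        check = total_bits - 64
        print("%-24s %2d check bits, %5.1f%% overhead" % (label, check, 100.0 * check / 64))

    # versus storage key overhead: say 6 bits per 4k-byte (32768-bit) page frame
    print("storage key: %.3f%% overhead" % (100.0 * 6 / (4096 * 8)))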

press release on ecc from 1998
http://www-03.ibm.com/press/us/en/pressrelease/2631.wss

another discussion of memory ecc
http://www.research.ibm.com/journal/rd/435/spainhower.pdf

in response to off-list comment about 360 model storage sizes ... see this reference:
http://www.beagle-ears.com/lars/engineer/comphist/model360.htm

note that the IBM 2361 "LCS" was offered in 1mbyte and 2mbyte sizes ... but I remember a number of installations having 8mbyte "Ampex" LCS.

past posts in this thread
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF

the claim was that 3090 expanded store memory chips were effectively the same as regular memory chips ... because ibm had really good memory yield. however, there was a vendor around 1980 that had some problems with its memory chip yield, involving various kinds of failures that made the chips unusable for normal processor fetch/store (memory).

so a bunch of these "failed" memory chips were used to build a 2305 (fixed-head disk) clone ... and a fairly large number of them (maybe all that the vendor could produce) were obtained for internal use ... using a "model" number of 1655, for use as dedicated paging devices on internal VM timesharing systems. The claim was that they were able to engineer compensation (for various chip problems) at the 4k block transfer boundary that wouldn't be practical if you were doing standard processor fetch/store. some recent posts mentioning the 1655 2305-clone paging devices:
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
https://www.garlic.com/~lynn/2006k.html#57 virtual memory

for other drift ... there was a lot of modeling for 3090 balanced speeds&feeds ... part of it was having sufficient electronic memory to keep the processor busy (which then led to the expanded store stuff)

part of the issue was using electronic memory to compensate for disk thruput. starting in the late 70s, i was making statements that disk relative system thruput had declined by an order of magnitude over a period of years. the disk division assigned the performance and modeling group to refute the statement. after a period of several weeks, they came back and mentioned that i had actually slightly understated the problem ... the analysis was then turned around into a SHARE presentation on optimizing disk thruput (i.e. leveraging strengths and compensating for weaknesses). misc. postings referencing that share presentation
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

one of the issues that cropped up (somewhat unexpectedly?) was the significant increase in 3880 (disk controller) channel busy. the 3090 channel configuration had somewhat been modeled assuming 3830 control unit channel busy. the 3830 had a high performance horizontal microcode engine. for the 3880, they went to a separate processing path for the data (enabling support for 3mbyte/sec and then 4.5mbyte/sec transfers), but a much slower vertically microprogrammed engine for control commands. this slower processor significantly increased channel busy when processing channel controls/commands (compared to the 3830).

a recent post discussing some of the problems that cropped up during 3880 development (these showed up before first customer ship and allowed some work on improvement)
https://www.garlic.com/~lynn/2006q.html#50 Was FORTRAN buggy?

however, there was still a fundamental issue that 3880 controller increased channel busy time per operation ... greater than had been anticipated. in order to get back to balanced speeds&feeds for 3090 ... the number of 3090 channels would have to be increased (to compensate for the increased 3880 channel busy overhead).

now, it was possible to build a 3090 with relatively few TCMs. the requirement (because of increased 3880 channel busy) to increase the number of channels resulted in requiring an additional TCM for 3090 build (for the additional channels) ... which wasn't an insignificant increase in manufacturing cost. at one point there was a suggestion (from pok) that the cost of the one additional TCM for every 3090 sold ... should be taken from sanjose's bottom line (as opposed to showing up against POK's bottom line).

the overall situation might be attributed to the after effects from the failure of FS
https://www.garlic.com/~lynn/submain.html#futuresys

a big driving factor in FS was countermeasure to clone/plug compatible controllers ... some collected postings having been involved in creating plug compatible controller as an undergraduate
https://www.garlic.com/~lynn/submain.html#360pcm

however, from this article on FS (by one of the ibm executives involved)
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

from above:
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.

... snip ...

i.e. the 3880 "box strategy" might be construed as sub-optimal from an overall system perspective.

for other drift ... recent postings about san jose disk
https://www.garlic.com/~lynn/2006r.html#14 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#15 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#18 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#21 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#23 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#30 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#33 50th Anniversary of invention of disk drives

REAL memory column in SDSF

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REAL memory column in SDSF
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 30 Sep 2006 06:18:36 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
in the early 80s ... "big pages" were implemented for both VM and MVS. this didn't change the virtual page size ... but changed the unit of moving pages between memory and 3380s ... i.e. "big pages" were 10 4k pages (3380) that moved to disk and were fetched back in from disk. a page fault for any 4k page in a "big page" ... would result in the whole "big page" being fetched from disk.

re:
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF

"big pages" support shipped in VM HPO3.4 ... it was referred to as "swapper" ... however the traditional definition of swapping has been to move all storage associated with a task in single unit ... I've used the term of "big pages" ... since the implementation was more akin to demand paging ... but in 3380 track sized units (10 4k pages).

from vmshare archive ... discussion of hpo3.4
http://vm.marist.edu/~vmshare/browse.cgi?fn=34PERF&ft=MEMO

and mention of hpo3.4 swapper from melinda's vm history
http://vm.marist.edu/~vmshare/browse.cgi?fn=VMHIST05&ft=NOTE&args=swapper#hit

vmshare was online computer conferencing provided by tymshare to the SHARE organization starting in the mid-70s on tymshare's vm370 based commercial timesharing service ... misc. past posts referencing various vm370 based commercial timesharing services
https://www.garlic.com/~lynn/submain.html#timeshare

in the original 370, there was support for both 2k and 4k pages ... and the page size unit of managing real storage with virtual memory was also the unit of moving virtual memory between real storage and disk. the smaller page sizes tended to better optimize constrained real storage sizes (i.e. compared to 4k page sizes, an application might actually only need the first half or the last half of a specific 4k page, 2k page sizes could mean that the application could effectively execute in less total real storage).

the issue mentioned in this post
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
and
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

was that systems had shifted from having excess disk i/o resources to disk i/o resources being a major system bottleneck ... issue also discussed here about CKD DASD architecture
https://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives
https://www.garlic.com/~lynn/2006r.html#33 50th Anniversary of invention of disk drives

with the increasing amounts of real storage ... there was more and more a tendency to leveraging the additional real storage resources to compensate for the declining relative system disk i/o efficiency.

this was seen in the mid-70s with the vs1 "hand-shaking" that was done somewhat in conjunction with the ECPS microcode enhancement for the 370 138/148.
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

VS1 was effectively MFT laid out to run in single 4mbyte virtual address space with 2k paging (somewhat akin to os/vs2 svs mapping MVT to a single 16mbyte virtual address space). In vs1 hand-shaking, vs1 was run in a 4mbyte virtual machine with a one-to-one correspondence between the vs1 4mbyte virtual address space 2k virtual pages and the 4mbyte virtual machine address space.

VS1 hand-shaking effectively turned over paging to the vm virtual machine handler (vm would present a special page fault interrupt to the vs1 supervisor ... and then, when vm had finished handling the page fault, present a page complete interrupt to the vs1 supervisor). Part of the increase in efficiency was eliminating duplicate paging when VS1 was running under vm. However, part of the efficiency improvement was that VM was doing demand paging using 4k transfers rather than VS1's 2k transfers. In fact, there were situations where VS1 running on a 1mbyte 370/148 under VM had better thruput than VS1 running stand-alone w/o VM (the other part of this was that my global LRU replacement algorithm and my code pathlength from handling the page fault, through doing the page i/o, to completion were much better than the equivalent VS1 code).

there were two issues with 3380. over the years, disk i/o had become increasingly a significant system bottleneck ... more specifically, latency per disk access (arm motion and avg. rotational delay) was significantly lagging behind improvements in other system components. so part of compensating for disk i/o access latency was to significantly increase the amount transferred per operation. the other was that the 3380 increased the transfer rate by a factor of ten while its access performance improved by only a factor of 3-4. significantly increasing the amount transferred per access also better matched the changes in disk technology over time (note later technologies introduced raid, which did large transfers across multiple disk arms in parallel).
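
a back-of-the-envelope python sketch of why bigger transfers per access help (the access time and transfer rate here are hypothetical, not 3380 measurements):

    # delivered thruput = bytes moved / (access latency + transfer time)
    def delivered_mbyte_sec(xfer_bytes, access_ms, mbyte_sec):
        total_ms = access_ms + (xfer_bytes / (mbyte_sec * 1e6)) * 1000.0
        return xfer_bytes / (total_ms / 1000.0) / 1e6

    for size in (4096, 40960):          # one 4k page vs a 10-page "big page"
        print(size, "bytes/access ->", round(delivered_mbyte_sec(size, 24.0, 3.0), 2), "mbyte/sec")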

full track caching is another approach that attempts to leverage the relative abundance of electronic memory (in the drive or controller) to compensate for the relatively high system cost of each disk arm access. part of this is starting transfers (to the cache) as soon as the arm has settled ... even before the head has reached the specifically requested record. disk rotation is part of the bottleneck ... so full track caching goes ahead and transfers the full track during the rotation ... on the off chance that the application might have some need for the rest of the data on the track (the electronic memory in the cache is relatively free compared to the high system cost of each arm access and rotational delay).

there is a separate system optimization with respect to increasing the physical page size. making the physical page size smaller allowed for better optimizing relatively scarce real storage. with the shift in system bottleneck from constrained real storage to constrained i/o ... it was possible to increase the amount of data paged per operation w/o actually going to a larger physical page size (by transferring multiple pages at a time ... as in the "big page" scenario).

there is periodic discussion in comp.arch about the advantages of going to much bigger (hardware) page sizes ... 64kbytes, 256kbytes, etc ... as part of increasing TLB (table look-aside buffer) performance. the actual translation of a virtual address to a physical real storage address is cached in the TLB. A task switch may result in the need to change TLB entries ... where hundreds of TLB entries ... one for each application 4k virtual page ... may be involved. For some loads/configurations, the TLB reload latency may become a significant portion of the task switch elapsed time. Going to much larger page sizes ... reduces the number of TLB entries ... and possible TLB entry reloads ... necessary for running an application.
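
a quick python sketch of the TLB-entry arithmetic (the working set size and page sizes are hypothetical):

    # number of TLB entries needed to map a given working set
    working_set = 64 * 1024 * 1024          # say a 64mbyte working set
    for page in (4 * 1024, 64 * 1024, 256 * 1024, 1024 * 1024):
        print("%5dk pages -> %6d TLB entries" % (page // 1024, working_set // page))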

Trying to underdtand 2-factor authentication

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Trying to underdtand 2-factor authentication
Newsgroups: comp.security.misc
Date: Sat, 30 Sep 2006 06:56:17 -0600
"not_here.5.species8350@xoxy.net" <not_here.5.species8350@xoxy.net> writes:
Evidently one-time passwords can be used in concert with tokens.

How does this work?


from 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor

something you have
something you know
something you are

a hardware token can represent something you have technology and a password can represent something you know technology. typically multi-factor authentication is considered more secure because the different factors have different/independent vulnerabilities (i.e. pin/password considered countermeasure to lost/stolen token, modulo not writing the pin/password on the token).
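
as a generic editorial illustration (not the specific implementations discussed in the posts referenced below), a counter-based one-time password scheme, where the token and the verifier share a secret and a synchronized counter, might look like:

    import hmac, hashlib, struct

    def one_time_password(shared_secret, counter, digits=6):
        # hmac over the 8-byte big-endian counter, then dynamic truncation
        msg = struct.pack(">Q", counter)
        digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # each value is used once; the counter advances on both sides,
    # so a skimmed password is useless for a later authentication
    print(one_time_password(b"example-shared-secret", 1))

note that the pin/password factor (something you know) still has to be verified somewhere independent of the token for this to count as multi-factor.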

a couple old posts discussing one-time passwords implementation and possible vulnerabilities/exploits
https://www.garlic.com/~lynn/2003n.html#1 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#2 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#3 public key vs passwd authentication?

it is also possible to have a common vulnerability for different factors. misc posts discussing yes cards exploit
https://www.garlic.com/~lynn/subintegrity.html#yescard

where the token validates using static data (effectively a kind of pin/password). the static data can be skimmed and used to create a counterfeit token. the yes card operation involves the infrastructure validating the token ... and then asking the token if the entered pin was correct. the counterfeit yes cards are programmed to always answer YES, regardless of what pin is entered.

however, it is possible that the way the token validates itself is via some sort of one-time password technology (as opposed to some purely static data technology). in such a situation, the one-time password isn't independent of the token ... it is equivalent to the token (and therefore doesn't represent multi-factor authentication).

another possible variation is using the token to transport information used for authentication. in the yes card scenario, the token was used for both transporting and verifying the user's PIN ... however there wasn't an independent method of verifying that the user actually knew the PIN ... which in turn invalidated the assumption about multi-factor authentication having different/independent vulnerabilities (and therefore being more secure)

in the following reference discussing electronic passports, the token is used to carry personal information that can be used for something you are authentication (a guard checks the photo in the token against a person's face). the issue here is a question about the integrity of the information carried in the token (can it be compromised or altered). however, the token itself doesn't really represent any kind of something you have authentication (it is purely used to carry/transport the information for something you are authentication)
https://www.garlic.com/~lynn/aadsm25.htm#32 On-card displays

REAL memory column in SDSF

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REAL memory column in SDSF
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 30 Sep 2006 13:51:38 -0600
edgould1948@ibm-main.lst (Ed Gould) writes:
It would be interesting, I would think to have the "old timers" compare the code that was used in the "old days" against what is used today.

The code I think has been recoded many a time. Do you think the new people could show the old people new tricks or would it be the other way around?


some of this cropped up during the early days of os/vs2 svs development.

at the time, cp67 was one of the few relatively successful operating systems that supported virtual memory, paging, etc (at least in the ibm camp). as a result some of the people working on os/vs2 svs were looking at pieces of cp67, for example.

one of the big issues facing transition from real memory mvt to virtual memory environment was what to do about channel programs.

in a virtual machine environment, the guest operating system invokes channel programs ... that contain virtual addresses. channel operation runs asynchronously with real addresses. as a result, cp67 had a lot of code (module CCWTRANS) to create an exact replica of the virtual channel program ... but with real addresses (along with fixing the associated virtual pages at real addresses for the duration of the i/o operation). these were "shadow" channel programs.

svs had a comparable problem with channel programs generated in the application space, with the address passed to the kernel via EXCP/SVC0. the svs kernel was now also faced with scanning the virtual channel program and creating a replica/shadow version using real addresses. the initial work involved taking CCWTRANS from cp67 and crafting it into the side of the SVS development effort.
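
a toy python model (not the actual CCW format or the CCWTRANS logic) of what building a shadow channel program involves:

    # copy each virtual CCW, translate its data address through the page
    # table, and pin the real frame for the duration of the i/o
    # (ignores data that crosses a page boundary, which needs data chaining)
    PAGE = 4096

    def build_shadow(virtual_ccws, page_table, pinned_frames):
        shadow = []
        for op, vaddr, length in virtual_ccws:
            vpage, offset = divmod(vaddr, PAGE)
            frame = page_table[vpage]          # would fault the page in first if needed
            pinned_frames.add(frame)           # keep it fixed until i/o completes
            shadow.append((op, frame * PAGE + offset, length))
        return shadow

    page_table = {5: 0x12, 6: 0x07}            # virtual page -> real frame
    pins = set()
    print(build_shadow([("READ", 5 * PAGE + 256, 2048)], page_table, pins))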

one of the other issues was that the POK performance modeling group got involved in doing low-level event modeling of os/vs2 paging operations. one of their conclusions ... which I argued with them about ... was that replacing non-changed pages was more efficient than selecting a changed page for replacement. no matter how much arguing, they were adamant that on a page fault ... for a missing page ... the page replacement algorithm should look for a non-changed page to replace (rather than a changed page). their reasoning was that replacing a non-changed page took significantly less effort (the replaced page didn't have to be written out).

the issue is that in an LRU (least recently used) page replacement strategy ... you are looking to replace pages that have the least likelihood of being used in the near future. the non-changed/changed strategy resulted in less weight being placed on whether the page would be needed in the near future. this strategy went into svs and continued into the very late 70s (with mvs) before it was corrected.

finally it dawned on somebody that the non-changed/changed strategy resulted in replacing relatively high-use, commonly used linkpack executable (non-changed) pages before more lightly referenced, private application data (changed) pages.
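
a small illustrative python comparison of the two replacement preferences (reference times and page names are invented):

    pages = [
        # (name,            changed, last_reference_time)
        ("linkpack code A", False,   99),   # shared code, referenced constantly
        ("linkpack code B", False,   98),
        ("private data X",  True,     5),   # touched once, long ago
        ("private data Y",  True,     3),
    ]

    def pick_unchanged_first(pages):
        unchanged = [p for p in pages if not p[1]]
        return min(unchanged or pages, key=lambda p: p[2])

    def pick_lru(pages):
        return min(pages, key=lambda p: p[2])

    print("unchanged-first evicts:", pick_unchanged_first(pages)[0])  # a hot code page
    print("true LRU evicts:       ", pick_lru(pages)[0])              # the cold data page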

these days there is a lot of trade-off in trying to move data between memory and disk in really large block transfers .... and using excess electronic memory to compensate for disk i/o bottlenecks. in the vs1 handshaking scenario ... vs1 letting vm do its paging in 4k blocks was frequently significantly more efficient than paging in 2k blocks (it made less efficient use of real storage, but it was a reasonable trade-off since there were effectively more real storage resources than there were disk i/o access resources).

later "big pages" went to 40k (10 4k page) 3380 track demand page transfers. vm/hpo3.4 would typically do more total 4k transfers than vm/hpo3.2 (for the same workload and thruput) ... however, it could do the transfers with much fewer disk accesses; it made less efficient use of real storage, but more efficient use of disk i/o accesses (again trading off real storage resource efficiency for disk i/o resource efficiency).

... or somewhat reminiscent of a line that I started using as an undergraduate in connection with dynamic adaptive scheduling; schedule to the (system thruput) bottleneck. misc. past posts mentioning past dynamic adaptive scheduling work and/or the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare

previous posts in this thread:
https://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF

misc past posts mentioning os/vs2 starting out using CCWTRANS from cp67
https://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#14 Page Table - per OS/Process
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005p.html#18 address space
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#41 Instruction Set Enhancement Idea
https://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006j.html#5 virtual memory
https://www.garlic.com/~lynn/2006j.html#27 virtual memory
https://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks
https://www.garlic.com/~lynn/2006o.html#27 oops

REAL memory column in SDSF

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REAL memory column in SDSF
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 30 Sep 2006 18:51:32 -0600
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
Had to be reduced to 9 pages (36KB) because the 3880/3380 would miss the start of the next track (RPS miss) on a chained multi-block big page transfer because of overhead.

processing latency ... this was if you wanted to do multiple consecutive full track transfers ... with head-switch to different tracks (on the same cylinder; aka arm position) w/o losing unnecessary revolutions ... aka being able to do multiple full track transfers in the same number of disk rotations.

as already discussed (in some detail) ... 3880 disk controller processed control commands much slower than the previous 3830 disk controller
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF

which meant that it was taking longer elapsed time between commands ... while the disks continued to rotate.

there had been earlier detailed studies regarding the elapsed time to do a head switch on 3330s ... in order to read/write "consecutive" blocks on different tracks (on the same cylinder) w/o unproductive disk rotation. for intra-cylinder head switches, the official 3330 specs called for a 110-byte dummy spacer record (between 4k page blocks) that allowed time for processing the head switch command ... while the disk continued to rotate. the rotation of the dummy spacer block overlapped with the processing of the head switch command ... allowing the head switch command processing to complete before the next 4k page block had rotated past the r/w head.

the problem was that a 3330 track only had enuf room for three 4k page blocks with 101-byte dummy spacer records (i.e. with the shorter spacer, a slow enough controller/channel combination wouldn't finish processing the head switch command before the start of the next 4k record had already rotated past the r/w head).
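
some back-of-the-envelope arithmetic for the head-switch window (using nominal 3330 figures of 3600 rpm and roughly 13,030 bytes per track), as a python sketch:

    # the dummy spacer record has to rotate past the head in the time it
    # takes the channel/controller to process the head-switch command
    ROTATION_MS = 60_000.0 / 3600       # 3600 rpm -> about 16.7 ms per revolution
    TRACK_BYTES = 13_030                # nominal 3330 track capacity

    for spacer in (101, 110):
        window_us = spacer / TRACK_BYTES * ROTATION_MS * 1000.0
        print("%d-byte spacer -> about %.0f microseconds to finish the switch" % (spacer, window_us))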

it turns out that both channels and disk controllers introduced processing delay/latency. so i put together a test program that would format a 3330 track with different sized dummy spacer blocks and then test whether a head switch was performed fast enuf, before the target record had rotated past the r/w head.

i tested the program with 3830 controllers on 4341, 158, 168, 3031, 3033, and 3081. it turns out that with a 3830 in combination with the 4341 and 370/168, the head switch command was processed within the 101-byte rotation latency.

the combination of 3830 and 158 didn't process the head switch command within the 101-byte rotation (resulting in a missed revolution). the 158 had integrated channel microcode sharing the 158 processor engine with the 370 microcode. all the 303x processors had an external "channel director" box. the 303x channel director boxes were a dedicated 158 processing engine with only the integrated channel microcode (w/o the 370 microcode) ... and none of the 303x processors could handle the head switch processing within the 101-byte dummy block rotation latency. the 3081 channels appeared to have similar processing latency to the 158 and 303x channel director (not able to perform the head switch operation within the 101-byte dummy block rotation).

i also got a number of customer installations to run the test with a wide variety of processors and both 3830 controllers and oem clone disk controllers.

misc. past posts discussing the 3330 101/110 dummy block for head switch latency:
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2004d.html#64 System/360 40 years old today
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
https://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#49 can a program be run withour main memory?

Very slow booting and running and brain-dead OS's?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Very slow booting and running and brain-dead OS's?
Newsgroups: alt.folklore.computers
Date: Sun, 01 Oct 2006 09:48:59 -0600
jmfbahciv writes:
Yes. It was very crude. When hardware was as iffy as it used to be, being able to swap disk drives was nice to do. In addition that's what SMP was starting to allow with _any_ piece of gear. Remember the RP04s and RP06s had removable drive number plugs?

the vm370 based commercial timesharing services
https://www.garlic.com/~lynn/submain.html#timeshare

started running into this problem as they started acquiring customers all around the world (early to mid 70s) ... and were faced with providing 7x24 service.

one of the growing problems was that the field service people needed to take over a machine once a month (or sometimes more often) for service (and with 7x24 operation ... the traditional weekend sat or sun midnight period was becoming less and less acceptable). at least some of the service required the whole machine ... where they would run various kinds of stand-alone diagnostics.

to compensate, they ran loosely-coupled (cluster) configurations and added software support for process migration across processors in the cluster. they even claimed to be able to migrate a process from a cluster in a datacenter on the east coast to a cluster in a datacenter on the west coast ... modulo the amount of context/data that was required ... back in the days of 56kbit telco links.

much later when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
we coined the terms disaster survivability and geographic survivability
https://www.garlic.com/~lynn/submain.html#available

now, fast reboot had already been done back in the late 60s for cp67 ... as cp67 systems were starting to move into more and more critical timesharing (and starting to offer 7x24 service). this then carried forward into vm370.

old tale about how fast cp67 rebooted (after a problem, in contrast to multics)
http://www.multicians.org/thvv/360-67.html

mentioning cp67 crashing (and restarting) 27 times in one day.

cp67 had been done on the 4th flr of 545 tech sq, multics on the 5th flr of 545 tech sq ... and for some reason i believe MIT USL was in one of the other tech sq bldgs (across the courtyard). tech sq had three 10 story bldgs (9 office floors; was there a 10th?) forming a courtyard ... with a two-story Polaroid bldg on the 4th (street) side (i've mentioned before that the 4th floor science center overlooked land's balcony and once watched a demo of the unannounced sx-70 being done on the balcony).

the cause of the multiple cp67 crashes was a local software modification that had been applied to the USL system. I had added ascii/tty support to cp67 when i was an undergraduate at the university ... and played some games with using one byte values. the local USL modification was to increase the maximum tty terminal line size from 80 chars to something like 1200(?) for some sort of new device (some sort of plotter?) over at harvard. the games with one byte values resulted in calculating incorrect lengths when the max. line size was increased past 255 (which then resulted in the system failing).

some more on tech sq:
http://www.multicians.org/tech-square.html

note that in the above description ... the (IBM) boston programming center also shared the 3rd floor of 545 tech sq. when the cp67 group split off from the science center, they moved to the 3rd flr, absorbing the boston programming center. as the group expanded and morphed into the vm370 group ... it outgrew the 3rd floor and moved out to the old sbc bldg in burlington mall (vacated when sbc was sold/transferred to cdc).

REAL memory column in SDSF

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REAL memory column in SDSF
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 01 Oct 2006 14:16:40 -0600
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
8 bits of ECC for 64 bits of data.

At one point the trade press was talking about low cost block oriented random access memory (BORAM), which would have been a natural for ES. Unfortunately, that doesn't seem to have materialized, or at least BORAM failed to maintain an adequate price lead.


from previous post
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF

reference in previous post
http://www.research.ibm.com/journal/rd/435/spainhower.pdf

... from reference above:
When a chip is b bits (b >= 2) wide, an access to a 64-bit data word may have a b-bit block or byte error. There are codes to variously correct single b-bit errors and detect double b-bit errors. For G3 and G4, a code with 4-bit correction capability (S4EC) was implemented. Because the system design included dynamic on-line repair of chips with massive failures, it was not necessary to design a (78, 64) code which could both correct one 4-bit error and detect a second 4-bit error (D4ED). Such a code would have required an extra chip per checking block. The (76, 64) S4EC/DED ECC implemented on G3 and G4 is designed to ensure that all single-bit failures of one chip (and a very high probability of double- and triple-bit failures) occurring in the same doubleword as a 1- to 4-bit error on a second chip are detected [15]. G5 returns to single-bit-per-chip ECC and is therefore able to again use a less costly (72, 64) SEC/DED code and still protect the system from catastrophic failures caused by a single array-chip failure.

... snip ...

and detailed 3090 description
http://www.research.ibm.com/journal/sj/251/tucker.pdf

... from above
Both the central and expanded storages have error-correcting codes. The central storage has a single error-correcting, double-error-detecting code on each double word of data. The code is designed to detect all four-bit errors on a single card. The correcting code is passed to the caches on a fetch operation so that it can cover transmission errors as well as storage-array errors. The expanded storage is even more fault-tolerant. Each quad-word of the expanded storage has a double-error-correcting, triple-error-detecting code. Again, a four-bit error is always detected if caused by a single-card-level failure.

REAL memory column in SDSF

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REAL memory column in SDSF
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 01 Oct 2006 14:56:57 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
say 6bits of storage key per 4k bytes is lost in the noise? (2k storage keys as well as 2k virtual pages having been dropped around 3081 xa time-frame) ... if you wanted to worry about something ... there was 16bit ecc for every 64bit double word (or 2bits per 8bit byte ... as opposed to parity bit per 8bit byte) ... optimizations were trying to get failure coverage (better than simple 1bit/byte parity) with less than 80bits (for 64bit of data) ... like 78bits, 72bits, etc ...

re:
https://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF

thread from vmshare computer conferencing on how to get old 2k key based operating systems to run under vm on 3081k having only support for 4k keys.
http://vm.marist.edu/~vmshare/browse.cgi?fn=2K_SCP&ft=MEMO

Was FORTRAN buggy?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Mon, 02 Oct 2006 07:31:23 -0600
jmfbahciv writes:
DEC had its first date problem in 1975. The project was called DATE75 and was a specification in the software notebooks. I don't know if Al ever got those old specs that have been removed. It would also have been a DOC file on BLKC:.

date stuff from old gcard ios3270 that i did some q&d conversion to html
https://www.garlic.com/~lynn/gcard.html#16

tod clock was part of original 370 ... even before virtual memory for 370 had been announced.

i have some memory of spending 3 months in a taskforce/effort discussing the tod clock ... one item was the original specification that the clock epoch was the 1st day of the century ... and did the century start 01jan1900 or 01jan1901 (and for some reason, for a lot of early internal testing, people repeatedly set the epoch to neither, but to 01jan1970). the other topic of interest that went round and round was how to handle leap seconds.
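
as an editorial illustration of the epoch arithmetic (in the 370 tod clock, bit 51 increments every microsecond, so a value of 4096 is one microsecond; the adopted epoch was 01jan1900), a python sketch converting between tod and unix time:

    import time

    SECONDS_1900_TO_1970 = 2_208_988_800     # 70 years incl. 17 leap days (ignores leap seconds)

    def tod_to_unix(tod):
        return tod / 4096.0 / 1e6 - SECONDS_1900_TO_1970

    def unix_to_tod(unix_seconds):
        return int((unix_seconds + SECONDS_1900_TO_1970) * 1e6 * 4096)

    print(time.ctime(tod_to_unix(unix_to_tod(time.time()))))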

past postings mentioning leap seconds
https://www.garlic.com/~lynn/2000.html#4 Computer of the century
https://www.garlic.com/~lynn/2000d.html#42 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2001f.html#53 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2004m.html#37 Multi-processor timing issue
https://www.garlic.com/~lynn/2005t.html#15 Best practice for TOD clock
https://www.garlic.com/~lynn/2006g.html#30 TOD Clock the same as the BIOS clock in PCs?

Was FORTRAN buggy?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was FORTRAN buggy?
Newsgroups: alt.folklore.computers
Date: Mon, 02 Oct 2006 07:38:46 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
date stuff from old gcard ios3270 that i did some q&d conversion to html
https://www.garlic.com/~lynn/gcard.html#17


re:
https://www.garlic.com/~lynn/2006r.html#44 Was FORTRAN buggy?

oops, finger slip, that should be
https://www.garlic.com/~lynn/gcard.html#16

Trying to design low level hard disk manipulation program

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Trying to design low level hard disk manipulation program
Newsgroups: comp.arch
Date: Mon, 02 Oct 2006 11:53:40 -0600
dgay writes:
I think you missed the point that the output of readdir is (and should be) unrelated to the order presented to the user. Why is the file system collating anyway? Now I can see the value of a library that collates file names according to some system-wide convention...

one of the results of changes originally made by (i think?) perkin/elmer to the cms mfd in the early 70s was to sort the filenames. then when an application was looking for a specific filename ... the lookup could do much better than a linear search (and searches better than linear were dependent on being matched to collating/sort sequences).

it really was a significant change for directories that happened to have a couple thousand filenames (as on some number of high-use systems).

i recently ran into something similar using sort on filenames and doing something other than linear search ... where the sort command's default collating sequence changed and moved how the period was handled (it showed up between capital H and capital I). i had to explicitly set "LC_ALL=C" to get sort back to working the way i was used to.

a similar, but different problem we did long ago and far away ... when we did an online telephone book for several hundred thousand corporate employees. for lots of reasons ... the names/numbers were kept in a linear flat file ... but sorted. the search was radix ... based on measured first letter frequency: taking the size of the file and probing part way into the file based on the first letters of the search argument and the related letter frequencies for names (originally compiled into the search program). it could frequently get within the appropriate physical record within a probe or two (w/o requiring a separate index or other infrastructure).
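
a rough python sketch of the frequency-probe lookup (the first-letter frequencies and record size here are invented stand-ins for the measured, compiled-in values):

    import string

    CUM_FREQ = {c: i / 26.0 for i, c in enumerate(string.ascii_uppercase)}  # uniform stand-in
    NAMES_PER_RECORD = 4                       # pretend physical record size

    def lookup(sorted_names, key):
        records = [sorted_names[i:i + NAMES_PER_RECORD]
                   for i in range(0, len(sorted_names), NAMES_PER_RECORD)]
        r = min(int(CUM_FREQ.get(key[0], 0.0) * len(records)), len(records) - 1)
        probes, direction = 0, 0
        while 0 <= r < len(records):
            probes += 1
            rec = records[r]
            if key < rec[0] and direction <= 0:
                r, direction = r - 1, -1       # step back a record
            elif key > rec[-1] and direction >= 0:
                r, direction = r + 1, +1       # step forward a record
            else:
                return [n for n in rec if n == key], probes
        return [], probes

    names = sorted(["ADAMS", "BAKER", "DAVIS", "JONES", "MILLER", "SMITH", "WHEELER", "YOUNG"])
    print(lookup(names, "SMITH"))              # found within a probe or two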

we had a special collating/sort order assuming that names (and search arguments) had no blanks (even tho any names with embedded blanks were carried in the actual data; the ignore-blanks behavior was a special sort characteristic/option). in the name scenario ... name collisions/duplicates were allowed ... so a search result might present multiple matches.
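
a rough sketch of the probe idea (made-up frequencies and record format, not the actual phone-book program): fixed-length records sorted by name, with a first-letter frequency table compiled in; the real thing used more than just the first letter, but the estimate below already shows how a probe can land near the right physical record without any separate index:

#include <stdio.h>

#define RECLEN 64L   /* hypothetical fixed-length record, sorted by name */

/* made-up cumulative first-letter frequencies: cumfreq[c-'A'] is the
   fraction of names that start with a letter before c (the real table
   was measured from the actual phone-book data) */
static const double cumfreq[26] = {
    0.00, 0.04, 0.09, 0.16, 0.21, 0.23, 0.27, 0.32, 0.39, 0.41,
    0.43, 0.48, 0.53, 0.61, 0.65, 0.67, 0.72, 0.73, 0.79, 0.87,
    0.92, 0.94, 0.95, 0.99, 0.993, 0.997
};

/* estimate the byte offset of the first matching record; the caller
   then reads that physical block and scans locally for the name(s) */
long estimate_offset(const char *key, long filesize)
{
    if (key[0] < 'A' || key[0] > 'Z')
        return 0;
    long nrec = filesize / RECLEN;
    long rec  = (long)(cumfreq[key[0] - 'A'] * nrec);
    return rec * RECLEN;
}

int main(void)
{
    long filesize = 300000L * RECLEN;   /* several hundred thousand entries */
    printf("probe near offset %ld for SMITH\n",
           estimate_offset("SMITH", filesize));
    return 0;
}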

Mickey and friends

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mickey and friends
Newsgroups: alt.folklore.computers
Date: Tue, 03 Oct 2006 08:49:21 -0600
jmfbahciv writes:
But it wasn't working. From what little I've read so far, New York was getting big eyes for acquisitions. Mass had a revolt. I'm pretty sure that Britain and France weren't staying out of it. I don't know about Spain. I found a book that supposedly talks about all the various democracy experiments that were tried by all the states during those years. It looks like the book is the m volume of an n set so it may not have the data I want.

past threads/posts
https://www.garlic.com/~lynn/2005n.html#14 Why? (Was: US Military Dead during Iraq War
https://www.garlic.com/~lynn/2006b.html#30 Empires and Imperialism

from above:
my wife has just started a set of books that had been awarded her father at west point ... they are from a series of univ. history lectures from the (18)70/80s (and the books have some inscription about being awarded to her father for some excellence by the colonial daughters of the 17th century).

part of the series covers the religious extremists that colonized new england and that the people finally got sick of the extreme stuff that the clerics and leaders were responsible for and eventually migrated to more moderation. it reads similar to some of lawrence's descriptions of religious extremism in the seven pillars of wisdom. there is also some thread that notes that w/o the democratic influence of virginia and some of the other moderate colonies ... the extreme views of new england would have resulted in a different country.

somewhat related is a story that my wife had from one of her uncles several years ago. salem had sent out form letters to descendants of the town's inhabitants asking for contributions for a memorial. the uncle wrote back saying that since their family had provided the entertainment at the original event ... he felt that their family had already contributed sufficiently.


... snip ... and ...
i was recently reading an old history book (published around 1880) that claimed it was extremely fortunate that the declaration of independence (as well as other founding efforts) was much more influenced by scottish descendants in the (state of) virginia area ... than by any english influence from the (state of) mass. area ... and that the USA would be a markedly different nation if more of the Massachusetts/English influence had prevailed (as opposed to the Virginia/Scottish influence).

... snip ...

cold war again

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: cold war again
Newsgroups: alt.folklore.computers
Date: Tue, 03 Oct 2006 11:52:37 -0600
wclodius writes:
What was not known until much later was that the Russians' initial attempt at an ICBM system required extensive maintenance and was difficult to fuel quickly. It was the primary ancestor of their current space launch systems, but was a failure as an ICBM system.

for something completely different, a recently scanned uniform patch ... heavily post-processed to try to clean up effects from the fabric
https://www.garlic.com/~lynn/spcommand.jpg

uniform patch

Seeking info on HP FOCUS (HP 9000 Series 500) and IBM ROMP CPUs from early 80's

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Seeking info on HP FOCUS (HP 9000 Series 500) and IBM ROMP CPUs from early 80's
Newsgroups: comp.arch
Date: Tue, 03 Oct 2006 12:27:29 -0600
guy.larri writes:
IBM ROMP (IBM RT PC)
==================

This was a RISC machine from IBM released in 1986. Again a 32-bit machine with 16 and 32 bit instructions.


in the past couple yrs ... somebody advertised an "aos" rt/pc (machine, software, and documentation) in alt.folklore.computers.

originally ROMP was going to be an austin OPD (office products division) follow-on for the displaywriter. when that got canceled ... the group looked around and decided to try and revive the box as a unix workstation. they got the group that had done the at&t unix port for pc/ix ... to do one for romp ... and you got the rt/pc and aix.

the palo alto group had been working on doing a Berkeley port to 370. at some point after the rt/pc first became available, the decision was to retarget the effort from 370 to rt/pc ... and you got "aos".

there was a little discord between austin and palo alto over aos.

the original austin group was using cp.r and pl.8 for the displaywriter work. as part of retargeting romp from the displaywriter to a unix workstation ... it was decided that the austin group could implement a VRM (virtual resource manager) in pl.8. the group that had done the pc/ix port would then port to the abstract VRM layer ... rather than to the bare metal.

palo alto then did the berkeley port for aos to the bare metal. the problem was that austin had claimed that the total VRM development effort plus the port to the VRM interface was less effort than any straight port to the bare metal. unfortunately(?), palo alto's port to the bare metal was done with very few resources and very little effort.

misc. past posts mentioning 801, romp, rios, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

and doing a q&d, trivial search with a search engine ... the very first reference
http://domino.watson.ibm.com/tchjr/journalindex.nsf/0/f6570ad450831a2485256bfa00685bda?OpenDocument

then two wikipedia references ... and then 4, 5, 6, 7, ....
http://www.research.ibm.com/journal/sj/264/ibmsj2604D.pdf
http://www.research.ibm.com/journal/sj/261/ibmsj2601H.pdf
http://www.landley.net/history/mirror/ibm/Cocke.htm
http://www.thocp.net/timeline/1974.htm
http://www.islandnet.com/~kpolsson/workstat/
http://www.devx.com/ibm/Article/20944
http://www.experiencefestival.com/romp0944
http://www.rootvg.net/column_risc.htm
http://www.informatik.uni-trier.de/~ley/db/journals/ibmsj/ibmsj26.html



