List of Archived Posts
2025 Newsgroup Postings (07/26 - )
- Library Catalog
- Chip Design (LSM & EVE)
- Mainframe Networking and LANs
- Mainframe Networking and LANs
- Mainframe Networking and LANs
- SLAC and CERN
- SLAC and CERN
- IBM ES/9000
- IBM ES/9000
- IBM ES/9000
- IBM Mainframe Efficiency
- IBM 4341
Library Catalog
From: Lynn Wheeler <lynn@garlic.com>
Subject: Library Catalog
Date: 26 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#118 Library Catalog
By the early 80s, the online NIH NLM had a problem with answers to
queries: it would return thousands of answers, and as additional terms
were added, out around 6-8 terms, it would go bimodal between thousands
of answers and zero. Along came the "Grateful Med" query app on Apple
... instead of returning the answers, it returned the count of answers,
and the holy grail became finding a query with more than zero and fewer
than 100 answers.
"Grateful Med" refs:
https://pubmed.ncbi.nlm.nih.gov/10304249/
https://pubmed.ncbi.nlm.nih.gov/2407046/
https://pubmed.ncbi.nlm.nih.gov/35102837/
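aside: the count-then-refine approach boils down to a short loop; a
minimal sketch (the search_count() function here is hypothetical,
standing in for the NLM query interface, not actual Grateful Med code):

# count-driven query refinement sketch (hypothetical search_count())
def refine_query(terms, search_count, lo=1, hi=99):
    # add terms one at a time, keeping only those that leave a non-zero
    # count, stopping once the count lands in the usable window (>0, <100)
    query = []
    for term in terms:
        candidate = query + [term]
        count = search_count(candidate)
        if count == 0:
            continue            # term too restrictive, skip it
        query = candidate
        if lo <= count <= hi:
            break               # "holy grail": small, non-empty answer set
    return query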
--
virtualization experience starting Jan1968, online at home since Mar1970
Chip Design (LSM & EVE)
From: Lynn Wheeler <lynn@garlic.com>
Subject: Chip Design (LSM & EVE)
Date: 27 Jul, 2025
Blog: Facebook
70s, IBM Los Gatos lab did the LSM (Los Gatos State Machine) ... that
ran chip design logic verification, 50k times faster than IBM 3033
... included clock support that could be used for chips with
asynchronous clocks and analog circuits ... like electronic/thin-film
disk head chips.
Then in the 80s there was EVE (Endicott Verification Engine) that ran
faster and handled larger VLSI chips (than LSM), but assumed
synchronous clock designs. Disk Engineering had been moved offsite
(temporarily to bldg "86", just south of main plant site, while bldg
"14" was getting seismic retrofit) and got an EVE.
I also had HSDT project (T1 and faster computer links, both
terrestrial and satellite) mostly done out of LSG, that included
custom designed 3-dish Ku-band satellite system (Los Gatos, Yorktown,
and Austin). IBM San Jose had done T3 Collins digital radio microwave
complex (centered bldg 12 on main plant site). Set up T1 circuit from
bldg29 (LSG) to bldg12, and then bldg12 to bldg86. Austin was in the
process of doing the 6-chip RIOS for what becomes RS/6000 ... and being
able to get fast turnaround of chip designs between Austin and the
bldg86 EVE is credited with helping bring the RIOS chip design in a
year early.
trivia: when transferred from Science Center to Research in San Jose,
got to wander around Silicon Valley datacenters, including disk
engineering/bldg14 and product test/bldg15 across the street. They
were running 7x24, prescheduled, stand-alone testing and commented
that they had recently tried MVS, but it had 15min MTBF (in that
environment), requiring manual reboot. I offered to rewrite the I/O
supervisor, making it bullet-proof and never fail, allowing any amount
of on-demand, concurrent testing ... greatly improving productivity.
Bldg15 then got an engineering 3033 (first outside POK 3033 processor
engineering) and, since disk testing only used a percent or two of CPU,
we scrounged a 3830 disk controller and a 3330 disk drive string and set
up our own private online service. At the time the air-bearing
simulation (for thin-film disk heads) was getting a couple of turnarounds
a month on the SJR 370/195. We set it up on the bldg15 3033 and they were
able to get several turnarounds a day. The 3370 was the first thin-film head.
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/
1988, get HA/6000 project (also IBM Los Gatos lab), initially for
NYTimes to migrate their newspaper system (ATEX) off VAXCluster to
RS/6000. I then rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scaleup with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (that have VAXCluster support in the same source base with UNIX
... Oracle, Sybase, Ingres, Informix). Was working with Hursley 9333s
and hoping they could be upgraded to interoperate with FCS (planning for
the HA/CMP high-end).
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid-92 and 128-system
clusters ye-92. Mid Jan1992, presentations with FSD convince them to
use HA/CMP cluster scaleup for gov. supercomputer bids. Late Jan1992,
cluster scaleup is transferred to be announced as IBM Supercomputer
(for technical/scientific *ONLY*) and we are told we can't work with
anything that has more than 4-systems (we leave IBM a few months
later).
Some concern that cluster scaleup would eat the mainframe .... 1993
MIPS benchmark (industry standard, number of program iterations
compared to reference platform):
ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS
The executive we had been reporting to goes over to head up
Somerset/AIM (Apple, IBM, Motorola) ... single chip Power/PC with
Motorola 88k bus enabling shared-memory, tightly-coupled,
multiprocessor system implementations.
Sometime after leaving IBM, brought into a small client/server startup
as consultant. Two former Oracle people (who were in the
Ellison/Hester meeting) are there, responsible for something they call
"commerce server", and want to do payment transactions on the
server. The startup also invented this technology they call SSL/HTTPS
that they want to use. The result is now frequently called
e-commerce. I have responsibility for everything between webservers
and the payment networks.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
electronic commerce & payment networks
https://www.garlic.com/~lynn/subnetwork.html#gateway
posts mentioning Los Gatos LSM and EVE (endicott verification engine)
https://www.garlic.com/~lynn/2023f.html#16 Internet
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2021i.html#67 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021c.html#53 IBM CEO
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2014b.html#67 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014b.html#5 IBM Plans Big Spending for the Cloud ($1.2B)
https://www.garlic.com/~lynn/2010m.html#52 Basic question about CPU instructions
https://www.garlic.com/~lynn/2007o.html#67 1401 simulator for OS/360
https://www.garlic.com/~lynn/2007l.html#53 Drums: Memory or Peripheral?
https://www.garlic.com/~lynn/2007h.html#61 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
--
virtualization experience starting Jan1968, online at home since Mar1970
Mainframe Networking and LANs
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Networking and LANs
Date: 27 Jul, 2025
Blog: Facebook
Mid-80s, the communication group was fighting release of mainframe
TCP/IP support. When they lost, they changed tactics and said that since
they had corporate responsibility for everything that crossed
datacenter walls, it had to be released through them. What shipped got
aggregate 44kbytes/sec using nearly a whole 3090 processor. I then did
RFC1044 support and in some tuning tests at Cray Research between a Cray
and a 4341, got sustained 4341 channel throughput using only a modest
amount of 4341 CPU (something like 500 times improvement in bytes moved
per instruction executed).
There were also claims about how much better token-ring was than
ethernet. IBM AWD (workstation) had done their own cards for PC/RT
(16bit, PC/AT bus) including 4mbit token-ring card. Then for RS/6000
(w/microchannel), they were told they could not do their own cards,
but had to use the (communication group heavily performance kneecapped)
PS2 cards (example PS2 16mbit T/R card had lower card throughput than
the PC/RT 4mbit T/R card).
New Almaden Research bldg was heavily provisioned with IBM CAT wiring,
supposedly for 16mbit T/R, but found that running 10mbit ethernet
(over same wiring) had higher aggregate throughput (8.5mbit/sec) and
lower latency. Also, the $69 10mbit Ethernet cards had much higher
card throughput (8.5mbit/sec) than the $800 PS2 16mbit T/R cards. Also,
for a 300 workstation configuration, the price difference
(300*$800=$240,000 versus 300*$69=$20,700, saving $219,300) could get
several high performance TCP/IP routers with IBM (or non-IBM) mainframe
channel interfaces, 16 10mbit Ethernet LAN interfaces, Telco T1 & T3
options, 100mbit/sec FDDI LAN options and other features ... say the 300
workstations could be spread across 80 high-performance 10mbit
Ethernet LANs.
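back-of-envelope version of that arithmetic (the five-router count is
just inferred from 80 LANs at 16 Ethernet interfaces per router; router
prices aren't given above):

# 300-workstation card-cost comparison and LAN fan-out
workstations = 300
enet_card, tr_card = 69, 800            # $69 ethernet vs $800 PS2 16mbit T/R
savings = workstations * (tr_card - enet_card)
routers, lans_per_router = 5, 16        # inferred: 5 routers * 16 interfaces = 80 LANs
lans = routers * lans_per_router
print(savings, lans, workstations / lans)   # 219300 80 3.75 workstations/LAN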
Late 80s, a senior disk engineer got a talk scheduled at the internal,
annual, world-wide communication group conference, supposedly on 3174
performance. However, he opened the talk with the comment that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing a drop in disk sales with
data fleeing mainframes to more distributed-computing friendly
platforms. They had come up with a number of solutions, but they were
constantly being vetoed by the communication group (which had a
stranglehold on mainframe datacenters with their corporate ownership
of everything that crossed datacenter walls). The disk division's
partial countermeasure was investing in distributed computing
startups using IBM disks, and we would periodically get asked to drop
by the investments to see if we could offer any help.
It wasn't just disks, and a couple of years later IBM has one of the
largest losses in the history of US companies and was being reorged into
the 13 "baby blues" in preparation for breaking up the company (a
take-off on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left the company, but get a call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, and he (somewhat) reverses the breakup (but it
wasn't long before the disk division was "divested").
other trivia: 1980, STL (since renamed SVL) was bursting at the seams
and was moving 300 people (& 3270s) from the IMS group to an offsite
bldg, with dataprocessing back to the STL datacenter. They had tried
"remote 3270", but found the human factors totally unacceptable. I get
con'ed into doing channel extender support, allowing channel attached
3270 controllers to be placed at the offsite bldg with no perceptible
difference in human factors. An unintended side-effect was those IMS
168-3 systems saw 10-15% improvement in throughput. The issue was STL
had been spreading the directly channel attached 3270 controllers
across the same channels as the 3830/3330 disks. The channel extender
boxes had much lower channel busy (for the same amount of 3270 activity),
reducing interference with disk throughput (and there was some consideration
of moving *ALL* 3270 channel attached controllers to channel extender boxes).
more trivia: After channel-extender, early 80s, I had got HSDT, T1 and
faster computer links (both satellite and terrestrial) and lots of
battles with the communication group (in the 60s, IBM had the 2701
supporting T1, but with the 70s move to SNA/VTAM and its issues,
controller links were capped at 56kbits/sec). Was also working with the
NSF director and was supposed to get $20M to interconnect the NSF
supercomputing centers. Then congress cuts the budget, some other things
happen and eventually an RFP is released (in part based on what we
already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.
1988, IBM branch asks if I could help LLNL (national lab) standardize
some serial stuff they were working with, which quickly becomes
fibre-channel standard ("FCS", including some stuff I had done in
1980, initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). Then
POK manages to get their stuff released as ESCON (when it is already
obsolete, initially 10mbyte/sec, later upgraded to 17mbyte/sec). Then
some POK engineers become involved with "FCS" and define a
heavy-weight protocol that significantly reduces throughput,
eventually ships as FICON. 2010, z196 "Peak I/O" benchmark gets 2M
IOPS using 104 FICON (20K IOPS/FICON). Also in 2010, FCS was announced
for E5-2600 server blades claiming over a million IOPS (two such FCS
having higher throughput than 104 FICON). Note: IBM docs recommend that
SAPs (system assist processors that do the actual I/O) be kept to 70%
CPU, or about 1.5M IOPS. Also, no CKD DASD has been made for decades,
all being simulated on industry standard fixed-block devices.
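rough arithmetic behind those I/O numbers (just the figures quoted above):

# z196 "Peak I/O" vs FCS comparison
z196_iops, ficon_count = 2_000_000, 104
per_ficon = z196_iops / ficon_count      # ~19.2K IOPS/FICON (rounded to 20K above)
fcs_iops = 1_000_000                     # 2010 E5-2600 FCS claim: over a million IOPS
sap_ceiling_iops = 1_500_000             # SAPs held to 70% CPU, about 1.5M IOPS
print(round(per_ficon), 2 * fcs_iops > z196_iops)   # two such FCS out-run 104 FICON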
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
Mainframe Networking and LANs
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Networking and LANs
Date: 27 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
long-ago and far away: co-worker responsible for the science center
wide-area network (that grows into the internal corporate, non-SNA,
network; larger than the arpanet/internet from just about the beginning
until sometime mid/late 80s, about the time it was forced to convert to
SNA; the technology had also been used for the corporate sponsored univ
BITNET). Reference by one of the science center inventors of GML
(precursor to SGML & HTML) in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
--
virtualization experience starting Jan1968, online at home since Mar1970
Mainframe Networking and LANs
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Networking and LANs
Date: 27 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025d.html#3 Mainframe Networking and LANs
misc. other details ...
OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems
Interconnection standards to become the global protocol for computer
networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture (Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."
... snip ...
The original JES NJE came from HASP (that had "TUCC" in card cols 68-71)
... and had numerous problems with the internal network. It started
out using spare entries in the 255-entry pseudo device table
... usually about 160-180 ... however the internal network had quickly
passed 255 nodes in the 1st half of the 70s (before NJE & VNET/RSCS were
released to customers) ... and JES would trash any traffic where the
origin or destination node wasn't in its local table. Also, the
network fields had been somewhat intermixed with job control fields
(compared to the cleanly layered VM370 VNET/RSCS) and traffic between
MVS/JES systems at different release levels had a habit of crashing the
destination MVS (infamous case of Hursley (UK) MVS systems crashing
because of changes in a San Jose MVS JES). As a result, MVS/JES
systems were restricted to boundary nodes behind a protected
VM370/RSCS system (where a library of code had accumulated that knew
how to rewrite NJE headers between the origin node and the immediately
connected destination node). JES NJE was finally upgraded to support a
999-node network ... but only after the internal network had passed 1000
nodes.
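toy illustration of the node-table behavior described above (made-up
node names, not actual JES code): with a fixed-size local table, traffic
for any origin or destination not in the table simply gets discarded,
which is what bit the >255-node internal network.

# fixed-size node-table sketch (hypothetical, not JES code)
MAX_NODES = 255                              # pseudo device table limit
local_table = {"NODEA", "NODEB", "NODEC"}    # made-up node names
assert len(local_table) <= MAX_NODES

def forward(origin, destination, payload):
    # JES-style behavior: "trash" traffic for nodes not in the local table
    if origin not in local_table or destination not in local_table:
        return None                          # traffic silently dropped
    return (destination, payload)            # otherwise pass it along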
HASP, ASP, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
For a time, the person responsible for AWP164 (which becomes APPN) and I
reported to the same executive ... and I would periodically kid him that
he should come over and work on real networking (TCP/IP) because the
SNA people would never appreciate him. When it came time to announce
APPN, the SNA group "non-concurred" ... the APPN announcement was then
carefully rewritten to NOT imply any relationship between APPN and
SNA.
Late 80s, a univ. did an analysis of VTAM LU6.2 ... finding 160k
pathlength compared to a UNIX workstation (BSD reno/tahoe) TCP ... 5k
pathlength. First half of the 90s, the communication group hired a
silicon valley contractor to implement TCP/IP directly in VTAM. What he
demonstrated was TCP running much faster than LU6.2. He was then told
that "everybody" knows that a "proper" TCP implementation is much slower
than LU6.2 ... and they would only be paying for a "proper" TCP
implementation.
I had taken a two-credit intro to fortran/computers. The univ was
getting a 360/67 for tss/360, replacing a 709/1401, but tss/360 didn't
come to fruition, so the 360/67 came in within a year of my taking the
intro class and I was hired fulltime, responsible for OS/360 (the univ.
shut down the datacenter on weekends and I had the place dedicated, but
48hrs w/o sleep made my monday classes hard). Then CSC came out to
install CP67 (precursor to vm370 virtual machine, the 3rd install after
CSC itself and MIT Lincoln Labs) and I mostly play with it during my
dedicated weekend time. It came with 1052 & 2741 terminal support,
including automagic terminal type identification (used the SAD CCW to
change the terminal-type port scanner). The univ had some number of
ASCII terminals (TTY 33&35) and I add TTY terminal support to CP67
(integrated with the automagic terminal type id). I then want to have a
single dialup number ("hunt group") for all terminals. It didn't quite
work; although the port scanner type could be changed, IBM had taken a
short cut and hard-wired the line speed.
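conceptually, the automagic terminal type identification amounted to
trying each terminal type on a line until one answered; a very loose
sketch of the idea (the probe() function is hypothetical, not the actual
CP67 code, and it glosses over the hard-wired line-speed problem just
mentioned):

# automagic terminal-type identification sketch (hypothetical probe();
# the real code used the SAD CCW to switch the port scanner terminal type)
TERMINAL_TYPES = ["1052", "2741", "TTY"]

def identify_terminal(line, probe):
    # try each terminal type on the dialup line until one responds
    for ttype in TERMINAL_TYPES:
        if probe(line, ttype):
            return ttype
    return None          # no recognizable terminal answered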
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
This kicks off a univ. project to build our own IBM terminal controller:
build a 360 channel interface card for an Interdata/3 programmed to
emulate an IBM 360 controller, with the addition of doing line
auto-baud. Then the Interdata/3 is upgraded to an Interdata/4 for the
channel interface with a cluster of Interdata/3s for port interfaces.
Interdata (and later Perkin-Elmer) sells it as a 360 clone controller,
and four of us are written up for (some part of) the IBM clone
controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
trivia: when the ASCII/TTY port scanner first arrived for the IBM
controller, it came in a Heathkit box.
Selectric-based terminals ... 1052, 2740, 2741 ... used a tilt/rotate
code to select the ball character position to strike the paper. Different
balls could have different character sets ... and the system could
translate back & forth between whatever character set was used by a
computer and the selectric ball that was currently loaded.
Selectric 1961
https://en.wikipedia.org/wiki/IBM_Selectric
Use as a computer terminal
https://en.wikipedia.org/wiki/IBM_Selectric#Use_as_a_computer_terminal
--
virtualization experience starting Jan1968, online at home since Mar1970
SLAC and CERN
From: Lynn Wheeler <lynn@garlic.com>
Subject: SLAC and CERN
Date: 28 Jul, 2025
Blog: Facebook
Stanford SLAC was a CERN "sister" institution.
HTML was done at CERN (GML was invented at CSC in 1969, a decade later it
morphs into ISO SGML, and after another decade it morphs into HTML at CERN).
Co-worker responsible for the science center CP67 wide-area network
(non-SNA), account by one of the 1969 GML inventors at science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
The CSC CP67-based wide-area network then grows into the corporate
internal network (larger than the arpanet/internet from just about the
beginning until sometime mid/late 80s when the internal network was
forced to convert to SNA), and the technology was used for the corporate
sponsored univ. BITNET.
The first webserver in the states (i.e., outside europe) was at Stanford
SLAC, on a VM370 system (descendant of CSC CP67):
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
SLAC/CERN, initially 168E & then 3081E ... sufficient 370 instructions
implemented to run fortran programs doing initial data reduction
along the accelerator line.
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf
SLAC also hosted the monthly BAYBUNCH VM370 user group meetings.
CSC co-worker responsible for CSC wide-area network, Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
posts mentioning slac/cern 168e/3081e
https://www.garlic.com/~lynn/2024g.html#38 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024d.html#77 Other Silicon Valley
https://www.garlic.com/~lynn/2024b.html#116 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#34 IBM Mainframe Emulation
https://www.garlic.com/~lynn/2023b.html#92 IRS and legacy COBOL
https://www.garlic.com/~lynn/2022g.html#54 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2021b.html#50 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2020.html#40 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017k.html#47 When did the home computer die?
https://www.garlic.com/~lynn/2017j.html#82 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017j.html#81 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#78 Mainframe operating systems?
https://www.garlic.com/~lynn/2017c.html#10 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2016e.html#24 Is it a lost cause?
https://www.garlic.com/~lynn/2016b.html#78 Microcode
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
https://www.garlic.com/~lynn/2015b.html#28 The joy of simplicity?
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2015.html#79 Ancient computers in use today
https://www.garlic.com/~lynn/2015.html#69 Remembrance of things past
https://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, "why?" type question - GPU computing
--
virtualization experience starting Jan1968, online at home since Mar1970
SLAC and CERN
From: Lynn Wheeler <lynn@garlic.com>
Subject: SLAC and CERN
Date: 28 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025d.html#5 SLAC and CERN
note: in 1974, CERN did an analysis comparing VM370/CMS and MVS/TSO, the
paper and presentation given at SHARE. Within IBM, copies of the paper
were classified "IBM Confidential - Restricted" (2nd highest security
classification, required "Need To Know"). While freely available
outside IBM, IBM wanted to restrict internal IBMers' access. Within
2yrs, the head of POK managed to convince corporate to kill the VM370
product, shutdown the development group and transfer all the
people to POK for MVS/XA. Eventually, Endicott managed to save the
VM370/CMS product mission (for the midrange), but had to recreate a
development group from scratch.
Plans were to not inform the VM370 group until the very last minute,
to minimize the numbers escaping into the local Boston/Cambridge area
(it was in the days of DEC VAX/VMS infancy and the joke was that the
head of POK was a major contributor to DEC VMS). The shutdown managed to
leak early and there was a hunt for the leak source (fortunately for me,
nobody gave up the source).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning CERN 1974 SHARE paper
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2014l.html#13 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM ES/9000
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ES/9000
Date: 28 Jul, 2025
Blog: Facebook
ES9000, well ... Amdahl won the battle to make ACS 360-compatible
... then it was canceled (and Amdahl departs IBM). Folklore: concern
that ACS/360 would advance the state of the art too fast, and IBM would
lose control of the market ... ACS/360 end ... including things that
show up more than 20yrs later with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html
1988, got HA/6000, originally for NYTimes to move their newspaper
system (ATEX) off DEC VAXCluster to RS/6000 (run out of Los Gatos lab,
bldg29). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (that have VAXCluster support in the same source base with
UNIX ... Oracle, Sybase, Ingres, Informix).
Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid-92 and 128-system
clusters ye-92. Mid Jan1992, presentations with FSD convince them to
use HA/CMP cluster scale-up for gov. supercomputer bids. Late Jan1992,
cluster scale-up is transferred to be announced as IBM Supercomputer
(for technical/scientific *ONLY*) and we are told we can't work with
anything that has more than 4-systems (we leave IBM a few months
later).
Some concern that cluster scale-up would eat the mainframe .... 1993
MIPS benchmark (industry standard, number of program iterations
compared to reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
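the cluster numbers are just the per-system figure multiplied out:

# 1993 cluster scale-up arithmetic
rs6000_990_mips = 126
print(16 * rs6000_990_mips, 128 * rs6000_990_mips)   # ~2016 MIPS (~2BIPS), ~16128 MIPS (~16BIPS)
print(8 * 51)                                        # ES/9000-982: 408 MIPS aggregate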
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
The executive we had reported to for HA/CMP goes over to head up
Somerset/AIM (Apple, IBM, Motorola), to do a single chip Power/PC with
Motorola cache/bus enabling SMP (shared-memory, tightly-coupled
multiprocessor) configurations.
i86 chip makers then do a hardware layer that translates i86
instructions into RISC micro-ops for actual execution (largely negating
the throughput difference between RISC and i86); 1999 industry benchmark:
• IBM PowerPC 440: 1,000MIPS
• Pentium3: 2,054MIPS (twice PowerPC 440)
Dec2000, IBM ships 1st 16-processor mainframe (industry benchmark):
• z900, 16 processors 2.5BIPS (156MIPS/processor)
mid-80s, the communication group was fighting the announce of mainframe
TCP/IP; when they lost, they changed strategy. Since they had corporate
strategic ownership of everything that crossed datacenter walls, it
had to ship through them; what shipped got aggregate 44kbytes/sec
using nearly a whole 3090 processor. I then add RFC1044 support and in
some tuning tests at Cray Research between a Cray and a 4341, get
sustained 4341 channel throughput using only a modest amount of 4341 CPU
(something like 500 times improvement in bytes moved per instruction
executed).
RFC1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044
posts mentioning 70s 16-cpu multiprocessor project
https://www.garlic.com/~lynn/2025c.html#111 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#57 IBM Future System And Follow-on Mainframes
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2025b.html#118 IBM 168 And Other History
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#73 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#69 Amdahl Trivia
https://www.garlic.com/~lynn/2025b.html#58 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#35 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#89 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#56 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#107 NSFnet
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#46 IBM TCM
https://www.garlic.com/~lynn/2024f.html#37 IBM 370/168
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#119 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#61 Vintage MVS
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2013h.html#14 The cloud is killing traditional hardware and software
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM ES/9000
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ES/9000
Date: 28 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000
IBM AWD (workstation) had done their own cards for PC/RT (16bit, PC/AT
bus) including 4mbit token-ring card. Then for RS/6000
(w/microchannel), they were told they could not do their own cards,
but had to use the (communication group heavily performance kneecapped)
PS2 cards (example PS2 16mbit T/R card had lower card throughput than
the PC/RT 4mbit T/R card). New Almaden Research bldg was heavily
provisioned with IBM CAT wiring, supposedly for 16mbit T/R, but found
that running 10mbit ethernet (over same wiring) had higher aggregate
throughput (8.5mbit/sec) and lower latency. Also that $69 10mbit
ethernet cards had much higher card throughput (8.5mbit/sec) than the
$800 PS2 16mbit T/R cards.
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
Late 80s, a senior disk engineer got a talk scheduled at the internal,
annual, world-wide communication group conference, supposedly on 3174
performance. However, he opened the talk with the comment that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing a drop in disk sales with
data fleeing mainframes to more distributed-computing friendly
platforms. They had come up with a number of solutions, but they were
constantly being vetoed by the communication group (which had a
stranglehold on mainframe datacenters with their corporate ownership
of everything that crossed datacenter walls). The disk division's
partial countermeasure was investing in distributed computing
startups using IBM disks, and we would periodically get asked to drop
by the investments to see if we could offer any help.
Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
It wasn't just disks, and a couple of years later IBM has one of the
largest losses in the history of US companies and was being reorged into
the 13 "baby blues" in preparation for breaking up the company (a
take-off on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left the company, but get a call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, and he (somewhat) reverses the breakup (but it
wasn't long before the disk division was "divested").
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
20yrs before one of the largest losses in US company history, Learson
tried (and failed) to block the bureaucrats, careerists, and MBAs from
destroying the Watsons' culture & legacy; pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
Oh, also 1988, IBM branch asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes fibre-channel standard ("FCS", including some stuff I had done
in 1980, initially 1gbit/sec, full-duplex, aggregate
200mbyte/sec). Then POK manages to get their stuff released as ESCON
(when it is already obsolete, initially 10mbyte/sec, later upgraded to
17mbyte/sec). Then some POK engineers become involved with "FCS" and
define a heavy-weight protocol that significantly reduces throughput,
eventually ships as FICON. 2010, z196 "Peak I/O" benchmark gets 2M
IOPS using 104 FICON (20K IOPS/FICON). Also in 2010, FCS was announced
for E5-2600 server blades claiming over a million IOPS (two such FCS
having higher throughput than 104 FICON). Note: IBM docs recommend that
SAPs (system assist processors that do the actual I/O) be kept to 70%
CPU, or about 1.5M IOPS. Also, no CKD DASD has been made for decades,
all being simulated on industry standard fixed-block devices.
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM ES/9000
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ES/9000
Date: 29 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#8 IBM ES/9000
Other trivia: Early 80s I was introduced to John Boyd and would
sponsor his briefings at IBM. In 1989/1990, the Marine Corps
Commandant leverages Boyd for a corps makeover (when IBM was desperately
in need of a makeover); some more:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Also early 80s, I got the HSDT project, T1 and faster computer links
(both terrestrial and satellite) and lots of battles with the
communication group (in the 60s, IBM had the 2701 controller that
supported T1 links; with the 70s transition to SNA and its issues, it
appeared controllers were capped at 56kbits/sec). Was also supposed to
get $20M to interconnect the NSF supercomputer datacenters ... then
congress cuts the budget, some other things happen and eventually an RFP
was released (in part based on what we already had running), NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.
John Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM Mainframe Efficiency
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Efficiency
Date: 29 Jul, 2025
Blog: Facebook
Mainframes since turn of century
z900, 16 cores, 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS, (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS* (1250MIPS/core), Jun2025
... early numbers are from actual industry benchmarks (number of program
iterations compared to the industry MIPS reference platform); more
recent numbers are inferred from IBM pubs giving throughput compared to
previous generations; *"z17 using 18% over z16" (& then z17
core/single-thread 1.12 times z16).
A 2010 E5-2600 server blade benchmarked at 500BIPS (ten times a
max-configured z196, and the 2010 E5-2600 is still roughly twice a z17),
and more recent generations have at least maintained that ten-times
ratio since 2010 (aka say 5TIPS, 5000BIPS).
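sketch of the ratios being claimed (figures from the list and paragraph above):

# mainframe vs server-blade ratios
z196_bips, z196_cores = 50, 80
z17_bips, z17_cores = 260, 208
e5_2600_2010_bips = 500
print(1000 * z196_bips / z196_cores, 1000 * z17_bips / z17_cores)   # 625 and 1250 MIPS/core
print(e5_2600_2010_bips / z196_bips)   # 10x a max-configured 2010 z196
print(e5_2600_2010_bips / z17_bips)    # ~1.9x, still roughly twice a z17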
The big cloud operators aggressively cut system costs, in part by
doing their own assembling (claiming 1/3rd the price of brand name
servers, like IBM). Before IBM sold off its blade server business, it
had a base list price of $1815 for an E5-2600 server blade (compared to
$30M for a z196). Then the industry press had blade component makers
shipping half their product directly to cloud megadatacenters (and IBM
shortly sells off its server blade business).
A large cloud operator will have a score or more of megadatacenters
around the world, each megadatacenter with a half million or more server
blades (each blade ten times a max-configured mainframe) and enormous
automation. They had so radically reduced system costs that
power & cooling was increasingly becoming the major cost component. As a
result, cloud operators have put enormous pressure on component
vendors to increasingly optimize power per computation (sometimes a new,
more energy-efficient generation has resulted in complete replacement of
all systems).
Industry benchmarks were about total mips, then number of
transactions, then transactions per dollar, and more recently
transactions per watt. PUE (power usage effectiveness) was introduced
in 2006 and large cloud megadatacenters regularly quote their values
https://en.wikipedia.org/wiki/Power_usage_effectiveness
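PUE itself is just total facility power over IT equipment power; the
numbers below are made-up, only to show the calculation:

# PUE = total facility power / IT equipment power (made-up example numbers)
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(1200, 1000))   # 1.2, i.e. 20% overhead for cooling, power distribution, etc.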
google
https://datacenters.google/efficiency/
google: Our data centers deliver over six times more computing power
per unit of electricity than they did just five years ago.
https://datacenters.google/operating-sustainably/
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
IBM 4341
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 30 Jul, 2025
Blog: Facebook
4341 ... like a chest freezer or credenza
http://www.bitsavers.org/pdf/ibm/brochures/IBM4341Processor.pdf
http://www.bitsavers.org/pdf/datapro/datapro_reports_70s-90s/IBM/70C-491-08_8109_IBM_4300.pdf
when I transferred to San Jose Research, got to wander around IBM (&
non-IBM) datacenters in Silicon Valley, including disk
engineering/bldg14 and product test/bldg15 across the street. They had
been running 7x24, prescheduled, stand-alone mainframe testing and
mentioned that they had recently tried MVS, but it had 15min MTBF (in
that environment), requiring manual reboot. I offer to rewrite the I/O
supervisor to make it bullet-proof and never fail, allowing any amount
of on-demand, concurrent testing.
Then bldg15 gets the 1st engineering 3033 (outside POK processor
engineering) for disk I/O testing. Testing was only taking a percent
or two of CPU, so we scrounge up a 3830 controller and 3330 string and
set up our own private online service.
Then in 1978, get an engineering 4341 (introduced/announced 30jun1979)
and in Jan1979, a branch office hears about it and cons me into doing a
benchmark for a national lab that was looking at getting 70 for a
compute farm (sort of the leading edge of the coming cluster
supercomputing tsunami). Later in the 80s, large corporations were
ordering hundreds of vm/4341s at a time for placing out in departmental
areas (sort of the leading edge of the coming distributed computing
tsunami). Inside IBM, departmental conference rooms become scarce, with
many converted to vm/4341 rooms.
trivia: earlier, after FS imploded and the rush to get stuff back into
370 product pipelines, Endicott cons me into helping with ECPS for
138/148 ... which was then also available on 4331/4341. Initial
analysis done for doing ECPS ... old archived post from three decades
ago:
https://www.garlic.com/~lynn/94.html#21
... Endicott then convinces me to take a trip around the world with
them, presenting the 138/148 & ECPS business case to various planning
organizations.
mid-80s, the communication group was trying to block the announce of
mainframe TCP/IP and when they lost, they changed tactics. Since they had
corporate ownership of everything that crossed datacenter walls, it
had to be released through them; what shipped got aggregate
44kbytes/sec using nearly a whole 3090 CPU. I then add RFC1044 support
and in some tuning tests at Cray Research between a Cray and a 4341, got
sustained 4341 channel throughput, using only a modest amount of 4341
processor (something like 500 times improvement in bytes moved per
instruction executed).
note, also in the wake of the FS implosion, the head of POK managed to
convince corporate to kill the VM370 product, shutdown the development
group and transfer all the people to POK for MVS/XA. Endicott
eventually manages to save the VM370 product mission, but had to
recreate a development group from scratch.
FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970