List of Archived Posts

2025 Newsgroup Postings (07/26 - )

Library Catalog
Chip Design (LSM & EVE)
Mainframe Networking and LANs
Mainframe Networking and LANs
Mainframe Networking and LANs
SLAC and CERN
SLAC and CERN
IBM ES/9000
IBM ES/9000
IBM ES/9000
IBM Mainframe Efficiency
IBM 4341
IBM 370/168
IBM's 32 vs 64 bits, was VAX
Tandem Non-Stop
MVT/HASP
Some VM370 History
IBM RSCS/VNET
Some VM370 History
370 Virtual Memory
370 Virtual Memory
HA/CMP
370 Virtual Memory
370 Virtual Memory
IBM Yorktown Research
IBM Management
IBM 1655
IBM 1655
Univ, Boeing/Renton, IBM/HONE
IBM PS2
370 Virtual Memory
Public Facebook Mainframe Group
IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
Univ, Boeing/Renton, IBM/HONE
TYMSHARE, VMSHARE, ADVENTURE
Mosaic
IBM and non-IBM
EMACS
DASD
IBM OS/2 & M'soft
IBM OS/2 & M'soft
IBM OS/2 & M'soft

Library Catalog

From: Lynn Wheeler <lynn@garlic.com>
Subject: Library Catalog
Date: 26 Jul, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025c.html#118 Library Catalog

By the early 80s, the online NIH NLM had a problem with answers to
queries: it would return thousands of answers, and as additional terms
were added, out around 6-8 terms, it would go bimodal between
thousands of answers and zero. Along came the "Grateful Med" query app
on Apple ... instead of returning the answers, it returned the count
of answers, and the holy grail became finding a query with more than
zero and fewer than 100 answers.
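
Below is a minimal sketch of that count-driven refinement loop; the
count_answers() function is hypothetical (standing in for whatever the
NLM query interface returned), not actual Grateful Med code:

  # count-based query refinement: add terms one at a time, keep a term
  # only if the answer count stays above zero, and stop once the count
  # drops under the target ("more than zero and fewer than 100")
  def refine(base_terms, candidate_terms, count_answers, target=100):
      query = list(base_terms)
      for term in candidate_terms:
          trial = query + [term]
          n = count_answers(trial)       # only the count comes back
          if n == 0:
              continue                   # term too restrictive, drop it
          query = trial
          if n < target:                 # the "holy grail" range
              return query, n
      return query, count_answers(query)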

"Grateful Med" refs:
https://pubmed.ncbi.nlm.nih.gov/10304249/
https://pubmed.ncbi.nlm.nih.gov/2407046/
https://pubmed.ncbi.nlm.nih.gov/35102837/

--
virtualization experience starting Jan1968, online at home since Mar1970

Chip Design (LSM & EVE)

From: Lynn Wheeler <lynn@garlic.com>
Subject: Chip Design (LSM & EVE)
Date: 27 Jul, 2025
Blog: Facebook

70s, IBM Los Gatos lab did the LSM (Los Gatos State Machine) ... that
ran chip design logic verification, 50k times faster than IBM 3033
... included clock support that could be used for chips with
asynchronous clocks and analog circuits ... like electronic/thin-film
disk head chips.

Then in the 80s there was EVE (Endicott Verification Engine) that ran
faster and handled larger VLSI chips (than LSM), but assumed
synchronous clock designs. Disk Engineering had been moved offsite
(temporarily to bldg "86", just south of main plant site, while bldg
"14" was getting seismic retrofit) and got an EVE.

I also had HSDT project (T1 and faster computer links, both
terrestrial and satellite) mostly done out of LSG, that included
custom designed 3-dish Ku-band satellite system (Los Gatos, Yorktown,
and Austin). IBM San Jose had done T3 Collins digital radio microwave
complex (centered bldg 12 on main plant site). Set up T1 circuit from
bldg29 (LSG) to bldg12, and then bldg12 to bldg86. Austin was in
process of doing 6chip RIOS for what becomes RS/6000 ... and being
able to get fast turn around chip designs between Austin and bldg86
EVE is credited with helping bring RIOS chip design in a year early.

trivia: when transferred from Science Center to Research in San Jose,
got to wander around Silicon Valley datacenters, including disk
engineering/bldg14 and product test/bldg15 across the street. They
were running 7x24, prescheduled, stand-alone testing and commented
that they had recently tried MVS, but it had 15min MTBF (in that
environment), requiring manual reboot. I offered to rewrite I/O
supervisor, making it bullet-proof and never fail, allowing any amount
of on-demand, concurrent testing ... greatly improving productivity.

Bldg15 then got engineering 3033 (first outside of POK 3033 processor
engineering) and since disk testing only used a percent or two of CPU,
we scrounged a 3830 disk controller and 3330 disk drive string and set
up our own private online service. At the time the air-bearing simulation
(for thin-film disk head) was getting a couple turn arounds a month on
SJR 370/195. We set it up on the bldg15 3033 and they were able to get
several turn arounds a day. 3370 was the first thin-film head.
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

1988, get HA/6000 project (also IBM Los Gatos lab), initially for
NYTimes to migrate their newspaper system (ATEX) off VAXCluster to
RS/6000. I then rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scaleup with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS
vendors (that have VAXCluster support in same source base with UNIX
.... Oracle, Sybase, Ingres, Informix). Was working with Hursley 9333s
and hoping they could be upgraded to interoperate with FCS (planning
for HA/CMP high-end).

Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid-92 and 128-system
clusters ye-92. Mid Jan1992 presentations with FSD convinces them to
use HA/CMP cluster scaleup for gov. supercomputer bids. Late Jan1992,
cluster scaleup is transferred to be announced as IBM Supercomputer
(for technical/scientific *ONLY*) and we are told we can't work with
anything that has more than 4-systems (we leave IBM a few months
later).

Some concern that cluster scaleup would eat the mainframe .... 1993
MIPS benchmark (industry standard, number of program iterations
compared to reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS

The executive we had been reporting to goes over to head up
Somerset/AIM (apple, ibm, motorola) ... single chip power/pc with
Motorola 88k bus enabling shared-memory, tightly-coupled,
multiprocessor system implementations.

Sometime after leaving IBM, brought into small client/server startup
as consultant. Two former Oracle people (that were in the
Ellison/Hester meeting) are there responsible for something they call
"commerce server" and want to do payment transactions on the
server. The startup also invented this technology they call SSL/HTTPS,
that they want to use. The result is now frequently called
e-commerce. I have responsibility for everything between webservers
and the payment networks.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
electronic commerce & payment networks
https://www.garlic.com/~lynn/subnetwork.html#gateway

posts mentioning Los Gatos LSM and EVE (endicott verification engine)
https://www.garlic.com/~lynn/2023f.html#16 Internet
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2021i.html#67 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021c.html#53 IBM CEO
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2014b.html#67 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014b.html#5 IBM Plans Big Spending for the Cloud ($1.2B)
https://www.garlic.com/~lynn/2010m.html#52 Basic question about CPU instructions
https://www.garlic.com/~lynn/2007o.html#67 1401 simulator for OS/360
https://www.garlic.com/~lynn/2007l.html#53 Drums: Memory or Peripheral?
https://www.garlic.com/~lynn/2007h.html#61 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
https://www.garlic.com/~lynn/2006r.html#11 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Networking and LANs

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Networking and LANs
Date: 27 Jul, 2025
Blog: Facebook

Mid-80s, the communication group was fighting release of mainframe
tcp/ip support. When they lost, they change tactic and said that since
they had corporate responsibility for everything that crossed
datacenter walls, it had to be released through them. What shipped got
aggregate 44kbyte/sec using nearly a whole 3090 processor. I then do
RFC1044 support and in some tuning tests at Cray Research between Cray
and 4341, got sustained 4341 channel throughput using only a modest
amount of 4341 CPU (something like 500 times improvement in bytes
moved per instruction executed).

There were also claims about how much better token-ring was than
ethernet. IBM AWD (workstation) had done their own cards for PC/RT
(16bit, PC/AT bus) including 4mbit token-ring card. Then for RS/6000
(w/microchannel), they were told they could not do their own cards,
but had to use the (communication group heavily performance kneecapped)
PS2 cards (example PS2 16mbit T/R card had lower card throughput than
the PC/RT 4mbit T/R card).

New Almaden Research bldg was heavily provisioned with IBM CAT wiring,
supposedly for 16mbit T/R, but found that running 10mbit ethernet
(over same wiring) had higher aggregate throughput (8.5mbit/sec) and
lower latency. Also that $69 10mbit ethernet cards had much higher
card throughput (8.5mbit/sec) than the $800 PS2 16mbit T/R cards. Also
for a 300 workstation configuration, the price difference
((300*$800=$240,000)-(300*$69=$20,700)=$219,300) could get several high
performance TCP/IP routers with IBM (or non-IBM) mainframe channel
interfaces, 16 10mbit Ethernet LAN interfaces, Telco T1 & T3 options,
100mbit/sec FDDI LAN options and other features ... say 300
workstations could be spread across 80 high-performance 10mbit
Ethernet LANs.
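
A quick back-of-envelope of the card-price difference and the LAN
fan-out above (simple arithmetic using only the prices and counts
already given):

  # 300 workstations: token-ring vs ethernet card cost, and stations
  # per LAN when spread across 80 10mbit ethernet segments
  workstations = 300
  enet_card, tr_card = 69, 800                     # $ per card
  savings = workstations * (tr_card - enet_card)   # 219300
  stations_per_lan = workstations / 80             # 3.75
  print(savings, stations_per_lan)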

Late 80s, a senior disk engineer got a talk scheduled at internal,
annual, world-wide communication group conference, supposedly on 3174
performance. However he opened the talk with the comment that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing drop in disk sales with
data fleeing mainframe to more distributed computing friendly
platforms. They had come up with a number of solutions, but they were
constantly being vetoed by the communication group (having
stranglehold on mainframe datacenters with their corporate ownership
of everything that crossed datacenter walls). Disk division
partial countermeasure was investing in distributed computing
startups using IBM disks, and we would periodically get asked to drop
by the investments to see if we could offer any help.

Wasn't just disks, and a couple years later IBM has one of the largest
losses in the history of US companies and was being reorged into the
13 baby blues in preparation for breaking up the company (take-off
on the "baby bell" breakup decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup (but it
wasn't long before the disk division was "divested").

other trivia: 1980, STL (since renamed SVL) was bursting at the seams
and was moving 300 people (& 3270s) from the IMS group to an offsite
bldg with dataprocessing service back to the STL datacenter. They had tried
"remote 3270", but found the human factors totally unacceptable. I get
con'ed into doing channel extender support, allowing channel attached
3270 controllers to be placed at offsite bldg with no perceptible
difference in human factors. Unintended side-effect was those IMS
168-3 systems saw 10-15% improvement in throughput. The issue was STL
had been spreading the directly 3270 channel attached controllers
across channels with 3830/3330 disks. The channel extender boxes had
much lower channel busy (for same amount of 3270 activity) reducing
interference with disk throughput (and there was some consideration of
moving *ALL* 3270 channel attached controllers to channel extender boxes).

more trivia: After channel-extender, early 80s, I had got HSDT, T1 and
faster computer links (both satellite and terrestrial) and lots of
battles with communication group (60s, IBM had 2701 supporting T1 but
in the 70s move to SNA/VTAM and its issues ... controller links were capped
at 56kbits/sec). Was also working with the NSF director and was supposed to
get $20M to interconnect the NSF supercomputing centers. Then congress
cuts the budget, some other things happen and eventually an RFP is
released (in part based on what we already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.

1988, IBM branch asks if I could help LLNL (national lab) standardize
some serial stuff they were working with, which quickly becomes
fibre-channel standard ("FCS", including some stuff I had done in
1980, initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). Then
POK manages to get their stuff released as ESCON (when it is already
obsolete, initially 10mbyte/sec, later upgraded to 17mbyte/sec). Then
some POK engineers become involved with "FCS" and define a
heavy-weight protocol that significantly reduces throughput,
eventually ships as FICON. 2010, z196 "Peak I/O" benchmark gets 2M
IOPS using 104 FICON (20K IOPS/FICON). Also 2010, FCS announced for
E5-2600 server blades claiming over million IOPS (two such FCS higher
throughput than 104 FICON). Note: IBM docs recommend that SAPs (system
assist processors that do the actual I/O) be kept to 70% CPU, or about
1.5M IOPS. Also, no CKD DASD has been made for decades, all being
simulated on industry standard fixed-block devices.
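
The FICON vs FCS comparison above is easy to check with the quoted
numbers (simple arithmetic; the only given figures are the 2M IOPS
over 104 FICON and the per-adapter FCS claim):

  # z196 "Peak I/O" vs 2010 native FCS, from the figures quoted above
  ficon_total_iops, ficon_count = 2_000_000, 104
  print(ficon_total_iops / ficon_count)   # ~19,230 IOPS per FICON ("20K")
  fcs_per_adapter = 1_000_000             # "over million IOPS" per FCS
  print(2 * fcs_per_adapter)              # two FCS exceed all 104 FICON
  # the 70% SAP guideline works out to roughly 0.7 * 2M ~ 1.4-1.5M IOPS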

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Networking and LANs

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Networking and LANs
Date: 27 Jul, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs

long-ago and far away: co-worker responsible for the science center
wide-area network (that grows into the internal corporate, non-SNA,
network; larger than arpanet/internet from just about the beginning
until sometime mid/late 80s about the time it was forced to convert to
SNA; technology had also been used for the corporate sponsored univ
BITNET). ref by one of the science center inventors of GML (precursor
to SGML&HTML) in 1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Networking and LANs

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Networking and LANs
Date: 27 Jul, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#2 Mainframe Networking and LANs
https://www.garlic.com/~lynn/2025d.html#3 Mainframe Networking and LANs

misc. other details ...

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems
Interconnection standards to become the global protocol for computer
networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt

Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture(Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."

... snip ...

Original JES NJE came from HASP (that had "TUCC" in card cols 68-71)
... and had numerous problems with the internal network. It started
out using spare entries in the 255-entry pseudo device table
... usually about 160-180 ... however the internal network had quickly
passed 255 entries in the 1st half of 70s (before NJE & VNET/RSCS
release to customers) ... and JES would trash any traffic where the
origin or destination node wasn't in their local table. Also the
network fields had been somewhat intermixed with job control fields
(compared to the cleanly layered VM370 VNET/RSCS), and traffic between
MVS/JES systems at different release levels had a habit of crashing
destination MVS (infamous case of Hursley (UK) MVS systems crashing
because of changes in a San Jose MVS JES). As a result, MVS/JES
systems were restricted to boundary nodes behind a protected
VM370/RSCS system (where a library of code had accumulated that knew
how to rewrite NJE headers between origin node and the immediately
connected destination node). JES NJE was finally upgraded to support
999 node network ... but after the internal network had passed 1000
nodes.
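
A minimal sketch of the node-table problem described above
(hypothetical data structures, not actual JES/NJE code): a
one-byte-indexed table caps the network at 255 definable nodes, and
traffic whose origin or destination isn't in the local table gets
discarded rather than forwarded:

  # hypothetical illustration of the JES NJE 255-entry node table;
  # traffic for nodes not in the local table is "trashed" instead of
  # being forwarded toward its destination
  MAX_NODES = 255
  local_table = {}                       # node name -> entry number

  def define_node(name):
      if len(local_table) >= MAX_NODES:
          raise RuntimeError("node table full (255 entries)")
      local_table[name] = len(local_table) + 1

  def handle_traffic(origin, destination, payload):
      if origin not in local_table or destination not in local_table:
          return "discarded"             # unknown origin/destination
      return "forwarded"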

HASP, ASP, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

For a time, the person responsible for AWP164 (becomes APPN) and I
reported to same executive ... and I would periodically kid him that
he should come over and work on real networking (TCP/IP) because the
SNA people would never appreciate him. When it came time to announce
APPN, the SNA group "non-concurred" ... the APPN announcement then was
carefully rewritten to NOT imply any relationship between APPN and
SNA.

Late 80s, univ. did analysis of VTAM LU6.2 ... finding 160k pathlength
compared to UNIX workstation (BSD reno/tahoe) TCP ... 5k pathlength.

First half of 90s, the communication group hired silicon valley
contractor to implement TCP/IP directly in VTAM. What he demonstrated
was TCP running much faster than LU6.2. He was then told that
"everybody" knows that a "proper" TCP implementation is much slower
than LU6.2 ... and they would only be paying for a "proper" TCP
implementation.

I had taken two credit intro to fortran/computers. The univ was
getting 360/67 for tss/360 replacing 709/1401, but tss/360 didn't come
to fruition, so 360/67 came in within a year of taking intro class and
I was hired fulltime responsible for OS/360 (univ. shutdown datacenter
on weekends and I had the place dedicated, but 48hrs w/o sleep made my
monday classes hard). Then CSC came out to install CP67 (precursor to
vm370 virtual machine, 3rd install after CSC itself and MIT Lincoln
Labs) and I mostly play with it during my dedicated weekend time. It
came with 1052 & 2741 terminal support, including automagic terminal
type identification (used SAD CCW to change terminal type port
scanner). Univ had some number of ASCII terminals (TTY 33&35) and I
add TTY terminal support to CP67 (integrated with automagic terminal
type id). I then want to have single dialup number ("hunt group") for
all terminals. Didn't quite work; although the port scanner type could
be changed, IBM had taken a short cut and hard-wired the line speed.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

This kicks off a univ. project to build our own IBM terminal
controller: build a 360 channel interface card for an Interdata/3
programmed to emulate an IBM 360 controller, with the addition of
doing line auto-baud. Then the Interdata/3 is upgraded to an
Interdata/4 for the channel interface with a cluster of Interdata/3s
for the port interfaces. Interdata (and later Perkin-Elmer) sells it
as a 360 clone controller, and four of us get written up responsible
for (some part of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

trivia: when the ASCII/TTY port scanner first arrived for the IBM
controller, it came in a Heathkit box.

Selectric-based terminals ... 1052, 2740, 2741 ... used a tilt/rotate
code to select the ball character position to strike the paper.
Different balls could have different character sets .... and one could
translate back&forth between whatever character set the computer used
and the selectric ball that was currently loaded.
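
A minimal sketch of that kind of back&forth translation (the
tilt/rotate values below are made up purely for illustration, not the
actual Selectric code assignments):

  # hypothetical translate tables between a host character set and
  # tilt/rotate positions for the currently loaded Selectric ball
  ball_map = {'A': (0, 1), 'B': (0, 2), 'C': (0, 3)}   # char -> (tilt, rotate)
  inverse_map = {tr: ch for ch, tr in ball_map.items()}

  def to_selectric(text):                # host characters -> ball codes
      return [ball_map[c] for c in text if c in ball_map]

  def from_selectric(codes):             # ball codes -> host characters
      return ''.join(inverse_map[tr] for tr in codes if tr in inverse_map)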

Selectric 1961
https://en.wikipedia.org/wiki/IBM_Selectric
Use as a computer terminal
https://en.wikipedia.org/wiki/IBM_Selectric#Use_as_a_computer_terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

SLAC and CERN

From: Lynn Wheeler <lynn@garlic.com>
Subject: SLAC and CERN
Date: 28 Jul, 2025
Blog: Facebook

Stanford SLAC was a CERN "sister" institution.

HTML done at CERN (GML invented at CSC in 1969, decade later morphs
into ISO SGML and after another decade morphs into HTML at CERN)

Co-worker responsible for the science center CP67 wide-area network
(non-SNA), account by one of the 1969 GML inventors at science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

CSC CP67-based wide-area network then grows into the corporate
internal network (larger than arpanet/internet from just about the
beginning until sometime mid/late 80s when the internal network was
forced to convert to SNA) and technology used for corporate sponsored
univ. BITNET

First webserver in the states (i.e. first outside of Europe) was at Stanford SLAC on a VM370 system (descendant of CSC CP67)
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

SLAC/CERN, initially 168E & then 3081E ... sufficient 370 instructions
implemented to run fortran programs doing initial data reduction along
the accelerator line.
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf

SLAC also hosted the monthly BAYBUNCH VM370 user group meetings.

CSC co-worker responsible for CSC wide-area network, Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

posts mentioning slac/cern 168e/3081e
https://www.garlic.com/~lynn/2024g.html#38 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024d.html#77 Other Silicon Valley
https://www.garlic.com/~lynn/2024b.html#116 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#34 IBM Mainframe Emulation
https://www.garlic.com/~lynn/2023b.html#92 IRS and legacy COBOL
https://www.garlic.com/~lynn/2022g.html#54 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2021b.html#50 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2020.html#40 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017k.html#47 When did the home computer die?
https://www.garlic.com/~lynn/2017j.html#82 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017j.html#81 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#78 Mainframe operating systems?
https://www.garlic.com/~lynn/2017c.html#10 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2016e.html#24 Is it a lost cause?
https://www.garlic.com/~lynn/2016b.html#78 Microcode
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
https://www.garlic.com/~lynn/2015b.html#28 The joy of simplicity?
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2015.html#79 Ancient computers in use today
https://www.garlic.com/~lynn/2015.html#69 Remembrance of things past
https://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, "why?" type question - GPU computing

--
virtualization experience starting Jan1968, online at home since Mar1970

SLAC and CERN

From: Lynn Wheeler <lynn@garlic.com>
Subject: SLAC and CERN
Date: 28 Jul, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#5 SLAC and CERN

note: 1974, CERN did analysis comparing VM370/CMS and MVS/TSO, paper
and presentation given at SHARE. Within IBM, copies of the paper were
classified "IBM Confidential - Restricted" (2nd highest security
classification, required "Need To Know").  While freely available
outside IBM, IBM wanted to restrict internal IBMers access. Within
2yrs, head of POK managed to convince corporate to kill the VM370
product, shutdown the development group and and transfer all the
people to POK for MVS/XA. Eventually, Endicott managed to save the
VM370/CMS product mission (for the midrange), but had to recreate a
development group from scratch.

Plans were to not inform the VM370 group until the very last minute,
to minimize the numbers escaping into the local Boston/Cambridge area
(it was in the days of DEC VAX/VMS infancy and joke was that head of
POK was a major contributor to DEC VMS). The shutdown managed to leak
early and there was hunt for the leak source (fortunately for me,
nobody gave up the source).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning CERN 1974 SHARE paper
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2014l.html#13 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ES/9000

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ES/9000
Date: 28 Jul, 2025
Blog: Facebook

ES9000, well ... Amdahl won the battle to make ACS 360-compatible
... then it was canceled (and Amdahl departs IBM). Folklore: concern
that ACS/360 would advance the state of the art too fast, and IBM
would lose control of the market ... ACS/360 end ... including things
that show up more than 20yrs later with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

1988, got HA/6000, originally for NYTimes to move their newspaper
system (ATEX) off DEC VAXCluster to RS/6000 (run out of Los Gatos lab,
bldg29). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (that have VAXCluster support in same source base with
UNIX .... Oracle, Sybase, Ingres, Informix).

Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid-92 and 128-system
clusters ye-92. Mid Jan1992 presentations with FSD convinces them to
use HA/CMP cluster scale-up for gov. supercomputer bids. Late Jan1992,
cluster scale-up is transferred to be announced as IBM Supercomputer
(for technical/scientific *ONLY*) and we are told we can't work with
anything that has more than 4-systems (we leave IBM a few months
later).

Some concern that cluster scale-up would eat the mainframe .... 1993
MIPS benchmark (industry standard, number of program iterations
compared to reference platform):

• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
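
The cluster aggregates above are just the per-system benchmark number
times the cluster size (rounded in the text):

  # RS6000/990 cluster aggregate MIPS
  per_system_mips = 126
  print(16 * per_system_mips)    # 2016 MIPS  ~ 2BIPS
  print(128 * per_system_mips)   # 16128 MIPS ~ 16BIPS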

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

The executive we had reported to for HA/CMP goes over to head up
Somerset/AIM (Apple, IBM, Motorola), to do a single chip Power/PC with
Motorola cache/bus enabling SMP, tightly-coupled, shared-memory,
multiprocessor configurations.

i86 chip makers then do a hardware layer that translates i86
instructions into RISC micro-ops for actual execution (largely
negating the throughput difference between RISC and i86); 1999
industry benchmark:

• IBM PowerPC 440: 1,000MIPS
• Pentium3: 2,054MIPS (twice PowerPC 440)

Dec2000, IBM ships 1st 16-processor mainframe (industry benchmark):

• z900, 16 processors 2.5BIPS (156MIPS/processor)

mid-80s, communication group was fighting announce of mainframe
TCP/IP, when they lost, they change strategy; since they had corporate
strategic ownership of everything that crossed datacenter walls, it
had to ship through them; what shipped got aggregate 44kbytes/sec
using nearly whole 3090 processor. I then add RFC1044 support and in
some tuning tests at Cray Research between Cray and 4341, get
sustained 4341 channel throughput using only modest amount of 4341 CPU
(something like 500 times improvement in bytes moved per instruction
executed).

RFC1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044

posts mentioning 70s 16-cpu multiprocessor project
https://www.garlic.com/~lynn/2025c.html#111 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#57 IBM Future System And Follow-on Mainframes
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2025b.html#118 IBM 168 And Other History
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#73 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#69 Amdahl Trivia
https://www.garlic.com/~lynn/2025b.html#58 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#35 3081, 370/XA, MVS/XA
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#32 IBM 3090
https://www.garlic.com/~lynn/2024g.html#89 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#56 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#107 NSFnet
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#46 IBM TCM
https://www.garlic.com/~lynn/2024f.html#37 IBM 370/168
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#119 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#61 Vintage MVS
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2013h.html#14 The cloud is killing traditional hardware and software

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ES/9000

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ES/9000
Date: 28 Jul, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000

IBM AWD (workstation) had done their own cards for PC/RT (16bit, PC/AT
bus) including 4mbit token-ring card. Then for RS/6000
(w/microchannel), they were told they could not do their own cards,
but had to use the (communication group heavily performance kneecapped)
PS2 cards (example PS2 16mbit T/R card had lower card throughput than
the PC/RT 4mbit T/R card). New Almaden Research bldg was heavily
provisioned with IBM CAT wiring, supposedly for 16mbit T/R, but found
that running 10mbit ethernet (over same wiring) had higher aggregate
throughput (8.5mbit/sec) and lower latency. Also that $69 10mbit
ethernet cards had much higher card throughput (8.5mbit/sec) than the
$800 PS2 16mbit T/R cards.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Late 80s, a senior disk engineer got a talk scheduled at internal,
annual, world-wide communication group conference, supposedly on 3174
performance. However he opened the talk with the comment that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing drop in disk sales with
data fleeing mainframe to more distributed computing friendly
platforms. They had come up with a number of solutions, but they were
constantly being vetoed by the communication group (having
stranglehold on mainframe datacenters with their corporate ownership
of everything that crossed datacenter walls). Disk division
partial countermeasure was investing in distributed computing
startups using IBM disks, and we would periodically get asked to drop
by the investments to see if we could offer any help.

Demise of disk division
https://www.garlic.com/~lynn/subnetwork.html#terminal

Wasn't just disks, and a couple years later IBM has one of the largest
losses in the history of US companies and was being reorged into the
13 baby blues in preparation for breaking up the company (take-off
on the "baby bell" breakup decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get call from the bowels of (corp
hdqtrs) Armonk asking us to help with the corporate breakup. Before we
get started, the board brings in the former AMEX president as CEO to
try and save the company, who (somewhat) reverses the breakup (but it
wasn't long before the disk division was "divested").

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

20yrs before one of the largest losses in US company history, Learson
tried (and failed) to block the bureaucrats, careerists, and MBAs from
destroying Watson's culture & legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Oh, also 1988, IBM branch asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes fibre-channel standard ("FCS", including some stuff I had done
in 1980, initially 1gbit/sec, full-duplex, aggregate
200mbyte/sec). Then POK manages to get their stuff released as ESCON
(when it is already obsolete, initially 10mbyte/sec, later upgraded to
17mbyte/sec). Then some POK engineers become involved with "FCS" and
define a heavy-weight protocol that significantly reduces throughput,
eventually ships as FICON. 2010, z196 "Peak I/O" benchmark gets 2M
IOPS using 104 FICON (20K IOPS/FICON). Also 2010, FCS announced for
E5-2600 server blades claiming over million IOPS (two such FCS higher
throughput than 104 FICON). Note: IBM docs has SAPs (system assist
processors that do actual I/O) be kept to 70% CPU or about 1.5M
IOPS. Also no CKD DASD has been made for decades, all being simulated
on industry standard fixed-block devices.

FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ES/9000

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ES/9000
Date: 29 Jul, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#8 IBM ES/9000

Other trivia: Early 80s I was introduced to John Boyd and would
sponsor his briefings at IBM. In 1989/1990, the Marine Corps
Commandant leverages Boyd for corps makeover (when IBM was desperately
in need of makeover); some more
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Also early 80s, I got the HSDT project, T1 and faster computer links
(both terrestrial and satellite) and lots of battles with the
communication group (60s, IBM had 2701 controller that supported T1
links; with the 70s and the transition to SNA and its issues, it appeared
controllers were capped at 56kbits/sec). Was also supposed to get $20M
to interconnect the NSF Supercomputer datacenters ... then congress
cuts the budget, some other things happen and eventually a RFP was
released (in part based on what we already had running), NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid).
As regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.

John Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe Efficiency

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Efficiency
Date: 29 Jul, 2025
Blog: Facebook

Mainframes since turn of century

z900, 16 cores, 2.5BIPS (156MIPS/core), Dec2000
z990, 32 cores, 9BIPS, (281MIPS/core), 2003
z9, 54 cores, 18BIPS (333MIPS/core), July2005
z10, 64 cores, 30BIPS (469MIPS/core), Feb2008
z196, 80 cores, 50BIPS (625MIPS/core), Jul2010
EC12, 101 cores, 75BIPS (743MIPS/core), Aug2012
z13, 140 cores, 100BIPS (710MIPS/core), Jan2015
z14, 170 cores, 150BIPS (862MIPS/core), Aug2017
z15, 190 cores, 190BIPS (1000MIPS/core), Sep2019
z16, 200 cores, 222BIPS (1111MIPS/core), Sep2022
z17, 208 cores, 260BIPS* (1250MIPS/core), Jun2025

... early numbers are actual industry benchmarks (number of program
iterations compared to the industry MIPS reference platform); more
recent numbers are inferred from IBM pubs giving throughput compared
to previous generations; *"z17 using 18% over z16" (& then z17
core/single-thread 1.12 times z16).
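
The z17 line above follows from the z16 figures and IBM's relative
claims by simple arithmetic:

  # inferring z17 from z16 using "18% over z16" and core 1.12 times z16
  z16_bips, z16_cores = 222, 200
  z17_bips = z16_bips * 1.18                    # ~262, quoted as 260BIPS
  z16_mips_core = z16_bips / z16_cores * 1000   # ~1111 MIPS/core
  z17_mips_core = z16_mips_core * 1.12          # ~1244, quoted as 1250
  print(round(z17_bips), round(z17_mips_core))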

2010 E5-2600 server blade benchmarked at 500BIPS (ten times a
max-configured z196, and the 2010 E5-2600 is still twice a z17), and
more recent blade generations have at least maintained that ten-times
ratio since 2010 (aka say 5TIPS, 5000BIPS).

The big cloud operators aggressively cut system costs, in part by
doing their own assembling (claiming 1/3rd the price of brand name
servers, like IBM). Before IBM sold off its blade server business, it
had a base list price of $1815 for an E5-2600 server blade (compared
to $30M for a z196). Then industry press had blade component makers
shipping half their product directly to cloud megadatacenters (and IBM
shortly sells off its server blade business).

A large cloud operator will have a score or more of megadatacenters
around the world, each megadatacenter with half million or more server
blades (each blade ten times max. configured mainframe) and enormous
automation. They had so radically reduced system costs, that
power&cooling was increasingly becoming major cost component. As a
result, cloud operators have put enormous pressure on component
vendors to increasingly optimize power per computation (sometimes new
generation energy efficient, has resulted in complete replacement of
all systems).

Industry benchmarks were about total mips, then number of
transactions, then transactions per dollar, and more recently
transactions per watt. PUE (power usage effectiveness) was introduced
in 2006 and large cloud megadatacenters regularly quote their values
https://en.wikipedia.org/wiki/Power_usage_effectiveness
google
https://datacenters.google/efficiency/
google: Our data centers deliver over six times more computing power
per unit of electricity than they did just five years ago.
https://datacenters.google/operating-sustainably/
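
PUE itself is just the ratio of total facility power to the power
delivered to the IT equipment (1.0 would be perfect; the example value
below is only illustrative):

  # PUE = total facility power / IT equipment power
  def pue(total_facility_kw, it_equipment_kw):
      return total_facility_kw / it_equipment_kw

  print(pue(1.10, 1.00))   # 1.10, i.e. 10% overhead for power/cooling/etc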

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 30 Jul, 2025
Blog: Facebook

4341 ... like a chest freezer or credenza
http://www.bitsavers.org/pdf/ibm/brochures/IBM4341Processor.pdf
http://www.bitsavers.org/pdf/datapro/datapro_reports_70s-90s/IBM/70C-491-08_8109_IBM_4300.pdf

when I transferred to San Jose Research, got to wander around IBM (&
non-IBM) datacenters in Silicon Valley, including disk
engineering/bldg14 and product test/bldg15 across the street. they had
been running 7x24, prescheduled, stand-alone mainframe testing and
mentioned that they had recently tried MVS, but it had 15min MTBF (in
that environment), requiring manual reboot. I offer to rewrite the I/O
supervisor to make it bullet-proof and never fail, to allow any amount
of on-demand, concurrent testing.

Then bldg15 gets 1st engineering 3033 (outside POK processor
engineering) for disk I/O testing. Testing was only taking a percent
or two of cpu, so we scrounge up a 3830 controller and 3330 string and
set up our own private online service.

Then 1978, get an engineering 4341 (introduced/announced 30jun1979)
and in Jan1979, branch office hears about it and cons me into doing a
national lab benchmark looking at getting 70 for compute farm (sort of
the leading edge of the coming cluster supercomputing tsunami). Later
in the 80s, large corporations were ordering hundreds of vm/4341s at a
time for placing out in departmental areas (sort of the leading edge
of the coming distributed computing tsunami). Inside IBM, departmental
conference rooms become scarce, so many converted to vm/4341 rooms.

trivia: earlier, after FS imploded and the rush to get stuff back into
370 product pipelines, Endicott cons me into helping with ECPS for
138/148 ... which was then also available on 4331/4341. Initial
analysis done for doing ECPS ... old archived post from three decades
ago:
https://www.garlic.com/~lynn/94.html#21

... Endicott then convinces me to take trip around the world with them
presenting the 138/148 & ECPS business case to various planning
organizations

mid-80s, communication group was trying to block announce of mainframe
TCP/IP and when they lost, they changed tactics. Since they had
corporate ownership of everything that crossed datacenter walls, it
had to be released through them, what shipped got aggregate
44kbytes/sec using nearly whole 3090 CPU. I then add RFC1044 support
and in some tuning tests at Cray Research between Cray and 4341, got
sustained 4341 channel throughput, using only modest amount of 4341
processor (something like 500 times improvement in bytes moved per
instruction executed).

note, also in the wake of FS implosion, head of POK managed to
convince corporate to kill the VM370 product, shutdown the development
group and transfer all the people to POK for MVS/XA. Endicott
eventually manages to save the VM370 product mission, but had to
recreate a development group from scratch

FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370/168

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370/168
Date: 06 Aug, 2025
Blog: Facebook

As undergraduate, I was hired into a (very) small group in the Boeing
CFO office to help with the formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit)
... at the time I thought the Renton datacenter was the largest in the
world (when I graduate, I join the IBM science center instead of
staying with the CFO).

One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (and online sales and marketing
support HONE was one of the 1st and long time customers). In the
decision to add virtual memory to all 370s and morph CP67 into VM370,
lots of features were simplified or dropped (including multiprocessor
support).

US HONE consolidates all its datacenters in silicon valley with a
bunch of 168s (trivia: when facebook 1st moves into silicon valley, it
was into a new bldg built next door to the former consolidated US HONE
datacenter). I then add multiprocessor/SMP support into my
VM370R3-based CSC/VM, initially for HONE (so they can upgrade all
their 168s to multiprocessor/SMP).

370/165 avg 2.1 machine cycles per 370 instruction. The move to 168
optimized the microcode to avg 1.6 cycles per 370 instruction, with
new memory 4-5 times faster (getting about 2.5MIPS). The 168-3 doubled
the processor cache size, getting to about 3MIPS.

The 168-3 used the 2k bit to index the additional cache entries ... as
a result, 2k page mode (vs1, dos/vs) only ran with half the cache
(same size as the 168-1). VM/370 ran in 4k mode, except when running a
2k virtual operating system (vs1, dos/vs), and could run much slower
because of the constant switching between 2k&4k modes, when the
hardware had to flush the cache.

First half 70s, IBM had the Future System effort,
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

which was going to completely replace 370 (and completely different
than 370; internal politics was shutting down 370 efforts and lack of
new 370s is credited with giving the 370 system clone makers their
market foothold). When FS implodes, there is a mad rush to get stuff
back into the 370 product pipelines, including kicking off the
quick&dirty 3033&3081 efforts in parallel.

The 3033 starts out remapping 168 logic to 20% faster chips. They then
further optimize the 168 microcode to get it down to an avg. of one
machine cycle per 370 instruction (getting about 1.5 times the 168-3
MIPS rate).

The 303x channel director is a 158 engine with just the integrated
channel microcode. A 3031 is two 158 engines, one w/just the
integrated channel microcode, the other w/just the 370 microcode. A
3032 is a 168-3 with channel director (slower than the 168-3 external
channels).

After FS implodes, there is also a new effort to do a 370 16-cpu
multiprocessor (SMP) that I got roped into helping (in part because my
HONE 2-cpu implementation was getting twice throughput of single cpu)
and we con the 3033 processor engineers into working on it in their
spare time (a lot more interesting than remapping 168 logic to 20%
faster chips). Everybody thought it was great until somebody tells
head of POK that it could be decades before POK's favorite son
operating system ("MVS") had (effective) 16-cpu SMP support (MVT/MVS
documents that their 2-cpu support only getting 1.2-1.5 throughput of
single processor; note: POK doesn't ship 16-cpu SMP until after turn
of century).

Head of POK then directs some of us to never visit POK again and
directs 3033 processor engineers heads down and no
distractions. Contributing was head of POK was in the process of
convincing corporate to kill the VM370 product, shutdown the product
group and transfer all the people to POK for MVS/XA (Endicott
eventually manages to save the VM370 product mission for the midrange,
but has to recreate a development group from scratch).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning CP67L, CSC/VM, and/or SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM's 32 vs 64 bits, was VAX

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's 32 vs 64 bits, was VAX
Newsgroups: comp.arch, alt.folklore.computers
Date: Thu, 07 Aug 2025 07:32:35 -1000

John Levine <johnl@taugh.com> writes:

It's a 32 bit architecture with 31 bit addressing, kludgily extended
from 24 bit addressing in the 1970s.

2nd half 70s kludge, with 370s that could have 64mbytes of real memory
with only 24bit addressing ... the virtual memory page table entry (PTE)
had 16bits with 2 "unused bits" ... a 12bit page number (plus 12bit
offset within 4kbyte pages, 24bits total) ... and the two unused bits
were redefined to prepend to the page number ... making a 14bit page
number ... for 26bits (instructions were still 24bit, but virtual
memory was used to translate to 26bit real addressing).

original 360 I/O had only 24bit addressing; adding virtual memory (to
all 370s) added IDALs. The CCW was still 24bit but CCWs were still being
built by applications running in virtual memory ... and (effectively)
assumed any large storage location consisted of one contiguous
area. Moving to virtual memory, a large "contiguous" I/O area was now
broken into page-size chunks in non-contiguous real areas. Translating a
"virtual" I/O program, the original virtual CCW ... would be converted
to a CCW with real addresses and flagged as IDAL ... where the CCW pointed
to an IDAL list of real addresses ... that were 32 bit words ... (31 bits
specifying a real address), one for each (possibly non-contiguous) real
page involved.
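
A minimal sketch of the two mechanisms described above (field layouts
simplified for illustration, not the full 370 PTE/CCW formats):

  PAGE = 4096                            # 4kbyte pages, 12-bit byte offset

  # 2-bit PTE extension: a 14-bit real page number lets 24-bit virtual
  # addresses map into 26 bits (64mbytes) of real storage
  def translate(vaddr, real_page_number):
      offset = vaddr & 0xFFF
      return (real_page_number << 12) | offset      # up to 2**26 - 1

  # IDAL-style translation: a "contiguous" virtual buffer becomes a list
  # of 31-bit real addresses, one per (possibly non-contiguous) real page
  def build_idal(vbuf_start, length, page_table):
      idaws, addr, end = [], vbuf_start, vbuf_start + length
      while addr < end:
          idaws.append(translate(addr, page_table[addr >> 12]))
          addr = (addr & ~(PAGE - 1)) + PAGE        # next page boundary
      return idaws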

--
virtualization experience starting Jan1968, online at home since Mar1970

Tandem Non-Stop

From: Lynn Wheeler <lynn@garlic.com>
Subject: Tandem Non-Stop
Date: 07 Aug, 2025
Blog: Facebook

A small SJR group (including Jim Gray, misc. others from south san
jose, and periodically even a number of non-IBMers) would have
Fridays after work at local watering holes (I had worked with Jim
Gray and Vera Watson on original sql/relational, System/R). Jim Gray
then left SJR for Tandem fall 1980. I had been blamed for online
computer conference on the IBM internal network late 70s and early
80s. It really took off the spring of 1981 when I distribute "friday"
trip report to see Jim at Tandem. From IBMJargon:
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

Folklore is that when the corporate executive committee was told, 5of6 wanted to fire me. Tandem study from Jim
https://www.garlic.com/~lynn/grayft84.pdf
'85 paper
https://pages.cs.wisc.edu/~remzi/Classes/739/Fall2018/Papers/gray85-easy.pdf
https://web.archive.org/web/20080724051051/http://www.cs.berkeley.edu/~yelick/294-f00/papers/Gray85.txt

Original SQL/Relational posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT/HASP

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT/HASP
Date: 07 Aug, 2025
Blog: Facebook

I took two credit hr intro to fortran/computers. At the end of the
semester, I was hired to rewrite 1401 MPIO for 360/30. Univ was
getting 360/67 for tss/360 replacing 709(tape->tape)/1401(709
front-end) and 360/30 was temporary replacement for 1401 until 360/67
arrived. Univ shutdown datacenter on weekends and I would get the
whole place to myself (although 48hrs w/o sleep made monday classes
hard). I got a whole stack of hardware and software manuals and got to
design my own monitor, device drivers, interrupt handlers, error
recovery, storage management, etc. Then within a year of taking intro
class, the 360/67 arrived and I was hired fulltime responsible for
OS/360 (TSS/360 hadn't come to fruition).

Student fortran had run under a second on 709, but initially over a
minute on 360/67 (running as 360/65). I install HASP and cut the time
in half. I then start doing highly modified stage2 sysgen with MFT11,
carefully placing datasets and PDS members to optimize arm seek &
multi-track search; cutting another 2/3rds to 12.9secs. Student
fortran never got better than 709 until I install Univ of Waterloo
WATFOR (ran at 20,000 "cards"/min on 360/65, i.e. 333/sec ... its own
monitor handling multiple jobs in single step; student fortran tended
to be 30-60 cards, operations tended to do a tray of student fortran
cards per run).

Then CSC comes out to install CP67/CMS (3rd installation after CSC
itself and MIT Lincoln Labs) and I mostly get to play with it during
my dedicated weekends. It came with 1052&2741 terminal support and
Univ had some number of ascii tty 33s&35s and I add ascii terminal
support.

First MVT sysgen I did was for 15/16, and then for MVT18/HASP, I
remove 2780 support (to reduce core footprint) and add terminal
support with an editor that simulated CMS edit-syntax for a CRJE-like
function for HASP.

Before I graduate, I was hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services (consolidate all dataprocessing into independent business
unit). I think the Boeing Renton datacenter was the largest in the world (joke was
that Boeing was getting 360/65s like other companies got keypunches).

trivia: my (future) wife was in Crabtree's gburg JES group and one of
the co-authors of the "JESUS" specification (all the features of JES2
& JES3 that respective customers couldn't live without). For
various reasons, it never came to fruition.

ASP, HASP, JES3, JES2, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent univ 709/1401, MPIO, and Boeing CFO posts
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025b.html#117 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#102 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"

--
virtualization experience starting Jan1968, online at home since Mar1970

Some VM370 History

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some VM370 History
Date: 07 Aug, 2025
Blog: Facebook

well ... recent ref:
https://www.garlic.com/~lynn/2025d.html#15 MVT/HASP

CSC comes out to install CP67 (3rd after CSC itself and MIT Lincoln
Labs) and I mostly played with it during my dedicated weekend 48hrs. I
start out rewriting pathlengths for running OS/360 in a virtual
machine. Test stream ran 322secs on the real machine, initially
856secs in virtual machine (CP67 CPU 534secs); after a couple months I
had reduced CP67 CPU from 534secs to 113secs. I then start rewriting
the dispatcher, scheduler, paging, adding ordered seek queuing (from
FIFO) and multi-page transfer channel programs (from FIFO, optimized
for transfers/revolution, getting 2301 paging drum from 70-80 4k
transfers/sec to channel transfer peak of 270).
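
A toy C sketch of the ordered seek queuing idea (not the actual CP67
code): keep pending requests sorted by cylinder so the arm sweeps
across the pack rather than servicing requests in FIFO arrival order.

#include <stdio.h>
#include <stdlib.h>

struct req {
    int cylinder;
    struct req *next;
};

/* insert in ascending cylinder order (simple one-direction sweep; a
   real implementation would also handle the wrap/elevator case) */
static void enqueue_ordered(struct req **head, struct req *r)
{
    struct req **p = head;
    while (*p && (*p)->cylinder <= r->cylinder)
        p = &(*p)->next;
    r->next = *p;
    *p = r;
}

int main(void)
{
    int arrivals[] = { 180, 20, 95, 7, 140 };    /* FIFO arrival order */
    struct req *head = NULL;

    for (int i = 0; i < 5; i++) {
        struct req *r = malloc(sizeof *r);
        r->cylinder = arrivals[i];
        enqueue_ordered(&head, r);
    }
    /* service order is now 7, 20, 95, 140, 180 -- one sweep of the arm */
    for (struct req *r = head; r; r = r->next)
        printf("seek to cylinder %d\n", r->cylinder);
    return 0;
}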

CP/67 came with 1052 & 2741 terminal support, including automagic
terminal type identification (used the SAD CCW to change the port
scanner terminal type). Univ had some number of ASCII terminals (TTY
33&35) and I add TTY terminal support to CP67 (integrated with the
automagic terminal type id). I then want a single dialup number ("hunt
group") for all terminals. Didn't quite work; although the port scanner
type could be changed, IBM had taken a short cut and hard wired each
port's line speed.
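
A hypothetical C sketch of the "automagic" identification loop (helper
names invented, not the real CP67 console/terminal code): switch the
port scanner type and probe until the terminal answers.

#include <stdio.h>

enum termtype { T1052, T2741, TTY, UNKNOWN };

/* stand-ins for issuing a SAD CCW to switch the port scanner and for
   probing the line with that type's identify sequence */
static int set_scanner(int line, enum termtype t) { (void)line; (void)t; return 0; }
static int probe_line(int line, enum termtype t)  { (void)line; return t == T2741; }

static enum termtype identify_terminal(int line)
{
    enum termtype candidates[] = { T1052, T2741, TTY };
    for (int i = 0; i < 3; i++) {
        if (set_scanner(line, candidates[i]) == 0 &&
            probe_line(line, candidates[i]))
            return candidates[i];   /* terminal answered for this type */
    }
    return UNKNOWN;   /* note: line speed was hard-wired per port, so a
                         single hunt group over all types still failed */
}

int main(void)
{
    printf("line 3 identified as type %d\n", identify_terminal(3));
    return 0;
}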

This kicks off a univ. project to build our own IBM terminal
controller: build a 360 channel interface card for an Interdata/3
programmed to emulate the IBM 360 controller, with the addition of
doing line auto-baud. The Interdata/3 is then upgraded to an
Interdata/4 for the channel interface with a cluster of Interdata/3s
for port interfaces. Interdata (and later Perkin-Elmer) sells it as a
360 clone controller, and four of us are written up for (some part of)
the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

trivia: when ASCII/TTY port scanner first arrived for IBM controller,
it came in Heathkit box.

CSC was picking up much of my code and shipping with CP67. Six months
after installing CP67 at the univ, CSC had scheduled a CP67/CMS class
on the west coast, and I'm scheduled to go. I arrive Sunday night and
am asked to teach the CP67 class. It turns out the CSC people
scheduled to teach had resigned that Friday to join NCSS (one of the
early commercial CP67 startups). Later I join a small group in the
Boeing CFO office and after I graduate, I join CSC (instead of staying
with the CFO). Almost immediately I'm asked to teach (more) classes.

With regard to various agencies that had been heavy CP67 users back to
the 60s:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

Early 80s, 308x was supposed to be multiprocessor only (and was some
warmed over technology from the FS implosion). The 3081D 2-CPU had
lower aggregate MIPS than the single processor Amdahl *and* some IBM
production systems had no multiprocessor support (like ACP/TPF) and
IBM was afraid that the whole market would move to Amdahl. There were
a number of hacks done to VM370 multiprocessor to try and improve
ACP/TPF throughput running in a single virtual machine by increasing
overlapped, asynchronous processing in an otherwise idle 2nd (3081)
processor. However those "enhancements" had degraded nearly all the
other VM370 customer multiprocessor throughput by 10-15+%. Then some
VM370 tweaks were made to improve 3270 terminal response (attempting
to mask the degradation).

There were some large customers back to the 60s that were using fast
ASCII glass teletypes, which didn't see any benefit from those VM370 3270
tweaks. I had earlier done something similar, but in the CMS code
... which worked for all terminal types (not just 3270) and was asked
in to help this large, long-time customer; initially reduced Q1 drops
from 65/sec to 43/sec for the same amount of CMS intensive interactive
throughput ... but I wasn't allowed to undo the VM370 ACP/TPF
tweaks. I was allowed to put the VM370 DMKVIO code back to the
original CP67 implementation ... which significantly reduced that part
of VM370 overhead (somewhat offsetting the multiprocessor overhead
tailored for running virtual ACP/TPF).

VM370 multiprocessor posting
https://www.garlic.com/~lynn/2025d.html#12 IBM 370/168

CSC postings
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller postings
https://www.garlic.com/~lynn/submain.html#360pcm
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM RSCS/VNET

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM RSCS/VNET
Date: 07 Aug, 2025
Blog: Facebook

Some of the MIT CTSS/7094 people (CTSS had a msg function on the same
machine) went to the 5th flr for Multics. Others went to the IBM
Science Center on the 4th flr and did virtual machines (virtual memory
hardware mods for 360/40 for CP40/CMS, morphs into CP67/CMS when
360/67 standard with virtual memory became available), the Science
Center wide-area network (morphs into the VNET/RSCS internal corporate
network, technology also used for the corporate sponsored univ
BITNET), lots of other stuff ... including messaging on the same
machine.

IBM Pisa Science Center did "SPM" (sort of superset of later
combination of IUCV, VMCF, and SMSG) for CP67 that was later ported to
VM370. Original RSCS/VNET (before ship to customers) had SPM support
... that supported forwarding messages to anywhere on the network.

A co-worker was responsible for the CP67-based wide-area network; he
was one of the 1969 inventors of GML (a decade later morphs into ISO
SGML and after another decade morphs into HTML at CERN)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

CSC CP67-based wide-area network then grows into the corporate
internal network (larger than arpanet/internet from just about the
beginning until sometime mid/late 80s when the internal network was
forced to convert to SNA).

There were problems with MVS/JES2 systems and they had to be tightly
regulated ... original HASP code had "TUCC" in cols68-71 and scavenged
unused entries in the 255-entry pseudo device table (tended to be
160-180 entries). JES2 would trash traffic where the origin or
destination node wasn't in the local table ... when the internal
network was well past 255 nodes (and JES2 had to be restricted to edge
nodes with no or minimal passthrough traffic).
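
An illustrative C sketch (node names hypothetical, not JES2 internals)
of why a fixed 255-entry table was a problem: anything whose origin or
destination doesn't resolve locally gets discarded rather than passed
through.

#include <stdio.h>
#include <string.h>

#define MAX_NODES 255                     /* one-byte node index */

static char node_table[MAX_NODES][9];     /* 8-char node names */
static int  node_count;

static int lookup(const char *name)
{
    for (int i = 0; i < node_count; i++)
        if (strcmp(node_table[i], name) == 0)
            return i;
    return -1;                            /* not defined locally */
}

static void route_file(const char *origin, const char *dest)
{
    if (lookup(origin) < 0 || lookup(dest) < 0) {
        printf("discarding file %s -> %s (unknown node)\n", origin, dest);
        return;
    }
    printf("forwarding file %s -> %s\n", origin, dest);
}

int main(void)
{
    strcpy(node_table[node_count++], "NODEA");
    strcpy(node_table[node_count++], "NODEB");
    route_file("NODEA", "NODEB");   /* both known: forwarded  */
    route_file("NODEA", "NODEC");   /* unknown dest: discarded */
    return 0;
}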

Also NJE fields were somewhat intermixed with job control fields and
there was a tendency for traffic between JES2 systems at different
release levels to crash the destination MVS. As a result the RSCS/VNET
simulated NJE driver built up a large amount of code that would
recognize differences between the MVS/JES2 origin and destination and
adjust fields to correspond to the immediate destination MVS/JES2
(further restricting MVS systems to edge/boundary nodes, behind a
protective VM370 RSCS/VNET system). There was an infamous case where
changes in a San Jose MVS system were crashing MVS systems in Hursley
(England) and the Hursley VM370/VNET was blamed (because they hadn't
installed the updates to account for the San Jose JES2 field changes).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
ASP, HASP, JES3, JES2, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

some RSCS, VNET, SPM, VMCF, IPCS, SMSG posts
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025.html#116 CMS 3270 Multi-user SPACEWAR Game
https://www.garlic.com/~lynn/2025.html#114 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024g.html#97 CMS Computer Games
https://www.garlic.com/~lynn/2024d.html#43 Chat Rooms and Social Media
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#44 Adventure Game
https://www.garlic.com/~lynn/2022f.html#94 Foreign Language
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#81 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2006k.html#51 other cp/cms history

--
virtualization experience starting Jan1968, online at home since Mar1970

Some VM370 History

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some VM370 History
Date: 08 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#16 Some VM370 History

Some gov agency was very active in the SHARE VM370 group and on
TYMSHARE's VMSHARE. The SHARE installation code was a 3-letter code
that usually represented the company ... in this case, they chose
"CAD" (supposedly standing for "cloak and dagger").

The name was a regular at SHARE, and the name & agency show up on
VMSHARE. Tymshare started providing their CMS-based online computer
conferencing for "free" to SHARE in Aug1976. After transferring from
CSC to SJR in the 2nd half of the 70s, I would regularly get to wander
around datacenters in silicon valley, with regular visits to Tymshare
(and/or see them at the monthly BAYBUNCH meetings hosted by Stanford
SLAC). I cut an early deal with Tymshare to get a monthly tape dump of
all VMSHARE files for putting up on internal systems and the network
(biggest problem was lawyers that were concerned that IBM internal
employees would be exposed to unfiltered customer information). After
Tymshare was acquired by M/D in 1984, VMSHARE had to move to a
different platform.
http://vm.marist.edu/~vmshare/

random example: in 1974, CERN did a VM370/CMS comparison with MVS/TSO
and presented the paper at SHARE. Copies inside IBM were marked
confidential/restricted (2nd highest security, required "need to
know") to limit internal employee exposure to unfiltered customer
information (later after "Future System" implosion and mad rush to get
stuff back into 370 product pipelines, head of POK convinced corporate
to kill VM370 product, shutdown the development group and transfer all
the people to POK for MVS/XA; Endicott eventually manages to save
VM370 product mission, but had to recreate a development group from
scratch).

In recent years, I was reading a couple works about Lansdale and one
mentions a 1973 incident where the VP goes across the river to give a
talk in the agency auditorium. That week I'm teaching a class in the
basement (some 30-40 people). In the middle of one afternoon, half the
class gets up and quietly leaves. Then one of the people remaining
tells me I can look at it in one of two ways, half the class leaves to
go upstairs to listen to the VP in the auditorium and half the class
stays to listen to me. I can't remember for sure if he was also my
host at that 73 class.

trivia: for the fun of it, search VMSHARE memo/note/prob/browse for
that last name; it turns up several (not all the same person). This
happens to be one also mentioning a silicon valley conference where I
was frequently the only IBM attendee:
http://vm.marist.edu/~vmshare/browse.cgi?fn=SUNDEVIL&ft=NOTE

some past posts mentioning Lansdale:
https://www.garlic.com/~lynn/2022g.html#60 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022e.html#98 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#30 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2021j.html#37 IBM Confidential
https://www.garlic.com/~lynn/2021d.html#84 Bizarre Career Events
https://www.garlic.com/~lynn/2019e.html#98 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#90 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019.html#87 LUsers
https://www.garlic.com/~lynn/2018e.html#9 Buying Victory: Money as a Weapon on the Battlefields of Today and Tomorrow
https://www.garlic.com/~lynn/2018d.html#101 The Persistent Myth of U.S. Precision Bombing
https://www.garlic.com/~lynn/2018d.html#0 The Road Not Taken: Edward Lansdale and the American Tragedy in Vietnam
https://www.garlic.com/~lynn/2018c.html#107 Post WW2 red hunt
https://www.garlic.com/~lynn/2013e.html#16 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#48 What Makes an Architecture Bizarre?

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Virtual Memory

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Virtual Memory
Date: 09 Aug, 2025
Blog: Facebook

IBM cambridge science center wanted a 360/50 to modify for virtual
memory, but all the extras were going to FAA/ATC, so had to settle for
360/40 and then did CP40/CMS. When 360/67 standard with virtual memory
became available, CP40 morphs into CP67 (at the time, the official
commercial support was TSS/360 which had 1200 people ... when CSC had
12 people in the CP67/CMS group). There were two commercial, online
spinoffs of CSC in the 60s ... and later in the 70s also commercial
operations like BCS & TYMSHARE, offering commercial online
services. Of course by far, the largest "commercial" CP67 offering was
the internal branch office online sales and marketing support HONE
systems.

Early last decade, a customer asked me to track down the IBM decision
to add virtual memory to all 370s ... and I found a staff member to
the executive making the decision. Basically MVT storage management
was so bad that region sizes had to be specified four times larger
than used, and a typical 1mbyte 370/165 would only run four concurrent
regions, insufficient to keep the system busy and justified. Going to
running MVT in a 16mbyte virtual memory would allow the number of
regions to be increased by a factor of four (capped at 15 by the 4bit
storage protect key) with little or no paging (sort of like running
MVT in a CP67 16mbyte virtual machine).
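
Rough arithmetic behind that justification (just restating the numbers
above; key 0 being reserved for the supervisor is why the cap is 15,
not 16):

  1mbyte 370/165, regions over-specified 4x      ->  only ~4 concurrent regions
  MVT in a 16mbyte virtual address space         ->  roughly 4x as many regions
  4bit storage protect key (key 0 = supervisor)  ->  capped at 15 regions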

Ludlow was doing the initial VS2 implementation on 360/67 (pending 370
engineering models with virtual memory) ... and I would periodically
drop in to visit. There was a little bit of code building the tables,
page replacement, and page I/O. The biggest issue was (EXCP/SVC0)
making copies of channel programs, replacing virtual addresses with
real (same as CP67), and he borrows CP67 CCWTRANS to craft into EXCP
(this was VS2/SVS; later, to get around the 15 region limit of using
4bit storage keys to keep regions separated, SVS was moved to VS2/MVS,
giving each region its own virtual memory address space).

Note in 60s, Boeing had modified MVTR13 to run in virtual memory (sort
of like initial VS2/SVS), but w/o paging (to partially address the MVT
storage management issues) ... more akin to Manchester ... aka single
virtual address space (not lots of virtual address spaces) at a time.

I had done a lot of work on CP67 as an undergraduate before joining
CSC (the univ I was at was the 3rd CP67 installation after CSC itself
and MIT Lincoln labs), work that CSC would ship in the product. In the
decision to add virtual memory to all 370s, there was also a decision
for CP67->VM370 and a lot of features were simplified or dropped. When
I graduated and joined CSC, one of my hobbies was enhanced production
operating systems for internal datacenters and HONE was one of my
first (and a long-time) customer. The SHARE organization was
submitting resolutions to IBM for releasing lots of my CP67
enhancements (incorporated into VM370) to customers. Some pieces
dribbled out in VM370R3 & VM370R4.

Also in the early half of the 70s was the IBM FS effort (completely
different from 370 and going to completely replace it), and during it
internal politics was killing off 370 efforts; the lack of new 370s
during the period is credited with giving the clone 370 makers
(including Amdahl) their market foothold. Then when FS imploded, there
was a mad rush to get stuff back into the 370 product pipelines,
including kicking off the quick&dirty 3033&3081 efforts in parallel.

The head of POK was also convincing corporate to kill the VM370
product, shut down the development group and transfer all the people
to POK for MVS/XA (Endicott eventually manages to save the VM370
product mission, but had to recreate a development group from scratch).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

The final nail in the FS coffin was analysis by the IBM Houston
Science Center that if 370/195 applications were redone for an FS
machine made out of the fastest available technology, the throughput
would be about that of a 370/145 (about a 30 times slowdown).

The original target for CP67->VM370 was the 370/145 ... and it greatly
simplified my (undergraduate) dynamic adaptive scheduling and resource
management done for CP67 ... which Kingston Common really struggled
with for higher end machines. I spent much of 1974 moving lots of CP67
stuff into VM370R2 (including my dynamic adaptive code) for my
internal CSC/VM. Then moved CP67 multiprocessor support into
VM370R3-based CSC/VM ... originally for HONE (US HONE datacenters had
been consolidated in silicon valley) so they could upgrade all the
168s to 2-CPU multiprocessors (getting twice the throughput of 1-CPU
... at a time when MVS docs claimed only 1.2-1.5 times the throughput
of 1-CPU).

I had transferred from CSC to SJR on the west coast and got to wander
around a lot of IBM (and non-IBM) datacenters including disk
bldg14/engineering and bldg15/product test across the street. They
were running prescheduled, 7x24, stand-alone mainframe testing and had
mentioned they had tried MVS, but it had 15min MTBF (requiring manual
re-ipl) in that environment. I offered to rewrite the I/O supervisor,
making it bullet-proof and never fail, allowing any amount of
on-demand, concurrent testing (greatly improving productivity). A
couple years later, with 3380s about to ship, FE had a test of 57
simulated errors (that they believed likely to occur); MVS was still
failing in all 57 cases (and in 2/3rds of the cases, no indication of
why).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online (virtual machine based) commercial offerings
https://www.garlic.com/~lynn/submain.html#online
dynamic adaptive scheduling and resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging, page replacement, working set, etc posts
https://www.garlic.com/~lynn/subtopic.html#clock
CP67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

post about decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
Melinda's VM370 (and some CP67) history
https://www.leeandmelindavarian.com/Melinda#VMHist

some recent posts
https://www.garlic.com/~lynn/2025d.html#15 MVT/HASP
https://www.garlic.com/~lynn/2025d.html#16 Some VM370 History
https://www.garlic.com/~lynn/2025d.html#17 IBM RSCS/VNET
https://www.garlic.com/~lynn/2025d.html#18 Some VM370 History

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Virtual Memory

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Virtual Memory
Date: 09 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory

Other trivia: the 23jun1969 unbundling announcement included charging
for (application) software (but made the case that kernel software
should still be free). Then with the demise of FS and the mad rush to
get stuff back into the 370 product pipelines (along with the
associated rise of 370 clone makers), there was a transition to start
charging for incremental kernel addons (eventually resulting in
charging for all kernel software in the 80s) ... and a bunch of my
internal stuff was chosen as the guinea pig for (charged for) release
(I had to spend some amount of time with lawyers and business people
about kernel software policies), aka became SEPP, prior to SP.

Unfortunately, I included VM370 kernel reorganization for
multiprocessor operation (but not actual multiprocessor support). The
initial kernel charge policy was that hardware support was still
free (and couldn't have a prereq of charge-for
software). When the decision was made to release multiprocessor
support ... that created a problem with its dependency on the
corresponding (charge-for) kernel reorg. The eventual decision was to
move all of that software into the "free" base (while not changing the
price of the remaining kernel add-on).

23jun1969 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle
future system
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: HA/CMP
Date: 09 Aug, 2025
Blog: Facebook

Mid-80s, the communication group was fighting off the IBM mainframe
TCP/IP support release; when they lost, they then changed tactics and
said that since they had corporate responsibility for everything that
crossed the datacenter walls, it had to be released through them; what
shipped got aggregate 44kbytes/sec using nearly a whole 3090
processor. I then did RFC1044 support and in some tuning tests at Cray
Research between a Cray and a 4341, got sustained 4341 channel
throughput using only a modest amount of 4341 cpu (something like 500
times improvement in bytes moved per instruction executed).

1988, IBM branch office asked if I could help LLNL standardize some
serial stuff they were working with, which quickly becomes
fibre-channel standard ("FCS", including some stuff I had done in
1980; initial 1gbit/sec, full-duplex, 200mbytes/sec). Then the IBM POK
mainframe group finally releases some serial stuff with ES/9000 as
ESCON (when it is already obsolete, initially 10mbyte/sec, later
upgraded to 17mbyte/sec).

Also 1988, HA/6000 is approved, initially for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXCluster
support in the same source base as Unix. Early Jan1992, in a meeting
with the Oracle CEO, IBM AWD executive Hester tells Ellison that we
would have 16-system clusters by mid92 and 128-system clusters by
ye92. Mid-Jan92, I convince IBM FSD to use HA/CMP for gov
supercomputer bids. Then late-Jan92, cluster scale-up is transferred
for announce as IBM supercomputer (for technical/scientific *ONLY*)
and we were told we couldn't work on anything with more than four
systems (we leave IBM a few months later).

There apparently was some concern that HA/CMP would eat the commercial
mainframe (1993):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

Late 90s, I did some consulting for Steve Chen (at the time CTO of
Sequent, before IBM bought it and shut it down).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

trivia: sometime after leaving IBM, I'm brought in as a consultant at
a small client/server startup; two of the former Oracle employees
(that were in the Jan92 Ellison/Hester meeting) are there, responsible
for something called "commerce server", and they want to do payment
transactions. The startup had also invented this technology called
"SSL" they want to use. The result is now sometimes called "electronic
commerce". I have responsibility for webservers to payment networks.

Payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Virtual Memory
Date: 09 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#20 370 Virtual Memory

Manchester & VS2/SVS used virtual memory for mapping a single address
space ... needing more addressing than available real memory; in the
VS2/SVS case it was more to compensate for the poor MVT storage
management ... because there was little or no paging. Then the move to
VS2/MVS was because separation/protection was needed for more than 15
concurrently executing regions (the limit provided by 4bit storage
protection keys); giving each executing region its own separate
virtual address space.

Then the move to MVS/XA was because of the extensive OS/360
pointer-passing API. Coming from VS2/SVS, where everything was in the
same address space ... kernel calls (SVC) meant the supervisor
directly addressed parameters pointed to by the caller's pointer
... so the 8mbyte kernel image occupied 8mbytes of every caller's
virtual address space (cutting application space from 16mbytes to
8mbytes). Then because subsystems were moved to their own separate
address spaces ... to access calling parameters, they had to be placed
into the CSA (common segment area) that was mapped into every
application address space (leaving 7mbytes). Then because CSA space
requirements were somewhat proportional to the number of concurrent
regions and number of subsystems, CSA became the common system area
... and by 3033 had exploded to 5-6mbytes (leaving 2-3mbytes for the
application, but threatening to become 8mbytes, leaving zero).
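
Rough arithmetic of the squeeze (using the sizes above):

  16mbyte virtual address space
   - 8mbyte kernel image mapped into every address space  ->  8mbyte left
   - ~1mbyte CSA initially                                ->  7mbyte for the application
   - 5-6mbyte CSA by 3033                                 ->  2-3mbyte for the application
     (with CSA threatening to reach 8mbyte, leaving zero)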

370/xa introduced access registers and primary/secondary address
spaces for subsystems ... parameters could stay in caller's address
space (not CSA) ... system would switch the caller's address space to
secondary and load the subsystem's address space into primary ... now
subsystems can access everything in the caller's address space
(including parameters) ... on return the process was reversed, moving
the secondary address space back to primary. The 3033 issue was
becoming so dire that a subset of access registers was retrofitted to
3033 as "dual address space mode".

trivia: the person that retrofitted "dual address space mode" for
3033, in the early 80s left IBM for HP ... and later was one of the
primary architects for Intel Itanium.

paging posts:
https://www.garlic.com/~lynn/subtopic.html#clock

some posts mentioning pointer passing API, MVS problems and CSA
(both segment and system)
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing
https://www.garlic.com/~lynn/2012n.html#21 8-bit bytes and byte-addressed machines
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#17 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2010p.html#21 Dataspaces or 64 bit storage
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day
https://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Virtual Memory
Date: 10 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#20 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#22 370 Virtual Memory

other trivia: the following mentions that customers weren't moving to
VS2/MVS (as fast as needed; I was at the SHARE where it was 1st
played), see the "$4K" reference in the Glossary:
http://www.mxg.com/thebuttonman/boney.asp

with the FS implosion there was a mad rush to get stuff back into the
370 product pipelines, kicking off the quick&dirty 3033&3081 efforts
in parallel ... along with 370/xa ... referred to as "811" (for the
Nov78 publication of the specification, design, architecture)
... nearly all done for MVS/XA (head of POK had already convinced
corporate to kill the vm370 product, shut down the development group
and transfer all the people to POK for MVS/XA; endicott managed to
save the vm370 product mission for the mid-range, but had to recreate
a development group from scratch).

Later, customers weren't migrating from MVS to MVS/XA (as required,
and CSA was threatening to take over all that remained of the 16mbyte
address space). Amdahl was having more success because Amdahl machines
had the microcode (virtual machine) hypervisor (multiple domain) and
could run MVS & MVS/XA concurrently (IBM wasn't able to respond with
LPAR&PR/SM for nearly a decade). POK had done a simplified VMTOOL for
MVS/XA development, which needed special microcode (to slip in&out of
VM-mode, eventually named SIE) and the microcode had to be swapped
in&out (sort of like overlays) because of limited 3081 microcode space
(so it never was targeted for performance) ... eventually VMTOOL was
made available to 3081 customers as VM/MA (migration aid) and VM/SF
(system facility).

Part of the issue behind needing an ever increasing number of
concurrently executing regions as machines increased in power was
covered in a tome I wrote in the early 80s (something I had started
pointing out in the mid-70s): that disk relative system throughput had
declined by an order of magnitude since 360 announce (in the 60s),
i.e. disks got 3-5 times faster while systems got 40-50 times
faster. A disk division executive took exception to the analysis and
assigned the division performance group to refute the claim. However
after a couple weeks, they came back and effectively said that I had
slightly understated the problem. Their analysis was then respun for a
presentation on how to configure disks and filesystems for better
system throughput (16Aug1984, SHARE 63, B874).
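
The arithmetic behind the order-of-magnitude claim: systems 40-50
times faster divided by disks 3-5 times faster is roughly 40/4 to
50/5, i.e. about a 10x decline in disk throughput relative to system
throughput.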

recent past posts about MVS & MVS/XA migration
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Yorktown Research

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Yorktown Research
Date: 10 Aug, 2025
Blog: Facebook

Transferred from CSC to SJR on the west coast ... and then for
numerous transgressions (folklore; 5of6 of corporate executive
committee wanted to fire me), was transferred to YKT ... still lived
in San Jose and had various IBM offices/labs in the area, but had to
commute to YKT a couple times a month (SJ monday, SFO->JFK redeye
monday night, bright and early Tues in YKT, Tues-Fri in YKT and
JFK->SFO Fri afternoon). Was told that they could never make me a
fellow with 5of6 of corporate executive committee wanting to fire me
... but if I kept my head down, they could route funding my way as if
I were one.

I also had part of a wing and labs in the Los Gatos lab, and along the
way, funding for "HSDT", T1 and faster computer links ... and battles
with the communication group (IBM had 2701 controllers in the 60s w/T1
support, but the 70s transition to SNA/VTAM and various issues capped
controllers at 56kbit/sec). Initially had a T1 circuit over the company
T3 C-band TDMA satellite system, between LSG and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi E&S lab in Kingston that
had boat loads of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems

Then got a dedicated custom designed Ku-band TDMA system, initially
three stations: LSG, YKT, and Austin (which included allowing the RIOS
chip design team to use the EVE in San Jose).

Was also working with the NSF director and was supposed to get $20M to
interconnect the NSF Supercomputer Centers. Then congress cuts the
budget, some other things happen and eventually an RFP is released. NSF
28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, NSFnet becomes the NSFNET
backbone, precursor to the modern internet.

An IBM branch office asked if I could help LLNL standardize some serial
stuff they were working with, which quickly became the fibre-channel
standard ("FCS", including some stuff I had done in 1980, initial
1gbit/sec, full-duplex, aggregate 200mbyte/sec). Later POK ships their
fiber stuff as ESCON (when it is already obsolete). Same year, got the
HA/6000 project, initially for NYTimes to move their newspaper system
(ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXcluster
support in the same source base as Unix.

We had reported to the executive that then goes over to head up (AIM) Somerset.

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), we are told we
aren't allowed to work on anything with more than 4-system clusters,
and we leave IBM a few months later.

A little later, I was asked in as a consultant to a small
client/server startup. Two former Oracle employees (that were in the
Ellison/Hester meeting) were there responsible for something called
"commerce server" and they wanted to do payment transactions. The
startup had done some technology they called "SSL" they wanted to use;
the result is now frequently called "electronic commerce"; I had
responsibility for everything between webservers and payment
networks. The IETF/Internet RFC Editor, Postel, also let me help him
with the periodically re-issued "STD1".

Designed a security chip, working with a Siemens guy with an office in
the old ROLM facility. Siemens spins the chip business off as Infineon
and the guy I was working with became its president and rang the bell
at NYSE. Was then getting it fab'ed at a new security chip fab in
Dresden (already certified by US & German govs) and was required to do
an audit walk-through. The TD to the agency DDI was doing an assurance
panel in the Trusted Computing track at IDF
... ref gone 404, but lives on at wayback machine
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some x959&AADS posts
https://www.garlic.com/~lynn/subpubkey.html#x959
x959&aads refs
https://www.garlic.com/~lynn/x959.html

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Management

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Management
Date: 11 Aug, 2025
Blog: Facebook

1972, Learson tried (and failed) to block bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

Future Systems posts
https://www.garlic.com/~lynn/submain.html#futuresys

Late 80s, AMEX and KKR were in competition for private-equity,
reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away the president of AMEX to help.

I was introduced to John Boyd in the early 80s and would sponsor his
briefings at IBM. In 89/90, the Marine Corps Commandant leverages Boyd
for a makeover of the corps (at a time when IBM was desperately in need
of a makeover). Then IBM has one of the largest losses in the history
of US companies and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company (a take-off on the "baby bell"
breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

IBM downturn/downfall/breakup posts:
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
and
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

from Learson 1972 Management Briefing:

Management Briefing
Number 1-72: January 18,1972
ZZ04-1312
TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of
bureaucracy. Evidently the earlier ones haven't worked. So this time
I'm taking a further step: I'm going directly to the individual
employees in the company. You will be reading this poster and my
comment on it in the forthcoming issue of THINK magazine. But I wanted
each one of you to have an advance copy because rooting out
bureaucracy rests principally with the way each of us runs his own
shop.

We've got to make a dent in this problem. By the time the THINK piece
comes out, I want the correction process already to have begun. And
that job starts with you and with me.

Vin Learson

... snip ...

IBM wild duck poster 1973
https://collection.cooperhewitt.org/objects/18618011/

Before research, I had joined the cambridge science center after
graduation. I would attend user group meetings and drop into customer
accounts; the director of one of IBM's largest (financial industry)
datacenters especially liked me to drop in and talk technology. At one
point, the local IBM branch manager horribly offended the customer and
in retribution, they ordered an Amdahl computer (it would be a lonely
Amdahl in a vast sea of blue; Amdahl had been selling into the
technical/scientific market and this would be the first for a true
blue, commercial customer). I was asked to go live on-site for 6-12
months (to help obfuscate the reason for the order). I talked it over
with the customer and then refused the request. I was then told that
the branch manager was a good sailing buddy of the IBM CEO and if I
refused, I could say goodbye to career, promotions, raises.

Amdahl leaves after ACS/360 is killed
https://people.computing.clemson.edu/~mark/acs_end.html

Later after transferring to SJR on the west coast, I 1st tried to have
a Boyd briefing done through San Jose plant site education. Initially,
they agreed ... but later, as I provided more info about the briefing
and prevailing in adversarial situations, they told me IBM spends a
great deal educating managers in handling employees and it wouldn't be
in IBM's best interest to expose general employees to Boyd. I should
limit the audience to senior members of competitive analysis
departments. First briefing was in the SJR auditorium (open to all). I
did learn that a "cookie guard" was required for break refreshments
... otherwise the refreshments would have disappeared into the local
population by break time. I was then admonished that the unspoken rule
was that talks by important people had to be scheduled first in YKT
before other research locations.

other Boyd:
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory
https://en.wikipedia.org/wiki/OODA_loop

Boyd related posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 1655

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 1655
Date: 11 Aug, 2025
Blog: Facebook

1982, started getting "IBM 1655" (solid state disk from intel) ... it
could emulate four 2305s (48mbyte) on a 1.5mbyte channel (7-8ms/page)
... but could also be configured in native mode with 3mbyte data
streaming (3ms/page). My "SYSPAG" was a way of specifying DASD
configuration for paging, w/o having explicitly coded device type
rules. A decade earlier, I had released "page migration", checking for
idle pages on "fast" paging devices and moving them to "slower" paging
devices (page replacement for 3 levels, rather than just memory/DASD
2-level).
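
A toy C sketch of the page migration idea (not the actual CP code):
periodically sweep the fast paging device and move pages idle past a
threshold out to slower paging DASD, freeing fast slots for the active
set.

#include <stdio.h>

#define NPAGES    8
#define IDLE_MAX  2          /* sweeps a page may sit unreferenced */

enum tier { FAST_DEV, SLOW_DEV };

struct pageslot {
    enum tier where;
    int idle_sweeps;         /* sweeps since last reference */
};

static void migrate_sweep(struct pageslot p[], int n)
{
    for (int i = 0; i < n; i++) {
        if (p[i].where == FAST_DEV && p[i].idle_sweeps > IDLE_MAX) {
            p[i].where = SLOW_DEV;          /* copy out, free fast slot */
            printf("page %d migrated fast -> slow\n", i);
        }
    }
}

int main(void)
{
    struct pageslot pages[NPAGES] = {
        { FAST_DEV, 0 }, { FAST_DEV, 5 }, { FAST_DEV, 1 }, { FAST_DEV, 9 },
        { SLOW_DEV, 3 }, { FAST_DEV, 0 }, { FAST_DEV, 4 }, { SLOW_DEV, 7 },
    };
    migrate_sweep(pages, NPAGES);
    return 0;
}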

From: wheeler
To: distribution

re: 1655 native mode;

I have my SYSPAG updates applied to hpo3.2 which eliminates much of
the device specific support code in and around the paging system
... as a result much of the native mode 1655 updates were eliminated
or significantly reduced. Primary places of code remaining are DMKPAG
(actual CCWs), DMKCCW (virtual ccws - use as attached device), DMKCPI
(real CCWS at IPL time), and DMKFOR (ccws to format).

... snip ...

paging, page replacement, page I/O:
https://www.garlic.com/~lynn/subtopic.html#clock

posts mentioning 1655 and SYSPAG
https://www.garlic.com/~lynn/2021j.html#28 Programming Languages in IBM
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2011e.html#79 I'd forgotten what a 2305 looked like
https://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance
https://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller chache
https://www.garlic.com/~lynn/2006y.html#9 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006t.html#18 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 1655

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 1655
Date: 11 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#26 IBM 1655

I got HSDT, T1 and faster computer links (both terrestrial and
satellite) and lots of battles with the communication group (the 60s
IBM 2701 controller supported T1, but the 70s move to SNA/VTAM and
associated issues appeared to cap controllers at 56kbits/sec).

Mid-80s, they generated an analysis that customers weren't looking for
T1 support until sometime well into the 90s. They showed the number of
"fat pipe" configurations (parallel 56kbit links treated as a single
logical link) ... and found they dropped to zero by seven parallel
links (what they didn't know, or didn't want to publicize, was that
typical telco tariff for five or six 56kbit links was about the same
as for a full T1). A trivial survey by HSDT found 200 customers with
full T1; they had just switched to non-IBM hardware and software.
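
The arithmetic behind the tariff point: seven parallel 56kbit links is
only about 392kbit aggregate, while a full T1 is 1.544mbit ... so at
the five-or-six-link point, customers were already paying roughly
full-T1 money for about a quarter (or less) of T1 bandwidth.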

About the same time they were fighting off release of mainframe TCP/IP
support. When they lost, they changed tactics and said that since
they had corporate ownership of everything that crossed datacenter
walls, it had to be released through them. What shipped got aggregate
44kbytes/sec using nearly a full 3090 processor. I then added RFC1044
support and in some tuning tests at Cray Research between a Cray and a
4341, got sustained 4341 channel throughput using only a modest amount
of 4341 CPU (something like 500 times improvement in bytes moved per
instruction executed).

Some univ analysis claimed that the LU6.2 VTAM pathlength was 160K
instructions while the equivalent unix workstation TCP pathlength was
5K instructions.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

Univ, Boeing/Renton, IBM/HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: Univ, Boeing/Renton, IBM/HONE
Date: 11 Aug, 2025
Blog: Facebook

Within a year of taking a two credit hr fortran/computer class, a
360/67 arrived (part of replacing 709/1401), originally for TSS/360
(which never came to production), and I was hired fulltime responsible
for os/360. Then CSC comes out and installs CP67/CMS (3rd after CSC
itself and MIT Lincoln Labs) and I mostly get to play with it during
my dedicated weekend time (univ shutdown datacenter on weekends,
although 48hrs w/o sleep made monday classes hard). CP67 supported
1052&2741 terminals with automagic terminal type identification
(switching the terminal-type port scanner as needed). Univ. had some
number of ASCII TTY 33&35, so I add ascii terminal support (integrated
with automagic terminal type id; trivia: when the ASCII port scanner
had been delivered to the univ, it came in a Heathkit box). I then
wanted a single dial-up number ("hunt group") for all
terminals. Didn't quite work, IBM had taken a short-cut and hardwired
the line speed for each port. That kicks off a clone controller
project: implement a channel interface board for an Interdata/3
programmed to emulate the IBM controller, with the addition that it
supports auto line speed. It is then upgraded with an Interdata/4 for
the channel interface, with a cluster of Interdata/3s for port
interfaces. Four of us are then written up for (some part of) the
clone controller business ... sold by Interdata and later Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

Then before I graduate, I'm hired into a small group in the Boeing CFO
office to help with the formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit). I
think Boeing Renton datacenter was the largest in the world (360/65s
arriving faster than they could be installed, boxes constantly staged
in hallways around the machine room, joke that Boeing got 360/65s like
other companies acquired keypunches). Lots of politics between the
Renton director and the CFO, who only had a 360/30 up at Boeing Field
for payroll (although they enlarge the room and install a 360/67 for
me to play with, when I'm not doing other stuff).

Then when I graduate, I join IBM science center (instead of staying
with CFO). One of my hobbies after joining IBM was enhanced production
systems for internal datacenters (online sales&marketing support HONE
systems were one of the first and a long time customer, initially CP67,
then VM370; HONE also had me go along for early non-US HONE
installs). With the decision to add virtual memory to all 370s, a new
group was formed to morph CP67 into VM370, but lots of CP67 stuff was
greatly simplified and/or dropped. 1974, I start moving lots of stuff
into VM370R2 for my CSC/VM. HONE then consolidates their US 370
datacenters in Palo Alto (across the back parking lot from PASC,
trivia: when FACEBOOK 1st moves into Silicon Valley, it was into a new
bldg built next door to the former HONE datacenter). I then start
putting multiprocessor support into VM370R3-based CSC/VM, initially
for US HONE so they could upgrade all their 370/168s to 2-CPU systems.

trivia: after bay area earthquake in early 80s, HONE was 1st
replicated in Dallas, and then a 3rd in Boulder.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller
https://www.garlic.com/~lynn/submain.html#360pcm
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

Recent posts mentioning Boeing CFO, Renton, BCS ("boeing computer
services")
https://www.garlic.com/~lynn/2025d.html#15 MVT/HASP
https://www.garlic.com/~lynn/2025d.html#12 IBM 370/168
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#103 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#100 When Big Blue Went to War
https://www.garlic.com/~lynn/2025c.html#83 IBM HONE
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025b.html#117 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#106 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#102 Large IBM Customers
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#70 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#22 IBM SE Asia
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#40 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#58 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM PS2

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM PS2
Date: 12 Aug, 2025
Blog: Facebook

Head of POK took over Boca & PCs. There was a joke that IBM lost $5
on every PS2 made, but IBM would make it up with volume. Boca then
hires Dataquest (since bought by Gartner) to do a study of PC futures
(including a video tape round table of several silicon valley
experts). I had known the person running the study for several years
... and I was asked to be one of the silicon valley experts. I cleared
it with my local management, and Dataquest would obfuscate my bio so
Boca wouldn't recognize me as an IBM employee.

2010: a max-configured z196 (mainframe) benchmarked at 50BIPS
(industry standard benchmark, number of program iterations compared to
a reference platform) and went for $30M. At the same time an E5-2600
server blade benchmarked at 500BIPS (program iterations compared to
the same reference platform) and the IBM base list price was $1815
... and large cloud operations (dozens or scores of megadatacenters
around the world, each with half a million or more blade servers and
enormous automation) were claiming they assembled their own blade
servers at 1/3rd the price of brand name servers. Then the industry
press had an article that server component vendors were shipping at
least half their product directly to cloud megadatacenters, and IBM
sells off its server blade business.
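
A quick back-of-envelope comparison in Python, using just the figures
above (my own illustrative sketch, not from the original posts):

    # price/performance using only the numbers quoted above
    z196_bips, z196_price = 50, 30_000_000   # max-configured z196, 2010
    e5_bips, e5_price = 500, 1815            # E5-2600 blade, IBM base list price

    print(z196_price / z196_bips)   # 600,000 dollars per BIPS
    print(e5_price / e5_bips)       # about 3.63 dollars per BIPS
    print((z196_price / z196_bips) / (e5_price / e5_bips))   # ~165,000:1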

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

posts mentioning Dataquest, Gartner, PS2, Boca
https://www.garlic.com/~lynn/2024f.html#42 IBM/PC
https://www.garlic.com/~lynn/2024e.html#103 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2023g.html#59 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#13 IBM/PC
https://www.garlic.com/~lynn/2022h.html#109 terminals and servers, was How convergent was the general use of binary floating point?
https://www.garlic.com/~lynn/2022h.html#104 IBM 360
https://www.garlic.com/~lynn/2022h.html#38 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2021k.html#36 OS/2
https://www.garlic.com/~lynn/2021f.html#72 IBM OS/2
https://www.garlic.com/~lynn/2021.html#68 OS/2
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2017h.html#113 IBM PS2
https://www.garlic.com/~lynn/2017f.html#110 IBM downfall
https://www.garlic.com/~lynn/2017d.html#26 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2017b.html#23 IBM "Breakup"
https://www.garlic.com/~lynn/2014l.html#46 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2013i.html#4 IBM commitment to academia
https://www.garlic.com/~lynn/2012k.html#44 Slackware
https://www.garlic.com/~lynn/2010c.html#78 SLIGHTLY OT - Home Computer of the Future (not IBM)
https://www.garlic.com/~lynn/2008d.html#60 more on (the new 40+ yr old) virtualization

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Virtual Memory
Date: 12 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#20 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#22 370 Virtual Memory
https://www.garlic.com/~lynn/2025d.html#23 370 Virtual Memory

VM/370 CMS had 64k bytes of OS/360 simulation (joke that CMS's
64kbytes was more effective than MVS's 8mbytes). Circa 1980, the san
jose plant site had some large apps that required MVS because they
wouldn't run on CMS. Then the Los Gatos lab added 12kbytes of further
OS/360 simulation and got nearly all the rest ported from MVS to CMS.

At the time Burlington had a 7mbyte VLSI design Fortran app and
specially generated MVS systems restricted to an 8mbyte kernel image
and 1mbyte CSA ... creating a brick wall at 7mbyte for the fortran app
(any time enhancements/changes were made, it ran right back into the
7mbyte brick wall). Los Gatos offered to provide them the extra
12kbytes of OS/360 simulation ... CMS running in a 16mbyte virtual
machine would use less than 192kbytes ... leaving the rest of the
16mbytes for the Burlington VLSI fortran app (more than doubling the
addressing available, compared to their specially created MVS
systems). However Burlington was a heavily POK-influenced shop, and
the head of POK had already gotten corporate to kill the VM370
product, shutdown the development group, and transfer all the people
to POK (for MVS/XA) ... having all the Burlington 370s move to
VM370/CMS would be a great loss of face (Endicott had managed to save
the VM370 product for the mid-range, but was still in the process of
recreating a development group from scratch ... so much of the
VM370/CMS work was being done by the internal community).
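
To make the address-space arithmetic explicit, a small sketch using
only the numbers above:

    # 16mbyte address space arithmetic from the numbers above
    MB = 1024 * 1024

    # specially generated MVS: 8mbyte kernel image plus 1mbyte CSA
    mvs_app_space = 16*MB - 8*MB - 1*MB    # 7mbytes -- the "brick wall"

    # CMS in a 16mbyte virtual machine, with CMS itself under 192kbytes
    cms_app_space = 16*MB - 192*1024       # roughly 15.8mbytes for the app

    print(cms_app_space / mvs_app_space)   # ~2.26, i.e. more than double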

some recent posts mentioning Los Gatos lab:
https://www.garlic.com/~lynn/2025d.html#24 IBM Yorktown Research
https://www.garlic.com/~lynn/2025d.html#7 IBM ES/9000
https://www.garlic.com/~lynn/2025d.html#1 Chip Design (LSM & EVE)
https://www.garlic.com/~lynn/2025c.html#116 Internet
https://www.garlic.com/~lynn/2025c.html#110 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#107 IBM San Jose Disk
https://www.garlic.com/~lynn/2025c.html#104 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#93 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025b.html#86 Packet network dean to retire
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#74 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#37 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2025b.html#21 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#33 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#12 IBM APPN
https://www.garlic.com/~lynn/2025.html#2 IBM APPN
https://www.garlic.com/~lynn/2024g.html#103 John Boyd and Deming
https://www.garlic.com/~lynn/2024g.html#102 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#76 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#57 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2024g.html#6 IBM 5100
https://www.garlic.com/~lynn/2024f.html#82 IBM Registered Confidential and "811"
https://www.garlic.com/~lynn/2024f.html#45 IBM 5100 and Other History
https://www.garlic.com/~lynn/2024f.html#39 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024e.html#145 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#79 NSFNET
https://www.garlic.com/~lynn/2024e.html#63 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#58 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#40 Instruction Tracing
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#85 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#80 IBM ATM At San Jose Plant Site
https://www.garlic.com/~lynn/2024d.html#19 IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#5 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#114 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#81 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#68 Berkeley 10M
https://www.garlic.com/~lynn/2024b.html#27 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#42 Los Gatos Lab, Calma, 3277GA
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#15 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2024.html#9 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#8 Niklaus Wirth 15feb1934 - 1jan2024
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

Public Facebook Mainframe Group

From: Lynn Wheeler <lynn@garlic.com>
Subject: Public Facebook Mainframe Group
Date: 13 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group

... mostly repeat from post:

late 70s & early 80s, I was blamed for online computer conferencing
on the internal network (larger than the arpanet/internet from the
late 60s to sometime mid/late 80s, about the time it was forced to
convert to SNA). It really took off in spring '81 when I distributed a
trip report of a visit to Jim Gray at Tandem (he had left IBM SJR in
fall of 1980); only about 300 actually participated, but claims were
that upwards of 25,000 were reading. From IBMJargon:
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf

  Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

Six copies of a 300-page extraction from the memos were put together
in Tandem 3-ring binders and sent to each member of the executive
committee, along with an executive summary and an executive summary of
the executive summary. A small bit is reproduced in this (linkedin)
post:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Some task forces were formed to study the phenomenon and a researcher
was hired to study how I communicated. The researcher sat in the back
of my office for nine months, taking notes on conversations and phone
calls, and got copies of all my incoming and outgoing email and logs
of all instant messages. The result was IBM (internal) reports,
conference talks&papers, books and a Stanford PhD (joint between
language and computer AI, Winograd was advisor on the AI side).
Eventually IBM forum software was created along with officially
sanctioned, moderated FORUMs.

Also from
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

1972, Learson had tried (and failed) to block the bureaucrats,
careerists, and MBAs from destroying Watson culture/literacy,
pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
then, about Future System in the 1st half of the 70s, from the 1993
book Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

Leading to IBM having one of the largest losses in the history of US
companies and being reorged into the 13 "baby blues" in preparation
for breaking up the company (a take-off on the "baby bell" breakup a
decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left the company, but get a call from the bowels of
(corp hdqtrs) Armonk asking us to help with the corporate
breakup. Before we get started, the board brings in the former AMEX
president as CEO to try and save the company, who (somewhat) reverses
the breakup.

note: late 80s, a senior disk engineer got a talk scheduled at the
annual, world-wide, internal communication group conference,
supposedly on 3174 performance. However, his opening was that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing a drop in disk sales with
data fleeing mainframe datacenters to more distributed-computing
friendly platforms. The disk division had come up with a number of
solutions, but they were all being vetoed by the communication group
(with their corporate ownership of everything that crossed datacenter
walls). The communication group stranglehold on mainframe datacenters
wasn't just disks, and a couple yrs later IBM has one of the largest
losses in the history of US companies. The disk division executive's
partial countermeasure (to the communication group) was investing in
distributed computing startups that would use IBM disks (and he would
periodically call us in to visit his investments to see if we could
provide any help).

... other trivia: mid-80s, the communication group was fighting off
release of mainframe tcp/ip support; when they lost, they changed the
strategy. Since they had corporate responsibility for everything that
crossed datacenter walls, it had to be released through them. What
shipped got aggregate 44kbytes/sec using nearly a whole 3090
processor. I then added RFC1044 support and in some tuning tests at
Cray Research between a Cray and a 4341, got sustained 4341 channel
throughput using only a modest amount of 4341 processor (something
like a 500 times increase in the bytes moved per instruction
executed).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
communication group terminal emulation & strangle hold on datacenters
https://www.garlic.com/~lynn/subnetwork.html#emulation
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
Date: 13 Aug, 2025
Blog: Facebook

MIT CTSS/7094 had a form of email.
https://multicians.org/thvv/mail-history.html

Then some of the MIT CTSS/7094 people went to the 5th flr to do
MULTICS. Others went to the IBM Cambridge Science Center on the 4th
flr and did virtual machines (1st modified a 360/40 w/virtual memory
and did CP40/CMS, which morphs into CP67/CMS when the 360/67, standard
with virtual memory, becomes available; precursor to VM370), did the
science center wide-area network (that morphs into the IBM internal
network, larger than the arpanet/internet from the beginning until
sometime mid/late 80s, about the time it was forced to convert to SNA;
technology also used for the corporate sponsored univ BITNET), and
invented GML in 1969 (precursor to SGML and HTML, etc). From one of
the GML inventors, about the science center wide-area network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

The IBM Pisa Scientific Center had done SPM for CP67 (later ported to
internal VM370), a superset of the combination of (the later VM370)
VMCF, IUCV and SMSG. RMSG/VNET supported SPM (even the version sent to
customers) ... which could be used for instant messaging on the
internal network. SPM was used by a multi-user client/server space war
game (and with RMSG/VNET SPM support, clients could be on any node on
the internal network). Some number of apps internally and on BITNET
supported the instant messaging capability.

PROFS started out picking up internal apps and wrapping 3270 menus
around them (for the less computer literate). They picked up a very
early version of VMSG for the email client. When the VMSG author tried
to offer them a much enhanced version of VMSG, the profs group tried
to have him separated from the company. The whole thing quieted down
when he demonstrated that every VMSG (and PROFS) email had his
initials in a non-displayed field. After that he only shared his
source with me and one other person. VMSG also contained an ITPS
format option for email sent to the gateway between the internal
network and ITPS.

The VMSG author also did Parasite/Story, a CMS application that used
3270 pseudo devices and its own HLLAPI-like language (before the
IBM/PC) ... and could talk to CCDN via the PASSTHRU/CCDN gateway. Old
archived post with PARASITE/STORY information (a remarkable aspect was
the code was so efficient it could run in less than 8k bytes).
https://www.garlic.com/~lynn/2001k.html#35
and (field engineering) RETAIN PUT Bucket Retriever "Story"
https://www.garlic.com/~lynn/2001k.html#36

Another system was the branch office online sales&marketing support
HONE systems. When I joined IBM, one of my hobbies was enhanced
production operating systems for internal datacenters, and HONE was
the 1st (and long time) customer, initially CP67/CMS systems and 2741
terminals, moving to VM370/CMS systems (all over the world) and 3270
terminals.

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

posts mentioning vmsg, parasite, story
https://www.garlic.com/~lynn/2025b.html#60 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#90 Online Social Media
https://www.garlic.com/~lynn/2024f.html#91 IBM Email and PROFS
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#43 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2017k.html#27 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2015d.html#12 HONE Shutdown
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014j.html#25 another question about TSO edit command
https://www.garlic.com/~lynn/2014h.html#71 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014e.html#49 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013d.html#66 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012d.html#17 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009q.html#66 spool file data
https://www.garlic.com/~lynn/2009q.html#4 Arpanet
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
Date: 14 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network

The Cambridge Science Center had also ported APL\360 to CP67/CMS for
CMS\APL (needed to rework storage management from 16kbyte workspace
swapping to large demand-paged workspaces, and also add an API for
using system services like file I/O), and most of the sales, marketing
and support apps were done in CMS\APL ... upgraded to APL\CMS in the
move to VM370/CMS. In the morph of CP67->VM370, lots of stuff was
simplified and/or dropped. 1974, with VM370R2, I started moving lots
of stuff (feature, function, performance) into VM370 for my internal
CSC/VM. Then for VM370R3 CSC/VM, I put multiprocessor support back in,
originally for HONE so they could upgrade all their 168s to 2-CPU
systems (each system getting twice the throughput of a single CPU). US
HONE had consolidated all their datacenters in silicon valley (trivia:
when FACEBOOK 1st moved into silicon valley, it was into a new bldg
built next door to the former US HONE datacenter). After the early 80s
bay area earthquake, US HONE was 1st replicated in Dallas and then
another in Boulder.

One of my 1st overseas trips, having recently graduated and joined
IBM, was when HONE asked me to go along for an early non-US HONE
install in Paris.

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
Date: 14 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#33 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network

Also asked to go over for a HONE install in Tokyo ... Okura hotel,
right across from the US compound; IBM was down the hill, then under
the highway overpass on the other side (I think the yen was 330/$).

After transfer from CSC to Research on the west coast (CSC/VM turns
into SJR/VM), got to wander around IBM (& non-IBM) datacenters in
silicon valley, including disk bldg14/engineering and bldg15/product
test ... on the other side of the street. Bldg14&15 were running
pre-scheduled, 7x24 stand-alone testing and mentioned that they had
recently tried MVS, but it had 15min MTBF (requiring manual re-ipl) in
that environment. I offer to rewrite the I/O supervisor, making it
bullet proof and never fail (allowing any amount of on-demand
concurrent testing, greatly improving productivity). Then bldg15 gets
the 1st engineering 3033 outside POK 3033 processor
engineering. Testing was only taking a percent or two of CPU, so we
scrounge a 3830 & 3330 string for putting up our own private online
service. I do an internal research report on all the I/O integrity
work and happen to mention the MVS 15min MTBF ... bringing down the
wrath of the MVS group on my head. A few years later, just before
3380s were about to ship, FE has a test of 57 simulated hardware
errors they considered likely to occur. MVS was still crashing in all
57 cases, and in 2/3rds of the cases there was no indication of what
caused the crashes (and I didn't feel sorry).

I would also stop by TYMSHARE and/or see them at monthly meetings
hosted by Stanford SLAC. They had made their CMS online computer
conferencing free to the SHARE organization in Aug1976 as VMSHARE;
archives here:
http://vm.marist.edu/~vmshare/
I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE
files for putting up on the internal network and systems (including
HONE); the biggest problem was lawyers concerned that internal
employees would be contaminated by access to "unfiltered" customer
information.

Some of this dates back to CERN's analysis comparing MVS/TSO and
VM370/CMS, presented at SHARE in 1974. Internally, inside IBM, copies
of the report were stamped "IBM Confidential - Restricted" (2nd
highest classification, available on a need-to-know basis only). Then
after FS implodes
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

the head of POK managed to convince corporate to kill the VM370
product, shut down the VM370 development group and transfer all the
people to POK for MVS/XA (Endicott eventually manages to save the
VM370 mission for the mid-range, but has to recreate a VM370
development group from scratch). POK executives were also
strong-arming HONE, trying to force them to convert to MVS.

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
getting to play disk engineer in bldgs 14&15:
https://www.garlic.com/~lynn/subtopic.html#disk
commercial virtual machine offerings
https://www.garlic.com/~lynn/submain.html#online

TYMSHARE & VMSHARE posts
https://www.garlic.com/~lynn/2025d.html#18 Some VM370 History
https://www.garlic.com/~lynn/2025c.html#89 Open-Source Operating System
https://www.garlic.com/~lynn/2025.html#126 The Paging Game
https://www.garlic.com/~lynn/2024g.html#45 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#125 Adventure Game
https://www.garlic.com/~lynn/2023f.html#64 Online Computer Conferencing
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2019e.html#87 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019b.html#54 Misinformation: anti-vaccine bullshit
https://www.garlic.com/~lynn/2018f.html#77 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2017j.html#26 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017i.html#4 EasyLink email ad
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2015g.html#91 IBM 4341, introduced in 1979, was 26 times faster than the 360/30
https://www.garlic.com/~lynn/2014g.html#98 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2014d.html#44 [CM] Ten recollections about the early WWW and Internet
https://www.garlic.com/~lynn/2012p.html#22 What is a Mainframe?
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2011k.html#2 First Website Launched 20 Years Ago Today
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2009s.html#12 user group meetings
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2006v.html#22 vmshare

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
Date: 15 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#33 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#34 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network

RAID history
https://en.wikipedia.org/wiki/RAID#History

In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was
subsequently named RAID 4.[5]

... snip ...
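
As a reminder of what RAID 4 is (a minimal toy sketch, my own
illustration and not Ouchi's patent): data blocks are striped across
the data disks with one dedicated parity disk, parity being the XOR of
the data blocks, so any single failed disk's block can be rebuilt from
the survivors:

    # toy RAID-4 style parity: dedicated parity disk holds XOR of data blocks
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]   # one block per data disk
    parity = xor_blocks(data)            # stored on the dedicated parity disk

    # reconstruct a failed data disk (disk 1) from the rest plus parity
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]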

trivia: Ken worked in bldg14. I had transferred from CSC to SJR on
the west coast the same year and got to wander around IBM (and
non-IBM) datacenters in silicon valley, including disk
bldg14/engineering and bldg15/product test, across the street. They
were running pre-scheduled, 7x24, stand-alone mainframe testing and
said that they had recently tried MVS, but it had 15min MTBF (in that
environment) requiring manual re-ipl. I offer to rewrite the I/O
supervisor, making it bullet-proof and never fail, allowing any amount
of on-demand, concurrent testing (greatly improving productivity). I
do an (internal only) Research Report on the I/O Integrity work and
happen to mention the MVS 15min MTBF, bringing down the wrath of the
MVS organization on my head. A couple years later, with 3380s about to
ship, FE had a test of 57 simulated errors (that they believed likely
to occur); MVS was still failing in all 57 cases (and in 2/3rds of the
cases, no indication of why).

Note: no IBM CKD DASD has been made for decades, all being emulated on
industry standard fixed-block devices.

posts getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

trivia: some of the MIT CTSS/7094 people went to the 5th flr for
Multics, others went to the IBM cambridge science center on the 4th
floor, modified a 360/40 with virtual memory and did CP40/CMS, which
morphs into CP67/CMS when the 360/67, standard with virtual memory,
becomes available ... also invented GML (letters from the inventors'
last names) in 1969 (after a decade it morphs into ISO standard SGML,
and after another decade morphs into HTML at CERN), and a bunch of
other stuff. In the early 70s, after the decision to add virtual
memory to all 370s, some of CSC splits off and takes over the IBM
Boston Programming Center (on the 3rd flr) for the VM370 development
group.

FS was completely different from 370 and was going to completely
replace it (during FS, internal politics was killing off 370 efforts;
the lack of new 370s during FS is credited with giving the clone 370
system makers their market foothold). When FS implodes there is a mad
rush to get stuff back into the product pipelines, including kicking
off the quick&dirty 3033&3081 efforts in parallel.

Head of POK also manages to convince corporate to kill VM370, shut
down the development group and transfer all the people to POK for
MVS/XA (Endicott eventually manages to save the VM370 product mission
for the mid-range, but had to recreate a development group from
scratch). Later, customers weren't converting from MVS to MVS/XA as
planned. Amdahl was having more success because they had a (purely)
microcode hypervisor ("multiple domain") and were able to run MVS &
MVS/XA concurrently (note IBM wasn't able to respond with LPAR & PR/SM
on 3090 for nearly a decade). POK had done a primitive virtual machine
("VMTOOL") for MVS development, which also needed the SIE instruction
to slip in&out of virtual machine mode ... part of the performance
problems was the 3081 didn't have enough microcode space, so the SIE
stuff had to be swapped in&out.

Melinda's history (including CP40 & CP67)
https://www.leeandmelindavarian.com/Melinda#VMHist

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
cp67l, csc/vm, sjr/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

past posts mentioning "SIE", MVS/XA, VMTOOL, Amdahl
https://www.garlic.com/~lynn/2025d.html#23 370 Virtual Memory
https://www.garlic.com/~lynn/2025c.html#78 IBM 4341
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#27 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2014j.html#10 R.I.P. PDP-10?
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2013n.html#46 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2011p.html#114 Start Interpretive Execution
https://www.garlic.com/~lynn/2006j.html#27 virtual memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Univ, Boeing/Renton, IBM/HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: Univ, Boeing/Renton, IBM/HONE
Date: 16 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#28 Univ, Boeing/Renton, IBM/HONE

23jun69 unbundling started charging for application software, SE
services, maint, etc.

The charging for SE services pretty much put an end to SE support
teams at customer sites ... where new SEs had learned the trade ...
sort of as apprentices (since IBM couldn't figure out how not to
charge for new, inexperienced SEs at a customer site). In reaction,
HONE (Hands-On Network Environment) was set up ... a number of
internal cp67 datacenters providing virtual machine access to SEs in
the branch offices working with guest operating systems. The concept
was that SEs could get hands-on operating experience via remote
access, running in (CP67) virtual machines.

CSC had also ported apl\360 to CMS (for cms\apl) and a number of sales
& marketing support applications were developed in CMS\APL and (also)
deployed on HONE. Relatively quickly the sales&marketing applications
came to dominate all HONE activity (personal computing, time-sharing)
... and the original objective of SE training (using guest operating
systems in virtual machines) withered away.

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

recent related comments/replies
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#33 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#34 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025d.html#35 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network

--
virtualization experience starting Jan1968, online at home since Mar1970

TYMSHARE, VMSHARE, ADVENTURE

From: Lynn Wheeler <lynn@garlic.com>
Subject: TYMSHARE, VMSHARE, ADVENTURE
Date: 16 Aug, 2025
Blog: Facebook

After transferring from the science center to SJR Research on the
west coast, got to wander around lots of (IBM & non-IBM) datacenters
in silicon valley, including TYMSHARE ... also saw them at the monthly
BAYBUNCH meetings hosted at Stanford SLAC. TYMSHARE started offering
their CMS-based online computer conferencing free to the (mainframe
user group) SHARE in Aug1976 as VMSHARE, archives:
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape dump/copy of the
VMSHARE (and later PCSHARE) files for putting up on the internal
network and systems (including the world-wide branch office online
HONE systems). The biggest problem was concern that internal employees
would be contaminated by exposure to unfiltered customer
information. Some of this dated back to 1974, when CERN made a
presentation at SHARE comparing VM370/CMS and MVS/TSO (inside IBM,
copies were stamped "IBM Confidential - Restricted", 2nd highest
classification, aka need-to-know only).

On one such TYMSHARE visit, they demonstrated ADVENTURE, which they
had found on the Stanford SAIL PDP10 and ported to VM370/CMS. I got a
copy for putting up on internal systems. I would send source to
anybody that proved they had gotten all the points. Within a short
time, versions with more points as well as PLI versions appeared
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure

Most internal 3270 logon screens had "For Business Use Only"; however
SJR 3270 logon screens had "For Management Approved Use" ... which
came in handy when some people from corporate audit demanded that all
the demo programs (like adventure) be removed.

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

misc posts mentioning TYMSHARE, VMSHARE, and Adventure:
https://www.garlic.com/~lynn/2025.html#126 The Paging Game
https://www.garlic.com/~lynn/2024g.html#97 CMS Computer Games
https://www.garlic.com/~lynn/2024g.html#45 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#125 Adventure Game
https://www.garlic.com/~lynn/2024f.html#11 TYMSHARE, Engelbart, Ann Hardy
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#139 RPG Game Master's Guide
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2017j.html#26 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017h.html#11 The original Adventure / Adventureland game?
https://www.garlic.com/~lynn/2017f.html#67 Explore the groundbreaking Colossal Cave Adventure, 41 years on
https://www.garlic.com/~lynn/2017d.html#100 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#103 August 12, 1981, IBM Introduces Personal Computer
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2006n.html#3 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2005u.html#25 Fast action games on System/360+?
https://www.garlic.com/~lynn/2005k.html#18 Question about Dungeon game on the PDP

--
virtualization experience starting Jan1968, online at home since Mar1970

Mosaic

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mosaic
Date: 16 Aug, 2025
Blog: Facebook

Got HSDT in the early 80s, T1 and faster computer links (both
satellite and terrestrial) ... and was working with the NSF Director;
was supposed to get $20M to interconnect the NSF supercomputer
centers. Then congress cuts the budget, some other things happen and
eventually an RFP is released (in part based on what we already had
running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, NSFnet becomes the NSFNET
backbone, precursor to the modern internet.

1988, got HA/6000 project, initially for NYTimes to move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with
national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up
with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had
VAXcluster support in the same source base with Unix.

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we are
told we aren't allowed to work with anything more than 4-system
clusters; we leave IBM a few months later.

A little later, I was asked in as a consultant to a small
client/server startup. Two former Oracle employees (that were in the
Ellison/Hester meeting) were there, responsible for something called
"commerce server", and they wanted to do payment transactions. The
startup had done some technology they called "SSL" that they wanted to
use; the result is now frequently called "electronic commerce"; I had
responsibility for everything between webservers and payment
networks. IETF/Internet RFC Editor Postel also let me help him with
the periodically re-issued "STD1"; Postel also sponsored my talk at
ISI on "Why Internet Isn't Business Critical Dataprocessing" (based on
the software, procedures, and documentation I had to do for
"electronic commerce").

NCSA was a major recipient of "new technologies" funding
https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
Some of the NCSA people moved to silicon valley and formed Mosaic
Corp. NCSA complained about the use of "Mosaic" ... trivia: where did
they get the rights to "Netscape"?

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
payment network gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

Posts mentioning Postel/ISI and "Why Internet Isn't Business Critical Dataprocessing
https://www.garlic.com/~lynn/2025b.html#97 Open Networking with OSI
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024g.html#80 The New Internet Thing
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2017j.html#31 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM and non-IBM

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM and non-IBM
Date: 17 Aug, 2025
Blog: Facebook

not exactly; within a year of taking a two-credit-hr intro to
fortran/computers, the univ hires me fulltime responsible for os/360
(a 360/67 arrived to replace 709/1401 for tss/360 ... which didn't
come to production, so it ran as a 360/65); the univ shut down the
datacenter on weekends and I would have it dedicated, although 48hrs
w/o sleep made monday classes hard. Then CSC came out to install
CP67/CMS (3rd install after CSC itself and MIT Lincoln Labs) and I
mostly played with it during my dedicated weekends.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

CP67 supported 1052&2741 terminals with automagic terminal-type
identification (switching the terminal-type port scanner as
needed). The univ had some number of ASCII TTY 33&35, so I add ASCII
terminal support (integrated with the automagic terminal-type id;
trivia: when the ASCII port scanner had been delivered to the univ, it
came in a Heathkit box). I then wanted a single dial-up number ("hunt
group") for all terminals. Didn't quite work; IBM had taken a
short-cut and hardwired the line speed for each port. That kicks off a
clone controller project: implement a channel interface board for an
Interdata/3 programmed to emulate an IBM controller, with the addition
that it supports auto line-speed detection. It is then upgraded with
an Interdata/4 for the channel interface and a cluster of Interdata/3s
for the port interfaces. Four of us are then written up for (some part
of) the IBM clone controller business ... sold by Interdata and later
Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

trivia: Some of the MIT CTSS/7094 people went to the 5th flr and did
MULTICS, and others went to the IBM science center on the 4th flr
(among many things, modified a 360/40 with virtual memory and did
CP40/CMS, which morphs into CP67/CMS when the 360/67, standard with
virtual memory, becomes available). Folklore is that the (Multics)
Bell people returned home and did a simplified MULTICS as UNIX.
https://en.wikipedia.org/wiki/Multics#Unix
Then portable UNIX was 1st developed on Interdata.
https://en.wikipedia.org/wiki/Interdata_7/32_and_8/32#Operating_systems

360 plug-compatible (clone) controller
https://www.garlic.com/~lynn/submain.html#360pcm

other trivia: mid-80s, the communication group was fighting the
release of mainframe TCP/IP; when that failed, they changed
strategies. Since they had corporate responsibility for everything
that crossed datacenter walls, it had to be released through
them. What shipped got aggregate 44kbytes/sec using nearly a whole
3090 processor. I then add RFC1044 support and in some tuning tests at
Cray Research, between a Cray and a 4341, got sustained 4341 channel
throughput using only a modest amount of 4341 CPU (something like a
500 times increase in bytes moved per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

then in 1988, an IBM branch office asks if I can help LLNL (national
lab) get some serial stuff they are working with standardized ...
which quickly became the fibre-channel standard ("FCS", including some
stuff I did in 1980 ... 1gbit/sec, full-duplex, aggregate
200mbytes/sec). Then POK finally gets their stuff released as ESCON
(when it is already obsolete), initially 10mbyte/sec, later increased
to 17mbyte/sec. Later some POK engineers become involved with FCS and
define a heavy-weight protocol that significantly reduces throughput,
eventually released as FICON. 2010, IBM releases the z196 "Peak I/O"
benchmark getting 2M IOPS using 104 FICON (running over 104 FCS, 20k
IOPS/FICON). About the same time an FCS is released for E5-2600 server
blades claiming over a million IOPS (two such FCS having higher
throughput than all 104 FICON). Also IBM pubs recommend SAP (system
assist processors that do the actual I/O) CPU be kept to 70% ... or
about 1.5M IOPS. More recently they claim zHPF (more like what I did
in 1980 and the original FCS) has gotten it up to 100K IOPS/FICON
(five times the original and closer to one tenth of a 2010 FCS).

(1980) channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

EMACS

From: Lynn Wheeler <lynn@garlic.com>
Subject: EMACS
Date: 18 Aug, 2025
Blog: Facebook

starting with the PC/RT 38yrs ago; the following year got the ha/6000
project, originally for NYTimes to move their newspaper system (ATEX)
off DEC VAXCluster to RS/6000. I then rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when we start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster support
in the same source base with Unix. I do a distributed lock manager
supporting VAXCluster semantics (and especially Oracle and Ingres had a
lot of input on improving scale-up performance). trivia: previously
worked on the original SQL/relational, System/R, with Jim Gray and Vera
Watson.
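
For a rough idea of what "VAXCluster semantics" means for a lock
manager, here's a sketch using the conventional six VMS-style lock
modes and their compatibility matrix (a generic rendering of the
well-known VMS DLM convention, not the HA/CMP code):

# Sketch of VAXCluster/VMS-style DLM lock modes and compatibility checking
# (a generic rendering of the published convention, not the HA/CMP code).

MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]  # null .. exclusive

# COMPAT[granted][requested] -> can the request be granted concurrently?
COMPAT = {
    "NL": {"NL": True, "CR": True, "CW": True, "PR": True, "PW": True, "EX": True},
    "CR": {"NL": True, "CR": True, "CW": True, "PR": True, "PW": True, "EX": False},
    "CW": {"NL": True, "CR": True, "CW": True, "PR": False, "PW": False, "EX": False},
    "PR": {"NL": True, "CR": True, "CW": False, "PR": True, "PW": False, "EX": False},
    "PW": {"NL": True, "CR": True, "CW": False, "PR": False, "PW": False, "EX": False},
    "EX": {"NL": True, "CR": False, "CW": False, "PR": False, "PW": False, "EX": False},
}

def can_grant(requested: str, currently_granted: list[str]) -> bool:
    """A new request is grantable only if compatible with every held mode."""
    return all(COMPAT[held][requested] for held in currently_granted)

if __name__ == "__main__":
    print(can_grant("PR", ["CR", "PR"]))   # True: shared readers coexist
    print(can_grant("EX", ["CR"]))         # False: exclusive waits for readers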

still using emacs daily for editing, shell, quite a bit of lisp
programming, etc.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

recent posts mentioning distributed lock manager
https://www.garlic.com/~lynn/2025c.html#69 Tandem Computers
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#48 IBM Technology
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025c.html#37 IBM Mainframe
https://www.garlic.com/~lynn/2025c.html#10 IBM System/R
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#104 IBM S/88
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#30 Some Career Highlights
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025.html#125 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#119 Consumer and Commercial Computers
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#76 old pharts, Multics vs Unix vs mainframes
https://www.garlic.com/~lynn/2024f.html#109 NSFnet
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#67 IBM "THINK"
https://www.garlic.com/~lynn/2024f.html#25 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#117 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#75 IBM San Jose
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#84 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#52 Cray
https://www.garlic.com/~lynn/2024c.html#105 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024b.html#80 IBM DBMS/RDBMS
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#55 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#29 DB2
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#71 IBM AIX

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD

From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD
Date: 18 Aug, 2025
Blog: Facebook

DASD originated as a term covering DRUMS, DISKS, Datacell ... possibly
dating from when it wasn't obvious which would prevail.

Trivia: ECKD started off as the protocol for CKD disks on the 3880
"CALYPSO" speed-matching buffer ... running 3mbyte/sec 3380s on
1.5mbyte/sec channels.

3370 was FBA (fixed block architecture) ... however some operating
systems were so intertwined with CKD ... that they couldn't be
weaned. The next drive, 3380 (3mbyte/sec), was "CKD" ... but already
moving to fixed-block (can be seen in the records/track formulas,
where record length has to be rounded up to a multiple of a fixed cell
size). Now there isn't even that level of obfuscation ... for decades
CKD disks have been simulated on industry standard fixed-block
devices.
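
A minimal sketch of how the fixed-cell rounding shows through in a
records/track style calculation; the cell size, per-record overhead,
and cells-per-track constants below are illustrative assumptions, not
the published 3380 capacity formula.

# Records-per-track sketch showing fixed-cell rounding (all constants are
# illustrative assumptions, not the published 3380 capacity formula).
import math

CELL_BYTES = 32          # assumed fixed cell size
CELLS_PER_TRACK = 1500   # assumed usable cells per track
OVERHEAD_CELLS = 15      # assumed per-record overhead (count field, gaps)

def records_per_track(key_len: int, data_len: int) -> int:
    """Each field is rounded up to whole cells before dividing the track."""
    key_cells = math.ceil(key_len / CELL_BYTES) if key_len else 0
    data_cells = math.ceil(data_len / CELL_BYTES)
    return CELLS_PER_TRACK // (OVERHEAD_CELLS + key_cells + data_cells)

if __name__ == "__main__":
    # A 4097-byte record occupies the same number of cells as a 4128-byte
    # one -- the rounding exposes the underlying fixed-block layout.
    for dl in (4096, 4097, 4128):
        cells = math.ceil(dl / CELL_BYTES)
        print(dl, "bytes ->", cells, "cells,",
              records_per_track(0, dl), "records/track")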

Trivia: expanded store originated with 3090 ... it was obvious
production throughput needed more memory than could be packaged within
3090 memory access latency. The 3090 expanded store bus was wide and
high performance ... used with a synchronous instruction that moved a
4k page between expanded store and standard processor memory (a
trivial fraction of the pathlength required for an I/O operation). The
expanded store bus was also adapted for the 3090 vector market to
attach HIPPI devices (LANL standardization of the Cray 100mbyte/sec
channel) .... using a PC "peek/poke" paradigm with "move" I/O commands
to reserved expanded store addresses.
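
A toy model of the two paging paths to make the pathlength point
concrete; the per-path "instruction" costs and the structure are
illustrative assumptions, not 3090 internals.

# Toy model of the two paging paths (illustrative only; the per-path
# "instruction" costs below are assumed guesses, not measured 3090 pathlengths).

SYNC_MOVE_COST = 100     # assumed cost of the synchronous expanded-store page move
IO_PATH_COST = 5_000     # assumed cost of a full I/O: build channel program,
                         # start the I/O, field the interrupt, redispatch

class Pager:
    def __init__(self, expanded_frames: int):
        self.capacity = expanded_frames
        self.expanded = {}       # page number -> 4K page image
        self.disk = {}           # backing store (paging devices)
        self.instructions = 0    # accumulated pathlength in "instructions"

    def page_out(self, page_no: int, frame: bytes) -> None:
        if len(self.expanded) < self.capacity:   # prefer expanded store
            self.instructions += SYNC_MOVE_COST
            self.expanded[page_no] = frame
        else:                                    # overflow goes to paging devices
            self.instructions += IO_PATH_COST
            self.disk[page_no] = frame

    def page_in(self, page_no: int) -> bytes:
        if page_no in self.expanded:             # fast path: synchronous copy
            self.instructions += SYNC_MOVE_COST
            return self.expanded.pop(page_no)
        self.instructions += IO_PATH_COST        # slow path: real I/O
        return self.disk.pop(page_no)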

FCS trivia: IBM mainframe didn't get an equivalent until FICON. In
1988, an IBM branch office asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes the fibre-channel standard ("FCS", including some stuff I had
done in 1980), initially 1gbit/sec, full-duplex, aggregate
200mbyte/sec. Then IBM releases their serial stuff as ESCON (when it
is already obsolete), initially 10mbyte/sec, later upgraded to
17mbyte/sec. Then some IBM engineers become involved with FCS and
define a heavy-weight protocol that radically reduces throughput,
released as FICON. 2010, the z196 "Peak I/O" benchmark gets 2M IOPS
using 104 FICON (20K IOPS/FICON). Same year, a FCS is announced for
E5-2600 server blades getting over a million IOPS (two such FCS having
higher throughput than 104 FICON). Note also, IBM pubs recommend SAPs
(system assist processors that do actual I/O) be restricted to 70% CPU
(about 1.5M IOPS).
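
The per-channel arithmetic behind those numbers, as a quick sketch
(the inputs are the figures quoted above):

# Per-channel arithmetic for the z196 "Peak I/O" numbers quoted above
# (all inputs are the figures given in the post).

z196_peak_iops = 2_000_000
ficon_count = 104
per_ficon = z196_peak_iops / ficon_count
print(f"{per_ficon:,.0f} IOPS per FICON")            # ~19,200 (about 20K)

e5_2600_fcs_iops = 1_000_000                          # "over a million IOPS" per FCS
print(2 * e5_2600_fcs_iops > z196_peak_iops)          # two FCS > 104 FICON: True

sap_cap = 0.70                                        # recommended SAP CPU ceiling
print(f"{z196_peak_iops * sap_cap / 1e6:.1f}M IOPS")  # ~1.4M (post: "about 1.5M")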

70s & early 80s, getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, vtoc, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

some posts mentioning 3090 expanded store
https://www.garlic.com/~lynn/2025b.html#65 Supercomputer Datacenters
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024.html#30 IBM Disks and Drums
https://www.garlic.com/~lynn/2023g.html#54 REX, REXX, and DUMPRX
https://www.garlic.com/~lynn/2023g.html#7 Vintage 3880-11 & 3880-13
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2021k.html#110 Network Systems
https://www.garlic.com/~lynn/2021e.html#25 rather far from Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2019e.html#120 maps on Cadillac Seville trip computer from 1978
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019c.html#33 IBM Future System
https://www.garlic.com/~lynn/2019b.html#77 IBM downturn
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2018e.html#71 PDP 11/40 system manual
https://www.garlic.com/~lynn/2018b.html#47 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2017k.html#11 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017h.html#50 System/360--detailed engineering description (AFIPS 1964)
https://www.garlic.com/~lynn/2017g.html#102 SEX
https://www.garlic.com/~lynn/2017g.html#61 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017d.html#63 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#4 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017b.html#69 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#71 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016f.html#5 More IBM DASD RAS discussion
https://www.garlic.com/~lynn/2016e.html#108 Some (IBM-related) History
https://www.garlic.com/~lynn/2016d.html#24 What was a 3314?
https://www.garlic.com/~lynn/2016b.html#111 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
https://www.garlic.com/~lynn/2015f.html#88 Formal definition of Speed Matching Buffer

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OS/2 & M'soft

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/2 & M'soft
Date: 19 Aug, 2025
Blog: Facebook

Nov1987, Boca OS2 sent email to Endicott asking for help with
dispatch/scheduling (saying VM370 was considered much better than
OS/2); Endicott forwards it to Kingston, Kingston forwards it to me
(as an undergraduate 20yrs earlier, I had done it originally for
CP/67). After graduating and joining IBM, one of my hobbies was
enhanced production operating systems for internal datacenters, and
the online sales&marketing support HONE systems were one of the 1st
(and long-time) customers. trivia: late 70s, I do CMSBACK for several
internal operations ... including HONE (it later morphs into WDSF and
ADSM).
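
For flavor, a minimal sketch of the fair-share/dynamic-feedback idea
behind that kind of dispatcher (a generic illustration under assumed
names and costs, not the CP/67 or VM370 code): push each user's
dispatch deadline out in proportion to CPU recently consumed relative
to their share, and run the earliest deadline first.

# Minimal fair-share dispatch sketch (generic illustration of the idea,
# not the CP/67 or VM370 implementation).
import heapq

class Dispatcher:
    def __init__(self):
        self.clock = 0.0
        self.ready = []                     # (deadline, user) min-heap

    def make_ready(self, user: str, share: float, recent_cpu: float):
        # Deadline is pushed out in proportion to CPU recently consumed
        # relative to the user's share: heavy users sort later.
        deadline = self.clock + recent_cpu / share
        heapq.heappush(self.ready, (deadline, user))

    def dispatch(self, timeslice: float = 1.0) -> str:
        deadline, user = heapq.heappop(self.ready)
        self.clock += timeslice            # user runs for one timeslice
        return user

if __name__ == "__main__":
    d = Dispatcher()
    d.make_ready("interactive", share=1.0, recent_cpu=0.1)
    d.make_ready("batch", share=1.0, recent_cpu=5.0)
    print(d.dispatch())    # "interactive" goes first: lighter recent usage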

posts mentioning CP67L, CSC/VM, and/or SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

other trivia: 1972, Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy, pg160-163,
30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrongheadedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

FS was completely different from 370 and was going to completely
replace it (during FS, internal politics was killing off 370 efforts;
the limited new 370 during the period is credited with giving the 370
system clone makers their market foothold). One of the final nails in
the FS coffin was analysis by the IBM Houston Science Center that if
370/195 apps were redone for an FS machine made out of the fastest
available hardware technology, they would have the throughput of a
370/145 (about a 30 times slowdown)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

AMEX and KKR were in competition for the private-equity (LBOs and junk
bonds got such a bad reputation during the 80s S&L crisis that they
changed the name to private equity) take-over of RJR:
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR wins, then runs into trouble and hires away the president of AMEX
to help.

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity

20yrs after Learson failed to block the destruction of the Watson
culture/legacy, IBM has one of the largest losses in the history of US
corporations and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company ("baby blues" being a take-off
on the "baby bell" breakup a decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions

The same year that IBM has its enormous loss, AMEX spins off much of
its mainframe datacenters along with the financial transaction
outsourcing business, in the largest IPO up until that time (many of
the executives had previously reported to the former AMEX president,
the new IBM CEO). Disclaimer: turn of the century, I'm hired as chief
scientist of the former AMEX operation; 2005 interview for IBM System
Magazine (although some of the history info is slightly garbled)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history
Also at the turn of the century, it was doing complete credit card
outsourcing for half of all cards in the US (plastic, transactions,
auths, settlement, statementing/billing, call centers, etc).

Same time, asked to spend time in the Seattle area to help with
"electronic commerce". Background: after leaving IBM, I was brought in
as a consultant to a small client/server startup; two former Oracle
employees (who were in the Ellison/Hester meeting) are there
responsible for something called "commerce server" and want to do
payment transactions. The startup had also invented this technology
they called "SSL" that they want to use. It is now frequently called
"electronic commerce". I had responsibility for everything between
webservers and the financial industry payment networks. Based on the
procedures, software, and documentation I had to do for "electronic
commerce", I do a talk, "Why The Internet Isn't Business Critical
Dataprocessing", that Postel (the Internet IETF RFC Standards Editor)
sponsors at USC/ISI.

"electronic commerce" gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

The 80s former head of IBM POK mainframe, and then head of Boca
PS2/OS2, was later CEO-for-hire at a Seattle area security startup
that had a contract with M'soft to port Kerberos to NT for active
directory (I tended to have monthly meetings with him). M'soft was
also in a program with the former AMEX group to deploy an online
banking service. The numbers showed that NT didn't have the required
performance and SUN servers would be needed, and I was elected to
explain it to the M'soft CEO. Instead, a couple days before, the
M'soft organization decided that online bank services would be limited
to what NT could handle (increasing as NT throughput improved).

When he was at Boca, he had hired Dataquest (since bought by Gartner)
to do a study of the future of personal computing, including a
multi-hour videotape round-table of silicon valley experts. I had
known the Dataquest person running the study for a number of years and
was asked to be one of the silicon valley experts. I clear it with my
local IBM management and Dataquest garbles my bio so Boca wouldn't
recognize me as an IBM employee.

Note: late 80s, a senior disk engineer gets a talk scheduled at the
annual, internal, world-wide communication group conference,
supposedly on 3174 performance. However, the opening was that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing a drop in disk sales with
data fleeing mainframe datacenters to more distributed-computing-
friendly platforms. The disk division had come up with a number of
solutions, but they were constantly being vetoed by the communication
group (with their corporate ownership of everything that crossed the
datacenter walls) trying to protect their dumb terminal paradigm. The
communication group stranglehold on mainframe datacenters wasn't just
disks, and a couple years later IBM has one of the largest losses in
the history of US companies.

The disk division software executive (also responsible for ADSM) had a
partial countermeasure (to the communication group): investing in
distributed computing startups that would use IBM disks ... he would
periodically ask us to visit his investments to see if we could
provide any help.

communication group and dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OS/2 & M'soft

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/2 & M'soft
Date: 19 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#42 IBM OS/2 & M'soft

MIT CTSS/7094 had a form of email.
https://multicians.org/thvv/mail-history.html

Then some of the MIT CTSS/7094 people went to the 5th flr to do
MULTICS. Others went to the IBM Science Center on the 4th flr and did
virtual machines (1st modified a 360/40 w/virtual memory and did
CP40/CMS, which morphs into CP67/CMS when the 360/67, standard with
virtual memory, becomes available), the science center wide-area
network (that grows into the corporate internal network, larger than
arpanet/internet from the science-center beginning until sometime
mid/late 80s; technology also used for the corporate sponsored univ
BITNET), invented GML in 1969 (precursor to SGML and HTML), lots of
performance tools, etc. Later, when the decision was made to add
virtual memory to all 370s, there was a project that morphed CP67 into
VM370 (although lots of stuff was initially simplified or dropped).

Account of science center wide-area network by one of the science
center inventors of GML
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

PROFS started out picking up internal apps and wrapping 3270 menus
around them (for the less computer literate). They picked up a very
early version of VMSG for the email client. When the VMSG author tried
to offer them a much enhanced version of VMSG, the PROFS group tried
to have him separated from the company. The whole thing quieted down
when he demonstrated that every VMSG (and PROFS) email had his
initials in a non-displayed field. After that he only shared his
source with me and one other person.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

a couple recent posts mentioning VMSG, PROFS, CP-67-based Wide Area
Network:
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS
https://www.garlic.com/~lynn/2024f.html#44 PROFS & VMSG
https://www.garlic.com/~lynn/2024e.html#99 PROFS, SCRIPT, GML, Internal Network

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OS/2 & M'soft

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/2 & M'soft
Date: 19 Aug, 2025
Blog: Facebook

re:
https://www.garlic.com/~lynn/2025d.html#42 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025d.html#43 IBM OS/2 & M'soft

Early 80s, got the HSDT project, T1 and faster computer links (both
terrestrial and satellite) and lots of battles with the communication
group (in the 60s, IBM had the 2701 controller that supported T1
computer links; the 70s transition to SNA/VTAM and associated issues
capped computer links at 56kbytes/sec). Was also working with the NSF
director and was supposed to get $20M to interconnect the NSF
supercomputer centers. Then congress cuts the budget, some other
things happened, and finally an RFP is released (in part based on what
we already had running).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Around the same time, the communication group was fighting release of
mainframe TCP/IP support. When they lost, they changed tactics: since
they had corporate ownership of everything that crossed datacenter
walls, it had to be released through them. What shipped got aggregate
44kbytes/sec using nearly a whole 3090 processor. I then add RFC1044
support and in some tuning tests at Cray Research between a Cray and a
4341, the 4341 got sustained channel throughput using only a modest
amount of the CPU (something like a 500 times increase in bytes moved
per instruction executed).

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.


... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying the IBM CEO) with support from other gov. agencies ... but
that just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, NSFnet becomes the NSFNET
backbone, precursor to the modern internet.

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

Ellison/Hester meeting ref; the last product I did at IBM, approved in
1988 as HA/6000, originally for NYTimes to move their newspaper system
(ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS
vendors (oracle, sybase, ingres, informix) that had VAXCluster support
in the same source base with Unix. I do a distributed lock manager
supporting VAXCluster semantics (and especially Oracle and Ingres had a
lot of input on improving scale-up performance). trivia: previously
worked on the original SQL/relational, System/R, with Jim Gray and Vera
Watson. The S/88 Product Administrator started taking us around to
their customers and also had me write a section for the corporate
continuous availability document (it gets pulled when both
AS400/Rochester and mainframe/POK complain they couldn't meet the
requirements).

Original SQL/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive
Hester tells Ellison that we would have 16-system clusters mid92 and
128-system clusters ye92. Mid-jan1992, convinced FSD to bid HA/CMP for
gov. supercomputers. Late-jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), we are told we
can't work on anything with more than 4-system clusters, and we leave
IBM a few months later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

other trivia: before MS/DOS:
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle Computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle Computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
and before developing CP/M, Kildall worked on IBM CP/67 at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

Side-track: In the aftermath of the FS implosion, the 70s head of POK
manages to convince corporate to kill the VM370 product (follow-on to
CP/67), shutdown the development group, and transfer all the people to
POK for MVS/XA (Endicott eventually manages to save the VM370 mission
for the mid-range, but has to recreate a development group from
scratch).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

