List of Archived Posts

2026 Newsgroup Postings (01/01 - )

DUMP Reader
43 Years Of TCP/IP
43 Years Of TCP/IP
43 Years Of TCP/IP
43 Years Of TCP/IP
PROFS and other CMS applications
PROFS and other CMS applications
Cluster Supercomputer Tsunami
IBM Downfall
IBM Terminals
4341, cluster supercomputing, distributed computing
4341, cluster supercomputing, distributed computing
IBM Virtual Machine and Virtual Memory
IBM CSC, SJR, System/R, QBE
Webservers and Browsers
IBM 360s, Unbundling, 370s, Future System
Mainframe and non-mainframe technology
Wild Ducks
IBM FAA/ATC
IBM Online Apps, Network, Email

DUMP Reader

From: Lynn Wheeler <lynn@garlic.com>
Subject: DUMP Reader
Date: 01 Jan, 2026
Blog: Facebook

Early in the days of REX, before it was renamed REXX and released to
customers, I wanted to show that REX wasn't just another pretty
scripting language. I chose rewriting IPCS (online dump analyzer done
in a huge amount of assembler) ... working half time over a few weeks
with the objective of ten times the function and ten times the
performance (sleight of hand & hacks to make interpreted REX faster
than assembler) ... I finished early, so added automated scripts that
looked for the most common failure signatures.

I then thought it could be released to customers (in place of IPCS),
but for whatever reason it wasn't ... even though nearly every
internal datacenter and customer support PSRs were using it.
Eventually I got permission to give presentations at customer user
group meetings on how I implemented it ... and within a few months
customer implementations started to appear.

Later the 3092 group (3090 service processor; started out as a 4331
running a modified version of VM370R6 with all service screens done in
CMS IOS3270 ... before release, the 4331 was upgraded to a pair of
4361s) solicited it to ship with the service processor.

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

43 Years Of TCP/IP

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: 43 Years Of TCP/IP
Newsgroups: alt.folklore.computers
Date: Thu, 01 Jan 2026 15:27:07 -1000

Peter Flass <Peter@Iron-Spring.com> writes:

I think the alternatives were X.25 and various "network architectures"
from different vendors, that all looked like SNA. SNA was a complete
mess.

The Internet That Wasn't. How TCP/IP eclipsed the Open
Systems Interconnection standards to become the global protocol for
computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt

Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture (Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."

... snip ...

I was on Chessin's XTP TAB in the 2nd half of the 80s and there was
some gov/mil involvement (including SAFENET2), so we took it to
X3S3.3 ... but eventually got told that ISO had a rule that they could
only standardize stuff that conformed to the OSI Model.

XTP didn't conform because 1) it supported internetworking, which
doesn't exist in OSI, 2) it bypassed the network/transport interface,
and 3) it went directly to the LAN/MAC interface, which doesn't exist
in OSI.

there was a joke that while the (internet) IETF had a rule that to
proceed in the standards process there needed to be two interoperable
implementations, ISO didn't even require a standard be implementable.

A co-worker at the science center was responsible for the 60s
CP67-based science centers' wide-area network that morphs into the
corporate internal network (larger than arpanet/internet from just
about the beginning until sometime mid/late 80s, about the time it was
forced to convert to SNA/VTAM).

comment by one of the 1969 GML inventors at the science center
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

newspaper article about some of Edson's Internet & TCP/IP IBM battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed, Internet &
TCP/IP) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
virtualization experience starting Jan1968, online at home since Mar1970

43 Years Of TCP/IP

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: 43 Years Of TCP/IP
Newsgroups: alt.folklore.computers
Date: Thu, 01 Jan 2026 15:36:39 -1000

Lawrence D'Oliveiro <ldo@nz.invalid> writes:

SNA wasn't even a proper peer-to-peer network architecture at this
time.

I just remembered that of course ISO-OSI was the "official" candidate
for an open network architecture. But it turned out to be overly
complicated and bureaucratic and (mostly) too hard to implement. So
TCP/IP won pretty much by default.

re:
https://www.garlic.com/~lynn/2026.html#1 43 Years Of TCP/IP

For a time I reported to the same executive as the person responsible
for AWP164 (which had some peer-to-peer) that morphs into (AS/400)
APPN. I told him that he should come over to work on real networking
(TCP/IP) because the SNA forces would never appreciate him.

When AS/400 went to announce APPN, the SNA forces vetoed it and there
was a delay while the announcement letter was carefully rewritten to
not imply any relationship between APPN & SNA. It wasn't until much
later that documents were rewritten to imply that somehow APPN came
under the SNA umbrella.

--
virtualization experience starting Jan1968, online at home since Mar1970

43 Years Of TCP/IP

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: 43 Years Of TCP/IP
Newsgroups: alt.folklore.computers
Date: Fri, 02 Jan 2026 08:27:29 -1000

Lynn Wheeler <lynn@garlic.com> writes:

newspaper article about some of Edson's Internet & TCP/IP IBM battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed, Internet &
TCP/IP) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

re:
https://www.garlic.com/~lynn/2026.html#1 43 Years Of TCP/IP
https://www.garlic.com/~lynn/2026.html#2 43 Years Of TCP/IP

late 80s, a senior disk engineer got a talk scheduled at an internal,
world-wide, annual communication group conference, supposedly on 3174
performance. However, his opening was that the communication group was
going to be responsible for the demise of the disk division. The disk
division was seeing a drop in disk sales with data fleeing mainframe
datacenters to more distributed-computing friendly platforms. The disk
division had come up with a number of solutions, but they were
constantly being vetoed by the communication group (with their
corporate ownership of everything that crossed the datacenter walls)
trying to protect their dumb terminal paradigm. The senior disk
division software executive's partial countermeasure was investing in
distributed computing startups that would use IBM disks (he would
periodically ask us to drop in on his investments to see if we could
offer any assistance).

The communication group's stranglehold on mainframe datacenters wasn't
just disks and a couple of years later, IBM has one of the largest
losses in the history of US companies ... and was being reorganized
into the 13 "baby blues" (take-off on the "baby bells" breakup a
decade earlier) in preparation for breaking up IBM.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

other trivia: in the early 80s, I was funded for the HSDT project, T1
and faster computer links (both terrestrial and satellite) and battles
with the SNA group (in the 60s, IBM had the 2701 supporting T1; in the
70s, with SNA/VTAM and its issues, links were capped at 56kbit ... and
I had to mostly resort to non-IBM hardware). Also was working with the
NSF director and was supposed to get $20M to interconnect the NSF
Supercomputer centers. Then congress cuts the budget, some other
things happened and eventually there was an RFP released (in part
based on what we already had running). NSF 28Mar1986 Preliminary
Announcement (from old archived a.f.c post):
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program
to provide Supercomputer cycles; the New Technologies Program to foster
new supercomputer software and hardware developments; and the Networking
Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

... IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning
bid). As regional networks connect in, NSFnet becomes the NSFNET
backbone, precursor to the modern internet. Note the RFP had called
for T1 links; however, the winning bid put in 440kbit/sec links
... then to make it look something like T1, they put in T1 trunks with
telco multiplexors running multiple 440kbit/sec links over the T1
trunks.

When the director left NSF, he went over to a K (H?) street lobby
group (council on competitiveness) and we would try to periodically
drop in on him.

demise of disk division and communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#emulation
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

43 Years Of TCP/IP

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: 43 Years Of TCP/IP
Newsgroups: alt.folklore.computers
Date: Fri, 02 Jan 2026 13:27:04 -1000

Al Kossow <aek@bitsavers.org> writes:

Chessin came to visit us in the Systems Technology Group at Apple ATG
and we had a nice discussion.

I had wondered whatever happened to XTP.

re:
https://www.garlic.com/~lynn/2026.html#1 43 Years Of TCP/IP
https://www.garlic.com/~lynn/2026.html#2 43 Years Of TCP/IP
https://www.garlic.com/~lynn/2026.html#3 43 Years Of TCP/IP

TCP had a minimum 7 packet exchange and XTP defined a reliable
transaction with a minimum 3 packet exchange. The issue was that
TCP/IP was part of the kernel distribution, requiring physical media
(and typically some expertise) for a complete system change/upgrade;
browsers and webservers were self-contained load&go.

XTP also defined things like a trailer protocol where interface
hardware could do CRC as the packet was flowing through and do the
append/check ... helping minimize packet fiddling (as well as other
pieces of protocol offloading; Chessin also liked to draw analogies
with SGI graphic card process pipelining). The problem was that there
was lots of push back (part of the claim at the time for HTTPS
prevailing over IPSEC) against any kernel change prereq.

topic drift ... 1988, HA/6000 was approved, initially for NYTimes to
migrate their newspaper system off DEC VAXCluster to RS/6000. I rename
it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc, also porting LLNL LINCS and NCAR
filesystems to HA/CMP) and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support
in same source base with unix (also do DLM supporting VAXCluster
semantics).

Early Jan92, have a meeting with Oracle CEO where IBM AWD executive
Hester tells Ellison that we would have 16-system clusters by mid92
and 128-system clusters by ye92. Mid Jan92, convince IBM FSD to bid
HA/CMP for gov. supercomputers. Late Jan92, cluster scale-up is
transferred for announce as IBM Supercomputer (for
technical/scientific *ONLY*) and we are told we can't do clusters with
anything that involves more than four systems (we leave IBM a few
months later).

It was partially blamed on FSD going up to the IBM Kingston
supercomputer group to tell them it was adopting HA/CMP for gov. bids
(of course somebody was going to have to do it eventually). A couple
weeks later, 17feb1992, Computerworld news ... IBM establishes
laboratory to develop parallel systems (pg8)

Not long after leaving IBM, was brought in as consultant to a small
client/server startup; two former Oracle people (that had worked on
HA/CMP and were in the Ellison/Hester meeting) are there responsible
for something called "commerce server" and they want to do payment
transactions. The startup had also invented this stuff they called
"SSL" that they want to use; the result is now frequently called
"e-commerce". I had responsibility for everything between the web
servers and the payment networks, including the payment gateways.

One of the problems with HTTP&HTTPS was transactions built on top of
TCP ... an implementation that sort of assumed long-lived sessions
(made it easier to implement on top of the kernel TCP/IP protocol
stack). As webserver workload ramped up, web servers were starting to
spend 95+% of CPU running the FINWAIT list. NETSCAPE was increasing
the number of servers and trying to spread the workload. Eventually
NETSCAPE installs a large multiprocessor server from SEQUENT (that had
also redone DYNIX FINWAIT processing to eliminate that non-linear
increase in CPU overhead).
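
A toy sketch of why a linearly-scanned FINWAIT list blows up
non-linearly with connection churn (purely illustrative; not the
DYNIX or any actual stack code):

  def finwait_scan_cost(closes_per_sec, linger_secs):
      # each close event scans the whole FIN_WAIT list; list length is
      # roughly close rate * linger time, so scan work per second grows
      # with the square of the close rate
      list_len = closes_per_sec * linger_secs
      return closes_per_sec * list_len

  for rate in (10, 100, 1000):
      print(rate, finwait_scan_cost(rate, 60))
  # 10x the HTTP close rate -> ~100x the scan work, which is how a busy
  # webserver ends up spending most of its CPU walking the FIN_WAIT list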

XTP had provided for piggy-backed transaction processing to keep
packet exchange overhead to a minimum ... and I showed HTTPS over XTP
in the minimum 3-packet exchange (existing HTTPS had to 1st establish
the TCP session, then establish HTTPS, then do the transaction, then
shutdown the session).
https://en.wikipedia.org/wiki/Xpress_Transport_Protocol

other trivia: I then did a talk on "Why Internet Isn't Business
Critical Dataprocessing" based on documentation, processes and
software I had to do for e-commerce, which (IETF RFC editor) Postel
sponsored at ISI/USC.

more trivia: when I 1st started doing TCP/IP over high-speed satellite
links, I established a dynamic adaptive rate-based pacing
implementation ... which I also got written into the XTP spec.
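
A minimal sketch of what rate-based pacing means in contrast to
window-based flow control (my own illustration of the general
technique; the actual HSDT/XTP algorithm details aren't given above):

  import time

  class RatePacer:
      # space transmissions by an inter-packet interval and adapt the
      # interval from feedback, instead of clamping the number of
      # outstanding packets with a window
      def __init__(self, interval):
          self.interval = interval        # seconds between sends
          self.next_send = 0.0

      def wait_to_send(self):
          now = time.monotonic()
          if now < self.next_send:
              time.sleep(self.next_send - now)
          self.next_send = max(now, self.next_send) + self.interval

      def feedback(self, loss_seen):
          # crude adaptation: back off on loss, speed up gently otherwise
          self.interval *= 1.5 if loss_seen else 0.99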

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

posts mentioning dynamic adaptive rate-based pacing
https://www.garlic.com/~lynn/2025c.html#46 IBM Germany and 370/125
https://www.garlic.com/~lynn/2025b.html#114 ROLM, HSDT
https://www.garlic.com/~lynn/2025b.html#81 IBM 3081
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025.html#114 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2025.html#36 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#35 IBM ATM Protocol?
https://www.garlic.com/~lynn/2024f.html#116 NASA Shuttle & SBS
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#71 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#58 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2023f.html#16 Internet
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022f.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#22 Telum & z16
https://www.garlic.com/~lynn/2021k.html#110 Network Systems
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021i.html#71 IBM MYTE
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2018b.html#16 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017d.html#28 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2013n.html#31 SNA vs TCP/IP
https://www.garlic.com/~lynn/2008l.html#64 Blinkylights
https://www.garlic.com/~lynn/2008e.html#28 MAINFRAME Training with IBM Certification and JOB GUARANTEE
https://www.garlic.com/~lynn/2006g.html#18 TOD Clock the same as the BIOS clock in PCs?
https://www.garlic.com/~lynn/2006d.html#21 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005q.html#22 tcp-ip concept
https://www.garlic.com/~lynn/2005g.html#4 Successful remote AES key extraction
https://www.garlic.com/~lynn/2004k.html#29 CDC STAR-100
https://www.garlic.com/~lynn/2004k.html#13 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#12 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/93.html#29 Log Structured filesystems -- think twice

--
virtualization experience starting Jan1968, online at home since Mar1970

PROFS and other CMS applications

From: Lynn Wheeler <lynn@garlic.com>
Subject: PROFS and other CMS applications
Date: 03 Jan, 2026
Blog: Facebook

Some of the MIT CTSS/7094 people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
went to the 5th flr to do Multics,
https://en.wikipedia.org/wiki/Multics
https://en.wikipedia.org/wiki/Multics-like

Others went to the IBM Cambridge Science Center on the 4th floor,
modified a 360/40 with virtual memory hardware and did CP/40, which
morphs into CP/67 when the 360/67 (standard with virtual memory)
becomes available ... also invented GML (initials of the inventors'
last names) in 1969 (after a decade it morphs into ISO standard SGML
and after another decade morphs into HTML at CERN). In the early 70s,
after the decision to add virtual memory to all 370s, some of CSC
splits off and takes over the IBM Boston Programming Center on the 3rd
flr, for the VM370 development group.

MIT CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
had been ported to CP67/CMS as SCRIPT (later GML tag processing was
added to SCRIPT) ... a later release was renamed DCF. There was also a
form of email on MIT CTSS
https://multicians.org/thvv/mail-history.html

Edson
https://en.wikipedia.org/wiki/Edson_Hendricks
was responsible for the science center wide-area network (VNET/RSCS)
which morphs into the IBM internal corporate network (larger than
arpanet/internet from the beginning until sometime mid/late 80s, about
the time it was forced to convert to SNA/VTAM), technology also used
for the corporate-sponsored univ BITNET (& EARN in Europe). Comment by
one of the CSC inventors of GML
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

The PROFS group had been collecting some internal apps to wrap 3270
menus around, one of which was a very early version of VMSG for the
email client. When the VMSG author tried to offer them a much enhanced
version, they tried to have him separated from the company. The whole
thing quieted down when he demonstrated his initials were in every
PROFS email (in a non-displayed field). After that he only shared his
source with me and one other person.

When I graduated and joined the science center, one of my hobbies was
enhanced production operating systems for internal datacenters, and
the online sales&marketing support HONE systems were one of the first
and long-time customers (1st CP67, later VM370). One of my 1st non-US
IBM trips was in the early 70s when HONE asked me to do a CP67 install
in La Defense, Paris (and at the time, it took a little investigation
to figure out how to access my email back in the states).

Late 70s & early 80s I was blamed for online computer conferencing on
the internal network. It really took off the spring of 1981 when I
distributed a trip report of a visit to Jim Gray at Tandem (he had
left SJR fall1980). Only about 300 directly participated but claims
were that 25,000 were reading. From IBMJargon:
https://havantcivicsociety.uk/wp-content/uploads/2019/05/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

Six copies of a 300-page extraction from the memos were printed, put
in Tandem 3ring binders, and sent to each member of the executive
committee, along with an executive summary and an executive summary of
the executive summary (folklore is 5 of the 6 corporate executive
committee members wanted to fire me).

Then there were a number of internal IBM task forces, officially
sanctioned IBM software (VMTOOLS) and approved FORUMS with official
moderators. There was a researcher hired to study how I communicated;
they spent nine months in the back of my office, took notes on
face-to-face, telephone, etc conversations, got copies of all my
incoming and outgoing email, and logs of all instant messages. The
material was also used for conference talks and papers, books and a
Stanford PhD, joint between language and computer AI (Winograd was
advisor on the computer side).

The Pisa Scientific Center did "SPM" for CP/67, which was later
imported to (internal) VM/370 ... and its use implemented in RSCS/VNET
(even the version shipped to customers) ... a sort of superset of the
combination of VM/370 VMCF, IUCV, & SMSG (in the product). Circa 1980,
a CMS 3270, multi-user, client/server spacewar was implemented and,
since it was supported by RSCS/VNET, user clients could play from
anywhere in the world on the internal network. Almost immediately
robot players appeared beating human players (faster response time)
and the server was modified to increase power use non-linearly when
responses started dropping below human response time.

At the time, my VM370 systems for internal datacenters were getting
.11sec interactive response. Players with 3272/3277 had .086sec
hardware response, for an aggregate .196sec ... which would have an
advantage over users on systems that had quarter second system
response and/or 3278s which had .3-.5sec hardware response
(combination could be .55-.75sec).

trivia: Kildall worked on (virtual machine) IBM CP/67 at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
before developing CP/M (name take-off on CP/67).
https://en.wikipedia.org/wiki/CP/M
which spawns Seattle Computer Products
https://en.wikipedia.org/wiki/Seattle_Computer_Products
which spawns MS/DOS
https://en.wikipedia.org/wiki/MS-DOS

IBM Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

PROFS and other CMS applications

From: Lynn Wheeler <lynn@garlic.com>
Subject: PROFS and other CMS applications
Date: 04 Jan, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026.html#5 PROFS and other CMS applications

As the science center CP67-based (RSCS/VNET) wide-area network started
to morph into the internal corporate network, there had to be a
HASP/JES2 NJE emulation driver to start to connect in the
HASP(/JES2)-based systems ... which were limited to the edge of the
internal network (behind CP67 and then VM370 systems). Part of the
issue was RSCS/VNET had a nice clean layered implementation (which NJE
didn't). The other part was that the HASP/JES2 NJE implementation
(originally had "TUCC" in cols 68-71) used spare entries in the HASP
255 entry pseudo device table (typically 160-180) for network node
definitions ... and by the time of VS2/MVS, the corporate network was
already past 255 nodes, with NJE trashing any traffic where either the
origin or destination wasn't in its local table (later NJE was updated
for a max of 999 nodes, but that was after the internal network had
already passed 1000). The other reason for keeping MVS/JES2 on
boundary nodes (and behind RSCS/VNET) was because JES2 traffic between
systems at different versions had a habit of crashing the MVS system
(requiring manual re-IPL) ... some of this was because the header had
network and job fields intermixed. A body of RSCS/VNET NJE emulation
code grew up that was aware of the different JES2 version field
layouts and could re-organize the header record to be acceptable to
the directly receiving JES2 system (there is the infamous case of
updated San Jose MVS/JES2 systems crashing Hursley MVS/JES2 systems,
blamed on the Hursley RSCS/VNET group because they hadn't obtained the
latest updates to re-organize JES2 fields between San Jose and
Hursley).

At the Arpanet 1Jan1983 cut-over from IMP/Host protocol to
internetworking (TCP/IP), there were approx 100 IMP network nodes and
255 hosts, at a time when the internal network was about to pass 1000
nodes. Old archived post with a list of world-wide corporate locations
that added one or more network nodes during 1983
https://www.garlic.com/~lynn/2006k.html#8

and the IBM 1983 1000th node globe

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HASP/JES NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

Cluster Supercomputer Tsunami

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster Supercomputer Tsunami
Date: 05 Jan, 2026
Blog: Facebook

Second half of the 70s, transfer to SJR and get to wander IBM (and
non-IBM) datacenters in silicon valley, including disk
bldg14/engineering and bldg15/product test across the street. They
were running 7x24 prescheduled stand-alone mainframe testing and said
that they had recently tried MVS, but it had 15min MTBF (in that
environment) requiring manual re-ipl. I offer to rewrite the I/O
supervisor, making it bullet-proof and never fail, allowing any amount
of on-demand concurrent testing, greatly improving productivity.
Bldg15 gets the 1st engineering 3033 outside POK processor engineering
for channel disk I/O testing. Then in 1978 it got an engineering
4341. Jan1979, a branch office hears about the 4341 and cons me into
doing a benchmark for a national lab looking at getting 70 for a
compute farm (sort of leading edge of the coming cluster
supercomputing tsunami).

A decade later, 1988, get the HA/6000 project, originally for NYTimes
so they could migrate their newspaper system ("ATEX") off DEC
VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc, also port LLNL & NCAR supercomputer
filesystems to HA/CMP) and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Ingres, Informix that had DEC VAXCluster
support in the same source base as unix; I do a distributed lock
manager/DLM with VAXCluster API and lots of scale-up improvements).

Early Jan1992, meeting with Oracle CEO, IBM AWD executive Hester tells
Ellison that we would have 16-system clusters mid92 and 128-system
clusters ye92. Mid-Jan1992, convinced IBM FSD to bid HA/CMP for
gov. supercomputers. Late-Jan1992, HA/CMP is transferred for announce
as IBM Supercomputer (for technical/scientific *ONLY*), and we were
told we couldn't work on clusters with more than 4 systems (we leave
IBM a few months later). A couple weeks after the cluster scale-up
transfer,
17feb1992, Computerworld news ... IBM establishes laboratory to
develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

a few recent posts mentioning 4341 supercomputer leading edge tsunami
& ha/cmp
https://www.garlic.com/~lynn/2025e.html#112 The Rise Of The Internet
https://www.garlic.com/~lynn/2025e.html#44 IBM SQL/Relational
https://www.garlic.com/~lynn/2025e.html#35 Linux Clusters
https://www.garlic.com/~lynn/2025e.html#1 Mainframe skills
https://www.garlic.com/~lynn/2025d.html#98 IBM Supercomputer
https://www.garlic.com/~lynn/2025d.html#68 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025c.html#98 5-CPU 370/125
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025c.html#15 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#72 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2024g.html#76 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2021j.html#52 ESnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 06 Jan, 2026
Blog: Facebook

1972, Learson tried (and failed) to block bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of
management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Future System project 1st half 70s, imploded, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

FS was completely different from 370 and was going to completely
replace it (during FS, internal politics was killing off 370 efforts;
the limited new 370 activity is credited with giving 370 system clone
makers their market foothold). One of the final nails in the FS coffin
was analysis by the IBM Houston Science Center that if 370/195 apps
were redone for an FS machine made out of the fastest available
hardware technology, they would have the throughput of a 370/145
(about a 30 times slowdown)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

trivia: I continued to work on 360&370 all during FS, periodically
ridiculing what they were doing (drawing an analogy with a
long-playing cult film down at central sq; which wasn't exactly a
career-enhancing activity)

Late 70s & early 80s I was blamed for online computer conferencing on
the internal network. It really took off the spring of 1981 when I
distributed a trip report of a visit to Jim Gray at Tandem (he had
left SJR fall1980). Only about 300 directly participated but claims
were that 25,000 were reading. From IBMJargon:
https://havantcivicsociety.uk/wp-content/uploads/2019/05/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

Six copies of a 300-page extraction from the memos were printed,
packaged in Tandem 3ring binders, and sent to each member of the
executive committee, along with an executive summary and an executive
summary of the executive summary (folklore is 5 of the 6 corporate
executive committee members wanted to fire me). From the summary of
the summary:

• The perception of many technical people in IBM is that the company
is rapidly heading for disaster. Furthermore, people fear that this
movement will not be appreciated until it begins more directly to
affect revenue, at which point recovery may be impossible

• Many technical people are extremely frustrated with their management
and with the way things are going in IBM. To an increasing extent,
people are reacting to this by leaving IBM. Most of the contributors
to the present discussion would prefer to stay with IBM and see the
problems rectified. However, there is increasing skepticism that
correction is possible or likely, given the apparent lack of
commitment by management to take action

• There is a widespread perception that IBM management has failed to
understand how to manage technical people and high-technology
development in an extremely competitive environment

... snip ...

About the same time (early 80s), I was introduced to John Boyd and
would sponsor his briefings at IBM
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory
https://en.wikipedia.org/wiki/OODA_loop
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/
https://thetacticalprofessor.net/2018/04/27/updated-version-of-boyds-aerial-attack-study/
John Boyd - USAF The Fighter Pilot Who Changed the Art of Air Warfare
http://www.aviation-history.com/airmen/boyd.htm

Boyd then used E-M as a design tool. Until E-M came along, fighter
aircraft had been designed to fly fast in a straight line or fly high
to reach enemy bombers. The F-X, which became the F-15, was the first
Air Force fighter ever designed with maneuvering specifications. Boyd
was the father of the F-15, the F-16, and the F-18.

... snip ...

In 89/90, the Marine Corps Commandant leverages Boyd for a makeover of
the corps (at a time when IBM was desperately in need of a
makeover). Then IBM has one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company (take-off on the "baby bell"
breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
John Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
IBM CEO & former AMEX president
https://www.garlic.com/~lynn/submisc.html#gerstner
Pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Terminals
Date: 06 Jan, 2026
Blog: Facebook

IBM home 2741 from Mar1970 until replaced by a 300baud CDI miniterm
summer 1977 & IBM tieline, then replaced by a 1200baud 3101, then
ordered an IBM/PC (in the employee program, although it took so long
to arrive that the IBM/PC street price had dropped below the employee
price) ... with a special IBM 2400 baud hardware encrypting modem
card.

At work, got a 3277 when they were available (replacing the 2741). Big
uproar when 3278s appeared, about the time studies were published that
quarter second response improved productivity. The 3277/3272 had
.086sec hardware response. For the 3278, a lot of terminal electronics
were moved back into the 3274 (reducing 3278 manufacturing cost)
... significantly driving up coax protocol chatter, and 3278/3274
hardware response becomes .3-.5sec (depending on amount of
data). Letters to the 3278 product administrator got the response that
the 3278 wasn't intended for interactive computing, but data entry
(aka electronic keypunch). Later, the IBM/PC 3277 emulation card had
4-5 times the upload/download throughput of the 3278 emulation card.

One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (one of the first and long-time
customers, dating back to CP67 and 2741s, was the online sales and
marketing support HONE systems ... eventually with clones cropping up
all over the world) and at the time the 3278 appeared, my systems were
showing .11sec trivial interactive system response. 3277 hardware
.086sec + system .11sec = .196sec response, easily meeting quarter
sec ... while 3278s would require a time machine to send system
responses back in time.

Also in 1980, IBM STL (since renamed SVL) was bursting at the seams
and 300 people from the IMS group were being moved to an offsite bldg
with dataprocessing back to the STL datacenter. They had tried
"remote" 3270 support and found the human factors totally
unacceptable. I got con'ed into doing channel-extender support so
channel-attached 3270 controllers could be placed at the off-site bldg
... resulting in no perceptible human factors difference between
off-site and inside STL. An unintended consequence was mainframe
system throughput increased 10-15%. STL system configurations had a
large number of 3270 controllers spread across channels shared with
3830/3330 disks ... and significant 3270 controller channel busy
overhead was effectively (for the same amount of 3270 I/O) being
masked by the channel extender (resulting in improved disk
throughput). Then there was consideration of using channel extenders
for all 3270 controllers (even those located inside STL).

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

some posts mentioning response, 3272/3277, 3274/3278
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

--
virtualization experience starting Jan1968, online at home since Mar1970

4341, cluster supercomputing, distributed computing

From: Lynn Wheeler <lynn@garlic.com>
Subject: 4341, cluster supercomputing, distributed computing
Date: 07 Jan, 2026
Blog: Facebook

Future System 1st half of the 70s ... completely different from 370
and going to completely replace it (internal politics during FS was
killing off 370 efforts, and lack of new 370s is credited with giving
clone 370 makers their market foothold).

Future System project 1st half 70s, imploded, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

When FS finally implodes, there is a mad rush to get stuff back into
370 product pipelines, including kicking off the quick&dirty 3033&3081
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

Endicott cons me into helping with the virgil/tully (138/148) ECPS
microcode assist ... archived post with copy of the initial analysis
https://www.garlic.com/~lynn/94.html#21

Endicott also wanted to preinstall VM370 on every 138/148 shipped, but
corporate vetoed that ... in part because the head of POK was in the
process of convincing corporate to kill the VM370 product, shut down
the development group and transfer all the people to POK for MVS/XA
(Endicott eventually acquires the VM370 product mission but had to
recreate a development group from scratch).

I also get talked into working on a 16-CPU 370, and we con the 3033
processor engineers into working on it in their spare time (a lot more
interesting than remapping 168 logic to 20% faster chips). Everybody
thought it was great until somebody tells the head of POK that it
could be decades before POK's favorite son operating system ("MVS")
had (effective) 16-CPU support (existing MVS documentation was that
simple 2-CPU support only got 1.2-1.5 times the throughput of 1-CPU;
POK doesn't ship a 16-CPU system until after the turn of the
century). The head of POK then invites some of us to never visit POK
again and directs the 3033 processor engineers, heads down and no
distractions.

I transfer out to SJR on the west coast and got to wander IBM (and
non-IBM) datacenters in silicon valley, including disk
bldg14/engineering and bldg15/product test, across the street. They
were running 7x24, prescheduled stand-alone mainframe testing. They
said they had tried MVS, but it had 15min MTBF (in that environment)
requiring manual re-ipl. I offer to rewrite the I/O supervisor, making
it bullet-proof and never fail, allowing any amount of on-demand,
concurrent testing ... greatly improving productivity.

Bldg15 gets the first engineering 3033 (outside POK engineering) for
I/O testing ... which only takes a percent or two of CPU, so we
scrounge up a 3830 controller and string of 3330 disks for a private
online service. Then in 1978, bldg15 gets an engineering 4341 (w/ECPS)
... and with some microcode tweaks it was also able to do 3mbyte/sec,
data-streaming channel testing. Jan1979, a branch office hears about
it and cons me into doing a benchmark for a national lab looking at
getting 70 for a compute farm (sort of the leading edge of the coming
cluster supercomputing tsunami).

trivia-1: In the morph of CP67->VM370, lots of stuff was simplified
and/or dropped (including shared-memory, tightly-coupled,
multiprocessor support). Then with a VM370R2-base, I start adding lots
of stuff back in for my internal CSC/VM. Then for VM370R3-base CSC/VM,
I add multiprocessor support back in, initially for the online
sales&marketing consolidated US HONE, so they can upgrade their
158&168s to 2-CPU (getting twice the throughput of single CPU
systems). Note: when FACEBOOK 1st moves into silicon valley, it was
into a new bldg built next door to the former consolidated US HONE
datacenter.

trivia-2: The communication group was fighting release of mainframe
TCP/IP; when they lost, they changed strategy and said that since they
had corporate ownership of everything crossing datacenter walls, it
had to be released through them. What shipped got aggregate
44kbytes/sec using nearly a whole 3090 CPU. I then add RFC1044 support
and in some testing at Cray Research between a Cray and a 4341, got
sustained 4341 channel throughput using only a modest amount of 4341
CPU (something like 500 times improvement in bytes moved per
instruction executed).

trivia-3: in the 1st half of the 80s, there were large corporations
ordering hundreds of VM/4341s at a time for deployment out in
departmental areas (sort of the leading edge of the coming
departmental computing tsunami) ... inside IBM, departmental
conference rooms became scarce as so many were converted to VM/4341
rooms. MVS started lusting after the market. The problem was the only
new CKD disks were (datacenter) 3380s, while the only mid-range,
non-datacenter disks were FBA (which MVS didn't support). Eventually
3370s were modified for CKD emulation as the 3375. It didn't do them
much good; departmental computing was looking at scores of systems per
support person, while MVS still required scores of support people per
system.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

vm/4341, cluster supercomputing, distributed computing posts
https://www.garlic.com/~lynn/2025d.html#53 Computing Clusters
https://www.garlic.com/~lynn/2025d.html#11 IBM 4341
https://www.garlic.com/~lynn/2025c.html#77 IBM 4341
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025c.html#15 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#44 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#38 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#81 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#46 Netscape
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024c.html#107 architectural goals, Byte Addressability And Beyond
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022f.html#92 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2022.html#124 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#55 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#24 IBM Recruiting
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019c.html#42 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019c.html#35 Transition to cloud computing
https://www.garlic.com/~lynn/2018b.html#104 AW: mainframe distribution
https://www.garlic.com/~lynn/2018.html#24 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2016h.html#48 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016h.html#29 Erich Bloch, IBM pioneer who later led National Science Foundation, dies at 91

--
virtualization experience starting Jan1968, online at home since Mar1970

4341, cluster supercomputing, distributed computing

From: Lynn Wheeler <lynn@garlic.com>
Subject: 4341, cluster supercomputing, distributed computing
Date: 08 Jan, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026.html#11 4341, cluster supercomputing, distributed computing

Amdahl won the battle to make ACS 360-compatible ... then ACS/360 was
killed (folklore: executives felt it would advance the state of the
art too fast and IBM would lose control of the market) and Amdahl
leaves IBM.
https://people.computing.clemson.edu/~mark/acs_end.html
The above mentions some ACS/360 features that show up more than 20yrs
later in the 90s with ES/9000

Then FS (with its killing off of 370 efforts) ... one of the last
nails in the FS coffin was the IBM Houston Scientific Center analysis
that if 370/195 applications were redone for an FS machine made out of
the fastest available hardware, they would have the throughput of a
370/145 (about a 30 times slowdown).

Quick&dirty 303x started out with channel director as 158 engine with
just the integrated channel director microcode and no 370 microcode. A
3031 was two 158 engines, one with just 370 microcode and the other
just integrated channel microcode. 3032 was 168-3 reconfigured for
channel director external channels (i.e. 158 engine and integrated
channel microcode). 3033 started out 168 logic remapped to 20% faster
chips.

The 3081 was some warmed-over FS technology and started out
multiprocessor only. The first 3081D was two-processor with aggregate
MIPS less than an Amdahl 1-CPU system. They doubled the CPU cache
sizes, bringing the 2-CPU 3081K aggregate MIPS up to about the same as
the Amdahl 1-CPU ... although even with the same aggregate MIPS, MVS
3081 2-CPU systems only had .6-.75 times the throughput of the Amdahl
1-CPU (because of MVS's large multiprocessor overhead).

ECPS trivia: Very early 80s, I got permission to give presentations at
user group meetings on details of the ECPS implementation ... and
after the meetings, Amdahl people would grill me for more
information. They said that they were doing a microcode (virtual
machine) hypervisor ("multiple domain") using MACROCODE (370-like
instructions running in microcode mode; MACROCODE originally done to
respond to the plethora of trivial 3033 microcode changes required for
MVS to run). IBM was then finding IBM customers were slow migrating
from MVS to MVS/XA .... but migration was much better on Amdahl
machines because they could run MVS and MVS/XA concurrently on the
same machine (IBM doesn't respond with LPAR until nearly a decade
later on 3090).

POK had the problem that after they killed VM370 (at least for the
high-end) they didn't have anything equivalent. They had done a
limited VMTOOL virtual machine for MVS/XA testing (but never intended
for production) ... it also required the SIE microcode instruction
(for 370/XA) to move in/out of virtual machine mode ... but because of
limited 3081 microcode space, it had to be paged in/out ... further
limiting its usefulness for production. Eventually IBM did hacks on
VMTOOL as VM/MA & VM/SF (for limited concurrent testing of MVS &
MVS/XA). Much of 370/XA was to compensate for problems and
shortcomings with MVS (for instance, my redo of the I/O supervisor had
about 1/20th the MVS pathlength for channel redrive, aka after an
ending interrupt, restarting the channel with a queued request).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

jan1979 national lab (cdc6600) rain/rain4 fortran benchmark
https://www.garlic.com/~lynn/2000d.html#0

a few recent posts mentioning amdahl, macrocode, hypervisor, multiple
domain, ecps, lpar
https://www.garlic.com/~lynn/2025e.html#67 Mainframe to PC
https://www.garlic.com/~lynn/2025d.html#110 IBM System Meter
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025b.html#118 IBM 168 And Other History
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025.html#19 Virtual Machine History
https://www.garlic.com/~lynn/2024f.html#30 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024.html#63 VM Microcode Assist
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Virtual Machine and Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Virtual Machine and Virtual Memory
Date: 10 Jan, 2026
Blog: Facebook

Some of the MIT CTSS/7094 people went to the 5th flr for
MULTICS. Others went to the IBM Cambridge Science Center on the 4th
flr. They did virtual machines (wanted a 360/50 to modify with
hardware virtual memory, but all the extra 50s were going to FAA/ATC,
so had to settle for a 360/40 to modify) and did CP40/CMS. Then when
the 360/67 (standard with virtual memory) became available, CP40/CMS
morphs into CP67/CMS (the official IBM system for the 360/67 was
TSS/360 ... at the time TSS/360 was decommitted, there were 1200
people in the TSS/360 organization and 12 people in the CP67/CMS
group).

Early last decade I was asked to track down the decision to add virtual
memory to all 370s. I found a staff member to the executive making the
decision. Basically MVT storage management was so bad that region sizes
were being specified four times larger than used ... and frequently a
standard 1mbyte 370/165 only ran four regions concurrently
(insufficient to keep the system busy and justified). Running MVT in a
16mbyte virtual address space (VS2/SVS) allowed the number of regions
to be increased by a factor of four (capped at 15 because of the 4-bit
storage protect keys) with little or no paging (similar to running MVT
in a CP67 16mbyte virtual machine).

I would periodically drop by Ludlow, who was doing the initial VS2/SVS
(on a 360/67 pending engineering 370s with virtual memory). It needed a
little bit of code to build the virtual memory tables plus simple page
fault, page replacement, and page I/O handling. The big problem was
(same as CP67 with virtual machines) that the channel programs being
passed had virtual addresses (and channels require real addresses), so
copies of the channel programs had to be made, replacing the virtual
addresses with real. He borrows CP67 CCWTRANS for crafting into
EXCP/SVC0.
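
A conceptual sketch of what CCWTRANS-style channel program translation
has to do (Python pseudo-logic only, not the actual CP67 CCWTRANS or
EXCP/SVC0 code; real CCWs, IDA lists, TIC chains, page fixing, 2K
pages, etc are all glossed over, and the 4K page size and sample
addresses are just assumptions for illustration):

# copy a virtual-address channel program, replacing virtual data
# addresses with real addresses via a page table lookup
PAGE = 4096

def translate_ccws(ccws, page_table):
    shadow = []
    for op, virt_addr, count in ccws:
        page, offset = divmod(virt_addr, PAGE)
        frame = page_table[page]      # page must be fixed in real storage
        shadow.append((op, frame * PAGE + offset, count))
    return shadow                     # the real channel is started with this copy

# application/guest channel program uses virtual addresses ...
prog = [(0x02, 0x00123400, 80)]       # READ 80 bytes at virtual 0x123400
# ... the shadow copy passed to the channel uses real addresses
print(translate_ccws(prog, page_table={0x123: 0x045}))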

Note in the 70s, I was pontificating that systems were getting faster
than disks were getting faster. Early 80s, I wrote a tome that since
360 was announced, disk relative system throughput had declined by an
order of magnitude (disks got 3-5 times faster, systems got 40-50 times
faster). A disk division executive then assigned the division
performance group to refute my statements. A couple weeks later, they
came back and said I had slightly understated the problem. They then
respun the analysis for a SHARE presentation (16Aug1984, SHARE 63,
B874) on how to configure disks for improved system throughput.
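
The order-of-magnitude arithmetic is simple (illustrative midpoints of
the ranges quoted above):

# relative system throughput of disk since 360 announce
system_speedup = 40       # systems got 40-50 times faster
disk_speedup   = 4        # disks got 3-5 times faster
print(disk_speedup / system_speedup)   # 0.1 ... down about 10X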

Mid-70s, to get past the 15 region cap as systems were getting larger,
the switch is to VS2/MVS, giving a private 16mbyte virtual address
space to each region. However, OS/360 heritage is heavily
pointer-passing APIs. As a result an 8mbyte image of the MVS kernel is
mapped into every private 16mbyte virtual address space (leaving
8mbytes). MVS subsystems were also moved into their own private 16mbyte
virtual address spaces. Now, for subsystem APIs to access & return
information, a common segment area ("CSA") is mapped into every 16mbyte
virtual address space (initially one segment, leaving 7mbytes for
regions). However, the requirement for CSA space is somewhat
proportional to the number of subsystems and concurrent regions ... so
CSA quickly explodes into a multiple-segment common system area (still
"CSA") and by 3033 it was frequently running 5-6mbytes (leaving
2-3mbytes for each region, and threatening to become 8mbytes, leaving
zero). This was a major factor in the VS2/MVS desperate rush to get to
370/XA ("811" for the Nov1978 architecture & specification document
dates).
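
A sketch of that address-space squeeze (sizes in mbytes, taken from the
description above):

# MVS 16mbyte virtual address space: kernel image + CSA + region
ADDRESS_SPACE = 16
KERNEL_IMAGE  = 8              # MVS kernel image mapped into every address space
for csa in (1, 3, 5, 6, 8):    # common segment/system area growth
    region = ADDRESS_SPACE - KERNEL_IMAGE - csa
    print(f"CSA {csa}mbyte -> {region}mbyte left for application region")
# by 3033, CSA at 5-6mbytes leaves only 2-3mbytes ... 8mbytes would leave zero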

With 3081 and the availability of 370/XA and MVS/XA, customers weren't
moving to MVS/XA as planned. Worse, Amdahl customers were doing a
better job migrating. 3081 was originally going to be multiprocessor
only, and the 3081D 2-CPU had lower aggregate MIPS than a single
processor Amdahl. IBM doubles the 3081 processor cache sizes for the
3081K 2-CPU, with about the same aggregate MIPS as a 1-CPU Amdahl.
Aggravating things, MVS documentation had high MVS 2-CPU multiprocessor
overhead only getting 1.2-1.5 times the throughput of a single CPU
(making the 2-CPU 3081K, even with the same aggregate MIPS, only
.6-.75 the throughput of the Amdahl single CPU).
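
The throughput arithmetic, normalized to the Amdahl single processor
(purely illustrative):

# 2-CPU 3081K vs 1-CPU Amdahl, roughly equal aggregate MIPS
aggregate_ratio = 1.0
for mp_factor in (1.2, 1.5):   # MVS 2-CPU throughput vs 1-CPU, per MVS docs
    # MVS only realizes mp_factor/2 of the 2-CPU aggregate MIPS as throughput
    print(aggregate_ratio * mp_factor / 2)   # 0.6 ... 0.75 of the Amdahl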

Worse, the head of POK had previously convinced corporate to kill the
VM370 product, shutdown the development group, and transfer all the
people to POK for MVS/XA (Endicott eventually saves the VM370 product
for the mid-range). Amdahl had previously done Multiple
Domain/HYPERVISOR (virtual machine MACROCODE), able to run MVS & MVS/XA
concurrently on the same machine. A couple recent posts:
https://www.garlic.com/~lynn/2026.html#11 4341, cluster supercomputing, distributed computing
https://www.garlic.com/~lynn/2025c.html#49 IBM And Amdahl Mainframe
https://www.garlic.com/~lynn/2025b.html#118 IBM 168 And Other History

recent posts mentioning IBM Burlington 7mbyte MVS issue
https://www.garlic.com/~lynn/2025e.html#114 Comsat
https://www.garlic.com/~lynn/2025d.html#91 IBM VM370 And Pascal
https://www.garlic.com/~lynn/2025d.html#68 VM/CMS: Concepts and Facilities
https://www.garlic.com/~lynn/2025d.html#51 Computing Clusters
https://www.garlic.com/~lynn/2025.html#130 Online Social Media
https://www.garlic.com/~lynn/2025.html#104 Mainframe dumps and debugging

Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CSC, SJR, System/R, QBE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CSC, SJR, System/R, QBE
Date: 11 Jan, 2026
Blog: Facebook

When I first graduated and joined Cambridge Scientific Center, one of
my hobbies was enhanced production operating systems for internal
datacenters, and one of the first (and long-time) customers was the
online branch office HONE systems. Branch office training for SEs had
been part of a large SE group on-site at the customer location.
23Jun1969 unbundling
announcement started to charge for (application) software (managed to
make case that kernel software should still be free), SE services,
maint., etc. However they couldn't figure out how not to charge for
trainee SEs on-site at customer. As a result HONE spawned, multiple
(virtual machine) CP67/CMS datacenters around the US providing online
access to trainee SEs at branches, running guest operating systems in
virtual machines. Scientific Center had also ported APL\360 to CMS for
CMS\APL (redoing 16kbyte swapped workspaces for large demand page
virtual memory operation and adding APIs for system services like file
I/O, enabling real world applications) and HONE started providing
online APL-based sales&marketing support applications (which came to
dominate all HONE use, with guest operating system use withering away)
... came to be the largest use of APL in the world as HONE datacenters
spawned all over the world (I was requested to do the 1st couple,
Paris and Tokyo).

When I transferred from Cambridge Scientific Center to San Jose
Research (west coast ... about the same time all the US HONE
datacenters were consolidated up in Palo Alto), I worked with Jim Gray
and Vera Watson on System/R, the original SQL/relational
implementation. It was initially developed on a VM370 370/145. Then,
with the corporation preoccupied with the next great DBMS ("EAGLE"), we
were able to do tech transfer (under the radar) to Endicott for
SQL/DS. Later, after "EAGLE" implodes, there was a request for how fast
System/R could be ported to MVS (eventually released as DB2, originally
for decision/support only).


Date: 03/10/80 18:36:35
From: Jim Gray

Peter DeJong of Yorktown Computer Science

Father of QBE

Arch-enemy of System R

Will be speaking on Tuesday (today) at 2:30-3:30 in 2C-244

On: System For Business Automation (SBA) which is a conceptual model
for an electronic office system. Peter has lots of good ideas on how
to send forms around to people, how to use abstract data types to
conquer the office automation problem. He also has some ideas on how
to implement triggers which are key to SBA.

... snip ... top of post, old email index

One of the science center members did an APL-based analytical system
model ... which was made available on HONE as the Performance
Predictor. SEs could enter customer's system configuration and
workload activity data and ask questions about what happens when
changes are made to configuration or workload.

Turn of the century, I was brought into a financial outsourcing
mainframe datacenter (that handled all processing for half of all
credit card accounts in the US) ... it had more than 40 max configured
mainframes
(@$30M, none older than 18months, constant rolling upgrades) all
running the same 450k statement cobol app (number of mainframes needed
to finish settlement in the overnight batch window). I did some
performance analysis and optimization using some science center
technology from the 70s, and found 14% overall better throughput. They
also had another consultant that had acquired a descendant of the
Performance Predictor (in the 90s when IBM was barely being saved from
breakup and was unloading all sorts of stuff), ran it through an APL->C
translator, and was using it for performance consulting; they found
another 7% throughput improvement (21% aggregate improvement, >$200M
savings).

previous archived post with same QBE email
https://www.garlic.com/~lynn/2002e.html#email800310

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
Cambridge Science Cener posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
posts mentioning unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

some recent posts mentioning HONE performance predictor
https://www.garlic.com/~lynn/2025e.html#64 IBM Module Prefixes
https://www.garlic.com/~lynn/2025e.html#27 Opel
https://www.garlic.com/~lynn/2025c.html#19 APL and HONE
https://www.garlic.com/~lynn/2025b.html#68 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#6 Testing
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization

--
virtualization experience starting Jan1968, online at home since Mar1970

Webservers and Browsers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Webservers and Browsers
Date: 12 Jan, 2026
Blog: Facebook

... random trivia, 1st webserver in the US was on SLAC's VM system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

other trivia:

1988, HA/6000 was approved, initially for NYTimes to migrate their
newspaper system off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc, also porting LLNL LINCS and NCAR
filesystems to HA/CMP) and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support
in same source base with unix (also do DLM supporting VAXCluster
semantics).

Early Jan92, have a meeting with Oracle CEO where IBM AWD executive
Hester tells Ellison that we would have 16-system clusters by mid92
and 128-system clusters by ye92. Mid Jan92, convince IBM FSD to bid
HA/CMP for gov. supercomputers. Late Jan92, cluster scale-up is
transferred for announce as IBM Supercomputer (for
technical/scientific *ONLY*) and we are told we can't do clusters with
anything that involves more than four systems (we leave IBM a few
months later). This was partially blamed on FSD going up to the IBM
Kingston supercomputer group to tell them FSD was adopting HA/CMP for
gov. bids (of course somebody was going to have to do it
eventually). A couple weeks later, 17feb1992, Computerworld news
... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

Not long after leaving IBM, I was brought in as a consultant to a small
client/server startup; two former Oracle people (that had worked on
HA/CMP and were in the Ellison/Hester meeting) were there responsible
for something called "commerce server" and they wanted to do payment
transactions. The startup had also invented this stuff they called
"SSL" they wanted to use; the result is now frequently called
"e-commerce". I had responsibility for everything between web servers
and the payment networks, including the payment gateways. One of the
problems was that HTTP&HTTPS transactions were built on top of a TCP
implementation that sort of assumed long-lived sessions. As webserver
workload ramped up, web servers were starting to spend 95+% of CPU
running the FINWAIT list. NETSCAPE was increasing the number of servers
and trying to spread the workload. Eventually NETSCAPE installs a large
multiprocessor server from SEQUENT (which had redone DYNIX FINWAIT
processing to eliminate that non-linear increase in CPU overhead).
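
A toy model of why that hurt so much (not NETSCAPE's or SEQUENT's
actual code, and the 60-second FINWAIT lifetime is just an assumption):
if every HTTP request is a separate short-lived TCP connection and each
close does a linear scan of the FINWAIT list, the per-second work grows
roughly with the square of the connection rate:

def finwait_scan_work(conn_per_sec, finwait_secs=60):
    list_len = conn_per_sec * finwait_secs   # connections lingering in FINWAIT
    return conn_per_sec * list_len           # closes/sec * entries scanned each

for rate in (10, 100, 1000):
    print(rate, finwait_scan_work(rate))     # work grows ~quadratically
# removing the non-linear behavior (as the DYNIX rework did) makes the
# per-close cost roughly constant, so CPU grows only linearly with load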

I then did a talk on "Why Internet Isn't Business Critical
Dataprocessing", based on the documentation, processes and software I
had to do for e-commerce, which (IETF RFC editor) Postel sponsored at
ISI/USC.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
e-commerce payment network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360s, Unbundling, 370s, Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360s, Unbundling, 370s, Future System
Date: 15 Jan, 2026
Blog: Facebook

Before I graduated, I was hired into a small group in the Boeing CFO
office to help with the formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit). I
thought the Renton datacenter was possibly the largest in the world,
360/65s arriving faster than they could be installed. Lots of politics
between the Renton director and the CFO, who only had a 360/30 up at
Boeing Field for payroll (although they enlarge the room to install a
360/67 for me to play with).

When I graduate, instead of staying with the Boeing CFO, I join the IBM
Cambridge Scientific Center ... and shortly later was asked to help
with adding multithreading to the 370/195. Amdahl had won the battle to
make ACS 360-compatible ... but then ACS/360 was killed (folklore is
that executives were concerned it would advance the state-of-the-art
too fast and IBM would lose control of the market) and Amdahl leaves
IBM to start his own clone mainframe company. Some discussion of
multithreading here:
https://people.computing.clemson.edu/~mark/acs_end.html

370/195 was pipelined and out-of-order execution ... but conditional
branches drained the pipeline ... and most code only ran at half
throughput. Adding multithreading, implementing two I-streams
(simulating two CPUs), each running at half throughput, could possibly
keep the 195 fully busy. Then with the decision to add virtual memory
to all 370s, it was decided that it would be too hard adding virtual
memory to 195 ... and all new 195 work was canceled. It turns out it
wouldn't have actually done that much good, anyway. MVT up through MVS
documentation had 2-CPU operation only getting 1.2-1.5 times the
throughput of single CPU systems (or in 195 case, .6-.75 of fully
busy, because of heavy multiprocessor overhead)

After IBM started adding virtual memory to all 370s, the "Future
System" effort started, completely different and intended to replace
all 370s ... internal politics was killing off 370 activity, and the
lack of new 370s during FS is credited with giving clone 370 makers
(including Amdahl) their market foothold. Then with the FS implosion
there is a mad rush to get stuff back into the 370 product pipelines,
including kicking off the quick&dirty 3033 & 3081. One of the last
nails in the FS coffin was the IBM Houston Scientific Center analysis
that if 370/195 applications were redone for an FS machine made out of
the fastest technology available, they would have the throughput of a
370/145 (about a 30 times slowdown).
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive

... snip ...

I also get talked into working on 16-CPU 370, and we con the 3033
processor engineers into working on it in their spare time (a lot more
interesting than remapping 168 logic to 20% faster chips). Everybody
thought it was great until somebody tells the head of POK that it
could be decades before POK's favorite son operating system ("MVS")
had (effective) 16-CPU support (existing MVS documentation was that
simple 2-CPU support only got 1.2-1.5 times the throughput of 1-CPU;
POK doesn't ship a 16-CPU system until after the turn of the
century). The head of POK then invites some of us to never visit POK
again and directs 3033 processor engineers, heads down and no
distractions.

3081 was going to be multiprocessor only, and the initial 3081D
aggregate MIPS was less than an Amdahl single processor. IBM then
doubles the processor cache size for the 3081K and brings aggregate
MIPS up to about the same as the Amdahl single processor (however an
MVS 3081K multiprocessor only had about .6-.75 the throughput of the
Amdahl single processor).

trivia: After I graduated and joined science center, one of my hobbies
was enhanced production operating systems for internal datacenters and
one of the 1st (and long time) customers was HONE. Branch office
training for SEs had been part of large SE group on-site at customer
location. 23Jun1969 unbundling announcement started to charge for
(application) software (managed to make case that kernel software
should still be free), SE services, maint., etc. However they couldn't
figure out how not to charge for trainee SEs on-site at customer. As a
result HONE spawned, multiple (virtual machine) CP67/CMS datacenters
around the US providing online access to trainee SEs at branches,
running guest operating systems in virtual machines. Scientific Center
had also ported APL\360 to CMS for CMS\APL (redoing 16kbyte swapped
workspaces for large demand page virtual memory operation and adding
APIs for system services like file I/O, enabling real world
applications) and HONE started providing online APL-based
sales&marketing support applications (which came to dominate all HONE
use, with guest operating system use withering away) ... came to be the
largest use of APL in the world as HONE datacenters spawned all over
the world (I was requested to do the 1st couple, Paris and Tokyo
... yen was about 330/dollar).

With adding virtual memory to all 370s, there was also an effort to morph
CP67->VM370 where they simplified or dropped a lot of stuff (including
multiprocessor support). 1974, I then start adding a bunch of stuff
back into VM370R2-base for my initial internal CSC/VM. Then for
VM370R3-base CSC/VM, I add multiprocessor support back in, initially
for HONE so they could add a 2nd CPU to all their 158s and 168s
(CSC/VM 2-CPU was getting twice the throughput of single CPU
systems). This was something of a problem for the head of POK, with the
MVS overhead getting such poor multiprocessor operation ... and he
was also in the process of convincing corporate to kill the VM370
product, shutdown the development group, and transfer the people to
POK for MVS/XA (Endicott eventually acquired the VM370 product mission
for the mid-range, but had to recreate a development group from
scratch).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe and non-mainframe technology

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe and non-mainframe technology
Date: 16 Jan, 2026
Blog: Facebook

1988, the IBM branch office asked if I could help LLNL (national lab)
with standardization of some serial stuff they were working with, which
quickly becomes the fibre-channel standard ("FCS", including some stuff
I had done in 1980; initially 1gbit transfer, full-duplex, aggregate
200mbyte/sec). Then the IBM mainframe group releases some serial stuff
(when it was already obsolete) as ESCON, initially 10mbyte/sec,
upgrading to 17mbyte/sec. Then some POK engineers become involved with
"FCS" and define a heavy-weight protocol that drastically cuts the
native throughput, eventually shipping as FICON. Around 2010 there was
a max configured z196 public "Peak I/O" benchmark getting 2M IOPS using
104 FICON (20K IOPS/FICON). About the same time, a "FCS" was announced
for E5-2600 server blades claiming over a million IOPS (two such FCS
with higher throughput than 104 FICON).

A max configured z196 benchmarked at 50BIPS (industry standard
benchmark: number of program iterations compared to the MIPS/BIPS
reference platform) and listed at $30M ($600,000/BIP). By comparison,
IBM had a base list price of $1815 for an E5-2600 server blade that
benchmarked at 500BIPS (same industry standard benchmark, number of
program iterations). Cloud operations assembling their own E5-2600
server blades would be more like (IBM base list $1815/3) $605
($1.21/BIP). Note IBM docs recommend that SAPs (system assist
processors that do the actual I/O) be kept to 70% CPU ... or 1.5M IOPS
... also no CKD DASD have been made for decades (just simulated on
industry standard fixed-block devices).

max configured z196: 50BIPS, 80cores, 625MIPS/core
E5-2600 server blade: 500BIPS, 16cores, 31BIPS/core
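
The price/performance arithmetic spelled out (list prices and BIPS from
the text above):

# $/BIP comparison
z196_price, z196_bips   = 30_000_000, 50
blade_list, blade_bips  = 1815, 500
blade_cloud             = blade_list / 3     # cloud assembling their own blades
print(z196_price / z196_bips)                # $600,000/BIP
print(round(blade_list / blade_bips, 2))     # $3.63/BIP at IBM base list
print(round(blade_cloud / blade_bips, 2))    # $1.21/BIP for cloud megadatacenter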

A large cloud operation would have a score or more megadatacenters,
each with half a million or more E5-2600 server blades and enormous
automation (70-80 staff/megadatacenter). Not long after "Peak I/O",
industry press had articles that server component vendors were shipping
half their product directly to large cloud operations ... and shortly
later, IBM sells off its server blade business.

trivia-1: My (future) wife was in the Gburg JES group, one of the
catchers for ASP/JES3, and was con'ed into going to POK, responsible
for loosely-coupled architecture (Peer-coupled Shared Data). She
didn't remain long, 1) lots of battles with the communication group
trying to force her into using SNA/VTAM for loosely-coupled operation,
2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX),
except IMS hot-standby. She has a story about asking Vern Watts who he
would ask permission from to do hot-standby; he replies: nobody, he
will just tell them when it's all done.
https://www.vcwatts.org/ibm_story.html

Note after Future System imploded, I got asked to help with 16-CPU 370
and we con the 3033 processor engineers into helping in their spare
time (lot more interesting than remapping 168 logic to 20% faster
chips). Everybody thought it was great until somebody tells the head
of POK that it could be decades before POK's favorite son operating
system ("MVS") had (effective) 16-CPU support (at the time MVS docs had
2-CPU multiprocessor systems only getting 1.2-1.5 times the throughput
of a single CPU; POK doesn't ship a 16-CPU system until after the turn
of
century) and head of POK invites some of us to never visit POK again
and directs 3033 processor engineers, heads down and no distractions.

One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (and the internal online
sales&marketing HONE systems were one of the 1st and long-time
customers). In the morph of CP67->VM370, lots of stuff was simplified
or dropped (like multiprocessor support). In 1974, I start adding a
bunch of stuff back into a VM370R2-base for my CSC/VM. Then I add
multiprocessor support back into a VM370R3-base CSC/VM, originally for
HONE so they could upgrade with a 2nd CPU for their 158 & 168 systems
(getting twice the throughput of single CPU systems).

Also 1988, HA/6000 was approved initially for NYTimes to migrate their
newspaper system off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc, also porting LLNL LINCS and NCAR
filesystems to HA/CMP) and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support
in same source base with unix (also do DLM supporting VAXCluster
semantics).

IBM S/88 (relogo'ed Stratus) Product Administrator started taking us
around to their customers and also had me write a section for the
corporate continuous availability document (it gets pulled when both
AS400/Rochester and mainframe/POK complain they couldn't meet
requirements). We had coined disaster survivability and geographic
survivability (as counter to disaster/recovery) when out marketing
HA/CMP. One of the visits, to 1-800 bellcore development, showed that
S/88 would use up a century's worth of downtime in one software
upgrade, while HA/CMP had a couple extra "nines" (compared to S/88).

Early Jan92, have a meeting with Oracle CEO where IBM AWD executive
Hester tells Ellison that we would have 16-system clusters by mid92
and 128-system clusters by ye92. Mid Jan92, convince IBM FSD to bid
HA/CMP for gov. supercomputers. Late Jan92, cluster scale-up is
transferred for announce as IBM Supercomputer (for
technical/scientific *ONLY*) and we are told we can't do clusters with
anything that involves more than four systems (we leave IBM a few
months later). This was partially blamed on FSD going up to the IBM
Kingston supercomputer group to tell them FSD was adopting HA/CMP for
gov. bids (of course somebody was going to have to do it
eventually). A couple weeks later, 17feb1992, Computerworld news
... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

There was some speculation that cluster scale-up would eat the
mainframe in the commercial market. 1993 benchmarks (number of program
iterations compared to the
MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : (1-CPU) 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS
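
The cluster numbers are just the single-system figure multiplied out (a
check on the benchmark list above):

rs6000_990_mips = 126
print(rs6000_990_mips * 16 / 1000)      # ~2 BIPS, 16-system cluster
print(rs6000_990_mips * 128 / 1000)     # ~16 BIPS, 128-system cluster
print(408 / 8)                          # 51 MIPS/CPU for the ES/9000-982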

One of the executives we reported to goes over to head up Somerset/AIM
(Apple, IBM, Motorola) to do the single chip Power/PC (with the
Motorola 88K bus enabling multiprocessor operation). Then, mid-90s, i86
chip makers do a hardware layer that translates i86 instructions into
RISC micro-ops for actual execution (largely negating the throughput
difference between RISC and i86); 1999 industry benchmark:

IBM PowerPC 440: 1,000MIPS
Pentium3: 2,054MIPS (twice PowerPC 440)

... trivia-2: One quote is that (cache miss) memory latency, when
measured in count of processor cycles, is about the same as disk
latency at 360-announce, when measured in count of 60s processor
cycles (memory is the new disk). Early RISC developed memory latency
compensation techniques: out-of-order execution, branch prediction,
speculative execution, multithreading, etc (analogous to 60s
multiprogramming).

... trivia-3: part of the head of POK's issues also came after the
"Future System" implosion:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

he was in the process of convincing corporate to kill the VM370
product, shutdown the development group, and transfer all the people
to POK for MVS/XA (Endicott eventually manages to acquire the VM/370
product mission for the midrange ... but had to recreate a development
group from scratch). Then POK executives were going around internal
datacenters trying to strong-arm them into moving off VM/370 to
MVS. POK tried it on HONE ... got a whole lot of push back, and
eventually had to come back and explain to HONE that HONE had totally
misunderstood what was being said.

FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
Megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
Mainframe loosely-coupled, shared data architecture  posts
https://www.garlic.com/~lynn/submain.html#shareddata
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
virtualization experience starting Jan1968, online at home since Mar1970

Wild Ducks

From: Lynn Wheeler <lynn@garlic.com>
Subject: Wild Ducks
Date: 16 Jan, 2026
Blog: Facebook

Note that for the IBM century/100yrs celebration, one of the 100 videos
was on wild ducks ... but it was customer wild ducks ... all references
to employee wild ducks had been expunged. 1972, Learson tried (and
failed) to block bureaucrats, careerists, and MBAs from destroying
Watson culture/legacy:

Management Briefing
Number 1-72: January 18,1972
ZZ04-1312

TO ALL IBM MANAGERS:

Once again, I'm writing you a Management Briefing on the subject of
bureaucracy. Evidently the earlier ones haven't worked. So this time
I'm taking a further step: I'm going directly to the individual
employees in the company. You will be reading this poster and my
comment on it in the forthcoming issue of THINK magazine. But I wanted
each one of you to have an advance copy because rooting out
bureaucracy rests principally with the way each of us runs his own
shop.

We've got to make a dent in this problem. By the time the THINK piece
comes out, I want the correction process already to have begun. And
that job starts with you and with me.

Vin Learson

--- pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

How to Stuff a Wild Duck
https://www.si.edu/object/chndm_1981-29-438

Future System project (1st half of the 70s) imploded. From 1993
Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

--- FS was completely different from 370 and was going to completely
replace it (during FS, internal politics was killing off 370 efforts;
the limited new 370 activity is credited with giving 370 system clone
makers their market foothold). One of the final nails in the FS coffin
was analysis by the IBM Houston Science Center that if 370/195 apps
were redone for an FS machine made out of the fastest available
hardware technology, they would have the throughput of a 370/145 (about
a 30 times slowdown)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

trivia: I continued to work on 360&370 all during FS, periodically
ridiculing what they were doing (drawing an analogy with a long-playing
cult film down at Central Sq; which wasn't exactly a career enhancing
activity).

Late 70s & early 80s I was blamed for online computer conferencing on
the internal network. It really took off the spring of 1981 when I
distributed a trip report of a visit to Jim Gray at Tandem (he had left
SJR fall1980). Only about 300 directly participated, but claims were
that 25,000 were reading. From IBMJargon:
https://havantcivicsociety.uk/wp-content/uploads/2019/05/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

--- six copies of a 300 page extraction from the memos were printed and
packaged in Tandem 3-ring binders, sent to each member of the executive
committee, along with an executive summary and an executive summary of
the executive summary (folklore is that 5 of 6 of the corporate
executive committee wanted to fire me). From summary of summary:

• The perception of many technical people in IBM is that the company
is rapidly heading for disaster. Furthermore, people fear that this
movement will not be appreciated until it begins more directly to
affect revenue, at which point recovery may be impossible

• Many technical people are extremely frustrated with their management
and with the way things are going in IBM. To an increasing extent,
people are reacting to this by leaving IBM. Most of the contributors
to the present discussion would prefer to stay with IBM and see the
problems rectified. However, there is increasing skepticism that
correction is possible or likely, given the apparent lack of
commitment by management to take action

• There is a widespread perception that IBM management has failed to
understand how to manage technical people and high-technology
development in an extremely competitive environment

--- about the same time in the early 80s, I was introduced to John
Boyd and would sponsor his briefings at IBM
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory
https://en.wikipedia.org/wiki/OODA_loop
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/
https://thetacticalprofessor.net/2018/04/27/updated-version-of-boyds-aerial-attack-study/
John Boyd - USAF The Fighter Pilot Who Changed the Art of Air Warfare
http://www.aviation-history.com/airmen/boyd.htm

Boyd then used E-M as a design tool. Until E-M came along, fighter
aircraft had been designed to fly fast in a straight line or fly high
to reach enemy bombers. The F-X, which became the F-15, was the first
Air Force fighter ever designed with maneuvering specifications. Boyd
was the father of the F-15, the F-16, and the F-18.

... snip ...

--- Boyd version of wild ducks:

"There are two career paths in front of you, and you have to choose
which path you will follow. One path leads to promotions, titles, and
positions of distinction.... The other path leads to doing things that
are truly significant for the Air Force, but the rewards will quite
often be a kick in the stomach because you may have to cross swords
with the party line on occasion. You can't go down both paths, you
have to choose. Do you want to be a man of distinction or do you want
to do things that really influence the shape of the Air Force? To be
or to do, that is the question."

--- in 89/90, the Marine Corps Commandant leverages Boyd for a makeover
of the corps (at a time when IBM was desperately in need of a
makeover). Then IBM has one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company (a take-off on the "baby bell"
breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

This century, we continued to have Boyd conferences at Quantico MCU.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
John Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

some recent wild duck posts
https://www.garlic.com/~lynn/2025d.html#49 Destruction of Middle Class
https://www.garlic.com/~lynn/2025d.html#48 IBM Vietnam
https://www.garlic.com/~lynn/2025d.html#31 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2025d.html#25 IBM Management
https://www.garlic.com/~lynn/2025d.html#9 IBM ES/9000
https://www.garlic.com/~lynn/2025c.html#83 IBM HONE
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#60 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025c.html#51 IBM Basic Beliefs
https://www.garlic.com/~lynn/2025c.html#48 IBM Technology
https://www.garlic.com/~lynn/2025b.html#106 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#102 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#93 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#75 Armonk, IBM Headquarters
https://www.garlic.com/~lynn/2025b.html#57 IBM Downturn, Downfall, Breakup
https://www.garlic.com/~lynn/2025b.html#56 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025b.html#45 Business Planning
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#30 Some Career Highlights
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#123 PowerPoint snakes
https://www.garlic.com/~lynn/2025.html#122 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#115 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#98 IBM Tom Watson Jr Talks to Employees on 1960's decade of success and the 1970s
https://www.garlic.com/~lynn/2025.html#93 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2025.html#84 IBM Special Company 1989
https://www.garlic.com/~lynn/2025.html#71 VM370/CMS, VMFPLC
https://www.garlic.com/~lynn/2025.html#55 IBM Management Briefings and Dictionary of Computing
https://www.garlic.com/~lynn/2025.html#34 The Greatest Capitalist Who Ever Lived: Tom Watson Jr. and the Epic Story of How IBM Created the Digital Age
https://www.garlic.com/~lynn/2025.html#14 Dataprocessing Innovation

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM FAA/ATC

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM FAA/ATC
Date: 21 Jan, 2026
Blog: Facebook

... didn't deal with Joe in IBM, but after leaving IBM, we did a
project with Fox & Template
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514/

Two mid air collisions 1956 and 1960 make this FAA procurement
special. The computer selected will be in the critical loop of making
sure that there are no more mid-air collisions. Many in IBM want to
not bid. A marketing manager with but 7 years in IBM and less than one
year as a manager is the proposal manager. IBM is in midstep in coming
up with the new line of computers - the 360. Chaos sucks into the fray
many executives- especially the next chairman, and also the IBM
president. A fire house in Poughkeepsie N Y is home to the technical
and marketing team for 60 very cold and long days. Finance and legal
get into the fray after that.

Joe Fox had a 44 year career in the computer business- and was a vice
president in charge of 5000 people for 7 years in the federal division
of IBM. He then spent 21 years as founder and chairman of a software
corporation. He started the 3 person company in the Washington
D. C. area. He took it public as Template Software in 1995, and sold
it and retired in 1999.

... snip ...

1988, HA/6000 was approved initially for NYTimes to migrate their
newspaper system off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc, also porting LLNL LINCS and NCAR
filesystems to HA/CMP) and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support
in same source base with unix (also do DLM supporting VAXCluster
semantics).

IBM S/88 (relogo'ed Stratus) Product Administrator started taking us
around to their customers and also had me write a section for the
corporate continuous availability document (it gets pulled when
both AS400/Rochester and mainframe/POK complain they couldn't meet
requirements). We had coined disaster survivability and geographic
survivability (as counter to disaster/recovery) when out marketing
HA/CMP. One of the visits, to 1-800 bellcore development, showed that
S/88 would use up a century's worth of downtime in one software
upgrade, while HA/CMP had a couple extra "nines" (compared to S/88).

Early Jan92, have a meeting with Oracle CEO where IBM AWD executive
Hester tells Ellison that we would have 16-system clusters by mid92
and 128-system clusters by ye92.

We had been spending some amount of time with the TA to the FSD
President, who was working 1st shift as TA and 2nd shift writing ADA
code for the latest FAA program that also involved RS/6000s. Early
specs claimed (hardware) redundancy&recovery was so complete that
software contingency wasn't needed (then part way through they realized
there could be business process failures and the design had to be
revamped). Mid
Jan92, he helps convince IBM FSD to bid HA/CMP for
gov. supercomputers.

Late Jan92, cluster scale-up is transferred for announce as IBM
Supercomputer (for technical/scientific *ONLY*) and we are told we
can't do clusters with anything that involves more than four systems
(we leave IBM a few months later). A couple weeks later, 17feb1992,
Computerworld news ... IBM establishes laboratory to develop parallel
systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Online Apps, Network, Email

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Online Apps, Network, Email
Date: 21 Jan, 2026
Blog: Facebook

MIT 7094/CTSS had online email. Then some of the people go to the 5th
flr for MULTICS and others go to the IBM Cambridge Scientific Center on
the 4th flr and do virtual machines (initially wanting a 360/50 to
modify with virtual memory, but all the extras were going to FAA/ATC,
so they have to settle for a 360/40 and do the virtual machine
CP40/CMS) ... and some number of CTSS apps are replicated for CMS.
CP40/CMS morphs into CP67/CMS when the 360/67, standard with virtual
memory, becomes available. A co-worker was responsible for the
CP67-based wide-area network ... mentioned by one of the CSC members,
who invented GML (later morphs into SGML & HTML) in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.


... snip ...

The Science Center's wide-area network then morphs into the internal
corporate network (larger than arpanet/internet from just about the
beginning until sometime mid/late 80s ... about the time it was forced
to convert to SNA/VTAM) ... technology also used for the corporate
sponsored univ BITNET.

Edson
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.


... snip ...

newspaper article about some of Edson's Internet & TCP/IP IBM battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed, Internet &
TCP/IP) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Along the way, the PROFS group is collecting CMS apps to wrap 3270 menu
screens around, and collects a very early version of VMSG for their
email client. When the VMSG author tries to offer them a much enhanced
VMSG, they try to get him separated from the company. The whole thing
quiets down when the VMSG author demonstrates his initials in a
non-displayed PROFS field. After that he only shares his source
with me and one other person.

IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
Internal Network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (&/or EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some recent archived posts mentioning VMSG & PROFS email client
https://www.garlic.com/~lynn/2026.html#5 PROFS and other CMS applications
https://www.garlic.com/~lynn/2025e.html#103 IBM CSC, HONE
https://www.garlic.com/~lynn/2025d.html#109 Internal Network, Profs and VMSG
https://www.garlic.com/~lynn/2025d.html#43 IBM OS/2 & M'soft
https://www.garlic.com/~lynn/2025d.html#32 IBM Internal Apps, Retain, HONE, CCDN, ITPS, Network
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025b.html#60 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#90 Online Social Media
https://www.garlic.com/~lynn/2024f.html#91 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#44 PROFS & VMSG
https://www.garlic.com/~lynn/2024e.html#99 PROFS, SCRIPT, GML, Internal Network
https://www.garlic.com/~lynn/2024e.html#48 PROFS
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#109 IBM->SMTP/822 conversion
https://www.garlic.com/~lynn/2024b.html#69 3270s For Management
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#32 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022b.html#29 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career

--
virtualization experience starting Jan1968, online at home since Mar1970

