List of Archived Posts

2026 Newsgroup Postings (03/27 - )

CP67 Terminal Support
IBM System/R
IBM 370/195
IBM 370/195
IBM Virtual Machine
Self-hosting and the 6502
Self-hosting and the 6502
Self-hosting and the 6502
Self-hosting and the 6502
CMS, Self-hosting and the 6502
CMS, Self-hosting and the 6502
CMS, Self-hosting and the 6502
CMS, Self-hosting and the 6502
IBM RAS
Bad Response
IBM 16-CPU SMP
IBM 16-CPU SMP
IBM 16-CPU SMP
IBM 16-CPU SMP
DUMPRX
IBM 3090 EREP
IBM 3090 EREP
IBM Marketing
IBM CSC, IBM Unbundle, IBM HONE, IBM System/R, SCI, FCS, IBM HA/CMP
IBM CICS
IBM CICS
IBM CICS
IBM Mainframe
How We Put It Together
IBM Silicon Valley Lab
IBM Silicon Valley Lab
IBM Silicon Valley Lab
IBM Silicon Valley Lab

CP67 Terminal Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP67 Terminal Support
Date: 27 Mar, 2026
Blog: Facebook

Within year taking two credit hr intro fortran/computers, univ got
360/67 for tss/360 replacing 709/1401 and I was hired fulltime
responsible for os/360 (tss/360 never coming to production). A little
over another year, CSC came out to install CP/67 (3rd after CSC itself
and MIT Lincoln Labs). CP/67 arrived with 1052&2741 terminal
support (134.5baud) and auto-terminal ident, capable of switching the
terminal type scanner for each port. Univ. also had TTY33&35
(110 baud) and I add ASCII support integrated with the auto-terminal ident.

I then want to have single dial-in number (hunt group) for all
terminals. Didn't quite work, IBM had hard-wired port line speeds
... so we start a clone terminal controller project: build an IBM channel
interface board for an Interdata/3 programmed to emulate an IBM controller,
with the addition of auto-baud. Then upgraded with Interdata/4 for
channel interface and cluster of Interdata/3s for port
interfaces. Interdata (and later Perkin-Elmer) sell them as clone
controllers.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

... and four of us are written up as responsible for (some part of) the
clone controller business.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Clone/Emulated IBM mainframe controller
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/R

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/R
Date: 30 Mar, 2026
Blog: Facebook

Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS.
Others went to the IBM Cambridge Scientific Center on 4th
flr, virtual machines (wanted 360/50 to add virtual memory, but
all spare 50s were going to FAA/ATC so had to settle for 360/40 to add
virtual memory and do (virtual machine) CP40/CMS, morphs into
CP67/CMS when 360/67 standard with virtual memory).

With decision to add virtual memory to all 370s, some of CSC went to
the 3rd flr, taking over the IBM Boston Programming center for the
VM370 development group. The Future System effort overlapping the
adding virtual memory to (and replacing) all 370s
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

I had joined CSC in early 70s and then transfer out to SJR later 70s,
which was doing original SQL/relational, System/R (was done with
VM370) and I worked with Jim Gray and Vera Watson (lots of opposition
from IMS and EAGLE groups). Jim leaves SJR fall1980 and pawns off
various stuff on me.

My (future) wife had been in the GBURG JES group and one of the
catchers for ASP/JES3. Then she was con'ed into going to POK,
responsible for loosely-coupled (mainframe for cluster) architecture,
where she did Peer-Coupled Shared Data architecture (late
70s). She didn't remain long because of 1) sporadic battles with the
communication group trying to force her into using SNA/VTAM and 2) little
uptake (until much later with SYSPLEX and Parallel SYSPLEX, 90s)
except for IMS hot-standby. She asked Vern Watts who he would ask to
get permission; he replies nobody, he will just tell them when it's all
done
https://www.vcwatts.org/ibm_story.html
https://en.wikipedia.org/wiki/IBM_Information_Management_System

Vern also had major problem with SNA/VTAM: IMS hot-standby could "fall
over" in minutes ... but SNA/VTAM session establishment overhead
increased non-linearly and a typical large terminal (or ATM) configuration
could take an hour and a half (even on a max configured 3090).

While the IBM company was pre-occupied with the next great DBMS "EAGLE",
we managed to do System/R tech transfer to Endicott (mid-range
mainframes) for SQL/DS. Then when "EAGLE" implodes, there was a request
for how fast could System/R be ported to MVS (eventually ships as DB2
for "decision support" only).

1988, HA/6000 was approved (for my wife and me), originally for
NYTimes to move their newspaper system ("ATEX") off DEC VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing (technical/scientific) cluster scale-up with
national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up
with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster
support in the same source base with UNIX (working with Ingres, Oracle and
Sybase on redoing cluster logic & distributed lock manager
for scaling to 128-system clusters).

IBM was also remarketing Stratus as S/88 ... and the S/88 product
administrator started taking us around to their customers and also had
me write section for the corporate continuous available strategy
document (it was removed when both Rochester/AS400 and POK/high-end
mainframe complained).

Early Jan92, there was a meeting with the Oracle CEO where IBM/AWD executive
Hester tells Ellison that we would have 16-system clusters by mid92
and 128-system clusters by ye92. Mid-Jan92, I update IBM FSD on HA/CMP
work with national labs and FSD decides to go with HA/CMP for federal
supercomputers. By end of Jan, we are told that cluster scale-up is
being transferred to Kingston for announce as IBM Supercomputer
(technical/scientific *ONLY*) and we aren't allowed to work with
anything that has more than four systems (we leave IBM a few months
later). A couple weeks later, 17feb1992, Computerworld news ... IBM
establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

Some speculation that it would have eaten the mainframe in the
commercial market. 1993 industry benchmarks (number of program
iterations compared to the industry MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, (51MIPS/CPU)
RS6000/990 (RIOS chipset) : (1-CPU) 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

TPC
https://tpc.org
former co-worker at SJR
http://www.tpc.org/information/who/gray5.asp
TPC-C
https://www.tpc.org/tpcc/results/tpcc_perf_results5.asp?resulttype=all

A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
http://bits.blogs.nytimes.com/2008/05/31/a-tribute-to-jim-gray-sometimes-nice-guys-do-finish-first/
Sailing Mystery Unsolved: Court Declares Jim Gray Dead
http://www.informationweek.com/database/sailing-mystery-unsolved-court-declares-jim-gray-dead/d/d-id/1104453

above references (from 2007):

The Search For Microsoft Researcher Jim Gray; Colleagues rallied to
look for the renowned computer scientist, but to no avail.
http://www.informationweek.com/the-search-for-microsoft-researcher-jim-gray/d/d-id/1053601

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
Peer-coupled Shared Data posts
https://www.garlic.com/~lynn/submain.html#shareddata
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370/195

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370/195
Date: 1 Apr, 2026
Blog: Facebook

Not long after graduating and joining IBM CSC, I was asked to help
with adding multithreading to the 370/195 (more detail in this post about
terminating ACS/360; Amdahl had won the battle to make ACS 360
compatible, then he leaves IBM after ACS/360 was killed)
https://people.computing.clemson.edu/~mark/acs_end.html
370/195
https://en.wikipedia.org/wiki/IBM_System/360_Model_195

Besides a few new instructions for 360/195=>370/195 ... they had also
added instruction retry ... for attempting (hardware) recovery after
hardware fault.

195 had pipeline and out-of-order execution ... but no branch
prediction so conditional branches drained the pipeline. As a result
most code ran at half the 195's rated speed. The idea was that two instruction
streams (simulating a 2-CPU multiprocessor), each running at half rated speed,
would keep the 195 execution units running at full capacity. However, then it was
decided to add virtual memory to all 370s and it wasn't really
practical to add virtual memory to 195 and all new 195 efforts were
terminated.

Early last decade, I was asked to track down the decision to add virtual
memory to all 370s and found the staff member to the executive making the
decision. Basically MVT storage management was so bad that regions had
to be specified four times larger than used. As a result, a standard
1mbyte 370/165 typically could only run four regions concurrently,
insufficient to keep the system busy and justified. Going to a 16mbyte
virtual address space (sort of like running MVT in a CP67 16mbyte
virtual machine) allowed the number of concurrent regions to be increased
by a factor of four (capped at 15 because the 4bit storage protect keys
allow only 16 values, with key 0 reserved for the system) with little or
no paging. Ludlow was doing the initial VS2/SVS on
360/67 (pending engineering 370s with virtual memory). I would drop in
on him periodically; he was doing a little bit of code for the virtual
address space and some simple paging. The biggest problem was that EXCP/SVC0
was now being passed channel programs with virtual addresses while
channels required real addresses (CP67 had the same issue). Ludlow
borrows CP67's CCWTRANS (that performed the same function) to integrate into
EXCP.
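
Loosely sketching the CCWTRANS idea described above (illustrative Python,
not the actual CP67 or SVS code; the names and structures are hypothetical):
the virtual channel program is copied into a shadow channel program, each
CCW's virtual data address is translated to the real address of its (pinned)
page, and any transfer crossing a 4k page boundary is split into data-chained
CCWs, since the channel only works with real addresses.

  PAGE = 4096

  def translate_ccw_program(virtual_ccws, v2r):
      # virtual_ccws: list of (opcode, virtual_addr, length) tuples
      # v2r(vaddr): returns the real address for a virtual byte address
      #             (pinning the page it lives in)
      shadow = []
      for op, vaddr, length in virtual_ccws:
          addr, remaining, pieces = vaddr, length, []
          while remaining > 0:
              # transfer only up to the end of the current 4k page
              in_page = min(remaining, PAGE - (addr % PAGE))
              pieces.append((op, v2r(addr), in_page))
              addr += in_page
              remaining -= in_page
          # all but the last piece are data-chained to the next one
          for i, (o, real, ln) in enumerate(pieces):
              shadow.append((o, real, ln, i < len(pieces) - 1))
      return shadow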

In any case, MVT (and later VS2/MVS) multiprocessor support had so
much overhead that IBM documents said MVT&MVS
2-CPU systems only had 1.2-1.5 times the throughput of
MVT&MVS 1-CPU operation (so a 370/195 running two i-streams
wouldn't have had twice the throughput of one).

The IBM "Future System" effort (overlapping 370 virtual memory effort)
was totally different than 370 and was planned to totally replace
370. During FS, internal politics was also killing new 370 efforts and
lack of new 370s during FS is credited with giving clone 370 makers
(including Amdahl) their market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

Future System disaster, from "Computer Wars: The Post-IBM World"
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive.

... snip ...

When FS finally implodes there was mad rush to get stuff back into the
370 product pipeline, including kicking off quick&dirty
3033&3081 efforts. I was asked to help with a 16-CPU
multiprocessor and we con the 3033 processor engineers into helping in
their spare time (a lot more interesting than remapping 168 logic to
20% faster chips). Everybody thought it was great until somebody tells
the head of POK that it could be decades before POK's favorite son
operating system ("MVS") had (effective) 16-CPU support (i.e. MVS
multiprocessor overhead even for 2-CPU operation, POK doesn't ship
16-CPU system until after the turn of century). Head of POK then
invites some of us to never visit POK again and directs the 3033
processor engineers, heads down and no distractions.

Head of POK was also convincing corporate to kill VM370 product,
shutdown the development group and transfer to POK for MVS/XA
development (Endicott eventually acquires the VM370 product mission
for the mid-range, but had to recreate a development group from
scratch). Likely contributing was that CERN had presented a VM370/CMS -
MVS/TSO comparison at the 1974 SHARE meeting ... copies inside IBM were
stamped "IBM Confidential - Restricted" (2nd highest
classification). Somewhat similar to POK's original plans for
customers migrating to MVS
http://www.mxg.com/thebuttonman/boney.asp

Not long later, I transfer out to SJR on the west coast and got to
wander around datacenters in silicon valley, including disk
bldg14/engineering and bldg15/product test across the street. They
were doing 7x24, prescheduled, stand-alone testing. They said that
they had recently tried MVS, but it had 15min MTBF (in that
environment), requiring manual re-ipl. I offer to rewrite I/O
supervisor making it bullet proof and never fail so it could do any
amount of on-demand testing, greatly improving productivity. I then
write an internal IBM I/O Integrity research report and happen to
mention MVS 15min MTBF bringing down the wrath of the MVS organization
on my head.

Bldg15 gets the 1st engineering 3033 outside POK 3033 processor
engineering. Testing only took a percent or two of the 3033, so we
scrounge up a 3830 controller and string of 3330s, setting up our own
private online service. At the time air-bearing simulation (part of
thin-film disk head design) was only getting a couple
turn-arounds/month on the SJR 370/195. We set it up on bldg15 3033 and
they were able to get several turn-arounds/day.
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

Thin-film heads were originally used for the (FBA) 3370 ... both fixed-block
and thin-film were used for future disks. For the 3380, the fixed-block ...
CKD simulation can be seen in the 3380 records/track formulas, where record
lengths have to be rounded up to a multiple of the fixed cell size.
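
As a rough illustration of that rounding (hypothetical cell size, overhead
and track capacity; not the actual 3380 formula constants): each record is
charged for whole cells, so records/track drops in steps as the record
length crosses a cell boundary.

  CELL = 32            # assumed fixed cell size (bytes)
  TRACK_CELLS = 1500   # assumed cells per track
  OVERHEAD_CELLS = 15  # assumed per-record overhead, already in cells

  def cells(nbytes):
      return -(-nbytes // CELL)          # ceiling division

  def records_per_track(record_len):
      per_record = OVERHEAD_CELLS + cells(record_len)
      return TRACK_CELLS // per_record

  # e.g. a 100-byte record is charged for 4 data cells (128 bytes)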

trivia: the IBM 23Jun1969 unbundling announcement started to charge for
(application) software (managed to make the case that kernel software was
still free), SE services, maint., etc. Part of SE training had been as part
of an SE group at the customer datacenter; however, after unbundling they
couldn't figure out how not to charge for trainee SEs. Eventually the
solution was several US HONE CP67/CMS datacenters for branch office SEs to
login and practice with guest operating systems running in virtual
machines. One of my hobbies (after graduating and joining IBM CSC) was
enhanced production operating systems for internal datacenters and
HONE was one of my first (and long time) IBM internal customers. CSC
also ports APL\360 to CP67/CMS for CMS\APL and HONE starts using it
for online sales&marketing support applications (which come to
dominate all HONE activity; guest operating system use just dwindling
away).

Some of the MIT CTSS/7094 people went to the 5th flr to do
MULTICS. Others went to IBM CSC on the 4th flr and did virtual
machines. They initially wanted a 360/50 to add hardware virtual memory,
but all the spare 50s were going to FAA/ATC, and they had to settle
for 360/40; adding virtual memory and doing CP40/CMS. Then when 360/67
standard with virtual memory became available, CP40/CMS morphs into
CP67/CMS.

With the decision to add virtual memory to all 370s, some of the CSC
people take over the IBM Boston Programming Center on the 3rd flr for
the VM370 Development Group. In the morph of CP67->VM370, lots of
stuff was simplified or dropped (including "wheeler scheduler" and
multiprocessor support). I then start adding lots of CP67 features
into a VM370R2-base (including necessary kernel reorg for
multiprocessor operation) for my internal CSC/VM. US HONE then
consolidates their datacenters in Silicon Valley. Then with
VM370R3-base, I add more stuff in, including multiprocessor support,
originally for HONE so they can upgrade their 158s&168s to 2-CPU
operation (getting twice throughput of 1-CPU systems).

Factoid: when Facebook 1st moved into Silicon Valley, it was into new
bldg built next door to the former consolidated US HONE datacenter.

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
IBM 23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
dynamic adaptive resource management, "wheeler" scheduler posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370/195

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370/195
Date: 1 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#2 IBM 370/195

1980, IBM STL (since renamed SVL) was bursting at the seams and 300
people from the IMS DBMS group were being moved to an offsite bldg with
dataprocessing service back to the STL datacenter. They had tried "remote" 3270
support and found the human factors totally unacceptable. I got con'ed
into doing channel-extender support so channel-attached 3270
controllers could be placed at the off-site bldg ... resulting in no
perceptible human factors difference between off-site and inside
STL. An unintended consequence was mainframe system throughput
increased 10-15%. STL system configurations had large number of 3270
controllers spread all across channels shared with 3830/3330 disks
... and significant 3270 controller channel busy overhead was
effectively (for same amount 3270 I/O) being masked by the channel
extender (resulting in improved disk throughput). Then there was
consideration to use channel extenders for all 3270 controllers (even
those located inside STL).

An attempt was made to get it released to customers, but there was a
group in POK working on serial stuff (that later becomes ESCON) that got it
vetoed (worried that if it was in the market, it would be harder to
justify getting their stuff released).

Also in 1988, HA/6000 was approved, originally for NYTimes to move their
newspaper system ("ATEX") from DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing (technical/scientific) cluster scale-up with
national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up
with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster
support in the same source base with UNIX (working with Ingres, Oracle and
Sybase on redoing cluster logic & distributed lock manager for scaling
to 128-system clusters).

Early Jan92, there was a meeting with the Oracle CEO where IBM/AWD executive
Hester tells Ellison that we would have 16-system clusters by mid92
and 128-system clusters by ye92. Mid-jan92, I update IBM FSD on HA/CMP
work with national labs and FSD decides to go with HA/CMP for federal
supercomputers. By end of Jan, we are told that cluster scale-up is
being transferred to Kingston for announce as IBM Supercomputer
(technical/scientific *ONLY*) and we aren't allowed to work with
anything that has more than four systems (we leave IBM a few months
later). A couple weeks later, 17feb1992, Computerworld news ... IBM
establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

I had been planning on using (native) FCS for both storage I/O as well
as cluster coordination.

Some speculation that it would have eaten the mainframe in the
commercial market. 1993 industry benchmarks (number of program
iterations compared to the industry MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, (51MIPS/CPU)
RS6000/990 (RIOS chipset) : (1-CPU) 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

The executive we reported to goes over to head Somerset/AIM (Apple,
IBM, Motorola) to do the single-chip 801/RISC (Power/PC), which uses the
Motorola 88k bus/cache enabling multiprocessor implementations (and large
clusters of multiprocessor systems).

90s, i86 chip makers do a hardware layer that translates i86
instructions into RISC micro-ops, largely negating difference with
RISC. 1999 industry benchmark:

IBM PowerPC 440: 1,000MIPS
Pentium3: 2,054MIPS (twice PowerPC 440)

Also 1988, branch office asked if I could help LLNL (national lab)
standardize some serial stuff they were working with that becomes
fibre-channel standard ("FCS", including some stuff I had done in
1980, initial 1gbit transfer, full-duplex, aggregate 200mbyte/sec)
https://en.wikipedia.org/wiki/Fibre_Channel

Then POK finally announces their serial stuff in the 90s as ESCON
(when it was already obsolete), initially 10mbytes/sec, upgraded to
17mbytes/sec. Then some POK engineers become involved with "FCS" and
define a heavy-weight FCS protocol that drastically cuts native
throughput, eventually ships as FICON. Around 2010 a max
configured z196 "Peak I/O" benchmark was released publicly, getting 2M
IOPS using 104 FICON (20K IOPS/FICON). About the same time, a "FCS"
was announced for E5-2600 server blades claiming over a million IOPS (two
such FCS having higher throughput than the 104 FICON running over
FCS). Note IBM docs recommend SAP (system assist processors that do the
actual I/O) CPUs be kept to 70% busy ... or 1.5M IOPS ... also no CKD DASD
have been made for decades (just simulated on industry-standard fixed-block
devices).

max configured z196: 50BIPS, 80cores, 625MIPS/core
E5-2600 server blade: 500BIPS, 16cores, 31BIPS/core

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
fiber-channel standard (FCS) and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Virtual Machine

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Virtual Machine
Date: 3 Apr, 2026
Blog: Facebook

In college took two credit hr intro to fortran/computers. At the end of
the semester, I was hired to re-implement 1401 MPIO in assembler for
360/30. Univ was getting 360/67 for tss/360 replacing 709/1401 and got
360/30 temporarily until availability of the 360/67. Univ. shutdown
datacenter on weekends and I got the whole place dedicated (although
48hrs w/o sleep made Monday classes hard). I was given pile of
hardware & software manuals and got to design and implement my own
monitor, device drivers, interrupt handlers, error recovery, storage
management, etc ... and within a few weeks had a 2000 card 360/30
assembler program.

360/67 arrived within year of taking intro class and I was hired
fulltime responsible for os/360 (tss/360 never came to fruition). 709
did student fortran in less than second, but 360/67 os/360 took over
minute. I install HASP for MFT9.5 cutting time in half. I then start
redoing MFT11 SYSGEN STAGE2 carefully placing datasets and PDS members
optimizing arm seek and multi-track search cutting another 2/3rds to
12.9secs. 360/67 never got better than 709 until I install UofWaterloo
WATFOR, clocked on 360/67 at 20,000 cards/min (333 cards/sec
... student fortran tended to run 30-60cards/job).

Then CSC came out to install (virtual machine) CP/67 (3rd after CSC
itself and MIT Lincoln Labs) and I mostly get to play with it during
my weekend 48hr window. I then spend a few months rewriting
pathlengths for running OS/360 in virtual machine. Bare machine test
ran 322secs ... initially 856secs (CP67 CPU 534secs). After a few
months I had CP67 CPU down from 534secs to 113secs. I then start
rewriting the dispatcher/scheduler, (dynamic adaptive resource
manager/default fair share scheduling policy), paging, adding ordered
seek queuing (from FIFO) and multi-page transfer channel programs
(from FIFO and optimized for transfers/revolution, getting 2301 paging
drum from 70-80 4k transfers/sec to channel transfer peak of 270; see
the sketch after this paragraph). Six
months after univ initial CP/67 install, CSC was giving one week class
in LA. I arrive on Sunday afternoon and asked to teach the class, it
turns out that the people that were going to teach it had resigned the
Friday before to join one of the 60s CSC CP67 commercial online
spin-offs (NCSS).
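
A minimal sketch of the rotational-position ordering mentioned above
(illustrative only, not the actual CP67 code; the sector count is an
assumption): queued 2301 page requests are sorted by angular distance from
the current position and chained into one channel program, so several 4k
transfers complete per revolution instead of one per FIFO I/O.

  SECTORS = 9   # assumed number of 4k page slots passing the heads per revolution

  def order_for_one_revolution(queue, current_sector):
      # queue: list of (sector, page_request); returns the requests in the
      # order they will pass under the heads, ready to be chained into a
      # single multi-page channel program
      def rotational_distance(item):
          sector, _request = item
          return (sector - current_sector) % SECTORS
      return [request for _sector, request in sorted(queue, key=rotational_distance)]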

CP/67 arrived with 1052&2741 terminal support and auto-terminal ident,
capable of switching the terminal type scanner for each port. Univ. also
had TTY33&35 and I add ASCII support integrated with the auto-terminal
ident. I then want a single dial-in number (hunt group) for all
terminals. Didn't quite work, IBM had hard-wired port line speeds ... so we
start a clone terminal controller project: build an IBM channel interface
board for an Interdata/3 programmed to emulate an IBM controller with the
addition of auto-baud. Then upgraded with Interdata/4 for channel interface
and cluster of Interdata/3s for port interfaces. Interdata (and later
Perkin-Elmer) sell them as clone controllers.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
... and four of us are written up as responsible for (some part of) the
clone controller business.

Before I graduate, I'm hired fulltime into small group in Boeing CFO
office to help with formation of Boeing Computer Services (consolidate
all dataprocessing into independent business unit). I think Renton
largest datacenter in the world, 360/65s arriving faster than they
could be installed, boxes constantly staged in hallways around machine
room. Lots of politics between Renton director and CFO, who only had a
360/30 up at Boeing Field for payroll, although they enlarge the room
to install 360/67 for me to play with when I wasn't doing other
stuff. 747-3 was flying skies of Seattle getting FAA flt
certification. Tours of mock-up of 747 cabin just south of Boeing
field would claim 747s carried so many people, there would never be
fewer than four jetways.

Boeing Huntsville had got a 2-CPU 360/67 with several 2250 graphic
displays for TSS/360 CAD/CAM (but tss/360 wasn't production), so
configured as two MVTR13 systems. They ran into same problems that
resulted in decision to add virtual memory to all 370s and modified
MVTR13 to run in virtual memory mode (but w/o paging).

Early last decade I was asked to track down decision to add virtual
memory to all 370s. Basically MVT storage management was so bad that
region sizes had to be specified four times larger than used, limit
standard 1mbyte, 370/165 to four concurrent running regions,
insufficient to keep system busy and justified. Running MVT in 16mbyte
virtual address space (similar to running MVT in a 360/67, CP/67
16mbyte virtual machine) allowed number of concurrent regions to be
increased by factor of four times (capped at 15 concurrent regions
because of 4bit storage protect key) with little or no paging.

When I graduated, I joined the IBM Cambridge Scientific Center
(instead of staying w/CFO) and one of my hobbies was enhanced
production operating systems for internal datacenters. Ludlow was
doing the initial implementation of MVT->VS2/SVS on 360/67 (until
engineering 370 with virtual memory) and I would drop by
periodically. He had a little bit of code for the 16mbyte virtual
address space and some simple paging. Biggest task was channel
programs passed to EXCP/SVC0 now had virtual addresses and channels
required real addresses and he borrows CP67's CCWTRANS for integrating
into EXCP (creating channel program copies, replacing virtual with
real addresses).

Overlapping with 370 virtual memory was IBM's Future System project,
totally different than 370 and planned to totally replace 370. Lack of
new 370s during Future System is credited with giving the clone 370
makers (including Amdahl) their market foothold. Observation was that
any other computer company with a failure the magnitude of FS would
have been bankrupt
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

When Future System imploded, there was mad rush to get stuff back into
the 370 product pipelines, including kicking off the quick&dirty
3033&3081 in parallel. Future System, from "Computer Wars: The Post-IBM
World"
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive

... snip ...

With implosion of FS, Endicott talks me into helping with 138/148
microcode assist (ECPS). Archived post with copy of initial analysis
https://www.garlic.com/~lynn/94.html#21

I was also talked into helping with 16-CPU 370 and we con the 3033
processor engineers into working on it in their spare time (lot more
interesting than remapping 168 logic to 20% faster chips). Everybody
thought it was great until somebody tells the head of POK that it
could be decades before POK's favorite son operating system ("MVS")
had (effective) 16-CPU support (IBM docs had MVS 2-CPU only getting
1.2-1.5 times throughput of 1-CPU, POK doesn't ship a 16-CPU system
until after turn of century). Then the head of POK invites some of us to
never visit POK again and instructs 3033 processor engineers, heads
down and no distractions.

I then transfer out to SJR and get to wander around datacenters in
silicon valley, including disk bldg14/engineering and bldg15/product
test, across the street. They were doing prescheduled, 7x24,
stand-alone testing and mentioned that they had recently tried MVS,
but it had 15min MTBF (in that environment) requiring manual re-ipl. I
offer to rewrite I/O supervisor to make it bullet-proof and never
fail, allowing any amount of on-demand, concurrent testing, greatly
improving productivity. Bldg15 then gets the first engineering 3033
outside POK processor engineering. Testing only took a percent or two of
the 3033, so we scrounge up a 3830 controller and 3330 string and set up
our own private online service. trivia: 303x channel directors were
(still) periodically hanging, requiring manual reset and discover if I
quickly hit all six channel addresses with CLRCH, the channel director
would automagically re-IMPL. I then write an (internal) I/O Integrity
research report and happen to mention MVS 15min MTBF, bringing the
wrath of the MVS organization down on my head.

Note that the head of POK had also convinced corporate to kill the
VM370 product, shutdown the development group and transfer all the
people to POK for MVS/XA. They weren't planning on telling the people
until the very last minute (to minimize the numbers that might escape
into the Boston area). The information managed to leak early and some
number managed to escape and there was search for the leak source
(fortunately for me, the source wasn't given up). Endicott eventually
manages to acquire the VM370 product mission for the mid-range, but
had to recreate a development group from scratch. This was in the very
early days of DEC VMS and there was a joke that the head of POK was a
major contributor to VMS. Also IBM was under restrictions that
machines had to ship in the same sequence as the orders. There is
folklore that the 1st 3033 order was a VM370 customer, which would have
been a great loss of face for the head of POK (having convinced corporate
to kill VM370); the folklore is that the 3033 moving van left the
shipping dock in the "correct" order, but they managed to fiddle the
van travel path and the MVS shipment arrived first.

One of the other places in silicon valley I would drop in at was Tymshare;
Aug1976, they start offering their CMS-based online computer conferencing
(VMSHARE) free to SHARE
https://www.share.org/

I cut a deal with Tymshare to get monthly tape dump of all VMSHARE
files for putting up on internal network and internal systems
(including HONE, internal online sales&marketing support
systems). The VMSHARE archive is here
http://vm.marist.edu/~vmshare

Initially the lawyers objected, concerned that internal employees might be
contaminated by exposure to unfiltered customer information (that was
possibly different from the corporate party line). Something like this
showed up in 1974 when CERN presented a comparison of VM370/CMS and MVS/TSO
at SHARE; even though the presentation was freely available, copies inside
IBM had been stamped "IBM Confidential - Restricted", aka only available on
need-to-know.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone, plug-compatible terminal controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, share memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

recent posts mentioning undergraduate:
https://www.garlic.com/~lynn/2026.html#98 IBM 360&370 Experience
https://www.garlic.com/~lynn/2026.html#95 CP67/CMS, CMS\APL, HONE, VM370/CMS
https://www.garlic.com/~lynn/2026.html#91 CP/67 and VM/370
https://www.garlic.com/~lynn/2026.html#83 Touch Type, Typewriters, Terminals
https://www.garlic.com/~lynn/2026.html#82 IBM DASD, CKD, FBA
https://www.garlic.com/~lynn/2026.html#78 IBM OS Debugging
https://www.garlic.com/~lynn/2026.html#67 Early Mainframe work
https://www.garlic.com/~lynn/2026.html#59 IBM CP67 and VM370
https://www.garlic.com/~lynn/2026.html#28 360 Channel
https://www.garlic.com/~lynn/2026.html#24 IBM 360, Future System
https://www.garlic.com/~lynn/2025e.html#57 IBM 360/30 and other 360s
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

Self-hosting and the 6502

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Self-hosting and the 6502
Newsgroups: alt.folklore.computers
Date: Fri, 03 Apr 2026 16:53:28 -1000

antispam@fricas.org (Waldek Hebisch) writes:

AFAICS main factor was that TSS/360 was too big, which left too
little core for users which lead to intensive paging when one
tried to increase number of users.  Also, VM quite early got
good paging algorithm, other IBM systems used worse algorithms
and improved them only later.

In a sense one can say that TSS/360 was ahead of it times: on
bigger machine smaller fraction of machine would be occupied
by system code so memory available for user whould be significantly
bigger.  IIUC already on 2MB machine TSS/360 behaved much better.

Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by TSS/360
kernel ... but the tss/360 group would proudly point out that 360/67
2-CPU, two mbyte had 3.9 times the throughput of 1-CPU (trying to imply
it was tss/360 multiprocessor capability ... as opposed to more memory
left over after its horribly bloated fixed kernel requirement).

implying the tss/360 implementation was much better than MVT on the 2-CPU
360/65MP (& later MVS 2-CPU), which only had 1.2-1.5 times the throughput
of 1-CPU.

As undergraduate, univ hired me fulltime responsible for os/360 (360/67
running as 360/65).

Then CSC came out to install (virtual machine) CP/67 (3rd after CSC
itself and MIT Lincoln Labs) and I mostly get to play with it during my
weekend 48hr window. I then spend a few months rewriting pathlengths for
running OS/360 in virtual machine. Bare machine test ran 322secs
... initially 856secs (CP67 CPU 534secs). After a few months I had CP67
CPU down from 534secs to 113secs. I then start rewriting the
dispatcher/scheduler (dynamic adaptive resource manager/default fair
share scheduling policy), paging, adding ordered seek queuing (from
FIFO) and multi-page transfer channel programs (from FIFO and optimized
for transfers/revolution, getting 2301 paging drum from 70-80 4k
transfers/sec to channel transfer peak of 270). Six months after univ
initial CP/67 install, CSC was giving one week class in LA. I arrive on
Sunday afternoon and asked to teach the class, it turns out that the
people that were going to teach it had resigned the Friday before to
join one of the 60s CSC CP67 commercial online spin-offs.

Early last decade I was asked to track down decision to add virtual
memory to all 370s. Basically MVT storage management was so bad that
region sizes had to be specified four times larger than used, limit
standard 1mbyte, 370/165 to four concurrent running regions,
insufficient to keep system busy and justified. Running MVT in 16mbyte
virtual address space (similar to running MVT in a 360/67, CP/67 16mbyte
virtual machine) allowed number of concurrent regions to be increased by
factor of four times (capped at 15 concurrent regions because of 4bit
storage protect key) with little or no paging.

When I graduated, I joined the IBM Cambridge Scientific Center (instead
of staying w/CFO) and one of my hobbies was enhanced production
operating systems for internal datacenters (one of the 1st & long time
was the online terminal sales&marketing support HONE). Ludlow was doing
the initial implementation of MVT->VS2/SVS on 360/67 (until engineering
370 with virtual memory) and I would drop by periodically. He had a
little bit of code for the 16mbyte virtual address space and some simple
paging. Biggest task was channel programs passed to EXCP/SVC0 now had
virtual addresses and channels required real addresses and he borrows
CP67's CCWTRANS for integrating into EXCP (creating channel program
copies, replacing virtual with real addresses).

Some of the MIT CTSS/7094 people go to the 5th flr to do MULTICS. Others
went to the IBM Cambridge Science Center and did virtual machines
(initially wanted a 360/50 to add hardware virtual memory, but all the
spare 50s were going to FAA/ATC and they had to settle for a 360/40 to
modify with virtual memory, doing CP40/CMS; when 360/67 standard with
virtual memory became available, CP40/CMS morphs into CP67/CMS). With the
decision to add virtual memory to all 370s, some of CSC spins off and
goes to the 3rd flr, taking over the IBM Boston Programming Center for
the VM370 Development Group. In the morph of CP67->VM370, lots of stuff
was simplified and/or dropped (including paging, "wheeler scheduler",
multiprocessor support).

Then with VM370R2-base, I start adding lots of stuff back in for my
internal CSC/VM release (paging, wheeler scheduler, etc). Then with
VM370R3-base, I add more back in, including 2-CPU multiprocessor support
(initially for internal HONE so they can upgrade 158s&168s systems to
2-CPU, getting twice the throughput).

SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Self-hosting and the 6502

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Self-hosting and the 6502
Newsgroups: alt.folklore.computers
Date: Sat, 04 Apr 2026 07:40:02 -1000

Lawrence D'Oliveiro <ldo@nz.invalid> writes:

On Fri, 03 Apr 2026 16:53:28 -1000, Lynn Wheeler wrote:

Largest 360/67 1-CPU had one mbyte memory ... mostly taken up by
TSS/360 kernel ...

Wikipedia says TSS was not a great success.

Did any timesharing OSes from IBM enjoy much success? Maybe TSO? Did
that do multiuser, without the need for VMs?

re:
https://www.garlic.com/~lynn/2026b.html#5 Self-hosting and the 6502

after CSC came out to install CP67/CMS (and before I had done any
major rewrites), IBM still had a TSS/360 SE onsite and I had to
periodically let him use some of my weekend time.

we put together a simulated interactive fortran edit, compile, execute
benchmark (10 secs delay between every simulated terminal input)
... and cp67/cms had much better throughput and much better
interactive response for 30 simulated users than tss/360 had for 4
simulated users.

after I joined IBM and integrated all my CP67 enhancements and did
more ... the CSC 768kbyte 360/67 was running 75-80 users (104 pageable
4kbyte pages after fixed kernel and my global page replacement).

IBM Grenoble Scientific Center had a 1mbyte 360/67 and modified it to
correspond to the 60s literature on paging working set dispatcher with
"local LRU" page replacement (155 pageable 4kbyte pages after fixed
kernel) running 35 users. The two user workloads were similar, but CSC
had higher throughput and much better interactive response.

Early 80s, Jim Gray had left SJR and joined Tandem and asked me if I
could help a Tandem co-worker get his Stanford Phd ... which involved
global LRU page replacement (the 60s "local LRU" forces were
trying to block a Phd involving global LRU) ... Jim knew I had loads of
my CSC CP67/CMS global LRU data as well as loads of Grenoble
CP67/CMS "local LRU" data ... more than twice as many users with
better performance but only 2/3rds the pageable real storage.
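
For reference, a minimal sketch of the global approach (a clock-style
approximation of global LRU over all pageable frames, rather than per-user
"local" working sets); illustrative only, not the actual CP67 code:

  class Frame:
      def __init__(self, owner=None):
          self.owner = owner          # user/virtual machine using the frame
          self.referenced = False     # hardware reference bit

  def select_victim(frames, hand):
      # one clock hand sweeps *all* pageable frames regardless of owner;
      # a frame whose reference bit is on gets another revolution,
      # otherwise it is stolen.  Returns (victim_index, new_hand).
      n = len(frames)
      while True:
          frame = frames[hand]
          if frame.referenced:
              frame.referenced = False
              hand = (hand + 1) % n
          else:
              return hand, (hand + 1) % n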

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
page replacement, global LRU, wsclock posts
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
virtualization experience starting Jan1968, online at home since Mar1970

Self-hosting and the 6502

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Self-hosting and the 6502
Newsgroups: alt.folklore.computers
Date: Sat, 04 Apr 2026 12:31:34 -1000

Lawrence D'Oliveiro <ldo@nz.invalid> writes:

CMS didn't do multiuser. Hence the need for the "VM" part.

re:
https://www.garlic.com/~lynn/2026b.html#5 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#6 Self-hosting and the 6502

some of the MIT CTSS/7094 people went to 5th flr to do MULTICS, others
went to IBM Cambridge Science Center on the 4th flr and did CP40&CP67
virtual machine (and single user monitor "CMS" specifically designed
for running in virtual machine ... although originally it could run on
360 native hardware (early CMS development was on "bare" 360/40 before
CP40 was operational) .... but that capability was removed in
transition from CP67 to VM370).

Originally CSC wanted a 360/50 to modify with virtual memory ... but
all the spare 50s were going to FAA/ATC and so had to settle for
360/40 and did CP40/CMS ... when 360/67 standard with virtual memory
became available, CP40 morphs into CP67.

Note: the virtual memory done with the 360/40 modifications was somewhat
different from the virtual memory of the 360/67 ... more information
available here:
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

with decision to add virtual memory to all 370s, some of the CSC
people, took over the IBM Boston Programming Center on the 3rd flr for
the VM370 Development group.

The 3rd flr BPC, before becoming the VM370 development group, had earlier
done CPS
https://en.wikipedia.org/wiki/Conversational_Programming_System
... although a lot was subcontracted out to Allen-Babcock (including the
CPS microcode assist for the 360/50)
https://www.bitsavers.org/pdf/allen-babcock/cps/
https://www.bitsavers.org/pdf/allen-babcock/cps/CPS_Progress_Report_may66.pdf

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Self-hosting and the 6502

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Self-hosting and the 6502
Newsgroups: alt.folklore.computers
Date: Sat, 04 Apr 2026 12:53:28 -1000

antispam@fricas.org (Waldek Hebisch) writes:

AFAICS main factor was that TSS/360 was too big, which left too
little core for users which lead to intensive paging when one
tried to increase number of users.  Also, VM quite early got
good paging algorithm, other IBM systems used worse algorithms
and improved them only later.

re:
https://www.garlic.com/~lynn/2026b.html#5 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#6 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#7 Self-hosting and the 6502

trivia: at the time TSS/360 was "decommitted", there were 1200 people in
the TSS/360 organization and 12 people in the CP67/CMS group.

the CP67/CMS organization got even smaller by the time I graduated and
joined CSC, with people leaving for the 60s commercial CP67/CMS online
spin-offs of CSC (along with some from MIT Lincoln Labs).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

past posts mentioning 1200 tss/360 and CSC 12 CP67/CMS
https://www.garlic.com/~lynn/2026.html#12 IBM Virtual Machine and Virtual Memory
https://www.garlic.com/~lynn/2025e.html#20 IBM HASP & JES2 Networking
https://www.garlic.com/~lynn/2025d.html#19 370 Virtual Memory
https://www.garlic.com/~lynn/2025.html#41 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#5 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024f.html#40 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#21 360/50 and CP-40
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#102 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2023g.html#35 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#34 Vintage Computing
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#17 Computer Server Market
https://www.garlic.com/~lynn/2022.html#40 Mythical Man Month
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2019d.html#67 Facebook Knows More About You Than the CIA
https://www.garlic.com/~lynn/2019d.html#59 IBM 360/67
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017f.html#50 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017f.html#25 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2014l.html#20 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2013n.html#3 50th anniversary S/360 coming up
https://www.garlic.com/~lynn/2013m.html#37 Why is the mainframe so expensive?
https://www.garlic.com/~lynn/2013l.html#24 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013h.html#45 Storage paradigm [was: RE: Data volumes]
https://www.garlic.com/~lynn/2013h.html#16 How about the old mainframe error messages that actually give you a clue about what's broken
https://www.garlic.com/~lynn/2013.html#8 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012f.html#24 Time to competency for new software language?
https://www.garlic.com/~lynn/2011p.html#48 Hello?
https://www.garlic.com/~lynn/2011o.html#14 John R. Opel, RIP
https://www.garlic.com/~lynn/2011m.html#6 What is IBM culture?
https://www.garlic.com/~lynn/2011l.html#25 computer bootlaces
https://www.garlic.com/~lynn/2011h.html#69 IBM Mainframe (1980's) on You tube
https://www.garlic.com/~lynn/2011.html#20 IBM Future System
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2009r.html#42 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009k.html#1 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2008s.html#48 New machine code
https://www.garlic.com/~lynn/2008j.html#83 How powerful C64 may have been if it used an 8 Mhz 8088 or 68008 ?microprocessor (with otherwise the same hardware)?
https://www.garlic.com/~lynn/2008h.html#78 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2007t.html#62 Remembering the CDC 6600
https://www.garlic.com/~lynn/2007t.html#58 Remembering the CDC 6600
https://www.garlic.com/~lynn/2007m.html#60 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007f.html#9 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2005k.html#8 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004f.html#55 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2003m.html#16 OSI not quite dead yet
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS, Self-hosting and the 6502

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: CMS, Self-hosting and the 6502
Newsgroups: alt.folklore.computers
Date: Sun, 05 Apr 2026 14:30:15 -1000

John Levine <johnl@taugh.com> writes:

No matter what it is, it would make no sense since it only runs under
VM which provides quite a lot of multi-user support.

I would also note that any VM system that can run CMS can also run
several flavors of linux, all at the same time, if that's what you
want.

re:
https://www.garlic.com/~lynn/2026b.html#5 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#6 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#7 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#8 Self-hosting and the 6502

"Future System" overlapped adding virtual memory to all 370s, FS was
totally different than 370 and was going to completely replace it,
internal politics during FS was killing off 370 projects and lack of new
370 during FS period is credited with giving clone 370 makers their
market foothold.

when "FS" finally imploded, there was mad rush to get stuff back into
370 product pipeline, including kicking off quick&dirty 3033&3081 in
parallel.

1974, CERN presented comparison of VM370/CMS and MVS/TSO at SHARE ...
inside IBM the report was classified "IBM Confidential - Restricted"
"on need to know" only (not wanting internal employees see the
comparison). How much better VM370/CMS looked, likely was major factor
in the head of POK (high-end 370s) convincing corporate to kill the
VM370/CMS product, shutdown the development group and transfer all the
people to POK for MVS/XA (Endicott lab eventually manages to acquire
the VM370/CMS product mission, but had to recreate a development group
from scratch).

part of the reason that other RDBMS were shipping before System/R was
opposition from "IMS" (& then EAGLE)

I transfer out to SJR on the west coast and work with Jim Gray and
Vera Watson on the original SQL/relational, System/R (all work having
been done on VM370/CMS). Sign a System/R joint study with BofA and
they order 60 VM/4341s for distributed operation (sort of leading edge
of coming distributed computing tsunami). Branch office hears about
engineering 4341 and Jan1979 cons me into doing benchmark for national
lab that was looking at ordering 70 VM/4341s for compute farm (sort of
leading edge of coming cluster super computing tsunami).

VM/4341 starts shipping to customers summer 1979 and begin seeing
large corporations ordering hundreds of VM/4341s at a time for placing
out in departmental areas (inside IBM, departmental conference rooms
were becoming scarce since so many were being converted into
departmental VM/4341 computing rooms).

Was also able to do System/R tech transfer to Endicott for SQL/DS
("under the radar" with the corporation pre-occupied with the next
great DBMS, "EAGLE" ... System/R having met lots of opposition by both
the "IMS" & "EAGLE" DBMS forces). When "EAGLE" implodes, get request
from STL for how fast could "System/R" be ported to MVS ... eventually
released as "DB2" originally for "decision support" only.

trivia: old archived post with decade of VAX/VMS numbers ... VM/4341s
sold in approx. same numbers in single or small unit numbers ... big
difference were the large orders for hundreds of VM/4341 at a time.
https://www.garlic.com/~lynn/2002f.html#0

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS, Self-hosting and the 6502

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: CMS, Self-hosting and the 6502
Newsgroups: alt.folklore.computers
Date: Sun, 05 Apr 2026 22:19:00 -1000

Lawrence D'Oliveiro <ldo@nz.invalid> writes:

Did it really take that many decades for IBM to understand the concept
of position-independent code?

re:
https://www.garlic.com/~lynn/2026b.html#5 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#6 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#7 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#8 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#9 Self-hosting and the 6502

TSS/360 supported position independent code .... could have same
shared segments across different virtual address spaces at different
address locations.

OS/360 languages generated executables with "relocatable" address
constants; when the loader loaded an executable image, the relocatable
addresses were updated ("fixed") for the loaded address locations (aka
"relocatable" only until loaded for execution).

after joining IBM CSC ... with competition from TSS/360 and MULTICS up
on the 5th flr ... I did a page-mapped filesystem for CP67's CMS
(nominal filesystem workload about 3 times faster, and degrading much
more gracefully as load increased) and since CMS used OS/360 language
processors ... it had fixed addresses as part of loading. I had to do
a lot of code fiddling in order to emulate TSS/360 being able to load
shared segments at independent locations.

With TSS/360 decommitted and all the 360 systems (MVT, MFT, DOS, etc)
having to support 370 virtual memory .... the wide-spread
implementation of updating "relocatable addresses" to correspond to the
loaded address ... sort of negated any position-independent
orientation.
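
To make the difference concrete, a minimal sketch in C (hypothetical
structures and names, nothing like the actual OS/360 RLD format or
TSS/360 tables): an OS/360-style loader walks a relocation list and
adds the load address into each address constant, so the in-storage
copy of the executable no longer matches the copy on disk; a
position-independent reference is just base+displacement off a register
set at run time, so the identical image can be mapped anywhere.

  /* hypothetical relocation entry: where an address constant lives
     in the executable image */
  struct fixup { unsigned long offset; };

  /* OS/360-style: "fix" each relocatable adcon to the address where
     the image happens to be loaded (dirtying those pages, and tying
     the image to that one location) */
  void fixup_load(unsigned char *image, unsigned long load_addr,
                  struct fixup *rld, int n)
  {
      for (int i = 0; i < n; i++) {
          unsigned long *adcon = (unsigned long *)(image + rld[i].offset);
          *adcon += load_addr;
      }
  }

  /* TSS/360-style position independence: no stored absolute address,
     everything is reached as base register + displacement, so the
     unmodified disk image can appear at different addresses in
     different virtual address spaces */
  long pic_fetch(unsigned char *base, unsigned long displacement)
  {
      return *(long *)(base + displacement);
  }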

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
location independent code posts
https://www.garlic.com/~lynn/submain.html#adcon
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS, Self-hosting and the 6502

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: CMS, Self-hosting and the 6502
Newsgroups: alt.folklore.computers
Date: Mon, 06 Apr 2026 07:39:18 -1000

Peter Flass <Peter@Iron-Spring.com> writes:

360 was all position-independent. In theory there were no absolute
addresses, everything was base-displacement. Change the base register
and Bob's your uncle. Unfortunately there were a couple of gotchas,
address constants being the worst. Also the small range of addresses
available from a single base became limiting as programs got larger.

re:
https://www.garlic.com/~lynn/2026b.html#5 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#6 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#7 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#8 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#9 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#10 CMS, Self-hosting and the 6502

OS/360, etc ... large programs required addresses ... on disk the
addresses were relative to position within the program ... but loaders
were required to modify the addresses to fixed addresses in real
storage.

tss/360, with support for multiple virtual memory address spaces and
shared segments, wanted the image in memory to be exactly the same as
the image on disk ... w/o requiring all addresses to be modified as the
whole program was brought into real memory ... allowing demand paging
w/o requiring executable images to be preloaded (and addresses modified
for their loaded position). Also not requiring shared segments to have
the same addresses in different virtual address spaces.

OS/360 MVT, in transition to VS2/MVS, kept the OS/360
preloading/swapping of executable images and changing of location
addresses (dirtying the affected pages of the executable image).
TSS/360 just mapped portions of the virtual address space to the
executable image on disk ... and in the case of shared segments could
just change a segment table pointer to that of the same shared segment
concurrently in use by multiple other virtual address spaces.
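
A minimal sketch (hypothetical C types, not the actual 360/370 segment
and page table formats) of why that matters: sharing a segment is just
two address spaces' segment tables pointing at the same page table, and
the shared segment can even sit at different segment numbers (different
virtual addresses) in each space, which only works if nothing in the
shared image had to be "fixed" to one particular address.

  #define SEGMENTS 16

  typedef struct { long page_frame[256]; } page_table;
  typedef struct { page_table *seg[SEGMENTS]; } address_space;

  /* map the same shared page table into two address spaces,
     potentially at different segment numbers */
  void share_segment(address_space *a, int a_segno,
                     address_space *b, int b_segno,
                     page_table *shared)
  {
      a->seg[a_segno] = shared;
      b->seg[b_segno] = shared;
  }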

W/o location independence (i.e. requiring executable images to be
preloaded with address constants modified to their executing position)
... every executing program image would have needed a unique address
across the whole system (or only certain executables could have been
concurrently mapped into the same address space).

That is what got me providing a page-mapped filesystem for CMS ... and
CMS was using OS/360 compilers and assemblers which assumed address
constants had to be modified as executables were loaded (I had to
fiddle the programs so the executable images on disk were identical to
the executable images mapped into virtual address spaces).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
location independent code posts
https://www.garlic.com/~lynn/submain.html#adcon
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS, Self-hosting and the 6502

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: CMS, Self-hosting and the 6502
Newsgroups: alt.folklore.computers
Date: Mon, 06 Apr 2026 17:41:54 -1000

Peter Flass <Peter@Iron-Spring.com> writes:

VM DCSS

re:
https://www.garlic.com/~lynn/2026b.html#5 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#6 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#7 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#8 Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#9 CMS, Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#10 CMS, Self-hosting and the 6502
https://www.garlic.com/~lynn/2026b.html#11 CMS, Self-hosting and the 6502

A lot of the page-mapped filesystem and advanced shared-segments I
updated from CP67 to VM370R2 for my internal CSC/VM ... and a very
small subset was added to VM370R3 as DCSS.

VM370 had been restricted to "IPL by-name" where images were saved at
locations defined (and disk locations specified) in DMKSNT. For DCSS, a
special API was added interfacing to entries in DMKSNT, using several
of the things I had extended for page-mapped filesystem shared-segments
(some that I had twiddled for location independence ... but that wasn't
supported by the small subset used for DCSS).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
location independent code posts
https://www.garlic.com/~lynn/submain.html#adcon
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM RAS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM RAS
Date: 7 Apr, 2026
Blog: Facebook

1988, branch office asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes the fibre-channel standard ("FCS", including some stuff I had
done in 1980; initially 1gbit transfer, full-duplex, aggregate
200mbyte/sec). Then IBM mainframe releases some serial stuff (when it
was already obsolete) as ESCON, initially 10mbyte/sec, upgrading to
17mbyte/sec. Then some POK engineers become involved with "FCS" and
define a heavy-weight protocol that drastically cuts native
throughput, eventually ships as FICON. Around 2010 there was a
max-configured z196 public "Peak I/O" benchmark getting 2M IOPS using
104 FICON (20K IOPS/FICON). About the same time, a "FCS" was announced
for the E5-2600 server blade claiming over a million IOPS (two such FCS
with higher throughput than 104 FICON, running over FCS). Note IBM docs
recommend SAPs (system assist processors that do the actual I/O) be
kept to 70% CPU ... or about 1.5M IOPS.
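
Back-of-envelope check of those numbers (values from the text, simple
arithmetic only, not an actual benchmark):

  #include <stdio.h>

  int main(void)
  {
      double z196_peak_iops = 2000000.0;   /* max-configured z196 "Peak I/O" */
      int    ficon_channels = 104;
      double fcs_iops       = 1000000.0;   /* one FCS on an E5-2600 blade   */

      /* ~19-20K IOPS per FICON channel */
      printf("IOPS per FICON: %.0f\n", z196_peak_iops / ficon_channels);
      /* two FCS exceed the whole 104-FICON configuration */
      printf("two FCS: %.0f IOPS\n", 2.0 * fcs_iops);
      /* capping SAPs at 70% busy gives roughly 1.4-1.5M IOPS */
      printf("70%% SAP cap: %.0f IOPS\n", 0.7 * z196_peak_iops);
      return 0;
  }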

Also 1988, Nick Donofrio approves HA/6000, originally for NYTimes to
move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I
rename it HA/CMP (high-availability, cluster multiprocessor)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix) with VAXCluster
support in same source base with UNIX. trivia: 2nd half of 70s, when I
transferred to SJR on the west coast, I worked with Jim Gray and Vera
Watson on the original SQL/relational, System/R ... had been developed
on VM/370 ... then while corporation was preoccupied with the next
great DBMS, EAGLE ... was able to do tech transfer (under the radar)
to Endicott for SQL/DS ... then EAGLE implodes and was asked how fast
could System/R be ported to MVS, eventually announced as "DB2"
originally for decision-support *ONLY*.

IBM S/88 (relogo'ed Stratus) Product Administrator started taking us
around to their customers and also had me write a section for the
corporate continuous availability document (it gets pulled when both
AS400/Rochester and mainframe/POK complain they couldn't meet
requirements).  Had coined disaster survivability and geographic
survivability (as counter to disaster/recovery) when out marketing
HA/CMP. One of the visits to 1-800 bellcore development showed that
S/88 would use a century of downtime in one software upgrade, while
HA/CMP had a couple extra "nines" (compared to S/88).

One of the first HA/CMP customer installs was the new Indian
Reservation Casino in Connecticut; it was supposed to have a week of
testing before opening ... but after 24hrs, they decided to open the
doors (based on projected revenue; at the time was largest in the US,
still one of the largest in the country)
https://en.wikipedia.org/wiki/Foxwoods_Resort_Casino#Debt_default

Early Jan92, there was HA/CMP meeting with Oracle CEO and IBM/AWD
executive Hester tells Ellison that we would have 16-system clusters
by mid92 and 128-system clusters by ye92. Mid-jan92, I update FSD on
HA/CMP work with national labs and FSD decides to go with HA/CMP for
federal supercomputers. By end of Jan, we are told that cluster
scale-up is being transferred to Kingston for announce as IBM
Supercomputer (technical/scientific *ONLY*) and we aren't allowed to
work with anything that has more than four systems (we leave IBM a few
months later). A couple weeks later, 17feb1992, Computerworld news
... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

Some speculation that it would have eaten the mainframe in the
commercial market. 1993 industry benchmarks (number of program
iterations compared to the industry MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, (51MIPS/CPU)
RS6000/990 (RIOS chipset) : (1-CPU) 126MIPS, 16-systems: 2BIPS,
... 128-systems: 16BIPS

i86/RISC quick search: highly pipelined, translation from i86 to RISC
micro-ops through execution completion is highly overlapped (and can be
out-of-order)
http://gec.di.uminho.pt/DISCIP/MInf/ac0607/FAQ-03.pdf

In each clock cycle, three IA-32 instructions can be fetched, decoded,
and translated into RISC instructions. But only six RISC instructions
or micro-operation can be generated by each clock cycle. If the IA-32
instruction needs more than four uops, they will be generated in
multiple clock cycle, being the first four uops to the first IA-32
instruction and the others to the remaining instructions [3].

After the IA-32 instructions are decoded into RISC instructions or
into a series of RISC instructions, if it needs more than 4 uops, they
will be executed in an out-of-order pool of pending instructions,
where these instructions can be executed without following the same
order of program instructions, considering that there is not a
dependency between them, rising the hardware utilization [5].

... snip ....

... the AWD executive we reported to (doing HA/CMP) goes over to head
up Somerset/AIM (Apple, IBM, Motorola) for Power/PC, which uses the
Motorola 88k RISC cache&bus enabling shared-memory multiprocessor.

1999 benchmark (number of program iterations/sec compared to industry
MIPS/BIPS reference platform)

IBM PowerPC 440: 1,000MIPS
Intel Pentium3 2,054MIPS

2010 benchmark

max configured IBM z196: 50BIPS, 80cores, 625MIPS/core
Intel E5-2600 server blade, two 8-core chips, 500BIPS, 31BIPS/core

FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

misc. posts mentioning i86 chips translating instructions to
RISC micro-ops for execution
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2022g.html#85 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#82 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022b.html#64 Mainframes
https://www.garlic.com/~lynn/2021b.html#66 where did RISC come from, Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#62 instruction clock speed
https://www.garlic.com/~lynn/2016f.html#97 ABO Automatic Binary Optimizer
https://www.garlic.com/~lynn/2014m.html#164 Slushware
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2013l.html#70 50,000 x86 operating system on single mainframe
https://www.garlic.com/~lynn/2013l.html#53 Mainframe On Cloud
https://www.garlic.com/~lynn/2013c.html#59 Why Intel can't retire X86
https://www.garlic.com/~lynn/2012p.html#26 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#6 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#45 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012l.html#81 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012j.html#26 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#1 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012d.html#74 Execution Velocity
https://www.garlic.com/~lynn/2012d.html#64 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012d.html#35 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012c.html#59 Memory versus processor speed

--
virtualization experience starting Jan1968, online at home since Mar1970

Bad Responsee

From: Lynn Wheeler <lynn@garlic.com>
Subject: Bad Responsee
Date: 7 Apr, 2026
Blog: Facebook

I took two credit hr intro to fortran/computers and at the end of
semester was hired to rewrite 1401 MPIO for 360/30. Univ. was getting
360/67 for TSS/360, replacing 709/1401. The univ. shutdown datacenter
on weekends and I got the whole place (although 48hrs w/o sleep made
monday classes hard). I was given a pile of hardware and software
manuals and got to design and implement my own monitor, device
drivers, interrupt handlers, error recovery, storage management, etc
... and within a few weeks had 2000 card assembler program. The 360/67
arrived within a year of taking intro class and I was hired fulltime
responsible for os/360 (tss/360 not coming to production).

709 ran student fortran in under a second, but 360/67 took over a
minute. I install HASP (MFT9.5) cutting time in half. Then for MFT11,
I start redoing STAGE2 SYSGEN to carefully place datasets and PDS
members to optimize arm seek and multi-track search, cutting another
2/3rds to 12.9secs. 360/67 student fortran never got better than 709
until I install UofWaterloo WATFOR (360/67 WATFOR clocked at 20,000
cards/min; 333 cards/sec, student fortran tended to run 30-60 cards).

CSC comes out to install CP67/CMS (3rd installation after CSC itself
and MIT Lincoln Labs) and I mostly get to play with it during my
dedicated weekend time. I initially work on rewriting pathlengths to
optimize running OS/360 in virtual machines. The OS/360 test stream ran
322secs; under CP67 ... initially 856secs (CP67 CPU 534secs). After a
few months I had CP67 CPU down from 534secs to 113secs. I then start
rewriting the dispatcher/scheduler (dynamic adaptive resource
manager/default fair share scheduling policy), paging, adding ordered
seek queuing (from FIFO) and multi-page transfer channel programs (from
FIFO, optimized for transfers/revolution, getting the 2301 paging drum
from 70-80 4k transfers/sec to a channel transfer peak of 270/sec). Six
months after the univ initial install, CSC was giving a one week class
in LA. I arrive on Sunday afternoon and am asked to teach the class; it
turns out that the people that were going to teach it had resigned the
Friday before to join one of the 60s CSC CP67 online commercial
spin-offs.

Before I graduate, I was hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services (consolidate all data processing into an independent business
unit, including offering services to non-Boeing entities). I think the
Renton datacenter was the largest in the world, 360/65s arriving faster
than they could be installed, boxes constantly staged in hallways
around the machine room. Lots of politics between the Renton director
and the CFO, who only had a 360/30 up at Boeing field for payroll
(although they enlarge the room and install a 360/67 for me to play
with when I wasn't doing other stuff).

When I graduate, I join CSC (instead of staying with the CFO). One of
my hobbies at CSC was enhanced production operating systems for
internal datacenters and HONE was one of the first (and long time)
customers. CSC was already running performance monitoring that gathered
periodic data. HONE was originally CP67 datacenters for branch office
SEs dialing in to practice with guest operating systems running in CP67
virtual machines (SE training used to include being part of a group
on-site at the customer, but with the 23jun69 unbundling and charging
for SE services, they couldn't figure out how NOT to charge for trainee
SEs). CSC had also ported APL\360 to CP67/CMS as CMS\APL and HONE
started using it to deliver CMS\APL online sales&marketing support apps
which came to dominate all HONE activity (and virtual machine guest
operating system practice withered away).

One of the CSC co-workers did a very sophisticated CMS\APL-based system
model (considered part of the original capacity planning work) and it
was made available on HONE as the Performance Predictor. Branch office
IBMers could enter customer configuration and workload information and
ask "what-if" questions about changes to configuration and/or workload.

Turn of the century I was doing some work for a financial outsourcing
business (it had been part of AMEX, reporting to Gerstner, but in 1992
was spun off in the largest IPO up until that time, the same year that
IBM had one of the largest losses in the history of US corporations and
was being re-orged into the 13 "baby blues" in preparation for breaking
up the company). I was asked to look at a datacenter that handled half
of all credit card accounts in the US including real-time transactions,
40+ max-configured IBM mainframes, constant rolling updates, none older
than 18months, all running the same 450k statement Cobol program, the
number (of systems) needed to finish settlement in the overnight batch
window (they had a large performance group doing pretty much the same
approach for the previous 20yrs). Using some 60s/70s CSC technology I
managed to identify a 14% improvement. They had also hired an EU
consultant (who had acquired rights to a descendant of the Performance
Predictor during IBM's early 90s troubles, run it through an APL->C
converter, and was doing lots of performance consulting) and he found
another 7% improvement.

IBM Jargon:

bad response - n. A delay in the response time to a trivial request of
a computer that is longer than two tenths of one second. In the 1970s,
IBM 3277 display terminals attached to quite small System/360 machines
could service up to 19 interruptions every second from a user - I
measured it myself. Today, this kind of response time is considered
impossible or unachievable, even though work by Doherty, Thadhani, and
others has shown that human productivity and satisfaction are almost
linearly inversely proportional to computer response time. It is hoped
(but not expected) that the definition of Bad Response will drop below
one tenth of a second by 1990.

... snip ...

Thadhani's studies showed quarter second or better response was needed.

70s: the 3277 terminal with a 3272 channel-attached controller had
.086sec hardware response. Then in the 80s, IBM introduced the 3278
with lots of the electronics moved back into the 3274 controller
(reducing 3278 manufacturing costs), but increasing coax protocol
chatter and latency; hardware response became .3sec-.5sec (depending on
amount of data), making quarter-second response impossible to
achieve. Letters to the 3278 product administrator got the response
that the 3278 wasn't designed for interactive computing, but for data
entry. At the time, I had lots of internal SJR/VM systems with .11sec
interactive system response (hardware .086sec plus system .11sec
results in .196sec).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
IBM 23Jun1969 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone

some posts mentioning 3277 & 3278 hardware response
https://www.garlic.com/~lynn/2026.html#100 IBM 360&370 Experience
https://www.garlic.com/~lynn/2026.html#86 IBM 4341
https://www.garlic.com/~lynn/2026.html#61 IBM SNA
https://www.garlic.com/~lynn/2026.html#9 IBM Terminals
https://www.garlic.com/~lynn/2025d.html#102 Rapid Response
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers   Personal

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 16-CPU SMP

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 16-CPU SMP
Date: 7 Apr, 2026
Blog: Facebook

Overlapping the adding of virtual memory to all 370s, during the first
half of the 70s there was "Future System", completely different from
370 and planned to completely replace 370. Internal politics during FS
was killing off 370 efforts and the lack of new 370s during FS is
credited with giving the clone 370 makers (including Amdahl) their
market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

When Future System imploded, there was a mad rush to get stuff back
into the 370 product pipelines, including kicking off the quick&dirty
3033&3081 in parallel. One of the last nails in the FS coffin was
analysis by the IBM Houston Scientific Center that if 370/195
applications were redone for an FS machine made out of the fastest
available technology, they would have the throughput of a 370/145
(about a 30 times slowdown).

Future System from: Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

With the demise of FS, I was talked into helping with a 16-CPU design
and we con the 3033 processor engineers into helping in their spare
time (a lot more interesting than remapping 168-logic to 20% faster
chips). Everybody thought it was great until somebody tells the head
of POK that it could be decades before the POK favorite son operating
system ("MVS") has ("effective") 16-CPU support (at the time, MVS
documents had its 2-CPU support only getting 1.2-1.5 times the
throughput of 1-CPU system, POK doesn't ship 16-CPU system until after
the turn of the century, z900). The head of POK then invites some of
us to never visit POK again and the 3033 processor engineers are
directed to heads down and no distractions.

Other trivia, 1988, branch office asks if I could help LLNL (national
lab) standardize some serial stuff they were working with, which
quickly becomes the fibre-channel standard ("FCS", including some stuff
I had done in 1980; initially 1gbit transfer, full-duplex, aggregate
200mbyte/sec). Then IBM mainframe releases some serial stuff (when it
was already obsolete) as ESCON, initially 10mbyte/sec, upgrading to
17mbyte/sec. Then some POK engineers become involved with "FCS" and
define a heavy-weight protocol that drastically cuts native throughput,
eventually ships as FICON.

Around 2010 there was a max-configured z196 public "Peak I/O" benchmark
getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time,
a "FCS" was announced for the E5-2600 server blade claiming over a
million IOPS (two such FCS with higher throughput than 104 FICON,
running over FCS). Note IBM docs recommend SAPs (system assist
processors that do the actual I/O) be kept to 70% CPU ... or about 1.5M
IOPS ... also no CKD DASD has been made for decades, all simulated on
industry-standard fixed-block devices.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 16-CPU SMP

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 16-CPU SMP
Date: 8 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#15 IBM 16-CPU SMP

After 16-CPU was torpedoed, I transfer out to SJR (on the west coast)
and got to wander around silicon valley datacenters, including disk
bldg14/engineering and bldg15/product test across the street. They
were doing 7x24, pre-scheduled, stand-alone mainframe testing and
mentioned they had recently tried MVS, but it had 15min MTBF requiring
manual re-ipl. I offer to rewrite the I/O supervisor to make it
bullet-proof and never fail, allowing any amount of on-demand
concurrent testing, greatly improving productivity.

Bldg15 (product test, which frequently gets the earliest engineering
machines) gets the first engineering 3033 (outside POK 3033 processor
engineering). Since testing only took a percent or two of CPU, we
scrounge up a 3830 controller and 3330 string, setting up our own
private online service. Did have some problems with the 303x channel
director (they had taken a 158 engine w/o the 370 microcode and with
just the integrated channel microcode for the channel director; a 3031
was two 158 engines, one with just the 370 microcode and a 2nd with
just the integrated channel microcode; a 3032 was a 168-3 reworked to
use the channel director for external channels; a 3033 could have up to
three channel directors). We find channel directors were still
periodically hanging, requiring manual re-IMPL, and find that if I
executed CLRCH quickly to all six channels of a channel director, it
would force the re-IMPL.

Then summer 1978, bldg15 gets an engineering 4341 (its integrated
channel microcode could be tweaked to do 3mbyte/sec data streaming
channel testing, aka 3880/3380). Branch office hears about it and
Jan1979 cons me into doing a benchmark for a national lab that was
looking at ordering 70 VM/4341s for a compute farm (sort of the leading
edge of the coming cluster supercomputing tsunami). Later in the 80s,
large corporations were ordering hundreds of VM/4341s at a time for
distribution out in departmental areas (inside IBM, conference rooms
became scarce with so many departmental rooms being converted to
distributed VM/4341 rooms) ... sort of the leading edge of the coming
distributed computing tsunami.

I write an internal "I/O Integrity" research report and happen to
mention the MVS MTBF, bringing down the wrath of the MVS organization
on my head.

When I 1st transferred to SJR, I also worked with Jim Gray and Vera
Watson on the original SQL/relational, System/R (done on VM/370
systems). BofA signed a System/R joint study and ordered 60 VM/4341s
for distributed RDBMS. Fall of 1980, Jim leaves SJR for Tandem and
palms off some amount of things on me ... including wanting me to help
BofA with their large scale distributed VM/4341 operation. Was then
possible to do System/R technology transfer (under the radar while the
corporation was preoccupied with the next, great DBMS "EAGLE") to
Endicott for SQL/DS. When "EAGLE" implodes, a request is made for how
fast System/R could be ported to MVS, eventually released as DB2,
initially for "decision support" *ONLY*.

trivia: at the time, the thin-film disk head group was getting a
couple turn-arounds a month on the SJR 370/195 for air-bearing
simulation (part of thin-film head design). We set them up on the
bldg15 3033 and they could get several turn-arounds a day.
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

1980, IBM STL (since renamed SVL) was bursting at the seams and 300
people from the IMS DBMS group were being moved to an offsite bldg with
dataprocessing back to the STL datacenter. They had tried "remote" 3270
support and found the human factors totally unacceptable. I got con'ed
into doing channel-extender support so channel-attached 3270
controllers could be placed at the off-site bldg ... resulting in no
perceptible human factors difference between off-site and inside
STL. An unintended consequence was mainframe system throughput
increased 10-15%. The STL system configurations had a large number of
3270 controllers spread across all channels shared with 3830/3330 disks
... and significant 3270 controller channel busy overhead was
effectively (for the same amount of 3270 I/O) being masked by the
channel extender (resulting in improved disk throughput). There was
then consideration of using channel extenders for all 3270 controllers
(even those located inside STL).

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
original SQL/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 16-CPU SMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 16-CPU SMP
Date: 10 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#15 IBM 16-CPU SMP
https://www.garlic.com/~lynn/2026b.html#16 IBM 16-CPU SMP

same time, Endicott also asked me to work on the 138/148 microcode
assist "ECPS" .... old archived post with the initial analysis for ECPS
https://www.garlic.com/~lynn/94.html#21

And Boeblingen asked me to work on a 5-CPU 125. The 115&125 had a nine
position memory bus ... the 115 had the same microprocessors for both
the controllers and the 370 CPU, the microcode for the 370 CPU getting
about 80kips 370. The 125 was identical except the microprocessor for
the 370 CPU was about 50% faster, getting 120kips 370. The 5-CPU 125
had up to five of the nine positions with the faster microprocessor &
370 microcode, each getting 120kips ... and I was also going to include
the 138/148 ECPS microcode assist. Then Endicott complained that the
5-CPU 125 would overlap the throughput of the 148 and got the
Boeblingen 5-CPU 125 canceled. I had also tweaked the I/O architecture
so it turned out to look a little more like 370/XA (giving the DASD
controller a queue of work).

SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
125 5-CPU effort
https://www.garlic.com/~lynn/submain.html#bounce

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 16-CPU SMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 16-CPU SMP
Date: 10 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#15 IBM 16-CPU SMP
https://www.garlic.com/~lynn/2026b.html#16 IBM 16-CPU SMP
https://www.garlic.com/~lynn/2026b.html#17 IBM 16-CPU SMP

After graduating and joining IBM Cambridge Scientific Center one of my
hobbies was enhanced production operating systems for internal
datacenters and I would also get to continue going to SHARE and drop
in on various customers ... the director of one of the largest true
blue commercial financial datacenters liked me to drop by and talk
technology. At some point the branch manager horribly offended the
customer. In retaliation, they order an Amdahl machine (a lone Amdahl
in a vast sea of blue). This was during "Future System" and Amdahl was
primarily selling into the technical/scientific/university market ...
and this would be the 1st commercial install. I was asked to go onsite
for a year (apparently to help obfuscate the reason for the order). I
talked it over with the customer and was told they would like me
onsite, but it would make no difference in the Amdahl order ... and I
told IBM I declined the offer. I was then told that the branch manager
was a good sailing buddy of the IBM CEO and if I refused, I could
forget a career, promotions, raises.

note web page about Amdahl leaving IBM ... Amdahl had won the battle
to make ACS 360-compatible. Then ACS/360 was canceled and Amdahl
leaves.
https://people.computing.clemson.edu/~mark/acs_end.html

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning declining offer to go onsite
https://www.garlic.com/~lynn/2025d.html#99 IBM Fortran
https://www.garlic.com/~lynn/2025d.html#61 Amdahl Leaves IBM
https://www.garlic.com/~lynn/2025d.html#25 IBM Management
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#19 60s Computers
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#23 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#60 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2021e.html#63 IBM / How To Stuff A Wild Duck
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2016e.html#95 IBM History

--
virtualization experience starting Jan1968, online at home since Mar1970

DUMPRX

From: Lynn Wheeler <lynn@garlic.com>
Subject: DUMPRX
Date: 10 Apr, 2026
Blog: Facebook

Early in REX (before it was renamed REXX and released to customers), I
wanted to show it wasn't just another pretty scripting language. I
selected the large assembler dump analysis program to redo in REX with
ten times the function and ten times the performance (lots of hacks to
run interpreted REX faster than the assembler version), with the
objective of working half time over three months. I finished early and
added a library of automated processes that search for common failure
signatures. I assumed that it would replace the assembler version, but
for whatever reason it didn't, even though it was in use by nearly
every internal datacenter and PSR. I eventually got permission to give
presentations at user group meetings on how it was implemented (and
shortly similar implementations started to appear). old 3090 reference
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

other trivia: as mentioned, after FS implodes I get talked into
working on 16-CPU SMP and we con the 3033 processor engineers into
working on it in their spare time; once the 3033 is out the door, they
start on trout/3090
https://www.garlic.com/~lynn/2026b.html#15 IBM 16-CPU SMP
https://www.garlic.com/~lynn/2026b.html#16 IBM 16-CPU SMP
https://www.garlic.com/~lynn/2026b.html#17 IBM 16-CPU SMP
https://www.garlic.com/~lynn/2026b.html#18 IBM 16-CPU SMP

The 3092 started out as a 4331 running a modified copy of VM370R6 with
all the service screens done in CMS IOS/3270; it was then upgraded to a
pair of 4361s.

Some old email from the 3092 group


Date: 31 October 1986, 16:32:58 EST
To: wheeler
Re: 3090/3092 Processor Controll and plea for help

The reason I'm sending this note to you is due to your reputation of
never throwing anything away that was once useful (besides the fact
that you wrote a lot of CP code and (bless you) DUMPRX.

I've discussed this with my management and they agreed it would be
okay to fill you in on what the 3090 PC is so I can intelligently ask
for your assistance.

The 3092 (3090 PC) is basically a 4331 running CP SEPP REL 6 PLC29
with quite a few local mods. Since CP is so old it's difficult, if not
impossible to get any support from VM development or the change team.

What I'm looking for is a version of the CP FREE/FRET trap that we
could apply or rework so it would apply to our 3090 PC. I was hoping
you might have the code or know where I could get it from (source
hopefully).

The following is an extract from some notes sent to me from our local
CP development team trying to debug the problem. Any help you can
provide would be greatly appreciated.

... snip ... top of post, old email index


Date: 23 December 1986, 10:38:21 EST
To: wheeler
Re: DUMPRX

Lynn, do you remember some notes or calls about putting DUMPRX into an
IBM product? Well .....

From the last time I asked you for help you know I work in the
3090/3092 development/support group. We use DUMPRX exclusively for
looking at testfloor and field problems (VM and CP dumps). What I
pushed for back aways and what I am pushing for now is to include
DUMPRX as part of our released code for the 3092 Processor Controller.

I think the only things I need are your approval and the source for
RXDMPS.

I'm not sure if I want to go with or without XEDIT support since we do
not have the new XEDIT.

In any case, we (3090/3092 development) would assume full
responsibility for DUMPRX as we release it. Any changes/enhancements
would be communicated back to you.

If you have any questions or concerns please give me a call. I'll be
on vacation from 12/24 through 01/04.

... snip ... top of post, old email index

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3090 EREP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3090 EREP
Date: 11 Apr, 2026
Blog: Facebook

A year after the 3090 started shipping, I got a call from the IBM 3090
product administrator. He said the 3090 channel FEC was designed so
that there would only be an aggregate of 3-5 channel errors across all
3090 systems over a year period, but 15-20 errors were being reported
(there was an industry operation that collected customer EREP
information for all IBM and non-IBM clone systems and published
it). Turns out in 1980, I had simulated "channel check" (for any kind
of channel-extender transmission error) for invoking error
retry/recovery, and this was later emulated by a channel-extender
vendor. I then did some research and found IFCC (interface control
check) would effectively invoke the same retry/recovery and got the
vendor to change their "CC" to "IFCC" (to improve the 3090 comparison
to clone 370 makers).

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel-extender

archived posts mentioning 3090 channel check
https://www.garlic.com/~lynn/2026.html#33 IBM, NSC, HSDT, HA/CMP
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#47 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#42 SNA & TCP/IP
https://www.garlic.com/~lynn/2025b.html#53 IBM Datacenters
https://www.garlic.com/~lynn/2025.html#28 IBM 3090
https://www.garlic.com/~lynn/2024g.html#42 Back When Geek Humour Was A New Concept To Me
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#27 STL Channel Extender
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2016h.html#53 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2012e.html#54 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2010m.html#83 3270 Emulator Software
https://www.garlic.com/~lynn/2008g.html#10 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2006y.html#43 Remote Tape drives
https://www.garlic.com/~lynn/2006i.html#34 TOD clock discussion
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3090 EREP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3090 EREP
Date: 11 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#20 IBM 3090 EREP

The 3090 group assumed that 3880 disk controller support was the same
as the 3830, but with 3mbyte/sec data streaming channel support added
... and configured the number of channels based on planned
throughput. However the 3830 had a significantly faster horizontal
microcode microprocessor. The 3880, except for special hardware for
data movement, had a very slow vertical microcode microprocessor ... as
a result, except for pure data transfer, all other operations had
significantly higher channel busy. To meet planned throughput, the
number of 3090 channels had to be significantly increased ... and the
increase in the number of channels required another TCM. The 3090 group
semi-facetiously said that they would bill the 3880 group for the extra
TCM increase in 3090 manufacturing costs. Eventually marketing spun the
large increase in channels as a wonderful I/O machine, when it was
actually required to offset the huge increase in channel busy from 3880
processing.

trivia: I worked with the 3033 processor engineers when we con'ed them
into working on 16-CPU SMP in their spare time (a lot more interesting
than remapping 168 logic to 20% faster chips). After 16-CPU was
torpedoed, I transfer to SJR on the west coast and was allowed to play
in disk bldg14/engineering and bldg15/product test across the street,
and then bldg15 gets the 1st engineering 3033. I stayed in touch with
them when they start on trout/3090 after the 3033 is out the door.

I've mentioned before that when I 1st got involved with bldg14&15,
they mentioned that they had tried MVS, but it had 15min MTBF requiring
manual re-ipl. Then a few months before 3380s were about to ship, FE
had a set of 57 simulated hardware errors and found that MVS failed
(requiring manual re-ipl) for all 57, and for 2/3rds of the errors
there was no indication of what caused the failure.

posts mentioning getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning 3880 busy, increase in number 3090 channel, MVS
failure
https://www.garlic.com/~lynn/2024e.html#35 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2017g.html#61 What is the most epic computer glitch you have ever seen?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Marketing

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Marketing
Date: 12 Apr, 2026
Blog: Facebook

In the 80s, a co-worker at IBM San Jose Research had left and was doing
lots of contracting work in silicon valley. He had redone a lot of
mainframe C (significantly improving instruction optimization for the
mainframe) and ported the Berkeley chip tools to the mainframe. One day
the local IBM marketing rep stopped by and asked him what he was doing
... and he said mainframe<->SGI ethernet support, so they could use SGI
graphical workstations as front-ends to the mainframe. The IBM rep then
told him he should do token-ring instead, or the customer might find
that their mainframe support wasn't as timely as in the past. I then
get a phone call and had to listen to an hour of four letter words. The
next morning, the senior engineering VP of the (large VLSI chip)
company holds a press conference and says they are moving everything
off the IBM mainframe to SUN servers. IBM then had a bunch of task
forces to decide why silicon valley wasn't using IBM mainframes ... but
the IBM task forces weren't allowed to evaluate some of the real
reasons.

A decade later IBM has one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" in
preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Part of the reorganization was unloading VLSI chip applications to
industry vendors and we get a contract to port an IBM VLSI chip design
tool from the mainframe to SUN workstations (industry standard
platform); a 50k statement Pascal/VS IBM chip design tool to try and
move to SUN. In retrospect, it would have been easier to rewrite it in
"C"; SUN Pascal seemed to have never been used for anything other than
educational instruction. It was easy to drop by SUN hdqtrs, but they
had outsourced Pascal to an organization on the opposite side of the
world (space city, had put up the space station, did get a space
command billcap).

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

past refs
https://www.garlic.com/~lynn/2024g.html#53 IBM RS/6000
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2022c.html#7 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#125 Google Cloud
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
https://www.garlic.com/~lynn/2017e.html#59 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013b.html#21 New HD
https://www.garlic.com/~lynn/2012d.html#64 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2011n.html#56 Virginia M. Rometty elected IBM president
https://www.garlic.com/~lynn/2011h.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2008e.html#24 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2001c.html#53 Varian (was Re: UNIVAC  - Help ??)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CSC, IBM Unbundle, IBM HONE, IBM System/R, SCI, FCS, IBM HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CSC, IBM Unbundle, IBM HONE, IBM System/R, SCI, FCS, IBM HA/CMP
Date: 13 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026.html#66 IBM CSC, IBM Unbundle, IBM HONE, IBM System/R, SCI, FCS, IBM HA/CMP

also in 1988, branch office asked me if I could help SLAC/Gustavson
with SCI standard (various uses including shared memory
multiprocessor, up to 64 cache consistency, some efforts Data General
and Sequent 64 4-i486 boards (256 processors), Convex 64 2-HP-snake
boards (128 processors), SGI 64 4-MIPS boards, etc),
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
https://www.scizzl.com/SGIarguesForSCI.html

Early 90s, IBM Kingston was "funding" Chen Supercomputing
company. Then after I left IBM in the 90s, Chen was CTO at Sequent and
I did some consulting for Chen (this was before IBM bought Sequent and
shut it down).
https://en.wikipedia.org/wiki/Sequent_Computer_Systems

Sequent had SCI NUMA. I was also doing consulting for a financial
outsourcing business and brought in a Sequent 256-processor shared
memory multiprocessor.
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA

SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CICS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CICS
Date: 15 Apr, 2026
Blog: Facebook

As undergraduate, I had been hired fulltime by the univ responsible
for OS/360 (the univ had a 360/67 to replace 709/1401, originally for
tss/360, but that didn't come to fruition, so it ran as a 360/65). Then
the Univ. library got an ONR grant and used some of the money for a
2321 datacell. IBM also selected it as betatest site for the original
CICS program product (result of the IBM unbundling) and CICS support
was added to my tasks.  some CICS history ... website gone 404, but
lives on at the wayback machine
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

1st problem at the univ was CICS wouldn't come up; turned out
(betatest) CICS had some (undocumented) hard coded BDAM dataset options
and the library had created its BDAM datasets with a different set of
options.

Before I graduate, I was hired into a small group in the Boeing CFO
office to help with the formation of Boeing Computer Services
(consolidate all dataprocessing into an independent business unit,
including offering services to non-Boeing entities). I think the Renton
datacenter was possibly the largest in the world, 360/65s arriving
faster than they could be installed, boxes constantly staged in
hallways around the machine room.

When I graduate, I join the IBM Cambridge Scientific Center (instead
of staying with the CFO) ... then less than a decade later, I transfer
out to SJR on the west coast and worked with Jim Gray and Vera Watson
on the original SQL/relational, System/R. Then was able to do tech
transfer ("under the radar" while the company was preoccupied with the
next, new DBMS, "EAGLE") to Endicott for SQL/DS. Then when EAGLE
imploded, there was a request for how fast System/R could be ported to
MVS, which was eventually released as DB2 (originally for decision
support only). All System/R work had been done on VM/370 systems
(starting with VM/145) and met a lot of opposition from the IMS &
EAGLE forces. Did have a joint study with BofA who ordered 60 VM/4341s
for distributed operation.

CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
original sql/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CICS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CICS
Date: 15 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#24 IBM CICS

1988, Nick Donofrio approves HA/6000, originally for NYTimes to move
their newspaper system ("ATEX") off DEC VAXCluster to RS/6000. I
rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Ingres, Informix) with VAXCluster
support in same source base with UNIX.

IBM S/88 (relogo'ed Stratus) Product Administrator started taking us
around to their customers and also had me write a section for the
corporate continuous availability document (it gets pulled when both
AS400/Rochester and mainframe/POK complain they couldn't meet
requirements).  Had coined disaster survivability and geographic
survivability (as counter to disaster/recovery) when out marketing
HA/CMP. One of the visits to 1-800 bellcore development showed that
S/88 would use a century of downtime in one software upgrade, while
HA/CMP had a couple extra "nines" (compared to S/88).

One of the first HA/CMP customer installs was the new Indian
Reservation Casino in Connecticut; it was supposed to have a week of
testing before opening ... but after 24hrs, they decided to open the
doors (based on projected revenue; at the time was largest in the US,
still one of the largest in the country)
https://en.wikipedia.org/wiki/Foxwoods_Resort_Casino#Debt_default

Early Jan92, there was HA/CMP meeting with Oracle CEO and IBM/AWD
executive Hester tells Ellison that we would have 16-system clusters
by mid92 and 128-system clusters by ye92. Mid-jan92, I update FSD on
HA/CMP work with national labs and FSD decides to go with HA/CMP for
federal supercomputers. By end of Jan, we are told that cluster
scale-up is being transferred to Kingston for announce as IBM
Supercomputer (technical/scientific *ONLY*) and we aren't allowed to
work with anything that has more than four systems (we leave IBM a few
months later). A couple weeks later, 17feb1992, Computerworld news
... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

Some speculation that it would have eaten the mainframe in the
commercial market. 1993 industry benchmarks (number of program
iterations compared to the industry MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, (51MIPS/CPU)
RS6000/990 (RIOS chipset) : (1-CPU) 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CICS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CICS
Date: 15 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#24 IBM CICS
https://www.garlic.com/~lynn/2026b.html#25 IBM CICS

early 80s, I got the HSDT project, T1 and faster computer links
(terrestrial and satellite) and battles with the communication group
(the 60s had the 2701 that supported T1 links; 70s issues with VTAM
capped controllers at 56kbits; early 80s, FSD came out with the S/1
ZIRPEL T1 card for gov. customers whose 2701s were failing). Also
working with the NSF director and was supposed to get $20M to
interconnect the NSF supercomputer centers. Then congress cuts the
budget, some other things happened and eventually there was an RFP
released (in part based on what we already had running). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid). As
regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.

Somebody was collecting internal SNA/VTAM misinformation email about
justification for converting internal network to SNA/VTAM as well as
using SNA/VTAM for NSFNET and forwarded it to us ... old archive post
(email heavily clipped and redacted to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM Internal Network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe
Date: 16 Apr, 2026
Blog: Facebook

Early 80s, I got the HSDT project, T1 and faster computer links
(terrestrial and satellite) and battles with the communication group
(60s had the 2701 that supported T1 links, 70s issues with VTAM that
capped controllers at 56kbits, early 80s FSD came out with the S/1
ZIRPEL T1 card for gov. customers whose 2701s were failing). Also
working with the NSF director and was supposed to get $20M to
interconnect the NSF supercomputer centers. Then congress cuts the
budget, some other things happened and eventually an RFP was released
(in part based on what we already had running). NSF 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid). As
regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.

IBM mainframe TCP/IP was somewhat kneecapped when it was released,
getting aggregate 44kbytes/sec while using a large amount of 3090
CPU. I then add RFC1044 support and in some tuning tests at Cray
Research, between a Cray and a 4341, got nearly full 4341 channel
sustained throughput using only a modest amount of 4341 CPU (something
like a 500 times improvement in bytes transferred per instruction executed).
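
The "500 times" figure is a bytes-per-instruction ratio; a minimal
Python sketch of the metric (the throughput and CPU-consumption inputs
are placeholders for illustration, not the measured values from the
Cray tests):

# bytes transferred per instruction executed, before and after RFC1044;
# all four inputs below are illustrative placeholders
def bytes_per_instruction(bytes_per_sec, mips_consumed):
    return bytes_per_sec / (mips_consumed * 1e6)

base    = bytes_per_instruction(44e3, 10.0)  # ~44KB/sec using most of a 3090 CPU (assumed ~10 MIPS consumed)
rfc1044 = bytes_per_instruction(1e6, 0.45)   # near channel speed on a sliver of a 4341 (assumed)
print(rfc1044 / base)                        # ~500x with these placeholder inputs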

Somebody was collecting internal SNA/VTAM misinformation email about
justification for converting internal network to SNA/VTAM as well as
using SNA/VTAM for NSFNET and forwarded it to us ... old archive post
(email heavily clipped and redacted to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109

1988, Nick Donofrio approves HA/6000, originally for NYTimes to move
their newspaper system ("ATEX") off DEC VAXCluster to RS/6000. I
rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support
in the same source base as their UNIX support.

The IBM S/88 (relogo'ed Stratus) Product Administrator started taking
us around to their customers and also had me write a section for the
corporate continuous availability document (it gets pulled when both
AS400/Rochester and mainframe/POK complain they couldn't meet the
requirements). Had coined the terms disaster survivability and
geographic survivability (as counter to disaster/recovery) when out
marketing HA/CMP. One of the visits was to Bellcore 1-800 development,
where the comparison showed that a single S/88 software upgrade would
use up a century's worth of its downtime budget, while HA/CMP came out
a couple extra "nines" better (compared to S/88).

One of the first HA/CMP customer installs was a new Indian reservation
casino in Connecticut; it was supposed to have a week of testing before
opening ... but after 24hrs, they decided to open the doors (based on
projected revenue; at the time it was the largest in the US, still one
of the largest in the country)
https://en.wikipedia.org/wiki/Foxwoods_Resort_Casino#Debt_default

Early Jan92, there was an HA/CMP meeting with the Oracle CEO, where
IBM/AWD executive Hester tells Ellison that we would have 16-system
clusters by mid92 and 128-system clusters by ye92. Mid-Jan92, I update FSD on
HA/CMP work with national labs and FSD decides to go with HA/CMP for
federal supercomputers. By end of Jan, we are told that cluster
scale-up is being transferred to Kingston for announce as IBM
Supercomputer (technical/scientific *ONLY*) and we aren't allowed to
work with anything that has more than four systems (we leave IBM a few
months later). A couple weeks later, 17feb1992, Computerworld news
... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

Some speculation that cluster scale-up would have eaten the mainframe in the
commercial market. 1993 industry benchmarks (number of program
iterations compared to the industry MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, (51MIPS/CPU)
RS6000/990 (RIOS chipset) : (1-CPU) 126MIPS, 16-systems: 2BIPS, 128-systems: 16BIPS

Sometime after leaving IBM, I was brought in as a consultant to a small
client/server startup. Two of the Oracle people that were in the
Ellison/Hester meeting are there, responsible for something they called
"commerce server", and they wanted to do payment transactions. The
startup had also invented this technology they called "SSL" that they
wanted to use. I was responsible for everything between the commerce
servers (what is now frequently called e-commerce) and the payment
networks. I then do a talk, "Why Internet Isn't Business Critical
Dataprocessing" (based on the processes, documentation and software I
had to do for e-commerce), that (Internet, IETF) RFC standards editor
Postel sponsored at ISI/USC.
https://en.wikipedia.org/wiki/Jon_Postel
He also had me help with the periodically re-released STD1.

Also, 1988, the branch office asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes the fibre-channel standard ("FCS", including some stuff I had
done in 1980; initially 1gbit transfer, full-duplex, aggregate
200mbyte/sec). Then IBM mainframe releases some serial stuff (when it
was already obsolete) as ESCON, initially 10mbyte/sec, upgrading to
17mbyte/sec. Then some POK engineers become involved with "FCS" and
define a heavy-weight protocol that drastically cuts native throughput,
which eventually ships as FICON. Around 2010 there was a max-configured
z196 public "Peak I/O" benchmark getting 2M IOPS using 104 FICON (20K
IOPS/FICON). About the same time, an "FCS" was announced for E5-2600
server blades claiming over a million IOPS (two such FCS having higher
throughput than the 104 FICON, which themselves run over FCS). Note IBM
docs recommend the SAPs (system assist processors that do the actual
I/O) be kept to 70% CPU ... or 1.5M IOPS.
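
The per-link arithmetic from those figures (simple division in Python,
using only the numbers quoted above):

# per-link IOPS from the z196 "Peak I/O" benchmark
z196_peak   = 2_000_000
ficon_links = 104
print(z196_peak / ficon_links)   # ~19,230 IOPS per FICON channel
# each E5-2600 blade FCS claimed over 1M IOPS, so two of them match or
# exceed the 104-FICON peak; with the 70% SAP guideline the usable
# mainframe figure is closer to 1.5M IOPS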

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM Internal Network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HA/CMP postings
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning availability
https://www.garlic.com/~lynn/submain.html#available
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ecommerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
1980 channel-extender work for STL (now SVL) posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

How We Put It Together

From: Lynn Wheeler <lynn@garlic.com>
Subject: How We Put It Together
Date: 16 Apr, 2026
Blog: Facebook

Around 1980, from the internal IBM conference "How We Put It Together"
... they gave up before they determined the level of MVS required and
how many other components would have to be changed. Just the following
would have taken at least a weekend ... with the possibility that any
problems would require reversing the process (and then possibly a
repeat):

After the system had been installed for several months, the 3600
system was enhanced to support dial lines as well as leased lines.

This announcement was particularly attractive to the customer since it
had two remote 3600 systems that each required 1000 mile leased lines
which were only used for 30 minutes (maximum) a day.

After investigation, it was determined that the customer would have to
change the level of microcode in the 3600 controller to obtain the new
function.

This required the customer to

reassemble his 3600 application programs (APBs)

reassemble his 3600 SYSGENS (CPGENs)

install and use the new microcode

use a new level of 3600 starter diskette.

However, the new level of microcode required a new level of Subsystem
Support Services (SSS) and Program Validation Services (PVS).

The new level of SSS required a new level of VTAM.

The new level of VTAM required

a new level of NCP

reassembly of the customer written VTAM programs.

... snip ...

A year or so later, got the "HSDT" project, T1 and faster computer
links (terrestrial and satellite) and battles with the communication
group (60s had the 2701 that supported T1 links, 70s issues with VTAM
that capped controllers at 56kbits, early 80s FSD eventually came out
with the S/1 ZIRPEL T1 card for gov. customers whose 2701s were
failing); had to resort to mostly non-IBM hardware. Also working with
the NSF director and was supposed to get $20M to interconnect the NSF
supercomputer centers (before corporate politics blocked any
participation). NSF 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director
tried to help by writing the company a letter (3Apr1986, NSF Director
to IBM Chief Scientist and IBM Senior VP and director of Research,
copying IBM CEO) with support from other gov. agencies ... but that
just made the internal politics worse (as did claims that what we
already had operational was at least 5yrs ahead of the winning bid). As
regional networks connect in, NSFnet becomes the NSFNET backbone,
precursor to the modern internet.

IBM mainframe TCP/IP was somewhat kneecapped when it was released,
getting aggregate 44kbytes/sec while using a large amount of 3090
CPU. I then add RFC1044 support and in some tuning tests at Cray
Research, between a Cray and a 4341, got nearly full 4341 channel
sustained throughput using only a modest amount of 4341 CPU (something
like a 500 times improvement in bytes transferred per instruction executed).

Somebody was collecting internal SNA/VTAM misinformation email about
justification for converting internal network to SNA/VTAM as well as
using SNA/VTAM for NSFNET and forwarded it to us ... old archive post
(email heavily clipped and redacted to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109

Concurrently, the IBM branch office for a baby bell and the Boca S/1
group talk me into taking the baby bell's VTAM/NCP emulation implemented
in S/1s (much better price, performance, feature, function) and turning
it into a TYPE-1 product, with later migration to RIOS/RS6000. Old
archived post with pieces of the presentation I gave to a Raleigh SNA
ARB meeting (the executive running the ARB wanted to know who allowed
me to talk):
https://www.garlic.com/~lynn/99.html#67
and part of baby bell presentation at COMMON Spring '86 Conference
(session 43U, Series/1 As A Front End Processor)
https://www.garlic.com/~lynn/99.html#70

Both IBM groups had lots of familiarity with the communication group's
internal political tactics and tried to wall them all off, but what the
communication group then did to torpedo the effort can only be
described as "fact is stranger than fiction". Trivia: the IMS group was
interested in it for "hot standby"; Vern Watts:
https://www.mercurynews.com/obituaries/vernice-lee-watts/

A large 3090 IMS "hot standby" configuration might have 40k-60k
terminals; IMS could fall over to the standby in a few minutes, but the
hot-standby 3090's VTAM would take 90mins or more to get all the
terminal sessions back up. The S/1 emulation could keep "shadow/copy"
duplicate sessions with the hot-standby 3090.
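
Rough arithmetic behind the 90+ minute session takeover (a Python
one-liner; the terminal count is taken as the midpoint of the 40k-60k
range quoted above):

# VTAM session re-establishment rate implied by the figures above
terminals        = 50_000        # assumed midpoint of the 40k-60k figure
takeover_minutes = 90
print(terminals / (takeover_minutes * 60))   # ~9 session setups per second

Even at roughly nine session setups a second, re-establishing every
terminal session through VTAM dominates the fall-over time ... which is
what the S/1 "shadow/copy" sessions avoided.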

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM Internal Network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Some recent Posts mentioning S/1 VTAM/NCP emulation
https://www.garlic.com/~lynn/2026.html#40 IBM HSDT, Series/1 T1
https://www.garlic.com/~lynn/2025e.html#51 IBM VTAM/NCP
https://www.garlic.com/~lynn/2025d.html#47 IBM HSDT and SNA/VTAM
https://www.garlic.com/~lynn/2025c.html#70 Series/1 PU4/PU5 Support
https://www.garlic.com/~lynn/2025b.html#79 IBM 3081
https://www.garlic.com/~lynn/2025b.html#43 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#40 IBM APPN
https://www.garlic.com/~lynn/2025.html#109 IBM Process Control Minicomputers
https://www.garlic.com/~lynn/2025.html#97 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#6 IBM 37x5
https://www.garlic.com/~lynn/2024f.html#60 IBM 3705
https://www.garlic.com/~lynn/2024f.html#48 IBM Telecommunication Controllers
https://www.garlic.com/~lynn/2024d.html#110 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023c.html#62 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#60 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023b.html#3 IBM 370
https://www.garlic.com/~lynn/2022h.html#98 IBM 360
https://www.garlic.com/~lynn/2022h.html#50 SystemView
https://www.garlic.com/~lynn/2022e.html#32 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022c.html#79 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2022.html#120 Series/1 VTAM/NCP
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021j.html#14 IBM SNA ARB
https://www.garlic.com/~lynn/2021i.html#83 IBM Downturn
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2021c.html#91 IBM SNA/VTAM (& HSDT)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Silicon Valley Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Silicon Valley Lab
Date: 17 Apr, 2026
Blog: Facebook

The original name was going to be Coyote Lab ... after the IBM
convention of naming for the closest post office. That spring I was in
DC for spring vacation with the kids and the SanFran Coyote
professional ladies organization was demonstrating on the steps of the
Capitol. Within a day or two ... the IBM name was changed to Santa
Teresa Lab (for the closest main street).

I had transferred out to SJR and worked with Jim Gray and Vera Watson
on the original SQL/relational, System/R (all work being done on
VM/370) and also got to wander around datacenters in silicon valley,
including disk bldg14/engineering and bldg15/product test (across the
street). They were doing 7x24, prescheduled, stand-alone testing and
mentioned that they had recently tried MVS, but it had 15min MTBF (in
that environment). I offered to rewrite the I/O supervisor, making it
bullet-proof and never fail, allowing any amount of on-demand
concurrent testing and greatly improving productivity. Bldg15 gets the
1st engineering 3033 (outside POK 3033 processor engineering) and since
testing only took a percent or two of CPU, we scrounge up a 3830 and
3330 and set up our own private online service.

1980, STL was bursting at the seams and moving 300 people/3270s from
the IMS group to offsite bldgs (a complex just south of the main plant
site) with computing services back to the STL datacenter. They had
tried "remote 3270" but found the human factors unacceptable. I get
con'ed into doing channel-extender support so channel-attached 3270
controllers could be positioned at the offsite bldgs ... with no
difference in human factors (compared to inside STL). Actually it was
slightly better: STL had spread the channel-attached 3270 controllers
across all the channels shared with the 3830 DASD controllers. Turns
out the standard 3270 channel controllers had excessive channel busy;
placing them offsite behind the channel-extender (which had much lower
channel busy for the same amount of 3270 transmission) reduced channel
busy and improved system throughput by 10-15%. There was some
consideration of moving all 3270 controllers (including those inside
STL) behind channel-extenders, for the 10-15% improvement on all
systems.
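
A minimal sketch of why lower controller channel-busy shows up as
system throughput (in Python; the channel-busy fractions are
assumptions for illustration, not the measured STL numbers):

# illustrative only: assumed channel-busy fractions, not measured STL data
busy_3270_local    = 0.15   # assumed: local 3270 controllers tying up the channel
busy_3270_extender = 0.03   # assumed: same 3270 traffic behind the channel-extender

# crude model: DASD I/O on a shared channel only gets the channel time
# the 3270 traffic leaves over, so relative throughput scales roughly with it
improvement = (1 - busy_3270_extender) / (1 - busy_3270_local) - 1
print(f"{improvement:.0%}")   # ~14%, in the 10-15% range reported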

IMS and the next great new DBMS ("EAGLE") forces somewhat accounted for
not releasing System/R as a product (although there was a joint study
with BofA, which was ordering 60 VM/4341s for System/R). When Jim Gray
departs for Tandem, he asked me to pick up BofA support and IMS
consulting. Was able to do tech transfer to Endicott for SQL/DS ("under
the radar", while the company was preoccupied with "EAGLE"). Later,
after "EAGLE" imploded, there was a request for how fast System/R could
be ported to MVS ... eventually released as DB2 (originally for
decision support only).

Getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Silicon Valley Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Silicon Valley Lab
Date: 17 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#29 IBM Silicon Valley Lab

1988, an IBM branch office asks if I could help LLNL (national lab)
standardize some serial stuff they were working with, which quickly
becomes the fibre-channel standard ("FCS", including some stuff I had
done in 1980; initially 1gbit transfer, full-duplex, aggregate
200mbyte/sec). Then IBM mainframe releases some serial stuff (when it
was already obsolete) as ESCON, initially 10mbyte/sec, upgrading to
17mbyte/sec. Then some POK engineers become involved with "FCS" and
define a heavy-weight protocol that drastically cuts native throughput,
which eventually ships as FICON. Around 2010 there was a max-configured
z196 public "Peak I/O" benchmark getting 2M IOPS using 104 FICON (20K
IOPS/FICON). About the same time, an "FCS" was announced for E5-2600
server blades claiming over a million IOPS (two such FCS having higher
throughput than the 104 FICON, which themselves run over FCS). Note IBM
docs recommend the SAPs (system assist processors that do the actual
I/O) be kept to 70% CPU ... or 1.5M IOPS. Also, no CKD DASD have been
made for decades, all being simulated on industry-standard fixed-block
devices.

Also 1988, Nick Donofrio approves HA/6000, originally for NYTimes to
move their newspaper system ("ATEX") off DEC VAXCluster to RS/6000
(running project out at Los Gatos lab). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support
in the same source base as their UNIX support (planning on using
Hursley 9333 for mid-range and full FCS for both technical and
commercial scale-up).

The IBM S/88 (relogo'ed Stratus) Product Administrator started taking
us around to their customers and also had me write a section for the
corporate continuous availability document (it gets pulled when both
AS400/Rochester and mainframe/POK complain they couldn't meet the
requirements). Had coined the terms disaster survivability and
geographic survivability (as counter to disaster/recovery) when out
marketing HA/CMP. One of the visits was to Bellcore 1-800 development,
where the comparison showed that a single S/88 software upgrade would
use up a century's worth of its downtime budget, while HA/CMP came out
a couple extra "nines" better (compared to S/88).

One of the first HA/CMP customer installs was a new Indian reservation
casino in Connecticut; it was supposed to have a week of testing before
opening ... but after 24hrs, they decided to open the doors (based on
projected revenue; at the time it was the largest in the US, still one
of the largest in the country)
https://en.wikipedia.org/wiki/Foxwoods_Resort_Casino#Debt_default

Early Jan92, there was an HA/CMP meeting with the Oracle CEO, where
IBM/AWD executive Hester tells Ellison that we would have 16-system
clusters by mid92 and 128-system clusters by ye92. Mid-Jan92, I update FSD on
HA/CMP work with national labs and FSD decides to go with HA/CMP for
federal supercomputers. By end of Jan, we are told that cluster
scale-up is being transferred to Kingston for announce as IBM
Supercomputer (technical/scientific *ONLY*) and we aren't allowed to
work with anything that has more than four systems (we leave IBM a few
months later). A couple weeks later, 17feb1992, Computerworld news
... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

Some speculation that cluster scale-up would have eaten the mainframe in the
commercial market. 1993 industry benchmarks (number of program
iterations compared to the industry MIPS/BIPS reference platform):

ES/9000-982 : 8CPU 408MIPS, (51MIPS/CPU)
RS6000/990 (RIOS chipset) : (1-CPU) 126MIPS, 16-systems: 2BIPS,
128-systems: 16BIPS

Fibre-Channel Standard ("FCS") and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Silicon Valley Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Silicon Valley Lab
Date: 18 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#29 IBM Silicon Valley Lab
https://www.garlic.com/~lynn/2026b.html#30 IBM Silicon Valley Lab

One of my hobbies when I graduated and 1st joined IBM (Cambridge
Scientific Center) was enhanced production operating systems for
internal datacenters, and the online sales&marketing support HONE
system (CP67/CMS virtual machine datacenters) was one of the 1st (and
long time) customers. The 23Jun1969 unbundling started charging for
(application) software (managed to make the case that kernel software
was still free), SE services, maintenance, etc. SE training had
included being part of a group onsite at the customer, but they
couldn't figure out how not to charge customers for trainee SEs
onsite. The solution was CP67/CMS datacenters around the US where
branch people could login and practice with guest operating systems
running in virtual machines.

CSC had also ported APL\360 to CP67/CMS as CMS\APL and HONE started
providing CMS\APL-based sales&marketing support applications which
came to dominate all HONE activity (with guest operating system
practice just withering away). A little before I transferred to SJR,
all US HONE datacenters were consolidated in Palo Alto (trivia: when
FACEBOOK 1st moved into silicon valley, it was a new bldg built next
door to the former US HONE consolidated datacenter).

The announce for adding virtual memory to all 370s also included doing
the CP67->VM370 morph (but a lot of features were simplified or
dropped, including the "wheeler scheduler" and multiprocessor support).
Then I start adding stuff back into a VM370R2 base for my internal
CSC/VM (including a kernel reorg as part of adding multiprocessor
support back in). Then I add multiprocessor support back into the
VM370R3-based CSC/VM, initially for the consolidated US HONE so they
can upgrade all their 158s&168s to 2-CPU systems (getting twice the
throughput of the 1-CPU systems).

There was a joke that I worked 1st shift in SJR, 2nd shift in
bldgs14&15, 3rd shift in STL, and 4th shift/weekends at HONE.

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM 23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource management, "wheeler" scheduler posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Silicon Valley Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Silicon Valley Lab
Date: 18 Apr, 2026
Blog: Facebook

re:
https://www.garlic.com/~lynn/2026b.html#29 IBM Silicon Valley Lab
https://www.garlic.com/~lynn/2026b.html#30 IBM Silicon Valley Lab
https://www.garlic.com/~lynn/2026b.html#31 IBM Silicon Valley Lab

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems
Interconnection standards to become the global protocol for computer
networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt

Meanwhile, IBM representatives, led by the company's capable director
of standards, Joseph De Blasi, masterfully steered the discussion,
keeping OSI's development in line with IBM's own business
interests. Computer scientist John Day, who designed protocols for the
ARPANET, was a key member of the U.S. delegation. In his 2008 book
Patterns in Network Architecture (Prentice Hall), Day recalled that IBM
representatives expertly intervened in disputes between delegates
"fighting over who would get a piece of the pie.... IBM played them
like a violin. It was truly magical to watch."


... snip ...

A CSC member responsible for the CP67-based Science Center wide-area
network ... one of the CSC inventors of GML in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.


... snip ...

... morphs into the IBM internal network (RSCS&VNET, larger than
ARPANET/Internet from the beginning until sometime mid/late 80s, about
the time that it was forced to convert to SNA/VTAM) and the technology
was also used for the corporate-sponsored Univ. BITNET.
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.


... snip ...

newspaper article about some of Edson's Internet & TCP/IP IBM battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed, Internet &
TCP/IP) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Ed had transferred w/me out to SJR, 2nd half of the 70s. Early 80s, got
HSDT, T1 and faster computer links (terrestrial and satellite) and
battles with the communication group (60s IBM had the 2701 that
supported T1, 70s issues with SNA/VTAM capped links at 56kbits, early
80s FSD came out with the S/1 ZIRPEL T1 card for gov. customers whose
2701s were failing). Also working with the NSF director and was
supposed to get $20M to interconnect the NSF supercomputer centers.
Then congress cuts the budget, some other things happened and
eventually an RFP was released (in part based on what we already had
running).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Posts mentioning GML, SGML, HTML
https://www.garlic.com/~lynn/submain.html#sgml
IBM Internal Network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

