List of Archived Posts

2025 Newsgroup Postings (05/11 - 07/25)

Interactive Response
Interactive Response
Interactive Response
Interactive Response
Interactive Response
Interactive Response
Interactive Response
Interactive Response
Interactive Response
Living Wage
IBM System/R
IBM System/R
IBM 4341
IBM 4341
IBM 4341
Cluster Supercomputing
Cluster Supercomputing
IBM System/R
Is Parallel Programming Hard, And, If So, What Can You Do About It?
APL and HONE
Is Parallel Programming Hard, And, If So, What Can You Do About It?
Is Parallel Programming Hard, And, If So, What Can You Do About It?
IBM 8100
IBM 4361 & DUMPRX
IBM AIX
360 Card Boot
IBM Downfall
IBM 360 Programming
IBM 360 Programming
360 Card Boot
Is Parallel Programming Hard, And, If So, What Can You Do About It?
IBM Downfall
IBM Downfall
IBM Downfall
TCP/IP, Ethernet, Token-Ring
IBM Downfall
IBM Downfall
IBM Mainframe
Is Parallel Programming Hard, And, If So, What Can You Do About It?
IBM 3090
IBM & DEC DBMS
SNA & TCP/IP
SNA & TCP/IP
Terminal Line Speed
CP67 at NPG
IBM Germany and 370/125
IBM Germany and 370/125
IBM 3270 Terminals
IBM Technology
IBM And Amdahl Mainframe
IBM RS/6000
IBM Basic Beliefs
IBM 370 Workstation
IBM 3270 Terminals
IBM 3270 Terminals
Univ, 360/67, OS/360, Boeing, Boyd
IBM OS/2
IBM Future System And Follow-on Mainframes
IBM Innovation
Why I've Dropped In
IBM Innovation
IBM Future System And Follow-on Mainframes
IBM Future System And Follow-on Mainframes
mainframe vs mini, old and slow base and bounds, Why I've Dropped In
IBM Vintage Mainframe
IBM 370 Workstation
IBM 370 Workstation
IBM 370 Workstation
Sun Microsystems
Tandem Computers
Series/1 PU4/PU5 Support
IBM Networking and SNA 1974
IBM RS/6000
IBM Networking and SNA 1974
IBM RS/6000
MVS Capture Ratio
IBM OCO-wars
IBM 4341
IBM 4341
IBM System/360
IBM CICS, 3-tier
IBM CICS, 3-tier
IBM HONE
IBM HONE
IBM SNA
IBM SNA
IBM SNA
The Rise And Fall Of Unix
IBM SNA
Open-Source Operating System
4th Generation Programming Language
FCS, ESCON, FICON
FCS, ESCON, FICON
FCS, ESCON, FICON
FCS, ESCON, FICON
FCS, ESCON, FICON
5-CPU 370/125
HSDT Link Encryptors
5-CPU 370/125
CICS, 370 TOD
When Big Blue Went to War
More 4341
More 4341
IBM Innovation
IBM Innovation
IBM Innovation
IBM 3380 and 3880
IBM San Jose Disk
IBM OS/360
IBM San Jose Disk
IBM OS/360
IBM OS/360
IBM Virtual Memory (360/67 and 370)
IBM VNET/RSCS
IBM VNET/RSCS
IBM VNET/RSCS
Internet
Internet
Library Catalog

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 11 May, 2025
Blog: Facebook
Some other details (from recent post) ... related to quarter second response
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO

Early MVS days, CERN did an MVS/TSO comparison with VM370/CMS, with a 1974 presentation of the analysis at SHARE ... inside IBM, copies of the presentation were stamped "IBM Confidential - Restricted" (2nd highest security classification), only available on a "need to know" basis (for those that didn't directly get a copy at SHARE)

MVS/TSO trivia: late 70s, SJR got a 370/168 for MVS and a 370/158 for VM/370 (replacing an MVT 370/195) and several strings of 3330s, all with two-channel-switch 3830s connecting to both systems ... but strings & controllers were labeled MVS or VM/370, with strict rules that MVS make no use of the VM/370 controllers/strings. One morning, an MVS 3330 was placed on a 3330 string and within a few minutes operations were getting irate phone calls from all over the bldg about what had happened to response. Analysis showed that the problem was that the MVS 3330 (OS/360 filesystem's extensive use of multi-track search locking up the controller and all drives on that controller) had been placed on a VM/370 3330 string, and there were demands that the offending MVS 3330 be moved. Operations said they would have to wait until offshift. Then a single-pack VS1 system (highly optimized for VM370 and hand-shaking) was put up on an MVS string and brought up on the loaded 370/158 VM370 ... and was able to bring the MVS 168 to a crawl, alleviating a lot of the problems for VM370 users (operations almost immediately agreed to move the offending MVS 3330).

Trivia: one of my hobbies after joining IBM was highly optimized operating systems for internal datacenters. In the early 80s, there were increasing studies showing quarter second response improved productivity. 3272/3277 had .086sec hardware response. Then 3274/3278 was introduced with lots of the 3278 hardware moved back into the 3274 controller, cutting 3278 manufacturing costs but significantly driving up coax protocol chatter ... increasing hardware response to .3sec-.5sec depending on amount of data (impossible to achieve quarter second). Letters to the 3278 product administrator complaining about interactive computing got a response that the 3278 wasn't intended for interactive computing but data entry (sort of an electronic keypunch). 3272/3277 required .164sec system response (for a human to see quarter second). Fortunately I had numerous IBM systems in silicon valley with (90th percentile) .11sec system response. I don't believe any TSO users ever noticed the 3278 issues, since they rarely ever saw even one second system response. Later, IBM/PC 3277 emulation cards had 4-5 times the upload/download throughput of 3278 emulation cards.
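
Spelling out that arithmetic (Python; the .25sec target and the .086/.3/.11sec figures are the ones quoted above, the rest is just subtraction and the function name is mine):

# perceived ("human") response = system response + terminal hardware response
TARGET = 0.25                      # quarter-second target from the studies

def system_budget(hw_response, target=TARGET):
    # system response budget left after the terminal hardware response
    return target - hw_response

print(system_budget(0.086))        # 3272/3277: 0.164s system response needed
print(system_budget(0.30))         # 3274/3278 best case: negative, i.e. impossible
print(0.086 + 0.11)                # 3277 + .11s system response = 0.196s perceived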

Future System was going to completely replace 370s:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

and internal politics were killing off 370 efforts (the lack of new 370s is credited with giving the clone 370 makers their market foothold). When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. The head of POK was also lobbying corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission, but has to recreate a development group from scratch).

Endicott starts on XEDIT for release to customers. I send Endicott email asking whether they might consider one of the internal 3270 fullscreen editors, which were much more mature, had more function, and were faster. RED, NED, XEDIT, EDGAR, etc. had similar capability ("EDIT" was the old CP67/CMS editor) ... but a simple cpu usage test that I did (summary from '79) of the same set of operations on the same file in each editor showed the following cpu use (at the time, "RED" was my choice):


RED        2.91/3.12
EDIT       2.53/2.81
NED       15.70/16.52
XEDIT     14.05/14.88
EDGAR      5.96/6.45
SPF        6.66/7.52
ZED        5.83/6.52


Endicott's reply was that it was the RED-author's fault that it was so much better than XEDIT and therefore it should be his responsibility to bring XEDIT up to the RED level.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
internal CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

posts mentioning RED, NED, XEDIT, EDGAR, SPF, ZED:
https://www.garlic.com/~lynn/2024.html#105 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2011p.html#112 SPF in 1978
https://www.garlic.com/~lynn/2011m.html#41 CMS load module format
https://www.garlic.com/~lynn/2006u.html#26 Assembler question
https://www.garlic.com/~lynn/2003d.html#22 Which Editor

some posts mentioning .11sec system response and 3272/3277
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024f.html#12 3270 Terminals
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#92 IBM, Unix, editors
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#92 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2017e.html#26 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014m.html#127 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2014g.html#23 Three Reasons the Mainframe is in Trouble
https://www.garlic.com/~lynn/2013l.html#65 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012n.html#37 PDP-10 and Vax, was System/360--50 years--the future?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2011d.html#53 3270 Terminal
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 11 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response

Trivia: in the 90s, i86 chip makers implemented on-the-fly, pipelined translation of i86 instructions to RISC micro-ops for execution, largely negating the performance difference with RISC systems. Also Somerset/AIM (Apple, IBM, Motorola) was formed to do a single chip 801/RISC (with the Motorola 88k bus and cache supporting multiprocessor configurations). The industry benchmark is the number of program iterations compared to a reference platform (for MIPS rating); 1999:
• single core IBM PowerPC 440, 1BIPS
• single core Pentium3, 2.054BIPS


and Dec2000:
• IBM z900, 16 processors 2.5BIPS (156MIPS/processor)

2010:
• IBM z196, 80 processors, 50BIPS (625MIPS/processor)
• E5-2600 server blade (two 8-core XEON chips) 500BIPS (30BIPS/core)


Note: no CKD DASD has been made for decades, all being emulated on industry-standard fixed-block devices (increasingly SSD).

Cache miss/memory latency, when measured in count of processor cycles, is similar to 60s disk latency when measured in count of 60s processor cycles (memory is the new disk). The current equivalents to 60s multitasking are things like out-of-order execution, branch prediction, speculative execution, etc (and to further improve things, translating CPU instructions into RISC micro-ops for actual execution scheduling). Note that individual instructions can take multiple cycles (translation, being broken into multiple parts, etc) ... but there is a large amount of concurrent pipelining ... so the processor can complete one instruction per cycle, even while it might take 10-50 cycles to process each individual instruction.
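
A minimal sketch of that throughput-vs-latency point (Python; the 10/50-cycle latencies are the figures above, the 1000-instruction run and the no-stall assumption are mine):

# with a full pipeline, completions average ~1/cycle once the pipe fills,
# even though each individual instruction takes 'latency' cycles end-to-end
def total_cycles(instructions, latency):
    return latency + (instructions - 1)   # first result after 'latency', then 1/cycle

for latency in (10, 50):
    cycles = total_cycles(1000, latency)
    print(latency, cycles, round(1000 / cycles, 3))   # ~0.99 and ~0.95 instructions/cycle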

60s undergraduate, took 2 credit hr intro to fortran/computers; at the end of the semester was hired to reimplement 1401 MPIO on 360/30. Univ was getting a 360/67 for tss/360, replacing 709/1401, and temporarily got a 360/30 replacing the 1401. Univ. shutdown the datacenter on weekends and I had the whole place dedicated, although 48hrs w/o sleep affected monday classes. I was given a pile of hardware & software manuals and got to design/implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks had a 2000 card assembler program. The 360/67 arrived within a year of taking the intro class (tss/360 never came to production, so it ran as a 360/65) and I was hired fulltime responsible for os/360. Student fortran ran under a second on the 709 (tape->tape) but over a minute on os/360. I install HASP and cut the time in half. I then start redoing the MFTR11 STAGE2 SYSGEN, carefully placing datasets and PDS members to optimize seeks and multi-track search, cutting another 2/3rds to 12.9secs. Student Fortran never got better than the 709 until I install UofWaterloo WATFOR (single step monitor, batch card tray of jobs, ran at 20,000 cards/min on 360/65).

CSC comes out to install CP67/CMS (precursor to VM370; 3rd site after CSC itself and MIT Lincoln Labs). I mostly get to play with it during my weekend dedicated time, starting out rewriting pathlengths for running the os360 virtual machine; the os360 job stream ran 322secs on real hardware and initially 856secs in virtual machine (CP67 CPU 534secs); after a couple months I have reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a peak of 270).
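
A toy sketch of the ordered-queuing idea (Python; purely illustrative, not the CP67 code; the 9 slots/revolution and the sample queue are made-up numbers, not 2301 geometry):

SLOTS = 9   # assume 9 page slots pass the heads per revolution (made-up geometry)

def revolutions(requests):
    # revolutions to service the queue in the given order: rotational delay to
    # reach each slot, plus 1/SLOTS of a revolution for the transfer itself
    pos, revs = 0, 0.0
    for slot in requests:
        wait = (slot - pos) % SLOTS
        revs += (wait + 1) / SLOTS
        pos = (slot + 1) % SLOTS
    return revs

queue = [5, 1, 7, 3, 0, 8, 4, 2, 6]          # pending page requests, FIFO arrival order
print(revolutions(queue))                    # FIFO: ~5.8 revolutions for 9 transfers
print(revolutions(sorted(queue)))            # rotational order: all 9 in 1.0 revolution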

CP67 originally came with automatic terminal-type identification for 1052 & 2741 terminals. Univ. had some TTYs, so I integrate in ASCII support (including in the auto terminal-type support). I then want to have a single dial-in number ("hunt group") for all terminals, but IBM had taken a short-cut: while the terminal-type port scanner could be changed, the baud rate was hardwired for each port. That starts a program to do a clone controller: build a channel interface board for an Interdata/3 programmed to emulate the IBM controller ... but including port auto-baud (later upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces). Interdata (and later Perkin-Elmer) sell it as a clone controller, and four of us get written up as responsible for (some part of) the IBM clone controller business.

I then add terminal support to HASP for MVT18, with an editor emulating "CMS EDIT" syntax for simple CRJE.

In prior life, my wife was in the GBURG JES group reporting to Crabby & one of the ASP "catchers" for JES3; also co-author of the JESUS (JES Unified System) specification (all the features of JES2 & JES3 that the respective customers couldn't live w/o; for various reasons it never came to fruition). She was then con'ed into transferring to POK, responsible for mainframe loosely-coupled architecture (Peer-Coupled Shared Data). She didn't remain long because 1) periodic battles with the communication group trying to force her to use VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX), except for IMS hot-standby (she has a story about asking Vern Watts who he would ask for permission; he replied "nobody" ... he would just tell them when it was all done).

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
360&370 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
IBM Cambridge Science Center
https://www.garlic.com/~lynn/subtopic.html#545tech
Peer-Coupled Shared Data Architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response

other trivia: I got to wander around silicon valley datacenters after transferring to SJR, including disk bldg14/engineering and bldg15/product test, across the street. They were running prescheduled, 7x24, stand-alone testing and mentioned that they had recently tried MVS ... but it had 15min MTBF (in that environment) requiring manual re-ipl. I offer to rewrite the I/O supervisor to be bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. I also drastically cut the queued I/O redrive pathlength (1/10th the MVS time from interrupt to redrive SIOF) and significantly enhance multi-channel path efficiency (in addition to never fail).

IBM had a guideline that a new generation product had to have performance (should be more, but) not more than 5% less than the previous generation. Initial testing of the 3880 showed it failed. It supported "data streaming" channels (previous channels were end-to-end hand-shake for every byte; data-streaming cut overhead by going to multiple bytes per hand-shake, higher datarate, but less processing) ... and they were able to get away with a much slower processor than in the 3830. However, the slower processing significantly increased controller channel busy for every other kind of operation, including from end of channel program data transfer to presenting the ending interrupt (significant increase in time from SIOF to channel program ending interrupt), reducing aggregate I/O throughput. In an attempt to mask the problem, they changed the 3880 to present the ending interrupt early and do final controller cleanup overlapped with operating system interrupt processing overhead (modulo the niggling problem of finding a controller error during cleanup and then needing to present "unit check" with an interrupt).

The whole thing tested fine with MVS ... the enormous MVS interrupt-to-redrive pathlength was more than enough to mask the 3880 controller fudge. However, the 3880 fudge didn't work for me: I would hit the 3880 with a redrive SIOF long before it was done, to which it then had to respond with CU-busy, and I then had to requeue the request and wait for the CUE interrupt (indicating the controller was really free).

I periodically pontificated that a lot of the XA/370 architecture was to mask MVS issues (and my 370 redrive pathlength was close to the elapsed time of the XA/370 hardware redrive).

Slightly other issues ... getting within a few months of the 3880 first customer ship (FCS), FE (field engineering) had a test of 57 simulated errors that were likely to occur; MVS was (still) failing in all 57 cases (requiring manual re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure.

I then did an (IBM internal only) research report on all the I/O integrity work, and it was impossible to believe the enormous grief that the MVS organization caused me for mentioning the MVS 15min MTBF.

... trivia: MVS wrath at my mentioning VM370 "never fail" & MVS "15min MTBF" ... remember, POK had only recently convinced corporate to kill VM370, shutdown the group and transfer all the people to POK for MVS/XA. Endicott had managed to save the VM370 product mission (for the mid-range) but was still recreating a development group from scratch and bringing it up to speed ... comments about IBM code quality during the period can be found in the vmshare archive
http://vm.marist.edu/~vmshare

getting to play disk engineer in bldgs14/15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response

... also, as undergraduate in the 60s, the univ hired me fulltime responsible for OS/360 on their 360/67 (running as a 360/65) ... which had replaced the 709/1401. Student fortran ran under a second on the 709. Initially on OS/360, it was well over a minute. I install HASP, cutting the time in half. I then start redoing the (MFTR11) stage2 sysgen to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install UofWaterloo WATFOR.

Turns out a major part of that 12.9secs was that OS/360 had a major implementation goal of running on minimal real-storage configurations, so things like the file OPEN SVC involved a long string of SVCLIB modules that had to be sequentially loaded ... I got tens of seconds of performance improvement by carefully placing those SVCLIB members (both for the multi-track search of the PDS directory, and the actual loading).

One of my problems was PTFs that replaced SVCLIB and LINKLIB PDS members, disturbing the careful placement; student fortran would start inching up towards 20secs (from 12.9) and I would have to do a mini-sysgen to get the ordering restored.

some other recent posts mentioning student fortran, 12.9secs, WATFOR
https://www.garlic.com/~lynn/2025b.html#121 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025b.html#98 Heathkit
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#103 Mainframe dumps and debugging
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2025.html#26 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#8 IBM OS/360 MFT HASP
https://www.garlic.com/~lynn/2024g.html#98 RSCS/VNET
https://www.garlic.com/~lynn/2024g.html#69 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#62 Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#98 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#117 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response

an anecdote: 1979, a major national grocer (hundreds of stores organized in regions) was having severe performance problems ... and after bringing through all the standard IBM corporate performance experts ... they got around to asking me. The datacenter had several CECs in a loosely-coupled configuration (each CEC handling the stores from a couple of dedicated regions). I'm brought into a classroom with tables piled high with CEC system activity reports. After more than 30mins, I notice a specific 3330 DASD peaking at 7-8 I/Os per second (activity summed across all the CEC activity reports) during the worst performance periods. I asked what it was. It was shared DASD (across all CECs) with the store controller app PDS dataset, with a 3cyl PDS directory.

Then it was obvious ... every store controller app load, for hundreds of stores, was doing a multi-track search of the PDS directory averaging 1.5cyls; at 60revs/sec with 19 tracks per full cylinder, the first search is 19/60=.317sec and the 2nd search (avg half cylinder) is 9.5/60=.158sec ... so the multi-track search for each store controller app load took .475sec (and multi-track search locks the device, controller, and channel for the duration). That effectively limited the shared DASD to about two store controller app loads per second (for the hundreds of stores), an avg of four multi-track search I/Os taking .95secs (during which DASD, controller, and channel were blocked), with the other 3-4 I/Os per second representing the rest of the 7-8 I/Os per second (on the shared DASD across all CECs).
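
Spelled out (Python; just restating the numbers above):

revs_per_sec, tracks_per_cyl = 60, 19        # 3330 rotation and cylinder size from the text

first_search  = tracks_per_cyl / revs_per_sec         # full-cylinder search: 19/60 ~= .317s
second_search = (tracks_per_cyl / 2) / revs_per_sec   # avg half-cylinder search: 9.5/60 ~= .158s
per_app_load  = first_search + second_search          # ~.475s with device/controller/channel locked
print(per_app_load, 2 * per_app_load)                 # ~.475s per load, ~.95s for two loads
print(1 / per_app_load)                               # ~2.1 store-controller app loads/sec ceiling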

Solution was to partition the store controller app PDS DATASET into multiple files and provide a dedicated set (on non-shared) 3330 (and non-shared controller) for each CEC.

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response

pure random: 1988, the branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with (including long runs from machine rooms to high-performance large graphics in offices), which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit transfer, full-duplex, aggregate 200mbyte/sec). Then POK finally gets their stuff shipped with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then POK becomes involved with FCS and defines a heavy-weight protocol that eventually ships as FICON.

Latest public benchmark I found was z196, "Peak I/O" getting 2M IOPS using 104 FICON (about 20,000 IOPS/FICON). About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also, IBM pubs recommended that SAPs (system assist processors that actually do the I/O) be kept to 70% CPU (or around 1.5M IOPS). Also no CKD DASD has been made for decades, all being simulated on industry standard fixed-block devices.
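
The channel arithmetic, spelled out (Python; just restating the numbers quoted above):

z196_peak_iops, ficon_channels = 2_000_000, 104
print(z196_peak_iops / ficon_channels)      # ~19,230 IOPS per FICON ("about 20,000")

fcs_iops = 1_000_000                        # "over a million IOPS" claimed for one FCS
print(2 * fcs_iops >= z196_peak_iops)       # two such FCS match/exceed 104 FICON: True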

FICON ... overview
https://en.wikipedia.org/wiki/FICON
IBM System/Fibre Channel
https://www.wikiwand.com/en/articles/IBM_System/Fibre_Channel
Fibre Channel
https://www.wikiwand.com/en/articles/Fibre_Channel
FICON is a protocol that transports ESCON commands, used by IBM mainframe computers, over Fibre Channel. Fibre Channel can be used to transport data from storage systems that use solid-state flash memory storage medium by transporting NVMe protocol commands.
... snip ...

Evolution of the System z Channel
https://web.archive.org/web/20170829213251/https://share.confex.com/share/117/webprogram/Handout/Session9931/9934pdhj%20v0.pdf

The above mentions zHPF (a little more similar to what I had done in 1980 and to the original native FCS); early documents claimed something like a 30% throughput improvement ... pg39 claims an increase in 4k IOs/sec for z196 from 20,000/sec per FCS to 52,000/sec and then 92,000/sec.
https://web.archive.org/web/20160611154808/https://share.confex.com/share/116/webprogram/Handout/Session8759/zHPF.pdf

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

posts mentioning zHPF
https://www.garlic.com/~lynn/2025b.html#111 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#110 System Throughput and Availability
https://www.garlic.com/~lynn/2025.html#81 IBM Bus&TAG Cables
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2018f.html#21 IBM today
https://www.garlic.com/~lynn/2017d.html#88 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016g.html#28 Computer hard drives have shrunk like crazy over the last 60 years -- here's a look back
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response

Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS; others went to the IBM Cambridge Science Center on the 4th flr to do virtual machines, the internal network, a bunch of performance work (some evolving into capacity planning), and inventing GML in 1969 (after a decade it morphs into ISO standard SGML, and after another decade into HTML at CERN). CSC 1st wanted a 360/50 to hardware-modify with virtual memory, but all the spare 50s were going to FAA/ATC, so they had to settle for a 360/40 and did CP40/CMS. When the 360/67, standard with virtual memory, becomes available, CP40/CMS morphs into CP67/CMS (precursor to VM370).

3272/3277 had hardware response of .086sec ... and I had a bunch of VM370s inside IBM with 90th percentile system response of .11 seconds ... giving .196secs for the human response. For the 3274/3278, they moved a lot of hardware back into the 3274, reducing 3278 manufacturing cost but really driving up coax protocol latency; hardware response becomes .3sec-.5sec, depending on amount of data. Letters to the 3278 "product administrator" got the response that the 3278 wasn't for interactive computing, but data entry (MVS/TSO users never noticed because it was really rare that they saw even 1sec system response).

PROFS trivia: the PROFS group went around gathering internal apps to wrap menus around and picked up a very early copy of VMSG for the email client. Then when the VMSG author tried to offer them a much-enhanced version, they wanted him shut down & fired. It all quieted down when he demonstrated his initials in a non-displayed field in every email. After that, he only shared his source with me and one other person.

While I was at SJR, I also worked with Jim Gray and Vera Watson on the original SQL/relational (System/R, originally all done on VM370). Then when the company was preoccupied with the next great DBMS, "EAGLE" ... we managed to do tech transfer to Endicott for SQL/DS. Later when "EAGLE" imploded, there was request for how fast could System/R be ported to MVS ... eventually released as DB2 (originally for decision support only). Fall of 1980, Jim had left IBM for Tandem and tries to palm off bunch of stuff on me.

Also early 80s, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite, even doing double hop satellite between the west coast and europe) and lots of conflicts with the communication group (60s, IBM had the 2701 telecommunication controller supporting T1; the 70s move to SNA/VTAM and the associated issues capped controller links at 56kbits/sec). Was working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers; then congress cuts the budget, some other things happen and then an RFP is released, in part based on what we already had running. The communication group was fiercely fighting off client/server and distributed computing and we weren't allowed to bid.

trivia: the NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

The communication group also tried to block release of mainframe TCP/IP support; when that failed, they said that since they had corporate ownership of everything that crossed datacenter walls, it had to be shipped through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor.

I then do enhancements for RFC1044 support and in some tuning tests at Cray Research between Cray and 4341, got 4341 sustained channel throughput using only modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM systems for internal datacenters posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response

Lots of places got 360/67s for tss/360, but most places just used them as 360/65s for OS/360. CSC did CP67, and a lot of places started using 360/67s for CP67/CMS. UofMichigan did their own virtual memory system (MTS) for 360/67 (later ported to MTS/370). Stanford did their own virtual memory system for 360/67 which included WYLBUR .... which was later ported to MVS.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning 360/67, cp67/cms, michigan, mts, stanford, wylbur
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2018b.html#94 Old word processors
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
https://www.garlic.com/~lynn/2014c.html#71 assembler
https://www.garlic.com/~lynn/2013e.html#63 The Atlas 2 and its Slave Store
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2010j.html#67 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2008h.html#78 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries

--
virtualization experience starting Jan1968, online at home since Mar1970

Interactive Response

From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025c.html#7 Interactive Response

ORVYL and WYLBUR
https://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML
... this wiki entry looks like work in progress
https://en.wikipedia.org/wiki/Talk%3AORVYL_and_WYLBUR
Orvyl is a time shariang monitor that took advantage of the paging capability of the IBM 360/67 at Stanford's Campus Computing center. It was written by Roger Fajman with major improvements by Richard Carr, I believe over the summer in 1968. Wylbur is a text editor and time-sharing system. Wylbur was originally written by John Borgelt alongside Richard, again, I believe, in the summer of 1968. Milten monitored and supervised all the computer terminal input ports that allowed multiple users access to Wylbur and Orvyl. John Halperin wrote Milten. Joe Wells for Wylbur and John Halperin for Milten converted them to run under MVT on the 360/91 at SLAC and eventually OS/VS2 when SLAC obtained the two 360/168 computers. Joe made major improvements to Wylbur including the 'Exec' file capability that allowed one to script and run Wylbur Commands. He also built automatic file recovery for when the entire MVT/MVS system crashed which was not infrequent. This made Joe very popular with the SLAC physics community. John extended Milten to operate hundreds of terminals using an IBM3705 communications controller. These changes were eventually back-ported to the campus version of Wylbur when Orvyl was retired.
... snip ...

... trivia: "Metz" frequently mentioned (in the wiki), is (also) person from online "bit.listserv.ibm-main" that asked me to track down decision to add virtual memory to all 370s ... archived post w/pieces of email exchange with staff member to executive making the decision
https://www.garlic.com/~lynn/2011d.html#73

... trivia2: At Dec81 SIGOPS, Jim Gray (I had worked with Jim & Vera Watson on the original SQL/Relational, System/R, before he left for Tandem fall 1980) asked me if I could help Richard (Tandem co-worker) get his Stanford PhD; it involved Global LRU page replacement ... and there was an ongoing battle with the "Local LRU page replacement" forces. I had a huge amount of data from the 60s & early 70s with both "global" and "local" implementations done for CP67. Late 70s & early 80s, I had been blamed for online computer conferencing on the internal network; it really took off spring 1981 when I distributed a trip report about a visit to Jim at Tandem. While only 300 directly participated, claims were that 25,000 were reading; folklore is that when the corporate executive committee was told, 5of6 wanted to fire me. In any case, IBM executives blocked me from sending a reply for nearly a year.

... trivia3: In prior life, my wife was in the Gburg JES group, reporting to Crabby and one of the catchers for ASP (to turn into JES3) and co-author of the JESUS (JES Unified System) specification (all the features of JES2 & JES3 that the respective customers couldn't live w/o; for various reasons it never came to fruition). She was then con'ed into transferring to POK, responsible for mainframe loosely-coupled architecture (Peer-Coupled Shared Data). She didn't remain long because 1) periodic battles with the communication group trying to force her to use VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX), except for IMS hot-standby (she has a story about asking Vern Watts who he would ask for permission; he replied "nobody" ... he would just tell them when it was all done).

virtual memory, page replacement, paging posts
https://www.garlic.com/~lynn/subtopic.html#clock
original sql/relational implementation, System/R
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
loosely-coupled, hot-standby posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

Living Wage

From: Lynn Wheeler <lynn@garlic.com>
Subject: Living Wage
Date: 13 May, 2025
Blog: Facebook
In the 90s, congress asked GAO for studies on paying workers below a living wage ... the GAO report found it cost (city/state/federal) govs an avg of $10K/worker/year .... basically an indirect gov. subsidy to their employers. The interesting thing is that it has been 30yrs since that report ... and I have yet to see congress this century ask the GAO to update the study.
https://www.gao.gov/assets/hehs-95-133.pdf

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/R

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/R
Date: 15 May, 2025
Blog: Facebook
I worked with Jim Gray and Vera Watson on System/R ... had a joint study with BofA, getting 60 VM/4341s for betatest. Then helped with transferring technology to Endicott for SQL/DS ... "under the radar" when IBM was preoccupied with the next great DBMS, "EAGLE" ... then when "EAGLE" implodes, there was request for how fast could System/R be ported to MVS, eventually released as DB2, originally for decision support only.

recent posts about doing some work same time working on System/R
https://www.garlic.com/~lynn/2025b.html#90 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#112 System Throughput and Availability II
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response

... then 1988, I got the HA/6000 project, originally for NYTimes to move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I then rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had VAXcluster support in the same source base with Unix; I do a distributed lock manager with VAXCluster semantics to ease the ports; IBM Toronto was still a long way from having even a simple relational for OS2). Then the S/88 product administrator starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/as400 and POK/mainframe complain they can't meet the objectives).

other system/r details
http://www.mcjones.org/System_R/
https://en.wikipedia.org/wiki/IBM_System_R

trivia: 1st relational (non-sql) shipped; Some of the MIT CTSS/7094 people go to the 5th flr to do MULTICS, others went to the IBM cambridge science center on the 4th flr to do virtual machines (modifying 360/40 with virtual memory and doing cp40/cms, CP40 morphs into CP67 when 360/67 standard with virtual memory becomes available), internal network, invent GML (in 1969, which later morphs into SGML standard and HTML at CERN), performance work (some of which morphs into capacity planning), etc. Then when decision is made to add virtual memory to all 370s, some of the science center splits off from CSC and takes over the IBM Boston Programming Center on the 3rd flr for the VM370 development group (and all of SJR System/R was done on VM370). Multics ships the 1st relational product,
https://en.wikipedia.org/wiki/Multics_Relational_Data_Store
https://www.mcjones.org/System_R/mrds.html

and I have transferred out to San Jose Research from CSC.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
posts getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/R

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/R
Date: 15 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#10 IBM System/R

other trivia: both multics (also 1st relational product) and tss/360 were single level store ... also adapted for future system and later s/38. Note one of the last nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 applications were redone for an FS machine made out of the fastest available hardware technology, they would have the throughput of a 370/145 (about 30 times slowdown; for the S/38 market, there was significant hardware technology headroom between the market requirements and available hardware).
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

I continued to work on 360&370 all during FS, including periodically ridiculing what they had done (after joining IBM, I did an internal page-mapped filesystem for CP67/CMS showing at least three times the throughput ... and claimed I learned what not to do from TSS/360)

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 15 May, 2025
Blog: Facebook
I had an early engineering 4341 (E5) in bldg15 ... running one of my enhanced operating systems. The branch office finds out and in Jan1979 asks me to do a benchmark for a national lab looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). The E5 even had its processor clock slowed down by 20%, and the benchmark was still successful.

Then in the early 80s, large corporations start making orders for hundreds of vm4341s at a time for placement out in non-datacenter departmental areas (sort of the leading edge of the coming distributed computing tsunami).

When I transferred from Cambridge Science Center to San Jose Research, I get to wander around datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test, across the street. They are doing 7x24, prescheduled stand alone testing and mentioned that they had recently tried MVS, but it had 15min MTBF (requiring manual re-ipl in that environment). I offer to rewrite I/O supervisor to be bullet proof and never fail allowing any amount of on-demand, concurrent testing, greatly improving productivity.

Bldg15 gets very early engineering systems for I/O testing and gets the first engineering 3033 outside POK processor development. I/O testing only takes a percent or two of the 3033, so we scrounge a 3830 and 3330 string to set up our own private online service (and run 3270 coax under the street to my office in SJR/bldg28). Then bldg15 gets an early engineering 4341 and I joke with 4341 people in Endicott that I have significantly more 4341 availability than they do.

I'm also working with Jim Gray and Vera Watson on the original SQL/relational System/R ... all work done on VM370 ... and get a System/R pilot at BofA ordering 60 VM/4341s for distributed operation. We then do System/R technology transfer to Endicott for SQL/DS

In spring 1979, some USAFDC (in the Pentagon) wanted to come by to talk to me about 20 VM/4341 systems; the visit kept being delayed, and by the time they came by (six months later), it had grown from 20 to 210.

I also get HSDT project, T1 (1.5mbit/sec) and faster computer links (both terrestrial and satellite) and lots of conflict with communication group (60s, IBM 2701 telecommunication controller supported T1 but 70s transition to SNA/VTAM and the associated issues seem to cap controllers at 56kbit/sec links) ... and looking at more reasonable speeds for distributed operation.

Mid-80s, the communication group is fighting off client/server and distributed computing (preserving the dumb terminal paradigm) and trying to block mainframe release of TCP/IP support. When they lose, they then claim that since they have corporate responsibility for everything that crosses datacenter walls, it has to be released through them. What ships gets aggregate 44kbytes/sec using nearly a whole 3090 processor. I then do changes for RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, get sustained 4341 channel throughput, using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).

I had helped Endicott with the ECPS microcode assist for the 138/148 (then also used for the 4300 follow-ons) .... old archive post with the initial analysis selecting 6kbytes of 370 kernel instruction paths for moving to microcode.
https://www.garlic.com/~lynn/94.html#21

Endicott then wants to pre-install VM370 on every machine shipped ... however POK was in the process of convincing corporate to kill the vm370 product, shutdown the development group and transfer all the people to POK for MVS/XA ... and it is vetoed. Endicott eventually manages to save the VM370 product mission for the mid-range, but has to recreate a development group from scratch (and was never able to get permission to pre-install vm370 on every machine).

Posts getting to play disk engineer in bldgs 14/15
https://www.garlic.com/~lynn/subtopic.html#disk
post mentioning CP67L, CSC/VM, SJR/VM systems for internal datacenters
https://www.garlic.com/~lynn/submisc.html#cscvm
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 15 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341

Long ago and far away: the Science Center had added original (non-dat) 370 instruction support to CP67 for a (vanilla) 370 virtual machine option. Then after the decision to add virtual memory to all 370s, there was a joint project between Cambridge and Endicott (a distributed development project using the CSC CP67-based science center wide-area network as it was evolving into the corporate internal network) to expand the CP67 370 virtual machine support to the full 370 virtual memory architecture ("CP67-H"), and then a modification of CP67 to run on the 370 virtual memory architecture ("CP67-I"). Because Cambridge also had profs, staff, and students from Boston/Cambridge institutions, CP67-L ran on the real 360/67, CP67-H ran in a CP67-L 360/67 virtual machine, and CP67-I ran in a CP67-H 370 virtual machine (countermeasure to leaking unannounced 370 virtual memory). This was in regular operation a year before the first engineering machine (370/145) with virtual memory was operational, with CMS running in a CP67-I virtual machine (CP67-I was also used to verify the engineering 370/145 virtual memory implementation) ... aka


CMS running in CP67-I 370 virtual machine
CP67-I running in CP67-H 370 virtual machine
CP67-H running in a CP67-L 360/67 virtual machine
CP67-L running on real 360/67 (non-IBMers restricted here)


Later three San Jose engineers came out to Cambridge and added 2305 & 3330 device support to CP67-I ... for CP67-SJ ... which was in wide use on (internal real) 370 virtual memory machines. As part of all this, original multi-level source update support had also been added to CMS.

trivia: I was asked to track down the decision to add virtual memory to all 370s ... and found the staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used. As a result, a typical 1mbyte 370/165 was limited to four concurrent regions, insufficient to keep the system busy and justified. Moving MVT into a 16mbyte virtual address space (similar to running MVT in a CP67 virtual machine) allowed increasing the number of concurrently running regions by a factor of four (capped at 15 because of storage protect keys) with little or no paging (VS2/SVS).
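
A minimal sketch of that region arithmetic (Python; the 256KB-specified/64KB-used split per region is an illustrative assumption for "specified four times larger than used"):

real_kb, spec_kb, used_kb, key_cap = 1024, 256, 64, 15   # 1MB 370/165, 4-bit storage protect keys

print(real_kb // spec_kb)                  # MVT in real storage: 4 concurrent regions
print(min(real_kb // used_kb, key_cap))    # MVT in a 16MB virtual address space: only the
                                           # ~64KB actually touched per region needs real
                                           # storage, so 15 regions (storage-key limited)
                                           # run with little or no paging (VS2/SVS)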

Some of the MIT CTSS/7094 people had gone to the 5th flr for multics, others went to the 4th flr for the IBM Science Cener. With the decision to add virtual memory to all 370s, some split off from the science center and take over the IBM Boston Programming Center on the 3rd flr for the VM370 development group. The VM370 product work continued on in parallel with the CP67-H, CP67-I, CP67-SJ.

other trivia: part of SE training had been as part of a group onsite at the customer. After the 23jun1969 unbundling announce and starting to charge for SE services, they couldn't figure out how not to charge for trainee SEs at the customer site. HONE CP67 systems were deployed supporting online branch office SEs practicing with guest operating systems in virtual machines. With the original announce of 370, the HONE CP67 systems were upgraded with the non-dat 370 virtual machine support. The Science Center had also ported APL\360 to CMS for CMS\APL ... and HONE started offering APL-based sales & marketing support applications ... which eventually come to dominate all HONE activity (and guest operating system use withered away) ... after graduating and joining the science center, one of my hobbies was enhanced production operating systems for internal datacenters, and HONE was an early (and long time) customer.

Note: In the morph from CP67->VM370, lots of features were simplified and/or dropped. For VM370R2, I started adding CP67 enhancements into my CSC/VM for internal datacenters. Then for a VM370R3-based CSC/VM, I added in other features, including multiprocessor support (originally for the US HONE systems that had been consolidated in silicon valley), so they could add a 2nd processor to each system.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
23jun69 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

misc recent past posts mentioning Endicott and CP67-H,I,SJ
https://www.garlic.com/~lynn/2025b.html#6 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#120 Microcode and Virtual Machine
https://www.garlic.com/~lynn/2025.html#10 IBM 37x5
https://www.garlic.com/~lynn/2024g.html#108 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 16 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
https://www.garlic.com/~lynn/2025c.html#13 IBM 4341

The CSC port of APL\360 to CMS\APL required redoing the APL\360 storage management. APL\360 had swapped 16kbyte (sometimes 32kbyte) workspaces; APL would assign a new workspace location for every assignment statement (even if the item already existed), so it would quickly exhaust all workspace storage, then garbage collect and compact assigned storage; since the complete workspace was swapped anyway ... not a big problem. The initial move to CMS\APL with demand-paged large workspaces resulted in enormous page thrashing. APIs for system services (like file I/O) were also added; the combination enabled lots of real world applications.
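
A minimal sketch (my own simplification, not the APL\360 code) of why assign-on-every-update churns through a workspace: every assignment consumes a fresh slot until the arena is exhausted, forcing a garbage-collect/compaction pass. With the whole 16kbyte workspace swapped, that was cheap; with a demand-paged multi-megabyte workspace, the same pattern keeps touching fresh pages.

class Workspace:
    def __init__(self, slots):
        self.slots = slots            # total workspace cells
        self.next_free = 0            # simple bump allocator
        self.live = {}                # variable name -> slot index
        self.gc_count = 0

    def assign(self, name, value):
        if self.next_free >= self.slots:
            self.garbage_collect()
        self.live[name] = self.next_free   # always a *new* location
        self.next_free += 1

    def garbage_collect(self):
        # compact live variables to the front of the workspace
        self.gc_count += 1
        for i, name in enumerate(self.live):
            self.live[name] = i
        self.next_free = len(self.live)

ws = Workspace(slots=4096)            # roughly 16kbytes of 4-byte cells
for i in range(100_000):
    ws.assign("X", i)                 # same variable, new slot every time
print("garbage collections:", ws.gc_count)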

Then came the move from CP67 to VM370 and the consolidation of all US HONE datacenters in silicon valley ... across the back parking lot from the IBM Palo Alto Science Center. PASC had done lots more work on APL for what became (VM370/CMS) APL\CMS ... PASC also did the 370/145 APL microcode assist ... increasing performance by a factor of ten (HONE couldn't really use it since HONE needed the large multi-mbyte real memory of 370/168s). PASC also improved FORTRAN H optimization (available internally as FORTRAN-Q, eventually released to customers as FORTRAN-HX) and helped HONE move some of the larger compute-intensive HONE apps to FORTRAN, callable from APL\CMS.

The consolidated US HONE configured the multiple systems into a large "single-system-image", loosely-coupled, shared-DASD operation with load-balancing and fall-over across the complex (the largest IBM SSI configuration, internally or at customers). Then I added multiprocessor support to a VM370R3-based CSC/VM ... so HONE could add a 2nd processor to each system ... (with some cache affinity and other hacks) two-processor systems were getting twice the throughput of the previous single-processor operation (at a time when MVS documentation claimed their two-processor operation only had 1.2-1.5 times the throughput of single-processor operation). I made a joke some 30+ yrs later when similar SSI capability was released to customers ("from the annals of release no software before its time").

PASC also helped with HONE SEQUOIA, a few hundred kilobytes of APL code that was integrated into the shared memory image of the APL executable (so only a single copy existed for all users) ... it basically provided a high-level menu environment for the sales&marketing users (hiding most details and operation of CMS and APL).

There was a scenario repeated a couple times in the late 70s and early 80s, where a branch manager was promoted to executive in DPD hdqtrs (with HONE reporting to them) and was aghast to discover that HONE was VM370-based (and not MVS). They would believe that if they directed HONE to move everything to an MVS base, their career would be made ... almost every other activity stopped while the attempt was made to get an MVS HONE operational ... after a year or so, it would be declared a success, heads would roll uphill, and things returned to normal VM370 operation. Towards the middle of the 80s, somebody decided that the reason HONE couldn't move from VM370 to MVS was HONE's use of my (by then) internal SJR/VM ... and that it might be possible if done in two stages: first mandate that HONE move to a standard VM370 product (because what would happen to the whole IBM sales&marketing empire if I was hit by a bus), before attempting the move to MVS.

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM systems
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE systems
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Cluster Supercomputing

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster Supercomputing
Date: 17 May, 2025
Blog: Facebook
I was doing a lot with an early engineering IBM 4341, and in Jan1979 a branch office found out and conned me into doing a benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). The engineering 4341 had its processor clock slowed down 20% (as they worked out kinks in timing), but the benchmark was still successful (it was a Fortran benchmark from the 60s CDC6600 ... and the engineering 4341 ran about the same as the 60s CDC6600).

A couple years later, I got the HSDT project, T1 (1.5mbits/sec) and faster computer links (both terrestrial and satellite) and lots of conflict with the communication products group (in the 60s, IBM was selling the 2701 telecommunication controller that supported T1; then with the 70s move to SNA/VTAM, the associated issues appeared to cap controllers at 56kbit links). Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connected in, NSFnet became the NSFNET backbone, precursor to the modern Internet.

The communication group was also fiercely fighting off client/server and distributed computing and trying to block release of IBM mainframe TCP/IP support. When that failed, they said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 CPU. I then added RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).

In 1988 I got the HA/6000 project, originally for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I renamed it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I started doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR/UCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with UNIX. The IBM S/88 product administrator started taking us around to their customers and also had me do a section for the corporate strategic continuous availability strategy document (it got pulled when both Rochester/AS400 and POK/mainframe complained that they couldn't meet the requirements). I was then also working with LLNL porting their LINCS/UNITREE filesystem to HA/CMP and porting the NCAR/UCAR spin-off Mesa Archival filesystem to HA/CMP.

Also in 1988, an IBM branch office asked if I could help LLNL (national lab) standardize some serial stuff they were working with ... which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec) ... some competition with LANL's standardization of the Cray 100mbyte/sec channel as HIPPI (and later a serial version).

Early Jan1992, in a cluster scale-up meeting with the Oracle CEO, IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told that we can't work on anything with more than four processors (we leave IBM a few months later).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
FCS &(/or) FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Cluster Supercomputing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cluster Supercomputing
Date: 17 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#15 Cluster Supercomputing

About the same time I was asked to help LLNL with what becomes FCS, the branch office also asked if I could get involved in SCI
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
A decade later I did some consulting for Steve Chen (who designed the Cray XMP & YMP),
https://en.wikipedia.org/wiki/Steve_Chen_(computer_engineer)
https://en.wikipedia.org/wiki/Cray_X-MP
https://en.wikipedia.org/wiki/Cray_Y-MP
who by that time was Sequent CTO (before IBM bought Sequent and shut it down).
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
Sequent had used SCI for a (NUMA) 256-processor i486 machine
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA

FCS &(/or) FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some archived posts mentioning SCI
https://www.garlic.com/~lynn/2024g.html#85 IBM S/38
https://www.garlic.com/~lynn/2024e.html#90 Mainframe Processor and I/O
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#29 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022b.html#67 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022.html#118 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#45 OoO S/360 descendants
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2019d.html#81 Where do byte orders come from, Nova vs PDP-11
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016e.html#45 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#70 Microprocessor Optimization Primer
https://www.garlic.com/~lynn/2016b.html#74 Fibre Channel is still alive and kicking
https://www.garlic.com/~lynn/2016.html#19 Fibre Chanel Vs FICON
https://www.garlic.com/~lynn/2014m.html#176 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#95 5 Easy Steps to a High Performance Cluster
https://www.garlic.com/~lynn/2014d.html#18 IBM ACS
https://www.garlic.com/~lynn/2014.html#85 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#50 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013m.html#96 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#94 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#70 architectures, was Open source software
https://www.garlic.com/~lynn/2013g.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2011p.html#40 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#39 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010.html#92 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2010.html#41 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2008i.html#3 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#2 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006y.html#38 Wanted: info on old Unisys boxen
https://www.garlic.com/~lynn/2006q.html#24 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/R

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/R
Date: 18 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#10 IBM System/R
https://www.garlic.com/~lynn/2025c.html#11 IBM System/R

At Dec81 SIGOPS, Jim asked me if I could help Carr (a Tandem co-worker) get his Stanford PhD; it involved Global LRU page replacement ... and there was an ongoing battle with the "Local LRU page replacement" forces. I had a huge amount of data from the 60s & early 70s with both "global" and "local" implementations done for CP67. As an undergraduate in the 60s, I rewrote lots of CP67 (virtual machine precursor to VM370), including doing Global LRU ... about the time a bunch of ACM literature was appearing about "Local LRU". Then in the early 70s, the IBM Grenoble Scientific Center modified CP67 with "Local LRU" and a "working set dispatcher". Grenoble had a 1024kbyte memory 360/67 (155 pages after fixed storage) and 35 users. CSC was running my implementation on a 768kbyte memory 360/67 (104 pages after fixed storage) with 75-80 users, with similar workloads but better throughput and interactive response.
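
As a rough illustration of the difference being argued about (a minimal sketch of my own, not the CP67 or Grenoble code): "global" LRU evicts the least-recently-used page across all users, while "local" LRU only steals from the faulting user's own resident pages (falling back to global when that user has nothing resident).

from collections import OrderedDict
import random

def page_faults(refs, frames, policy="global"):
    """refs: list of (user, page) references; returns total page faults."""
    resident = OrderedDict()              # (user, page) -> None, in LRU order
    faults = 0
    for user, page in refs:
        key = (user, page)
        if key in resident:
            resident.move_to_end(key)     # touched: now most recently used
            continue
        faults += 1
        if len(resident) >= frames:
            if policy == "global":
                resident.popitem(last=False)        # oldest page of any user
            else:                                   # "local"
                victim = next((k for k in resident if k[0] == user), None)
                if victim is None:
                    resident.popitem(last=False)    # user has nothing resident
                else:
                    resident.pop(victim)            # user's own oldest page
        resident[key] = None
    return faults

random.seed(1)
refs = [(u, random.randint(0, 30)) for _ in range(5_000) for u in range(4)]
print("global:", page_faults(refs, frames=60, policy="global"))
print("local :", page_faults(refs, frames=60, policy="local"))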

In the late 70s & early 80s, I had been blamed for online computer conferencing on the internal network; it really took off spring 1981 when I distributed a trip report about a visit to Jim at Tandem. While only 300 directly participated, claims were that 25,000 were reading, and folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me. In any case, IBM executives blocked me from sending my Global/Local reply for nearly a year (19Oct1982).

Page I/O, Global LRU replacement, virtual memory posts
https://www.garlic.com/~lynn/subtopic.html#clock
Dynamic Adaptive Resource Managment (fairshare) scheduler posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

some past (CP67, global/local, Cambridge/Grenoble) refs
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#98 Heathkit
https://www.garlic.com/~lynn/2024g.html#107 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#34 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2018f.html#63 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#62 LRU ... "global" vs "local"
https://www.garlic.com/~lynn/2016c.html#0 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)

--
virtualization experience starting Jan1968, online at home since Mar1970

Is Parallel Programming Hard, And, If So, What Can You Do About It?

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About It?
Newsgroups: comp.arch
Date: Sun, 18 May 2025 14:10:00 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Think of what a cache is for in the first place. The only reason they work is because of the "principle of locality". This can also be expressed as saying that typical patterns of data access by application programs follow a Pareto distribution, less formally known by monikers like the "80/20 rule" or the "90/10 rule".

IBM "added" full-track "-13" cache to 3880 dasd control for 3380 disk (ten records/track) ... claiming 90% "hit rate". Issue was that there was a lot of sequential file reading ... the 1st record read for track would be a "miss" but bring in the whole track, resulting in the next nine reads being "hits".

System services offered an option for applications doing sequential I/O to specify full-track I/O (into processor memory) ... which would result in a zero hit rate for the controller cache (the IBM standard batch operating system did contiguous allocation at file creation).
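
A tiny sketch of the hit-rate arithmetic above (illustrative, not measured data): a sequential record-at-a-time scan against a full-track cache scores one miss plus nine hits per ten-record track (~90%), while an application doing its own full-track reads gets no controller-cache hits at all.

RECORDS_PER_TRACK = 10     # 3380-style track as described above

def controller_hit_rate(records_read, records_per_io):
    """Fraction of controller-cache hits for a purely sequential scan."""
    ios = records_read // records_per_io
    tracks = records_read // RECORDS_PER_TRACK
    misses = tracks                      # first I/O touching each track misses
    return (ios - misses) / ios

print(controller_hit_rate(10_000, records_per_io=1))                  # ~0.90
print(controller_hit_rate(10_000, records_per_io=RECORDS_PER_TRACK))  # 0.0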

About the same time, we did a system mod that did highly efficient trace/capture of every record operation, which was deployed on numerous production systems. The traces were then fed to a sophisticated simulator that could vary algorithms, kinds of caches, sizes of caches, distribution of caches, etc.

Given a fixed amount of cache storage, it was always better to have a global system cache ... than partitioned/distributed, except for a few edge cases. Example: if a device track cache could be used to immediately start transferring data, rather than having to rotate to the start of the track before starting the transfer.
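
A minimal trace-driven sketch along the same lines (the synthetic trace and sizes are assumptions of mine, not the original simulator or its data): replay a trace of (device, record) references through one shared LRU cache, then through per-device LRU caches splitting the same total storage, and compare hit rates.

from collections import OrderedDict
import random

def lru_hit_rate(trace, cache_size, partition_key):
    caches = {}                            # one LRU cache per partition
    hits = 0
    for device, record in trace:
        cache = caches.setdefault(partition_key(device), OrderedDict())
        key = (device, record)
        if key in cache:
            hits += 1
            cache.move_to_end(key)
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[key] = None
    return hits / len(trace)

random.seed(42)
DEVICES, TOTAL_SLOTS = 4, 400
# skewed synthetic trace: busier devices, different working-set sizes
trace = [(d, random.randint(0, 150 * (d + 1)))
         for d in random.choices(range(DEVICES), weights=[8, 4, 2, 1], k=50_000)]

print("one global cache :", lru_hit_rate(trace, TOTAL_SLOTS, lambda d: "all"))
print("per-device caches:", lru_hit_rate(trace, TOTAL_SLOTS // DEVICES, lambda d: d))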

posts mentioning record activity trace/capture
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2022b.html#83 IBM 3380 disks
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2013d.html#11 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012c.html#47 nested LRU schemes
https://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After Multi-Core?

more recent posts mentioning 3880-13 or 3880-23
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2023g.html#7 Vintage 3880-11 & 3880-13
https://www.garlic.com/~lynn/2022b.html#83 IBM 3380 disks
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2014l.html#81 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2014i.html#96 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2013d.html#3 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012d.html#78 megabytes per second
https://www.garlic.com/~lynn/2012d.html#75 megabytes per second
https://www.garlic.com/~lynn/2012d.html#72 megabytes per second
https://www.garlic.com/~lynn/2012c.html#47 nested LRU schemes
https://www.garlic.com/~lynn/2012c.html#34 nested LRU schemes
https://www.garlic.com/~lynn/2011.html#68 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#67 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
https://www.garlic.com/~lynn/2010i.html#20 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2010g.html#55 Mainframe Executive article on the death of tape
https://www.garlic.com/~lynn/2010g.html#11 Mainframe Executive article on the death of tape
https://www.garlic.com/~lynn/2010.html#51 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010.html#47 locate mode, was Happy DEC-10 Day

--
virtualization experience starting Jan1968, online at home since Mar1970

APL and HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: APL and HONE
Date: 19 May, 2025
Blog: Facebook
23jun1969, the IBM unbundling announcement started charging for (application) software (IBM made the case that kernel software was still free), SE services, maintenance, etc. SE training used to include being part of a large group at a customer site; however, after unbundling they couldn't figure out how not to charge for SE training time. As a result, CP67 HONE datacenters were set up where branch office SEs could login online to HONE and practice with guest operating systems running in virtual machines. One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and HONE was one of the first (and a long time) customers.

The Cambridge Science Center also ported APL\360 to CMS as CMS\APL, redoing storage management (APL\360 had 16kbyte, sometimes 32kbyte, swapped workspaces and assigned a new location for variables on every assignment, even if they already existed; it quickly ran through the workspace, then garbage collected and coalesced everything to a contiguous area and started again; the move to CMS\APL and demand-paged hundreds-of-kilobyte/megabyte workspaces resulted in severe page thrashing) and adding APIs for system services (like file I/O), enabling lots of real world applications. HONE then started offering APL-based sales&marketing support applications, which came to dominate all HONE activity (and guest operating system practice just withered away). With the propagation of clone HONE datacenters around the world, HONE was easily the largest APL operation in the world.

After the decision to add virtual memory to all 370s, it was also decided to morph CP67 into VM370, and HONE consolidated all US datacenters in silicon valley (across the back parking lot from the IBM Palo Alto Science Center), upgrading from CP67/CMS to VM370/CMS (trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the former consolidated US HONE datacenter). PASC also did the APL microcode assist for the 370/145 and released APL\CMS (claiming ten times the performance, equivalent to a 370/168). HONE still needed real 370/168s for the larger real memory sizes. Non-CMS APL went from APL\360 to APL\SV and then VS/APL, which replaced APL\CMS on VM370. PASC was also responsible for the internal FORTRAN-Q optimization (eventually released as FORTRAN-HX for customers) and also helped HONE with invoking some of the reprogrammed sales&marketing FORTRAN apps from APL.

One of the other CSC members, in the early 70s, had done an analytical system model in APL, which was made available on HONE as the Performance Predictor; SEs could enter customer workload and configuration information and ask "what-if" questions about workload & configuration changes. After the IBM troubles in the early 90s and the unloading of all sorts of stuff, a descendant of the Performance Predictor was acquired by a performance consultant in Europe, who ran it through an APL->C translator and used it for large system performance consulting. Around the turn of the century I was doing some performance work for an operation doing financial outsourcing, a datacenter with 40+ max-configured IBM mainframes (@$30M, constant upgrades, no system older than 18 months), all running the same 450K-statement Cobol program (which had had a large group responsible for its performance care and feeding for decades). I was still able to find a 14% improvement and the other consultant (w/Performance Predictor) found another 7%.
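
The Performance Predictor itself isn't reproduced here; purely as a toy illustration of the "what-if" idea, a textbook single-queue (M/M/1-style) model lets you enter a workload (arrival rate) and configuration (service rate) and ask how response time changes if the workload grows or the processor is upgraded. All numbers below are hypothetical.

def response_time(arrival_rate, service_rate):
    """Average response time for a single open queue (M/M/1 approximation)."""
    utilization = arrival_rate / service_rate
    if utilization >= 1.0:
        return float("inf")          # saturated: the queue grows without bound
    return (1.0 / service_rate) / (1.0 - utilization)

base       = response_time(arrival_rate=40.0, service_rate=50.0)  # txns/sec
more_work  = response_time(arrival_rate=48.0, service_rate=50.0)  # +20% workload
after_upgr = response_time(arrival_rate=48.0, service_rate=65.0)  # faster CPU

for label, t in (("baseline", base), ("+20% workload", more_work),
                 ("after upgrade", after_upgr)):
    print(f"{label:14s} {t * 1000:6.1f} ms")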

23jun1969 Unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

some recent posts mentioning performance predictor
https://www.garlic.com/~lynn/2025b.html#68 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#6 Testing
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#7 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history

--
virtualization experience starting Jan1968, online at home since Mar1970

Is Parallel Programming Hard, And, If So, What Can You Do About It?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About It?
Newsgroups: comp.arch
Date: Tue, 20 May 2025 16:38:31 -1000
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
I presume you know that the 3880 controller did not do what today we call command queuing, so I think you were referring to a potential queue in the host. That being the case, the controller doesn't know if there is a queue or not. So given that, why not start reading record 1 on the next track. If a request comes in, you can abandon the read to service the request - no harm, no foul. If there isn't, and you subsequently get a request for that track, it's a big win. The only potential loss is if you get a request for the track that was LRU and got pushed out of the cache.

re:
https://www.garlic.com/~lynn/2025c.html#18 Is Parallel Programming Hard, And, If So, What Can You Do About It?

Over-optimizing full-track read-ahead could lock out other tasks that had competing requirements for other parts of the disk.

trivia: in the early 70s, IBM decided to add virtual memory to all 370s. Early last decade I was asked to track down the decision and found the staff member who reported to the executive making the decision. Basically, MVT (IBM's high end, major batch system) storage management was so bad that (multiprogramming) region sizes had to be specified four times larger than used; as a result, a typical (high-end) 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. Running MVT in a 16mbyte virtual address space (sort of like running MVT in a CP67 16mbyte virtual machine) would allow concurrent regions to be increased by a factor of four (capped at 15 because of the 4-bit storage protect key) with little or no paging. Later, as high-end systems got larger, they needed more than 15 concurrently running regions ... and so switched from VS2/SVS (a single 16mbyte virtual address space) to VS2/MVS (a separate 16mbyte virtual address space for each "region"); i.e., it went MVT->VS2/SVS->VS2/MVS.
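
A small sketch of that region arithmetic (the 256kbyte/64kbyte region sizes are illustrative assumptions of mine; the 1mbyte, 16mbyte, factor-of-four, and 15-region figures are the ones quoted above): under real-memory MVT the over-specified region size is what consumes storage, while under VS2/SVS only the pages actually touched consume real storage.

REGION_SPECIFIED_KB = 256                       # hypothetical: what had to be asked for
REGION_USED_KB      = REGION_SPECIFIED_KB // 4  # MVT over-specified by 4x
REAL_MEMORY_KB      = 1024                      # typical 370/165
VIRTUAL_SPACE_KB    = 16 * 1024                 # VS2/SVS single address space
PROTECT_KEY_CAP     = 15                        # 4-bit storage protect key

mvt_regions = REAL_MEMORY_KB // REGION_SPECIFIED_KB          # real memory holds the full spec
svs_regions = min(VIRTUAL_SPACE_KB // REGION_SPECIFIED_KB,   # address space limit
                  REAL_MEMORY_KB // REGION_USED_KB,          # little-or-no-paging limit
                  PROTECT_KEY_CAP)
print(f"MVT: {mvt_regions} regions  ->  VS2/SVS: {svs_regions} regions")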

Along the way, I had been pontificating that DASD (disk) relative system throughput had been decreasing ... in the 1st part of the 80s, I turned out an analysis that in the 15yr period since the IBM 360 first shipped, DASD/disk relative system throughput had declined by an order of magnitude (i.e. DASD got 4-5 times faster while systems got 40-50 times faster). Some DASD division executive took exception and assigned the division performance group to refute the claim ... after a few weeks, they came back and basically said I had slightly understated the issue. The performance group then respun the analysis for a user group presentation on how to configure disks and filesystems to improve system throughput (SHARE63, B874, 16Aug1984).
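
The order-of-magnitude arithmetic, using the factors quoted above:

system_speedup = 45          # systems got roughly 40-50 times faster
dasd_speedup   = 4.5         # disks got roughly 4-5 times faster
print(f"relative DASD throughput: {dasd_speedup / system_speedup:.2f}x (about 1/10th)")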

The 1970 IBM 2305 fixed-head disk controller supported 8 separate pseudo device addresses ("multiple exposure") for each 2305 disk ... each able to have a channel program that the controller could optimize across. In 1975, I was asked to help enhance a low-end 370 that had integrated channels and integrated device controllers ... and I wanted to upgrade the microcode so I could just update a queue of channel programs that the (integrated microcode) controller could optimize (we weren't allowed to ship the product).
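
A minimal sketch (my own simplification, not the 2305 or integrated-controller microcode) of why letting the controller see a queue of requests helps: with several channel programs pending, it can service them in seek order (an elevator-style sweep) instead of arrival order.

def elevator_order(pending, arm):
    """Service pending cylinder requests in one sweep up, then back down."""
    up = sorted(c for c in pending if c >= arm)
    down = sorted((c for c in pending if c < arm), reverse=True)
    return up + down

def total_seek(order, arm):
    total, pos = 0, arm
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

pending = [183, 37, 122, 14, 124, 65, 67]     # queued cylinder numbers (made up)
arm = 53
print("arrival order :", total_seek(pending, arm))
print("elevator order:", total_seek(elevator_order(pending, arm), arm))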

Later I wanted to add "multiple exposure" support to 3830 (precursor to the 3880) for 3350 (moveable arm) disks (IBM east coast group was working on emulated electronic memory disks, considered it might compete and got it vetoed. sometime later they got shutdown, they were told IBM was selling all electronic memory it could make as higher markup processor memory).

getting to play disk engineer in (SJ DASD) bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Is Parallel Programming Hard, And, If So, What Can You Do About It?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About It?
Newsgroups: comp.arch
Date: Wed, 21 May 2025 07:06:26 -1000
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
Only if the cores and/or "hardware threads" do not interfere with one another? Fwiw, an example of an embarrassingly parallel algorithm is computing the Mandelbrot set. Actually, this reminds me of the "alias" problem with Intel hyper threading in the past.

re:
https://www.garlic.com/~lynn/2025c.html#18 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025c.html#20 Is Parallel Programming Hard, And, If So, What Can You Do About It?

Shortly after graduating and joining IBM, I got roped into helping with hyperthreading the 370/195. It had pipelined, out-of-order execution, but conditional branches drained the pipeline and most code only ran the system at half its rated throughput. Two hardware i-streams ... each running at half throughput ... would (might) keep the system at full throughput.

Hardware hyperthreading is mentioned in this account of Amdahl winning the battle to make ACS 360-compatible (folklore is that it was shut down because IBM was concerned it would advance the state-of-the-art too fast and IBM would lose control of the market; Amdahl then leaves IBM).
https://people.computing.clemson.edu/~mark/acs_end.html

Then the decision was made to add virtual memory to all 370s; it was decided it would be too difficult to add to the 370/195 and all new 195 activity was shut down. (Note: the operating system for the 195 was MVT, and its shared-memory multiprocessor support on the 360/65MP was only getting 1.2-1.5 times the throughput of a single processor, so running a 195 as a simulated multiprocessor with two i-streams ... would only be more like .6 times fully rated throughput, i.e. 1.2 times a half-speed processor; all the hardware might be running at 100%, but the SMP overhead would limit productive throughput. trivia: the multiprocessor overhead continued up through MVS.)
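
Rough arithmetic for that estimate: each i-stream runs at about half the single-stream rate, and MVT's two-processor support only delivered 1.2-1.5 times one processor's work, so the productive total comes out around 0.6-0.75 of one fully-rated 195.

single_stream_fraction = 0.5            # branch-drained pipeline on typical code
for mp_factor in (1.2, 1.5):            # MVT 2-CPU throughput vs 1 CPU
    effective = mp_factor * single_stream_fraction
    print(f"MP factor {mp_factor}: ~{effective:.2f} of rated 195 throughput")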

Also after joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and the online sales&marketing support HONE systems were an early (& long time) customer. Then with the decision to add virtual memory to all 370s, there was also a decision to do VM370, and in the morph of CP67->VM370 a lot of things were simplified and/or dropped (including multiprocessor support). I then started adding stuff back into VM370, initially multiprocessor support for the HONE 168s so they could add a 2nd processor to each system (and managed to get twice single-processor throughput with some cache affinity hacks and other stuff).

In the mid-70s, after Future Systems implodes,
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

I got roped into helping with a 370 16-CPU multiprocessor design. It was going fine until somebody told the head of POK (high-end 370 processors) that it could be decades before POK's favorite son operating system (now "MVS") had ("effective") 16-CPU support (POK doesn't ship a 16-CPU system until after the turn of the century) ... and some of us were invited to never visit POK again.

SMP, tightly-coupled, shared memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 8100

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 8100
Date: 22 May, 2025
Blog: Facebook
In a prior life, my wife was asked by Evans to audit/review the 8100 ... shortly afterwards it was canceled.

Later, the communication group was fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm) and trying to block release of mainframe TCP/IP support. When that was overturned, they said that since they had corporate ownership of everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then added RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times increase in bytes moved per instruction executed).

I had gotten the HSDT project in the early 80s, T1 (1.5mbits/sec, full-duplex, aggregate 300kbytes/sec) and faster computer links, and lots of conflict with the communication group (in the 60s, IBM had the 2701 telecommunication controller that supported T1 links; however, the transition to SNA/VTAM in the 70s, and the associated issues, seemed to cap controllers at 56kbit/sec links). trivia: also EU T1, 2mbits/sec, full-duplex, aggregate 400kbytes/sec.

posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt
posts mentioning RFC1044
https://www.garlic.com/~lynn/subnetwork.html#1044

some other posts mentioning 8100
https://www.garlic.com/~lynn/2025b.html#4 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2024g.html#56 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2023c.html#60 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022g.html#62 IBM DPD
https://www.garlic.com/~lynn/2021f.html#89 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2015e.html#86 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2015.html#71 Remembrance of things past
https://www.garlic.com/~lynn/2014e.html#20 IBM 8150?
https://www.garlic.com/~lynn/2013l.html#32 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2013b.html#57 Dualcase vs monocase. Was: Article for the boss
https://www.garlic.com/~lynn/2012l.html#82 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2012h.html#66 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2011p.html#66 Migration off mainframe
https://www.garlic.com/~lynn/2011m.html#28 Supervisory Processors
https://www.garlic.com/~lynn/2011d.html#31 The first personal computer (PC)
https://www.garlic.com/~lynn/2011.html#0 I actually miss working at IBM
https://www.garlic.com/~lynn/2008k.html#22 CLIs and GUIs
https://www.garlic.com/~lynn/2007f.html#55 Is computer history taught now?
https://www.garlic.com/~lynn/2005q.html#46 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics
https://www.garlic.com/~lynn/2002q.html#53 MVS History
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4361 & DUMPRX

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4361 & DUMPRX
Date: 23 May, 2025
Blog: Facebook
After the 3033 was out the door, the processor engineers started on trout/3090 ... and also began adapting a 4331 as the 3092 service processor, running a highly modified VM370 Release 6 (with all the service screens done in CMS IOS3270). Then the 3092 was upgraded from a 4331 to a pair of 4361s.

trivia; early in the REX days (before the rename to REXX and release to customers), I wanted to demonstrate that REX wasn't just another pretty scripting language ... the demonstration was to redo a large assembler application (IPCS, dump analysis) in REX, working half time over three months, with ten times the function and ten times the performance (finished early, so I built a library of automated scripts that looked for common failure signatures). I thought it would replace the existing version (especially since it came to be in use by nearly every internal datacenter and PSR), but for some reason it wasn't released. Eventually I did get permission to give talks at user group meetings on how I did the implementation ... and within a few months similar implementations started appearing.

Then I got email from the 3092 group asking if they could include it with the release of the 3090.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
... all 3090 machines came with at least two FBA 3370s (for the 3092), even MVS systems, which never had FBA support.

Date: 23 December 1986, 10:38:21 EST
To: wheeler

Re: DUMPRX

Lynn, do you remember some notes or calls about putting DUMPRX into an IBM product? Well .....

From the last time I asked you for help you know I work in the 3090/3092 development/support group. We use DUMPRX exclusively for looking at testfloor and field problems (VM and CP dumps). What I pushed for back aways and what I am pushing for now is to include DUMPRX as part of our released code for the 3092 Processor Controller.

I think the only things I need are your approval and the source for RXDMPS.

I'm not sure if I want to go with or without XEDIT support since we do not have the new XEDIT.

In any case, we (3090/3092 development) would assume full responsibility for DUMPRX as we release it. Any changes/enhancements would be communicated back to you.

If you have any questions or concerns please give me a call. I'll be on vacation from 12/24 through 01/04.


... snip ... top of post, old email index

4361:
https://web.archive.org/web/20220121184235/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH4361.html

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx

some recent posts mentioning dumprx & 3092
https://www.garlic.com/~lynn/2024f.html#114 REXX
https://www.garlic.com/~lynn/2024e.html#21 360/50 and CP-40
https://www.garlic.com/~lynn/2024d.html#81 APL and REXX Programming Languages
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#16 REXX and DUMPRX
https://www.garlic.com/~lynn/2024b.html#1 Vintage REXX
https://www.garlic.com/~lynn/2024.html#60 IOS3270 Green Card and DUMPRX
https://www.garlic.com/~lynn/2023g.html#69 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2023g.html#54 REX, REXX, and DUMPRX
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023g.html#38 Computer "DUMPS"
https://www.garlic.com/~lynn/2023f.html#45 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#28 IBM Reference Cards
https://www.garlic.com/~lynn/2023e.html#32 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#74 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#29 IBM 3278
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#59 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#41 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#26 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#101 PSR, IOS3270, 3092, & DUMPRX
https://www.garlic.com/~lynn/2022h.html#34 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#7 3880 DASD Controller
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021j.html#84 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021j.html#24 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021h.html#55 even an old mainframer can do it
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 24 May, 2025
Blog: Facebook
In 1988 I got the HA/6000 project, originally for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I renamed it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I started doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR/UCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix) that had VAXCluster support in the same source base with UNIX. The IBM S/88 product administrator started taking us around to their customers and also had me do a section for the corporate continuous availability strategy document (it got pulled when both Rochester/AS400 and POK/mainframe complained that they couldn't meet the requirements). I was then also working with LLNL porting their LINCS/UNITREE filesystem to HA/CMP and porting the NCAR/UCAR spin-off Mesa Archival filesystem to HA/CMP.

Also in 1988, an IBM branch office asked if I could help LLNL (national lab) standardize some serial stuff they were working with ... which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec) ... some competition with LANL's standardization of the Cray 100mbyte/sec channel as HIPPI (and later a serial version).

About the same time (1988) that the branch office asked me to help LLNL with what becomes FCS (which IBM later uses as the base for FICON), they also asked if I could get involved in SCI
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
A decade later I did some consulting for Steve Chen (who designed the Cray XMP & YMP),
https://en.wikipedia.org/wiki/Steve_Chen_(computer_engineer)
https://en.wikipedia.org/wiki/Cray_X-MP
https://en.wikipedia.org/wiki/Cray_Y-MP
who by that time was Sequent CTO (before IBM bought Sequent and shut it down).
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
Sequent had used SCI for a (NUMA) 256-processor i486 machine
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA

Early Jan1992, in a cluster scale-up meeting with the Oracle CEO, IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told that we can't work on anything with more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
FCS &(/or) FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some posts mentioning ha/cmp, fcs, sci, sequent, chen
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#22 Vintage Cray
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022f.html#29 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018d.html#57 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018b.html#53 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2015g.html#74 100 boxes of computer books on the wall
https://www.garlic.com/~lynn/2014m.html#140 IBM Continues To Crumble
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
https://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
https://www.garlic.com/~lynn/2009.html#5 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out

--
virtualization experience starting Jan1968, online at home since Mar1970

360 Card Boot

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 Card Boot
Date: 25 May, 2025
Blog: Facebook
(360) basic programming system ... it was all card based. There was a BPS "loader" of about 100(?) cards ... behind which you placed TXT decks ... the output of compilers and assemblers. CSC came out to install (virtual machine) CP67 (precursor to VM370) at the univ (3rd installation after CSC itself and MIT Lincoln Labs), and I mostly got to play with it during my dedicated weekend 48hrs. At the time all the source was on OS/360, assembled there, and the assembled output TXT decks were placed in a card tray with the BPS loader deck at the front. I tended to use a felt pen to draw a diagonal stripe across the top of each individual TXT deck with the module name (making it easy to replace individual modules). The tray of cards would be placed in the 2540 card reader, dial in "00C", and hit the IPL button. The last module in the deck was CP67 CPINIT, which would get control from the BPS loader after all the cards were read, and write the storage image to disk. It was then possible to dial in the disk address, hit IPL, and CPINIT would get control and reverse the process, reading the storage image back into memory.
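
A schematic sketch of that two-stage IPL (pure illustration with made-up module names, no real card formats): the card-deck IPL has the BPS loader pull the TXT modules into storage and pass control to the last one (CPINIT), which saves the storage image to disk; a later IPL from the disk address just reads the saved image back.

memory, disk = {}, {}

def card_ipl(txt_modules):
    """BPS loader reads the TXT decks into storage; the last module gets control."""
    for name in txt_modules:
        memory[name] = f"<code for {name}>"
    cpinit_save_to_disk()                 # CPINIT is last in the deck

def cpinit_save_to_disk():
    disk["cp67.image"] = dict(memory)     # write loaded storage image to disk

def disk_ipl():
    memory.clear()
    memory.update(disk["cp67.image"])     # reverse: read the image back in

card_ipl(["MODULEA", "MODULEB", "CPINIT"])   # placeholder module names
disk_ipl()
print(sorted(memory))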

It was also possible to write a tray of cards to tape and do the initial IPL from the tape drive (rather than the card reader).

There were also simple 2-card, 3-card, and 7-card loaders. Some assembler programs had "PUNCH" statements at the front of the assembler source that would punch a 2-, 3-, or 7-card loader prefixing the assembled TXT output ... which could be placed in the card reader and loaded.

from long ago and far away
https://www.mail-archive.com/ibm-main@bama.ua.edu/msg43867.html

I always called it "Basic Programming System", but officially "Basic Programming Support" and "Basic Operating System"
http://www.bitsavers.org/pdf/ibm/360/bos_bps/C24-3420-0_BPS_BOS_Programming_Systems_Summary_Aug65.pdf

IBM 360
https://en.wikipedia.org/wiki/IBM_System/360
A little-known and little-used suite of 80-column punched-card utility programs known as Basic Programming Support (BPS) (jocularly: Barely Programming Support), a precursor of TOS, was available for smaller systems.
... snip ...

I had taken a 2-credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for the 360/30. The univ was getting a 360/67 for TSS/360 to replace the 709/1401, and temporarily, pending the 360/67 being available, the 1401 was replaced with a 360/30 (which had 1401 emulation and could run in 1401 mode, so my rewriting it in 360 assembler wasn't really needed). The univ shut down the datacenter on weekends and I would have the place dedicated (although 48hrs w/o sleep made Monday classes difficult). I was given a stack of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, and storage management, and after a few weeks had a 2000-card assembler program that took 30mins to assemble under OS/360 (a stand-alone monitor loaded with the BPS loader that could do card->tape and tape->printer/punch concurrently). I then used an assembler option that would assemble with OS/360 system services to run under OS/360 (that version took 60mins to assemble, each DCB macro taking 5-6mins). Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for OS/360 (TSS/360 never came to production use).

some past BPS Loader posts
https://www.garlic.com/~lynn/2025.html#79 360/370 IPL
https://www.garlic.com/~lynn/2024g.html#78 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024b.html#2 Can a compiler work without an Operating System?
https://www.garlic.com/~lynn/2023g.html#83 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2022.html#116 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#114 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#26 Is this group only about older computers?
https://www.garlic.com/~lynn/2022.html#25 CP67 and BPS Loader
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2017g.html#30 Programmers Who Use Spaces Paid More
https://www.garlic.com/~lynn/2007n.html#57 IBM System/360 DOS still going strong as Z/VSE
https://www.garlic.com/~lynn/2007f.html#1 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2006v.html#5 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2005f.html#16 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#62 PLX

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 25 May, 2025
Blog: Facebook
Note: AMEX and KKR were in competition for the private-equity, reverse-IPO (LBO) buyout of RJR, and KKR wins. Barbarians at the Gate:
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help.

Then IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bell" breakup decade earlier).
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup and uses some of the same techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

20yrs earlier, 1972, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

trivia: I was introduced to (USAF retired) John Boyd in the early 80s and sponsored his briefings at IBM. Then 89/90, the Commandant of the Marine Corps (approx. same number of people as IBM) leverages Boyd for a corps makeover (at a time when IBM was desperately in need of makeover). By the time Boyd passes in 1997, the USAF had pretty much disowned him and it is the Marines at Arlington (and his effects go to the Gray Research Center and Library in Quantico); the former commandant continued to sponsor Boyd conferences at Quantico MCU.

communication group was fighting off client/server and distributed computing (trying to preserve the dumb terminal paradigm) and attempted to block release of mainframe TCP/IP. When that was reversed, they changed strategy: since they had corporate strategic responsibility for everything that crossed datacenter walls ... it had to be released through them; what ships gets aggregate 44kbytes/sec using nearly a whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, I get sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).

Earlier in the 80s, I had gotten HSDT, T1 (US&EU T1: 1.5mbits/sec and 2mbits/sec; full-duplex; aggregate 300kbytes and 400kbytes/sec) and faster computer links, and lots of conflict with the communication group (60s, IBM had the 2701 telecommunication controller supporting T1; then with IBM's move to SNA/VTAM in the 70s and the associated issues, controllers appeared capped at 56kbit/sec links).

HSDT was working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone (precursor to modern internet).

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Programming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Programming
Date: 26 May, 2025
Blog: Facebook
I was undergraduate, but hired fulltime responsible for OS/360. CSC came out to the univ to install CP67 (3rd install after CSC itself and MIT Lincoln Labs; morphs into VM370). It had 1052&2741 terminal support and could automagically do terminal type identification (switching the port scanner type with the controller SAD CCW). The univ had some tty/ascii terminals ... and so I add ascii support (integrated with the automatic terminal type identification). I then want a single dial-in number ("hunt group") for all terminal types ... it didn't quite work: the IBM controller could change the port scanner terminal type, but had taken a short cut and hard wired the line baud rate.

This kicks off a univ clone controller project: build a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the additional ability to do line auto-baud. This is then upgraded with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin/Elmer) start selling it as an IBM clone controller (and four of us were written up as responsible for some part of the clone controller business)
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

Turn of century, visiting an east coast datacenter that handled most of the point-of-sale credit card terminal dialup calls east of the Mississippi ... which were handled by a descendant of our Interdata box.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM clone/plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Programming

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Programming
Date: 26 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#27 IBM 360 Programming

very early 80s, REX (before it was renamed REXX and released to customers): I wanted to show it wasn't just another pretty scripting language ... the objective was, working part-time over 3 months, to rewrite a large assembler program (IPCS, dump analysis) in REX with ten times the function and ten times the performance (coding tricks to have interpreted REX running faster than the assembler); finished early, so I also did an automated library that looked for common failure signatures. I thought it would then ship to customers, but for whatever reason, it didn't (even though it was in use by nearly every PSR and the internal datacenters). I eventually get permission to give talks on how it was implemented at customer user group meetings ... and within a few months, similar implementations started appearing.

Later 3092 (3090 service processor) group asked about including it as part of 3092 (almost 40yrs ago):

Date: 23 December 1986, 10:38:21 EST
To: wheeler
Re: DUMPRX

Lynn, do you remember some notes or calls about putting DUMPRX into an IBM product? Well .....

From the last time I asked you for help you know I work in the 3090/3092 development/support group. We use DUMPRX exclusively for looking at testfloor and field problems (VM and CP dumps). What I pushed for back aways and what I am pushing for now is to include DUMPRX as part of our released code for the 3092 Processor Controller.

I think the only things I need are your approval and the source for RXDMPS.

I'm not sure if I want to go with or without XEDIT support since we do not have the new XEDIT.

In any case, we (3090/3092 development) would assume full responsibility for DUMPRX as we release it. Any changes/enhancements would be communicated back to you.

If you have any questions or concerns please give me a call. I'll be on vacation from 12/24 through 01/04.


... snip ... top of post, old email index

somebody did CMS IOS3270 green card version (trivia: 3092 ... 3090 service processor, pair of 4361s running a modified vm370R6, and all the service screens were IOS3270) .... I've done a rough translation to HTML:
https://www.garlic.com/~lynn/gcard.html

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

360 Card Boot

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 Card Boot
Date: 26 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#25 360 Card Boot

After transferring from IBM Cambridge Science Center to IBM San Jose Research on the west coast, I got to wander around datacenters in silicon valley, including disk bldg14/engineering and bldg15/product-test across the street. They had been doing 7x24, pre-scheduled, stand-alone testing (ipling FRIEND/? from cards or tape). They mentioned that they had recently tried MVS, but it had 15min MTBF (requiring manual re-ipl) in that environment. I offer to rewrite I/O supervisor making it bullet-proof and never fail, enabling any amount of on-demand, concurrent testing greatly improving productivity (still could IPL "FRIEND" but from virtual cards and virtual card reader in virtual machine). I then write an internal-only research report on the I/O integrity work and happen to mention the MVS "15min MTBF", bringing down the wrath of the POK MVS organization on my head.

getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
enhanced production systems for internal datacenter posts, CP67L, CSC/VM, SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970

Is Parallel Programming Hard, And, If So, What Can You Do About It?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About  It?
Newsgroups: comp.arch
Date: Mon, 26 May 2025 15:36:22 -1000
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
Yup. And IIRC the IBM 3380 had a linear actuator with two heads per arm, one covering the outer cylinders, the other the inner cylinders. The tradeoff was shorter seeks, thus smaller seek time but higher cost due to more heads.

re:
https://www.garlic.com/~lynn/2025c.html#18 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025c.html#20 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025c.html#21 Is Parallel Programming Hard, And, If So, What Can You Do About It?

original 3380 had 20 track spacing between each data track; they then cut the spacing in half, doubling the number of tracks per platter (and doubling capacity), then cut it again for triple the number of tracks per platter (and triple capacity).

doing some analysis for moving data from 3350s to 3380s ... avg 3350 accesses per second divided by drive megabytes ... for avg accesses/sec/mbyte. 3380 mbytes increased significantly more than avg accesses/sec ... so all the 3350 data could be moved to a much smaller number of 3380s, but with much worse performance/throughput.

at customer user group get-togethers there were discussions about how to convince the bean counters that performance/throughput sensitive data needed much higher accesses/sec/mbyte. Eventually IBM offers a 3380 with 1/3rd the data track spacing of the original 3380, but with only the same number of tracks enabled as the original (a high performance/throughput drive, since the head only had to travel 1/3rd as far).
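
a back-of-envelope (python) sketch of the accesses/sec/mbyte arithmetic; the per-drive numbers below are purely illustrative assumptions, not actual 3350/3380 figures:

# rough access-density comparison when consolidating data onto bigger drives
# (all drive numbers here are illustrative assumptions, not real 3350/3380 specs)

def access_density(ios_per_sec, megabytes):
    # average accesses per second available per megabyte stored
    return ios_per_sec / megabytes

old_drive = {"ios_per_sec": 30.0, "mbytes": 300.0}    # hypothetical "3350-like" drive
new_drive = {"ios_per_sec": 40.0, "mbytes": 1200.0}   # hypothetical "3380-like" drive

print("old drive: %.3f accesses/sec per mbyte"
      % access_density(old_drive["ios_per_sec"], old_drive["mbytes"]))
print("new drive: %.3f accesses/sec per mbyte"
      % access_density(new_drive["ios_per_sec"], new_drive["mbytes"]))

# capacity grew 4x while accesses/sec grew only ~1.3x, so the same data packed
# onto fewer of the new drives gets far fewer accesses/sec per mbyte ... i.e.
# worse throughput unless the data is spread across more (partially used) drives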

other trivia: the 2301 fixed head drum was effectively the same as the 2303 fixed head drum, but transferred with four heads in parallel: 1/4 the number of tracks, tracks four times larger, and four times the transfer rate (still the same avg. rotational delay).

late 60s, the 2305 fixed head disk first appeared with the 360/85 and block mux channels. There were two models, one with a single head per data track and one with pairs of heads per data track (half the number of data tracks and half the total capacity, same total number of heads). The paired heads were offset 180 degrees and would transfer from both heads concurrently, for double the data rate with a quarter avg rotational delay (instead of half avg rotational delay).

2305
http://www.bitsavers.org/pdf/ibm/2835/GA26-1589-5_2835_2305_Reference_Oct83.pdf

getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 27 May, 2025
Blog: Facebook
1972, Learson tried (and failed) to block bureaucrats, careerists, and MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

FS was completely different from 370 and was going to completely replace it (during FS, internal politics was killing off 370 efforts; the limited new 370 products are credited with giving the 370 system clone makers their market foothold). One of the final nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 apps were redone for an FS machine made out of the fastest available hardware technology, they would have the throughput of a 370/145 (about a 30 times slowdown)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

trivia: I continued to work on 360&370 all during FS, periodically ridiculing what they were doing (which wasn't exactly career enhancing activity)

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. In 89/90, the Marine Corps Commandant leverages Boyd for makeover of the corps (at a time when IBM was desperately in need of a makeover). Then IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bell" breakup decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup and uses some of the same techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Early 80s, I had submitted an IBM speakup with supporting documentation that I was significantly underpaid. I got back a response from the head of HR saying that after a complete review of my entire employment history, I was being paid exactly what I was supposed to be. I then took the original and the reply and sent them back with a cover letter saying I was being asked to interview upcoming graduates for a new group that would work under my direction ... and they were getting starting salary offers 1/3rd more than I was making. I never got a written response, but within a few weeks, I got a 1/3rd raise (putting me on the same level as the offers to the new college graduates I was interviewing). Numerous people reminded me that "business ethics" in IBM was an oxymoron.

Late 80s, AMEX and KKR were in competition for private-equity, reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help.

About the same time IBM brings in the former president of AMEX as CEO, AMEX spins off its financial transaction outsourcing business as First Data (which had previously reported to the new IBM CEO), in what was the largest IPO up until that time (and included multiple mega-mainframe datacenters). trivia: turn of the century I was asked to look at performance at one of these datacenters: greater than 40 max-configured IBM mainframes (@$30M each), constantly rolling updates, all running the same 450K cobol statement application, the number needed to finish financial settlement in the overnight batch window (they had a large performance group responsible for care and feeding for a couple decades, but it had gotten somewhat myopically focused). Using some different performance analysis technology, I was able to find a 14% improvement. Interview for IBM System Magazine (although some history info slightly garbled)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history
Today he's the chief scientist for First Data Corp., and his Web site extends his influence into the current IBM* business and beyond.
... snip ...

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

misc. archive posts about getting 1/3rd raise
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017.html#78 IBM Disk Engineering
https://www.garlic.com/~lynn/2014h.html#81 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
https://www.garlic.com/~lynn/2011g.html#12 Clone Processors
https://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2010c.html#82 search engine history, was Happy DEC-10 Day

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 27 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall

Early 80s, I also get the HSDT project, T1 (1.5mbit/sec) and faster computer links (both terrestrial and satellite) and lots of conflict with the communication group (60s, the IBM 2701 telecommunication controller supported T1, but the 70s transition to SNA/VTAM and the associated issues seemed to cap controllers at 56kbit/sec links) ... and looking at more reasonable speeds for distributed operation. Also working with the NSF Director, was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to modern internet.

Mid-80s, the communication group is fighting off client/server and distributed computing (preserving the dumb terminal paradigm) and trying to block the mainframe release of TCP/IP support. When they lose, they claim that since they have corporate responsibility for everything that crosses datacenter walls, it has to be released through them. What ships gets aggregate 44kbytes/sec using nearly a whole 3090 processor. I then do changes for RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, get sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 28 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#32 IBM Downfall

One of the things that happened with FS and the internal politics killing 370 ... the lack of new 370 products during (& after) FS was giving the clone 370 system makers their market foothold ... and IBM sales&marketing had to fall back to "FUD" marketing.

Amdahl had won the battle to make ACS "360 compatible" ... then ACS/360 was killed (folklore is that IBM was concerned it would advance the state-of-the-art too fast and IBM would lose control of the market) ... and Amdahl leaves IBM to form his own computer company (before FS started). The following also lists some ACS/360 features that don't show up until more than 20yrs later with ES/9000.
https://people.computing.clemson.edu/~mark/acs_end.html

When FS imploded there was mad rush to get stuff back into 370 product pipelines, including kicking off quick&dirty 303x and 3081 efforts.

For the 303x channel director they took a 158 engine with just the integrated channel microcode (and no 370 microcode). A 3031 was two 158 engines (one with just the integrated channel microcode and the other with just the 370 microcode). A 3032 was a 168-3 reworked to use the 303x channel director for external channels. A 3033 started out as 168-3 logic remapped to 20% faster chips.

3081 was supposed to be multiprocessor only, using some warmed over FS technology. The initial 2-CPU 3081D had less aggregate MIPS than a single CPU Amdahl. They doubled the processor cache size for the 3081K, bringing it up to about the same aggregate MIPS as the single CPU Amdahl (however IBM MVS docs list MVS 2-CPU support as only getting 1.2-1.5 times the throughput of a single processor because of multiprocessor overhead ... so even with approx. the same aggregate MIPS as the single CPU Amdahl, an MVS 3081K would only have approx. .6-.75 times the throughput)

After FS implodes, I had also gotten roped into helping with a 16-CPU 370 and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before MVS had effective 16-CPU support (i.e. the MVS multiprocessor overhead playing the major role ... and POK doesn't ship a 16-CPU multiprocessor until after the turn of the century). The head of POK then asks some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP/IP, Ethernet, Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: TCP/IP, Ethernet, Token-Ring
Date: 28 May, 2025
Blog: Facebook
The new IBM Almaden Research bldg was heavily provisioned with CAT wiring, assuming 16mbit TR .... however they found that ten mbit Ethernet had higher aggregate LAN throughput over CAT wiring (than 16mbit T/R) and lower latency.

IBM AWD (workstation division) did their own cards for the PC/RT (PC/AT 16bit bus), including 4mbit T/R cards. Then for RS/6000 (with microchannel), they were told they couldn't do their own cards, but had to use standard PS2 microchannel cards. However the communication group had severely performance kneecapped the PS2 microchannel cards ... and the microchannel 16mbit T/R cards had lower card throughput than the PC/RT 4mbit T/R cards. Furthermore, $69 10mbit Ethernet cards had 8.5mbit/sec card throughput (way higher than the $800 16mbit T/R microchannel cards).

The IBM communication group was also fiercely fighting off release of IBM mainframe TCP/IP support; when they lost, they changed their strategy ... since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them ... what shipped got aggregate 44kbytes/sec throughput using nearly a whole 3090 processor. I then added RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).

RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 29 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#32 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#33 IBM Downfall

After graduating and joining IBM science center, one of my hobbies was enhanced operating systems for internal datacenters (and the online branch office sales&marketing support HONE system was one of my first and long time customers). I also got to attend customer user group meetings (like SHARE) and drop by customers. The director of one of the financial industry's largest IBM datacenters liked me to stop by and talk technology. Somewhere along the way the IBM branch manager managed to horribly offend the customer and in retaliation they were ordering an Amdahl system (a lone Amdahl in a vast sea of IBM "blue"). Up until that time Amdahl had been selling into the scientific/technical and university markets, but this one would be the 1st for a "true blue" commercial account. I was then asked to go onsite for 6-12 months (to help obfuscate the reason for the Amdahl order). I talked it over with the customer and decided to decline the IBM offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO and if I didn't do it, I could forget raises, promotions, and career. One of the first times that I was told in IBM, "business ethics" was an oxymoron.

trivia: some of the MIT CTSS/7094 people went to the 5th flr for Project MAC and did Multics and others went to the 4th flr for the IBM science center and did virtual machines, internal network, performance tools, invented GML in 1969, etc. There was some friendly rivalry between 4th & 5th flrs; at one point I was able to point out that I had more internal datacenters running my enhanced operating systems than the total number of Multics installations that ever existed.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

past posts mentioning branch manager horribly offending customer:
https://www.garlic.com/~lynn/2025b.html#42 IBM 70s & 80s
https://www.garlic.com/~lynn/2025.html#64 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#19 60s Computers
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#62 Amdahl and other trivia
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024f.html#23 Future System, Single-Level-Store, S/38
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#45 IBM 3081 TCM
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#21 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall
Date: 29 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#31 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#32 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#33 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#35 IBM Downfall

When we were doing HA/CMP ... we spent a lot of time with the TA to the FSD President (he was working 1st shift as TA, and 2nd shift he was ADA programming for the latest FAA project). In Jan92, we convinced FSD to go with HA/CMP for gov. supercomputers. A couple weeks later cluster scaleup was being transferred for announce as IBM supercomputer (for technical/scientific *only*) and we were told we weren't allowed to work on anything with more than four processors (we leave IBM a few months later).

We had been spending so much time in the Marriott on Democracy that some of them started to think we were Marriott employees.

recent HA/6000, HA/CMP, LANL, LLNL, NCAR, FSC, SCI, etc
https://www.garlic.com/~lynn/2025c.html#24 IBM AIX

... after leaving IBM, we did a project with Fox & Template
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514/
Two mid air collisions 1956 and 1960 make this FAA procurement special. The computer selected will be in the critical loop of making sure that there are no more mid-air collisions. Many in IBM want to not bid. A marketing manager with but 7 years in IBM and less than one year as a manager is the proposal manager. IBM is in midstep in coming up with the new line of computers - the 360. Chaos sucks into the fray many executives- especially the next chairman, and also the IBM president. A fire house in Poughkeepsie N Y is home to the technical and marketing team for 60 very cold and long days. Finance and legal get into the fray after that.

Joe Fox had a 44 year career in the computer business- and was a vice president in charge of 5000 people for 7 years in the federal division of IBM. He then spent 21 years as founder and chairman of a software corporation. He started the 3 person company in the Washington D. C. area. He took it public as Template Software in 1995, and sold it and retired in 1999.

... snip ...

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recently post mentioning HA/CMP & FSD email
https://www.garlic.com/~lynn/2025b.html#92 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#72 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2024f.html#67 IBM "THINK"
https://www.garlic.com/~lynn/2014d.html#52 [CM] Ten recollections about the early WWW and Internet

other posts mentioning work with Fox & company after leaving IBM
https://www.garlic.com/~lynn/2023d.html#82 Taligent and Pink
https://www.garlic.com/~lynn/2021e.html#13 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#42 IBM Rusty Bucket
https://www.garlic.com/~lynn/2019b.html#88 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2019b.html#73 The Brawl in IBM 1964

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe
Date: 30 May, 2025
Blog: Facebook
FS was completely different from 370 and was going to completely replace it (internal politics was killing off 370 efforts during FS, and the lack of new 370s is credited with giving the 370 clone makers their market foothold ... along with forcing IBM sales&marketing to fall back on FUD). One of the last nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 applications were redone for an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (a factor of 30 times slowdown). When FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

They took a 158 engine with just the integrated channel microcode for the 303x channel director. A 3031 was two 158 engines, one with just the integrated channel microcode and the 2nd with just the 370 microcode. A 3032 was a 168-3 reworked to use the 303x channel director. A 3033 started out with 168-3 logic remapped to 20% faster chips.

The 3081 was supposed to be multiprocessor-only, starting with the 3081D that had lower aggregate MIPS than the Amdahl single processor. They quickly double the processor cache sizes for the 3081K, bringing aggregate MIPS up to about the same as the Amdahl single processor. However MVS docs said 2-CPU support only had 1.2-1.5 times the throughput of a single CPU (aka even with the 3081K's aggregate MIPS about the same as the Amdahl single processor, MVS 3081K throughput was only about .6-.75 times, because of MVS's multiprocessor overhead). Then they lash two 3081Ks together for a 4-CPU system to try and get something with more MVS throughput than a single processor Amdahl machine (MVS multiprocessor overhead increasing as the number of CPUs increased).

Also when FS imploded, I got roped into helping with a 16-CPU 370 multiprocessor (and we con the 3033 processor engineers into working on it in their spare time, a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was really great until somebody tells the head of POK that it could be decades before MVS had (effective) 16-CPU support (POK doesn't ship a 16-CPU machine until almost 25yrs later, after the turn of the century). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions.

1988, Nick Donofrio approves HA/6000, originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had VAXcluster support in the same source base with Unix; I do a distributed lock manager with VAXCluster semantics to ease the ports; IBM Toronto was still a long way from having a simple relational for PS2). Then the S/88 product administrator starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/as400 and POK/mainframe complain they can't meet the objectives). Work is also underway to port the LLNL supercomputer filesystem (LINCS) to HA/CMP and working with NCAR spinoff (Mesa Archive) to platform on HA/CMP.

Early Jan92, we have HA/CMP meeting with Oracle CEO, IBM/AWD executive Hester tells Ellison that we would have 16-system clusters mid92 and 128-system clusters ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we weren't allowed to work with anything that had more than four systems (we leave IBM a few months later).

FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

Is Parallel Programming Hard, And, If So, What Can You Do About

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About
 It?
Newsgroups: comp.arch
Date: Sat, 31 May 2025 12:53:59 -1000
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
I was too flip in my answer, so here is, I think, a better one. The "it" to which we are referring here is caching of write data.

So let's look at a possible scenario. Let's say the heads are at cylinder 100. A write comes in for data that is at cylinder 300. Without write caching, the disk will move the heads to cylinder 300. Now lets say the next request is a read for data at cylinder 150. If the write had been cached, the disk can handle the read with only a 50 cylinder move, then the write with a 150 cylinder move for a total of 200 cylinders. Without write caching, the first move is 200 cylinders for the write, followed by 150 back for the read for a total of 350. Thus the read data, which is presumably more time critical, is delayed.

Overall, write caching improves performance, but if you don't want it, then you can essentially not use it, either by forcing the writes to go to the media, or not using command queuing at all.


Early 70s, as mainstream IBM was converting everything to virtual memory, I got into a battle. Somebody came up with a (LRU?) page replacement algorithm that would replace non-changed pages (which didn't require a write before the replacement read) before changed pages (which needed a write before the needed page could be fetched). Nearly a decade later, they finally realized that they were replacing highly used, highly shared, RO/non-changed pages ... before replacing private, single-task, changed data pages (before they realized it was possible to keep a pool of immediately available pages whose changes had already been pre-written).
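
a minimal (python) sketch of that last idea, keeping a pool of frames whose changes have already been pre-written so replacement never has to wait on a write; the names and structure here are mine, purely illustrative, not the actual CP or MVS code:

from collections import deque

class PagePool:
    # toy replacement sketch: keep a pool of frames whose contents are already
    # on backing store, so a page fault never has to wait for a write-out
    def __init__(self, target_clean=8):
        self.clean = deque()          # frames safe to steal immediately
        self.dirty = deque()          # frames whose changes must be written first
        self.target_clean = target_clean

    def note_frame(self, frame, changed):
        (self.dirty if changed else self.clean).append(frame)

    def background_writer(self, write_fn):
        # pre-write changed frames (oldest first) until the clean pool is topped up
        while len(self.clean) < self.target_clean and self.dirty:
            frame = self.dirty.popleft()
            write_fn(frame)              # write the changed page out ...
            self.clean.append(frame)     # ... then it can be stolen with no delay

    def steal_frame(self):
        # replacement takes a pre-written frame if one is available, rather than
        # always preferring "non-changed" (possibly hot, shared) pages
        if self.clean:
            return self.clean.popleft()
        if self.dirty:
            return self.dirty.popleft()  # caller would have to wait for the write
        return None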

ATM financial networks started using the IBM (airline) TPF operating system ... light-weight, but with a simple ordered arm queuing algorithm for reads/updates/writes.

Then a little later in the 70s, an IBM tech in LA at a financial institution redid it, looking at ATM use history and anticipating account requests (that would result in reads/updates/writes ordering that hadn't happened yet). Under heavy load it improved aggregate throughput (and under lighter load it made little difference) ... sort of delaying a 300cyl seek anticipating the likelihood of a transaction (yet to happen) that would involve a shorter seek.

since sometime in the 80s, (at least) RDBMS have been using "write caching" (write behind) where the sequential log/journal of "committed" transactions is made and actual RDBMS writes happen in the background. Failure recovery requires rereading the log and forcing pending writes for committed transactions.
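
a toy (python) write-behind sketch, assuming a simple append-only commit log and deferred page writes; not any particular RDBMS's design:

class WriteBehindStore:
    # toy commit-log / write-behind sketch (not any particular RDBMS's design):
    # commit only appends to a sequential log; the actual page writes happen
    # later, and recovery replays the log to redo committed-but-unwritten changes
    def __init__(self):
        self.log = []         # sequential journal of committed changes
        self.pages = {}       # the "on disk" pages
        self.pending = {}     # committed changes not yet written to the pages

    def commit(self, txn_id, changes):
        self.log.append((txn_id, dict(changes)))  # forced out at commit time
        self.pending.update(changes)              # page writes deferred (write-behind)

    def background_flush(self):
        self.pages.update(self.pending)           # lazy writes in the background
        self.pending.clear()

    def recover(self):
        # after a crash: reread the log and force the writes for committed work
        self.pages, self.pending = {}, {}
        for _txn_id, changes in self.log:
            self.pages.update(changes)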

Originally in a cluster environment, a transaction lock request from a different system would force any (RDBMS) pending writes before the requested lock was granted to the other system. I did a hack where queued/pending writes could be appended when passing the transaction lock to a different system ... in the era of mbyte (shared, multi-system, cluster) disks and gbyte interconnect.
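
a (python) sketch of that lock-transfer hack, assuming a much-simplified distributed lock manager message; names and structure are mine, not the actual HA/CMP code ("disk" and "send" stand in for the shared-disk and interconnect paths):

def grant_lock_forcing_writes(lock, pending_writes, disk, send):
    # original scheme: force pending writes to shared disk, then pass the lock
    for block_no, data in pending_writes.items():
        disk.write(block_no, data)
    pending_writes.clear()
    send({"lock": lock, "blocks": {}})

def grant_lock_piggyback(lock, pending_writes, send):
    # hack: ship the still-dirty blocks along with the lock over the (much
    # faster) interconnect; the receiving system now holds the current copies
    # and takes over responsibility for eventually writing them
    send({"lock": lock, "blocks": dict(pending_writes)})
    pending_writes.clear()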

HA/CMP & RDBMS posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
original sql/rdbms System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3090

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3090
Date: 31 May, 2025
Blog: Facebook
1980, STL (since renamed SVL) was bursting at the seams and moving 300 people (and 3270s) from the IMS (DBMS) group to an offsite bldg. They tried "remote 3270s" and found the human factors completely unacceptable. They con me into doing channel-extender support so they can place channel-attached 3270 controllers in the offsite bldg, with no perceptible difference in human factors. A side-effect was that those mainframe systems' throughput increased 10-15%. STL was configuring 3270 controllers across all channels shared with DASD controllers. The channel-extender hardware had significantly lower channel busy (for the same amount of 3270 activity) than directly channel-attached 3270 controllers, resulting in increased system (DASD) throughput. There was then some discussion about placing all 3270 controllers on channel-extenders (even for controllers inside STL). Then there is an attempt by the hardware vendor to get IBM to release my support; however there is a group in POK trying to get some serial stuff released and they were concerned that if my stuff was in the field, it would make it harder to release the POK stuff (and the request is vetoed).

There was a later, similar problem with 3090 and 3880 controllers. While 3880 controllers supported "data streaming" channels capable of 3mbyte/sec transfer, they had replaced the 3830 horizontal microprocessor with an inexpensive, slow vertical microprocessor ... so for everything else (besides doubling transfer rate from 1.5mbyte to 3mbyte), the 3880 had much higher channel busy. The 3090 had originally configured the number of channels to meet target system throughput (assuming the 3880 was the same as the 3830 but supporting 3mbyte transfer). When they found out how much worse the 3880 channel busy actually was, they were forced to significantly increase the number of channels to meet target throughput. The increase in the number of channels required an extra TCM, and the 3090 people semi-facetiously joked they would bill the 3880 organization for the increase in 3090 manufacturing costs. Eventually sales/marketing respins the large increase in the number of 3090 channels as the 3090 being a wonderful I/O machine.

1988, IBM branch office asks me if I can help LLNL (national lab) get some serial stuff they are working with standardized, which quickly becomes fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec transfer, full-duplex, aggregate 200mbytes/sec. Then POK finally gets their stuff released (when it is already obsolete) with ES/9000 as ESCON (initially 10mbytes/sec increasing to 17mbytes/sec). Then POK becomes involved in "FCS" and define a heavy-weight protocol that significantly reduces the throughput, which eventually is released as FICON.

The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON (about 20K/FICON). About the same time a FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS with higher throughput than 104 FICON). Note IBM docs recommended that SAPs (system assist processors that do actual I/O) be kept to 70% CPU (which would be more like 1.5M IOPS). Also no CKD DASD have been made for decades, all being simulated on industry standard fixed-block devices.

refs:
https://en.wikipedia.org/wiki/ESCON
https://en.wikipedia.org/wiki/Fibre_Channel
... above says 100mbyte/direction in 1997, but we had some in 1992
https://en.wikipedia.org/wiki/FICON

Note IBM channel protocol was half-duplex with an enormous amount of end-to-end protocol chatter with control units (per each CCW in a channel program, and the associated busy latency). Native FICON effectively streams/downloads much of the channel program (equivalent) to the controller (equivalent), eliminating the enormous end-to-end protocol chatter and the half-duplex busy latency.
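
a crude (python) latency model contrasting per-CCW end-to-end handshakes with streaming the channel-program equivalent in one exchange; the CCW count and round-trip time are placeholder assumptions, not measured values:

def per_ccw_chatter(ccws, round_trip_ms):
    return ccws * round_trip_ms       # an end-to-end exchange for every CCW

def streamed_program(round_trip_ms):
    return round_trip_ms              # whole program shipped in one exchange

ccws = 6        # hypothetical CCWs per channel program
rtt_ms = 0.2    # hypothetical end-to-end round trip

print("per-CCW chatter: %.1fms per I/O" % per_ccw_chatter(ccws, rtt_ms))
print("streamed:        %.1fms per I/O" % streamed_program(rtt_ms))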

Note also max. configured z196 benchmarked at 50BIPS, while there were E5-2600 server blades benchmarking at 500BIPS (ten times z196 and a rack of server blades might have 32-64 such blades, potentially 640 times max. configured z196).

getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM & DEC DBMS

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM & DEC DBMS
Date: 01 Jun, 2025
Blog: Facebook
I had lots of time on an early engineering E5/4341 and in Jan1979, an IBM branch office found out about it and cons me into doing a benchmark for a national lab looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). The E5/4341 clock was reduced 20% compared to the production models that would ship to customers. Even so, a small cluster of five 4341s had higher throughput than the IBM high-end 3033 mainframe, at much lower cost and with less floor space and environmentals. Then in the 80s, 4300s were selling into the same mid-range market as DEC VAX for small unit number orders. The big difference was large corporations ordering hundreds of VM/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Spring 1979, some USAFDC (in the Pentagon) wanted to come by to talk to me about 20 VM/4341 systems; the visit kept being delayed, and by the time they came by (six months later), it had grown from 20 to 210. Old archived post with a decade of DEC VAX numbers, sliced&diced by model, year, US/non-US:
https://www.garlic.com/~lynn/2002f.html#0

Late 70s, besides getting to play disk engineer in bldgs 14&15, I was also working with Jim Gray and Vera Watson on the original SQL/Relational (System/R) and we manage to do tech transfer to Endicott ("under the radar" while the company was preoccupied with the next great DBMS, "EAGLE"). Then "EAGLE" implodes and a request is made for how fast System/R can be ported to MVS (which is eventually announced as DB2, originally for decision support only). Jim Gray departs SJR in fall of 1980 for Tandem and tries to palm off stuff on me (BofA has an early System/R pilot and is looking at getting 60 VM/4341s).

In 1988, Nick Donofrio approves HA/6000, originally for NYTimes to move their newspaper system (ATEX) off DEC VAXcluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had VAXcluster support in the same source base with Unix; I do a distributed lock manager with VAXCluster semantics to ease the ports; IBM Toronto was still a long way from having a simple relational for PS2). Then the S/88 product administrator starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/as400 and POK/mainframe complain they can't meet the objectives). Work is also underway to port the LLNL supercomputer filesystem (LINCS) to HA/CMP and working with NCAR spinoff (Mesa Archive) to platform on HA/CMP.

Early Jan1992, cluster scale-up meeting with Oracle CEO; IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. I was also working with IBM FSD and convinced them to go with cluster scale-up for government supercomputer bids ... and they inform the IBM Supercomputer group. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we couldn't work on anything with more than four systems (we leave IBM a few months later).

IBM concerned that RS/6000 will eat high-end mainframe (industry benchmark, number of program iterations compared to MIPS reference platform). 1993:
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS


The executive we had reported to for HA/CMP goes over to head up Somerset/AIM (apple, ibm, motorola). RIOS/Power was multi-chip w/o bus/cache consistency (no SMP). AIM would do a single chip with the motorola 88k bus/cache (supporting SMP configurations). 1999:
• single chip Power/PC 440: 1,000MIPS.

original sql/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA & TCP/IP

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: SNA & TCP/IP
Date: 01 Jun, 2025
Blog: Facebook
there was a claim that some customers had so many 3270 coax runs that they were starting to exceed bldg load limits ... supposedly motivating token-ring and CAT wiring.

1980, STL (since renamed SVL) was bursting at the seams and moving 300 people (and 3270s) from the IMS (DBMS) group to an offsite bldg. They tried "remote 3270s" and found the human factors completely unacceptable. They con me into doing channel-extender support so they can place channel-attached 3270 controllers in the offsite bldg, with no perceptible difference in human factors. A side-effect was that throughput for those mainframe systems increased 10-15%. STL was configuring 3270 controllers across all channels shared with DASD controllers. The channel-extender hardware had significantly lower channel busy (for the same amount of 3270 activity) than directly channel-attached 3270 controllers, resulting in increased system (DASD) throughput. There was then some discussion about placing all 3270 controllers on channel-extenders (even for controllers inside STL).

IBM workstation division for PC/RT workstation did their own 4mbit T/R card ... but for RS/6000 microchannel, they were told they couldn't do their own cards, but had to use standard PS2 cards. The communication group was fiercely fighting off client/server and distributed computing (trying to protect their dumb terminal paradigm) and had severely performance kneecapped PS2 microchannel cards. The 16mbit T/R microchannel card had lower throughput than the PC/RT 4mbit T/R card. Then for the new Almaden bldg, they found that 10mbit Ethernet over cat wiring had higher aggregate LAN throughput than 16mbit T/R over same wiring. Also $69 10mbit ethernet card had significantly higher throughput than the $800 16mbit T/R microchannel cards.

Early in the 80s, I had gotten HSDT, T1 (US&EU T1; 1.5mbits/sec and 2mbits/sec; full-duplex; aggregate 300kbytes and 400kbytes) and faster computer links (both terrestrial and satellite) and lots of conflict with communication group (60s, IBM had 2701 telecommunication controller supporting T1, then with IBM's move to SNA/VTAM in the 70s and the associated issues, appeared to cap controllers at 56kbit/sec links).

HSDT was working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to modern internet.

The communication group was also fighting off release of mainframe TCP/IP support. When they lost, they changed their tactic: since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then add support for RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).

Univ. study in the late 80s, found that VTAM LU6.2 pathlength was something like 160k instructions while a typical (BSD 4.3 tahoe/reno) UNIX TCP pathlength was 5k instructions.

Later in the 90s, the communication group subcontracted TCP/IP support (implemented directly in VTAM) to silicon valley contractor. What he initially demo'ed had TCP running much faster than LU6.2. He was then told that everybody "knows" that a "proper" TCP implementation is much slower than LU6.2 ... and they would only be paying for a "proper" implementation.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

SNA & TCP/IP

From: Lynn Wheeler <lynn@garlic.com>
Subject: SNA & TCP/IP
Date: 02 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#41 SNA & TCP/IP

channel-extender vendor?

NSC ... initially they implied that the 710 was full duplex ... but their software never really used the 710 in that manner ... and I started getting lots of collisions; needed to use 720 satellite adapters to simulate full-duplex until they came out with the 715. I had done the support in 1980, and then the serial group in POK (that eventually ships ESCON more than a decade later) gets the request to have my software released vetoed (concerned that if it was in the market it would make it harder to justify releasing their stuff).

funny ... for certain types of transmission errors, I would simulate a CSW channel check. When IBM wouldn't release my support, NSC reverse engineers it and duplicates it. This comes up 6-7 yrs later when the 3090 product administrator tracks me down. 3090 channels had been designed to have 3-5 channel checks over a year period, aggregate for all 3090s. There was an industry service that collected EREP data from mainframe customers (both IBM and IBM clone) and published summarized data; 3090s showed a total of 20 channel checks aggregate for a year period ... and they attributed the additional 15 reported channel checks to customers running the NSC channel extender support. The 3090 product administrator asks if I could do something about it. I do a little research and determine that for channel-extender purposes, simulating IFCC (interface control check) would result in the same actions (as channel check) and get NSC to change their software.

For related info, see RFC1044
https://www.rfc-editor.org/info/rfc1044
for support I added to mainframe TCP/IP.

Note, I could get both T1 and T3 from NSC routers (as well as a dozen+ Ethernet ports, FDDI, bunch of other interfaces) ... which is what I was using in working with the NSF director for the NSF Supercomputer datacenter support (as well as other gov. agencies). For RS/6000 SLA (serial link adapter), the ESCON spec was tweaked for about 10% faster transmission and made full-duplex, getting about 40+mbytes/sec aggregate (rather than ESCON's 10mbytes/sec, later improved to 17mbytes/sec) ... however it was only useful for talking to other RS/6000s until we con NSC into adding an SLA feature to their router. This was then upgraded to fibre-channel ("FCS") in 1992.

what got me into it was when I transferred from science center to research in the 70s, I got to wander around silicon valley datacenters (both IBM and non-IBM), including disk bldg14/engineering and bldg15/product-test across the street. They were running 7x24, prescheduled, stand-alone testing and had mentioned they recently had tried MVS but it had 15min MTBF (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. STL was one of the places running my enhanced production operating system, so that contributed to requesting me to also do channel-extender support. I do an (internal only) research report mentioning the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head.

Many standard VM systems were claiming quarter to third second system response (while MVS systems rarely saw even one second response). I was clocking .11sec system response for my systems. In the early 80s, there were studies showing .25sec response improved productivity. 3277/3272 had .086sec hardware response ... so system response needed to be no more than .164sec for the user to see .25sec. This was when the 3278 appeared, where lots of the electronics moved back to the 3274 controller, greatly increasing coax protocol chatter, and hardware response went to .3-.5secs (depending on amount of data) ... making it impossible to achieve .25sec (unless the mainframe had a time machine to send the transmission into the past). Letters to the 3278 product administrator got the response that the 3278 wasn't for interactive computing but data entry.
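
To make the response arithmetic explicit (a minimal sketch in Python; the .25sec target and the .086sec/.3sec hardware response figures are from above, the rest is just illustration):

# what the user sees is (terminal hardware response + system response),
# so the system budget for a perceived quarter second is the difference
def max_system_response(target=0.25, hw_response=0.086):
    return target - hw_response

print(round(max_system_response(hw_response=0.086), 3))   # 3272/3277: 0.164sec budget
print(round(max_system_response(hw_response=0.300), 3))   # 3274/3278: -0.05sec, i.e. impossible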

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Terminal Line Speed

From: Lynn Wheeler <lynn@garlic.com>
Subject: Terminal Line Speed
Date: 05 Jun, 2025
Blog: Facebook
CSC comes out to install (virtual machine) CP67 (precursor to VM370) at the univ, 3rd install after CSC itself and MIT Lincoln Labs. I mostly get to play with it during my dedicated 48hr weekend time. I initially rewrite lots of the code for running OS/360 in a virtual machine; the test stream ran 322secs on bare hardware and 856secs in virtual machine (CP67 CPU 534secs); after a couple months I had reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, and paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO and optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a peak of 270).
https://www.wikiwand.com/en/articles/History_of_CP/CMS
https://www.leeandmelindavarian.com/Melinda#VMHist
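
Rough arithmetic behind the drum optimization above (a sketch with illustrative figures; only the 70-80 and 270 transfers/sec numbers come from the post, the rotation rate and pages-per-revolution are assumptions):

# FIFO single-transfer channel programs service roughly one queued page per
# drum revolution; chaining queued requests in rotational order can service
# several 4k pages in the same revolution
revs_per_sec = 60            # assumed drum rotation rate (illustrative)
fifo_pages_per_rev = 1.3     # assumed: roughly one transfer per revolution
chained_pages_per_rev = 4.5  # assumed: several queued pages per revolution
print(revs_per_sec * fifo_pages_per_rev)     # ~78/sec, the 70-80 range above
print(revs_per_sec * chained_pages_per_rev)  # 270/sec, the quoted peak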

CP67 comes with 1052&2741 terminal support (134baud) and automagic terminal identification (using the controller SAD CCW to switch port scanner type). The univ. has some ASCII TTY (33 & 35), so I add ASCII terminal support (110baud), integrated with the automagic terminal type identification. I then want to have a single dial-up number ("hunt group")
https://en.wikipedia.org/wiki/Line_hunting
for all terminals; it didn't quite work since IBM had taken a short-cut and hardwired port line speed in the controller. So the univ. starts a clone controller project: build a channel interface card for an Interdata/3 programmed to emulate the IBM controller (with the addition that it provided line autobaud support). Then upgraded with an Interdata/4 for the channel interface and a cluster of Interdata/3s for port interfaces. Interdata (later Perkin-Elmer) sells it as a plug compatible controller (and four of us are written up as responsible for some part of the clone controller business)
https://en.wikipedia.org/wiki/Interdata

I get dialup 2741 at home with acoustic modem, later upgraded with 300baud CDI miniterm.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

CP67 at NPG

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP67 at NPG
Date: 06 Jun, 2025
Blog: Facebook
Some of the MIT CTSS/7094 people went to the 5th flr and did MULTICS; others went to the IBM Cambridge Science Center on the 4th flr and did virtual machines (they wanted a 360/50 to modify with virtual memory, but all the extra 360/50s were going to FAA ATC, so they had to settle for a 360/40 and do CP/40; when the 360/67, standard with virtual memory, becomes available, it morphs into CP/67, precursor to VM/370), the science center wide area network (which morphs into the internal corporate network, larger than arpanet/internet from just about the beginning until sometime mid/late 80s; also used for the corporate sponsored univ BITNET), lots of online and performance apps, and invented GML in 1969 (morphs into ISO SGML a decade later and, after another decade, HTML at CERN). Other recent comment:
https://www.garlic.com/~lynn/2025c.html#43 Terminal Line Speed

At the great 1Jan83 cutover to internetworking, arpanet had approx 100 IMPs and 255 hosts while the internal corporate network was about to pass 1000 (world-wide).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network post
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

recent posts mentioning CP67 at NPG
https://www.garlic.com/~lynn/2025b.html#70 Kernel Histories
https://www.garlic.com/~lynn/2025b.html#28 IBM WatchPad
https://www.garlic.com/~lynn/2025.html#40 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#5 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#72 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#12 4th Generation Programming Language
https://www.garlic.com/~lynn/2024f.html#42 IBM/PC
https://www.garlic.com/~lynn/2024f.html#40 IBM Virtual Memory Global LRU
https://www.garlic.com/~lynn/2024e.html#144 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#36 Implicit Versus Explicit "Run" Command
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#112 43 years ago, Microsoft bought 86-DOS and started its journey to dominate the PC market
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38
https://www.garlic.com/~lynn/2024c.html#111 Anyone here (on news.eternal-september.org)?
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#25 CTSS/7094, Multics, Unix, CP/67
https://www.garlic.com/~lynn/2024.html#4 IBM/PC History
https://www.garlic.com/~lynn/2023g.html#75 The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#35 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn
https://www.garlic.com/~lynn/2023f.html#106 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#103 Microcode Development and Writing to Floppies
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#47 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023d.html#62 Online Before The Cloud
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023.html#99 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022d.html#90 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#44 CMS Personal Computing Precursor
https://www.garlic.com/~lynn/2022c.html#100 IBM Bookmaster, GML, SGML, HTML
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#29 Unix work-alike
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2021k.html#22 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021i.html#101 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021h.html#81 Why the IBM PC Used an Intel 8088
https://www.garlic.com/~lynn/2021f.html#76 IBM OS/2
https://www.garlic.com/~lynn/2021c.html#90 Silicon Valley
https://www.garlic.com/~lynn/2021c.html#89 Silicon Valley
https://www.garlic.com/~lynn/2021c.html#57 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021.html#69 OS/2

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Germany and 370/125

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Germany and 370/125
Date: 07 Jun, 2025
Blog: Facebook
After graduating and joining IBM Science Center, one of my hobbies was enhanced production operating systems for internal datacenters (one of the first and long-time customers was the online sales&marketing support HONE systems). Boeblingen paid for me to come over a couple times for my first overseas business trips (put up in a small commercial business hotel in a residential district, calls back home were nearly $100; also downtown was the first time I ever heard commercial vehicles with warning signals when backing up). Later, after Future System imploded, the US 115/125 support group cons me into doing design and software/microcode for a 125 5-CPU multiprocessor (VAMPS). At the same time Endicott had also con'ed me into helping with 138/148 ECPS microcode. Then Endicott escalates the issue that a 5-CPU 125 would have higher throughput and better price/performance than a 148, and I had to argue both sides before Endicott manages to get VAMPS killed.

trivia: 115/125 had a 9-position memory bus for microprocessors. The 115 had all the microprocessors the same, just with different microcode loads (for both controllers and 370). The 125 was the same except the microprocessor for 370 was 50% faster than the other microprocessors. With nine positions, there could be four positions for controllers and up to five for 125 370 CPUs.

posts mentioning multiprocessor 125
https://www.garlic.com/~lynn/submain.html#bounce
SMP, tightly coupled, shared memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning both VAMPS and ecps
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2008k.html#22 CLIs and GUIs
https://www.garlic.com/~lynn/2008d.html#54 Throwaway cores
https://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2007g.html#44 1960s: IBM mgmt mistrust of SLT for ICs?
https://www.garlic.com/~lynn/2007f.html#14 more shared segment archeology
https://www.garlic.com/~lynn/2006w.html#10 long ago and far away, vm370 from early/mid 70s
https://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?
https://www.garlic.com/~lynn/2006c.html#47 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006b.html#40 another blast from the past ... VAMPS
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2005k.html#50 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#47 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#42 wheeler scheduler and hpo
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004o.html#33 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#3 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004e.html#52 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#16 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#4 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002i.html#80 HONE
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Germany and 370/125

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Germany and 370/125
Date: 07 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#45 IBM Germany and 370/125

early 70s, HONE also asked that I do the 1st non-US installs in Paris and Tokyo ... I thought it was an accomplishment to figure out how to get from Paris through the internal networks in order to read home email.

re: long-winded topic drift, long-distance voice; In the 70s, IBM formed SBS (satellite business systems) with 1/3rd for COMSAT and 1/3rd for Aetna, for computer data. However the communication group's SNA/VTAM couldn't handle round-trip satellite propagation delays and SBS was forced to fall back to the voice business (also degraded by round-trip delays), so was still losing money. Satellite T3 C-band 10M dishes were at all the major US IBM plant sites (and corporate required transmissions be encrypted ... a "data aggregator" was developed for T3 DES encryption, jokingly referred to as the "data aggravator"). Also the joke was that so many IBMers transferred to SBS that it had the same number of management levels for 2000 employees as the whole IBM corporation.

Early 80s, I get the HSDT project, T1 and faster computer links (both terrestrial and satellite) as well as conflict with the communication group (in the 60s, IBM had the 2701 that supported T1, but with the transition to SNA/VTAM in the 70s and associated issues, all the controllers appeared capped at 56kbits, aka VTAM could just barely handle terrestrial 56kbit round-trip delay). Besides circuits on C-band satellites, HSDT was getting custom built TDMA earth stations (being built on the other side of the Pacific) for Ku-band SBS4 (initially, 4.5M dishes in Los Gatos and Yorktown and a 7M dish in Austin). HSDT from the 1st developed dynamic adaptive rate-based pacing (as alternative to the fixed window-based pacing that crippled VTAM) for increasingly faster transmissions and things like round-trip double hop satellite propagation delay (up/down between west & east coast and then up/down between east coast and Europe); a rough sketch of the window-vs-rate arithmetic follows the lists below. Mid-80s, the Friday before leaving for the other side of the Pacific to check on the custom hardware, got a Raleigh email announcing a new "networking" forum with the following definitions:


low-speed: 9.6kbits/sec
medium-speed: 19.2kbits/sec
high-speed: 56kbits/sec
very high-speed: 1.5mbits/sec

Monday morning on conference room wall on the other side of Pacific:

low-speed: <20mbits/sec
medium-speed: 100mbits/sec
high-speed: 200mbits-300mbits/sec
very high-speed: >600mbits/sec
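
The window-vs-rate pacing issue comes down to a simple bound: with a fixed window, at most a window's worth of data can be in flight per round trip (a sketch; the packet and window sizes here are purely illustrative, not actual VTAM or HSDT parameters):

# throughput ceiling for fixed window-based pacing
def window_throughput_bytes_per_sec(window_pkts, pkt_bytes, rtt_secs):
    # at most window_pkts packets can be outstanding per round trip
    return window_pkts * pkt_bytes / rtt_secs

print(window_throughput_bytes_per_sec(7, 2048, 0.06))  # ~239k bytes/sec, short terrestrial RTT
print(window_throughput_bytes_per_sec(7, 2048, 0.5))   # ~28.7k bytes/sec, single satellite hop
print(window_throughput_bytes_per_sec(7, 2048, 1.0))   # ~14.3k bytes/sec, double hop
# rate-based pacing instead releases packets on a timed schedule matched to the
# bottleneck, so throughput doesn't collapse as round-trip delay grows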

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3270 Terminals

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3270 Terminals
Date: 09 Jun, 2025
Blog: Facebook
Trivia: one of my hobbies after joining IBM was highly optimized operating systems for internal datacenters. In the early 80s, there were increasing studies showing quarter second response improved productivity. 3272/3277 had .086sec hardware response. Then 3274/3278 was introduced with lots of 3278 hardware moved back to the 3274 controller, cutting 3278 manufacturing costs and significantly driving up coax protocol chatter ... increasing hardware response to .3sec to .5sec depending on amount of data (impossible to achieve quarter second). Letters to the 3278 product administrator complaining about interactive computing got a response that the 3278 wasn't intended for interactive computing but data entry (sort of an electronic keypunch). 3272/3277 required .164sec system response (for a human to see quarter second). Fortunately I had numerous IBM systems in silicon valley with (90th percentile) .11sec system response. I don't believe any TSO users ever noticed 3278 issues, since they rarely ever saw even one sec system response. Also real 3270s were half-duplex; if you happened to hit a key at the same time the screen was being updated, it would lock the keyboard ... and you would need to stop and reset. YKT starts building FIFO boxes for 3277: unplug the keyboard from the head, plug in the FIFO box, and plug the keyboard into the FIFO (so it holds keystrokes when it senses the screen being updated). Later, IBM/PC 3277 emulation cards had 4-5 times the upload/download throughput of 3278 emulation cards.

After transferring from CSC to SJR in San Jose in the 70s, I got to wander around datacenters (IBM & non-IBM) in silicon valley, including disk bldg14/engineering and bldg15/product-test, across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned they had recently tried MVS, but it had 15min MTBF (in that environment) requiring re-ipl. I offer to rewrite I/O supervisor to be bullet-proof and never fail, allowing any amount of on-demand concurrent testing, greatly improving productivity. I then author an internal research report on the work and happen to mention the MVS 15min MTBF (bringing the wrath of the MVS organization down on my head).

Then 1980, IBM STL (since renamed SVL) was bursting at the seams and was moving 300 people (& 3270s) from the IMS (DBMS) group to an offsite bldg. They had tried remote 3270, but found the human factors totally unacceptable. I get called into doing channel-extender support allowing channel-attached 3270 controllers at the offsite bldg, resulting in no discernible difference in human factors between offsite and inside STL. A side-effect was that the channel-extender mainframes started getting 10-15% improved system throughput. STL was spreading 3270 controllers across all the channels with 3830/3330 DASD ... and it turns out channel-extenders had significantly lower channel busy (for the same amount of 3270 traffic) than direct channel-attached 3270 controllers (moving them to channel-extenders reduced the channel busy interference with DASD). There was then consideration of using channel-extenders for all 3270s (even for those inside STL). Then the hardware vendor tried to get IBM to release my support, but a group in POK gets it vetoed (they were playing with some serial stuff and were afraid that if it was in the market, it would be harder to justify releasing their stuff).

The hardware vendor then reverse engineers my support and releases it. 1986, the 3090 product administrator tracks me down. 3090 channels were designed to have an aggregate of 3-5 "channel errors" per year, aggregate across all machines. There was an industry service that collected mainframe EREP data (both IBM and clone) and published summaries. It showed 20 "channel errors" and the extra appeared to be customers running the channel extender support, and the 3090 product administrator wanted me to get the vendor to change. For some channel extender transmission errors, I reflected CSW channel control check. I do a little checking and determine that reflecting "IFCC" (interface control check) instead would result in essentially the same recovery/retry, and get the vendor to change their support.

1988, IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre-channel standard ("FCS", including some stuff I did in 1980, initially 1gbit, full-duplex, aggregate 200mbyte/sec.). Then the POK serial stuff is finally released with ES/9000 as ESCON (when it was already obsolete, initially 10mbyte/sec, later improved to 17mbyte/sec).

CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender support posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

recent 3270 posts mentioning .086sec and .3-.5sec response
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024f.html#12 3270 Terminals
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024.html#92 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#42 Los Gatos Lab, Calma, 3277GA
https://www.garlic.com/~lynn/2023f.html#78 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2023d.html#27 IBM 3278
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023.html#93 IBM 4341
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2022h.html#96 IBM 3270
https://www.garlic.com/~lynn/2022e.html#18 3270 Trivia
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Technology

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Technology
Date: 09 Jun, 2025
Blog: Facebook
"wild ducks" era was from watsons, Learson tried (& failed) to block bureaucrats, careerists and MBAs from destroying Watson culture/legacy. Post discussing some of Learson's failure and eradication of "wild ducks"
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

After transferring to SJR, I worked with Jim Gray and Vera Watson on the original SQL/Relational, System/R ... including tech transfer to Endicott for SQL/DS (under the radar while the company was preoccupied with the next new great DBMS, "EAGLE"). Then when "EAGLE" implodes, there was a request for how fast System/R could be ported to MVS, eventually released as DB2 (originally for decision support only).

1988, got the HA/6000 project (not System/R, SQL/DS and/or DB2 source, which was mainframe only), originally for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres, that have VAXCluster support in the same portable source base for UNIX; I do distributed lock manager support with VAXCluster semantics to ease the ports, along with several performance enhancements). Then the (mainframe) DB2 group started complaining that if I was allowed to continue, it would be at least five years ahead of them. Also the IBM S/88 Product Administrator started taking us around to their customers and has me write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain they couldn't meet the requirements).
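
For reference, a minimal sketch of a VAXCluster-style distributed-lock-manager mode compatibility check (the six lock modes and this matrix are the well-known VMS DLM semantics; the code is just an illustration, not the actual HA/CMP lock manager):

# VAXCluster-style DLM lock modes and compatibility (illustration only)
MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]   # null, concurrent read/write,
                                               # protected read/write, exclusive
COMPAT = {                                     # standard VMS DLM compatibility matrix
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def can_grant(requested, held_modes):
    # a new request is granted only if compatible with every mode already held
    return all(requested in COMPAT[held] for held in held_modes)

print(can_grant("PR", ["CR", "PR"]))   # True  - shared readers coexist
print(can_grant("EX", ["CR"]))         # False - exclusive waits for everything but NL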

Early Jan92, have a commercial cluster scale-up meeting with Oracle CEO where IBM/AWD executive Hester tells Ellison that we would have 16-system cluster by mid92 and 128-system cluster by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we couldn't work on anything with more than four processors (we leave IBM a few months later).

1993 industry benchmark (uses number of program iterations compared to industry MIPS reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS


aka, 16-system: 2,016MIPS, 128-system: 16,128MIPS

trivia: also in 1988, IBM Branch Office asks if I can help LLNL (national lab) standardize some serial stuff they are working with, which quickly becomes fibre-channel standard ("FCS", including some stuff I did in 1980, initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). Then POK gets their stuff released with ES/9000 as ESCON (when it is already obsolete, initial 10mbytes/sec, later improved to 17mbytes/sec). Some other background in this post from today
https://www.garlic.com/~lynn/2025c.html#47 IBM 3270 Terminals

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
FCS (&/or FICON) posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM And Amdahl Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM And Amdahl Mainframe
Date: 09 Jun, 2025
Blog: Facebook
After graduating and joining IBM Science Center, I still could attend user group meetings and visit customers. The director of one of the largest financial industry datacenters used to like me to stop by and talk technology. Then the local IBM branch manager horribly offended the customer and in retaliation, they ordered an Amdahl system (this was back in the days when Amdahl was selling into univ and science/technical, but this would be the 1st true blue commercial account) ... a single Amdahl machine in a vast sea of blue. I was asked to go onsite for 6-12 months (to help obfuscate why the customer was ordering Amdahl). I talk it over with the customer and then decline IBM's offer. I was then told that the branch manager was a good sailing buddy of IBM's CEO ... and if I didn't do it, I could forget raises, promotions and a career.

other trivia: ... Amdahl had won the battle to make ACS 360-compatible; then when ACS/360 is killed, Amdahl leaves IBM
https://people.computing.clemson.edu/~mark/acs_end.html
https://en.wikipedia.org/wiki/Amdahl_Corporation
This was before Future System started; FS was totally different than 360 and was going to completely replace it (internal politics was killing off 370 efforts, and the lack of new 370s during FS is credited with giving the clone 370 makers their market foothold).
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters. With the decision to add virtual memory to all 370s, there was also a decision to morph CP67 into VM370, and a lot of stuff was dropped and/or greatly simplified. I then start moving lots of stuff into VM370, starting with release2. Then FS imploded and there was a mad rush to get stuff back into the 370 product pipelines (including the quick&dirty parallel effort for 3033&3081). With the rise of clone 370 system makers during FS, there was also a decision to start charging for operating system software (in the 23jun1969 unbundling announcement, they had been able to make the case to not charge for kernel software), starting with new, incremental add-ons; and a bunch of my internal stuff was selected as the early guinea pig (needing to spend time with planners, lawyers, and business people about practices for operating system charging).

It included dynamic adaptive resource management (dispatching, scheduling, etc ... from my 60s undergraduate days), including some code at boot that attempted to qualify the performance of the system (replacing a hardcoded table of known machines). The corporate performance expert said they wouldn't sign off on release because it didn't contain manual tuning knobs (and "everybody" knew that the enormous array of MVS manual knobs were state-of-the-art). So I put in some manual knobs with documentation, formulas, and code ... as "SRM" (DMK-prefix, a joke on MVS SRM). What wasn't included was an explicit explanation of the joke: from Operations Research, the SRM knobs had fewer degrees of freedom than the dynamic adaptive code (DMK-prefix "STP", from 70s-era TV-adverts), so the dynamic adaptive code could compensate for any manual knob settings.

Same time, Endicott had con'ed me into helping with the ECPS microcode assist for the 370 138&148. Later I was allowed to present how ECPS was done at user group meetings, including the monthly BAYBUNCH meetings hosted by SLAC. The Amdahl people would corner me after meetings (usually Oasis, but sometimes Dutch Goose), pumping me for more information. They explained that they had done MACROCODE (originally as countermeasure to the plethora of new 3033 microcode hacks) and were using it to develop HYPERVISOR (multiple domain). Archived post with original ECPS analysis
https://www.garlic.com/~lynn/94.html#21

other trivia: besides helping Endicott with ECPS and doing my own performance add-on, I was asked to help with a new 16-CPU 370 multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-CPU support (IBM pubs at the time had MVS 2-CPU throughput only 1.2-1.5 times that of a single CPU; later the IBM 3081K 2-CPU had about the same aggregate MIPS as Amdahl's single processor, but MVS on the 3081K got only .6-.75 times the throughput). Some of us are invited to never visit POK again, and the 3033 processor engineers are directed heads down, no distractions (POK doesn't ship a 16-CPU machine until after the turn of the century).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM RS/6000

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM RS/6000
Date: 11 Jun, 2025
Blog: Facebook
IBM/AWD did their own cards for the PC/RT workstation, including the 4mbit token-ring card. Then for RS/6000 (w/microchannel), they were told they couldn't do their own cards, but had to use standard PS2 cards (that had been heavily performance kneecapped by the communication group). They found that the PS2 microchannel 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card (joke that a PC/RT 4mbit T/R server would have higher throughput than an RS/6000 16mbit T/R server). HA/CMP was desperately in need of high-performance workstation cards. RS/6000 had done SLA (serial link adapter), a modified ESCON spec with a faster data rate, full-duplex, aggregate 40+mbyte/sec (much faster than the original ESCON 10mbyte/sec, but non-interoperable with anything other than other RS/6000s). We talk a high-performance router vendor (with T1 & T3 telco interfaces, a 16 10mbit Ethernet LAN option, FDDI, mainframe and supercomputer channel interfaces, etc) into adding an SLA option (which they could leverage in the high-performance workstation market).

At Interop '88 in Santa Clara, I have a PC/RT with megapel display in a (non-IBM) booth. It was at an immediate right-angle to the Sun booth where Case was demo'ing SNMP, and I con him into installing it on my PC/RT.

1988, Nick Donofrio approves the HA/6000 project, originally for the NYTimes to move their newspaper system (ATEX) off of VAXCluster to RS/6000. We start running the project out of the IBM Los Gatos lab (on the west coast) and subcontract a lot of the work to CLaM Associates (in Cambridge). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, that had VAXcluster support in the same source base with Unix; I do distributed lock manager with VAXCluster semantics to ease the ports; IBM Toronto was still a long way from having simple relational for PS2). Then the S/88 product administrator starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/as400 and POK/mainframe complain they can't meet the objectives). Work is also underway to port the LLNL supercomputer filesystem (LINCS) to HA/CMP, and we're working with an NCAR spinoff (Mesa Archive) to platform on HA/CMP.

Early Jan1992, in a cluster scale-up meeting with the Oracle CEO, IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. I was also working with IBM FSD and convinced them to go with cluster scale-up for government supercomputer bids ... and they inform the IBM Supercomputer group. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we couldn't work on anything with more than four systems (we leave IBM a few months later).

IBM was concerned that RS/6000 would eat the high-end mainframe (industry benchmark: number of program iterations compared to the benchmark reference platform). 1993:
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS


The executive HA/CMP reported to goes over to head up Somerset/AIM (Apple, IBM, Motorola). RIOS/Power was multi-chip w/o bus/cache consistency (no SMP). AIM would do a single chip with the Motorola 88k bus/cache (supporting SMP configurations). 1999, single chip Power/PC 440: 1,000MIPS.

Also 1988, IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff they are working with, that quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980, initially 1gbit, full-duplex, aggregate 200mbyte/sec).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
Interop '88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Basic Beliefs

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Basic Beliefs
Date: 12 Jun, 2025
Blog: Facebook
Late 80s, AMEX and KKR were in competition for private-equity, reverse-IPO(/LBO) buyout of RJR and KKR wins. Barbarians at the Gate
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
KKR runs into trouble and hires away president of AMEX to help.

1972, Learson tried (and failed) to block bureaucrats, careerists, and MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

FS was completely different from 370 and was going to completely replace it (during FS, internal politics was killing off 370 efforts; the limited new 370s are credited with giving the 370 system clone makers their market foothold). One of the final nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 apps were redone for an FS machine made out of the fastest available hardware technology, they would have the throughput of a 370/145 (about a 30 times slowdown)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. In 89/90, the Marine Corps Commandant leverages Boyd for makeover of the corps (at a time when IBM was desperately in need of a makeover).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Then IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company (a take-off on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup and uses some of the same techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Private Equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
IBM CEO (former AMEX president)
https://www.garlic.com/~lynn/submisc.html#gerstner
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Workstation

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Workstation
Date: 12 Jun, 2025
Blog: Facebook
Old archived (comp.arch/alt.folklore.computers) post with A74 announce (7437) email
https://www.garlic.com/~lynn/2000e.html#email880622

I provided the software changes to run with 4k storage protect (instead of the 370's 2k).

Other archived posts with email mentioning A74
https://www.garlic.com/~lynn/2011c.html#email850425b
https://www.garlic.com/~lynn/2015d.html#email850503
https://www.garlic.com/~lynn/2015d.html#email850520
https://www.garlic.com/~lynn/2015d.html#email850520b
https://www.garlic.com/~lynn/2015d.html#email850520c
https://www.garlic.com/~lynn/2015d.html#email850520d
https://www.garlic.com/~lynn/2007.html#email850712

... trivia: I did a lot more of the work for xt/370 (later available with PC/AT) ... including showing that lots of CMS had gotten quite bloated and page thrashed in 384k of 370 memory. I was blamed for a 6-month slip in the ship schedule when they had to upgrade memory to 512k. They did ship with my page-mapped CMS filesystem (that I had done originally for CP67/CMS), which significantly outperformed the standard CMS filesystem (and that I had been including in my production systems for internal datacenters).

CMS paged mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

IBM AWD did their own cards for PC/RT, including the 4mbit token-ring card. Then for RS/6000 microchannel, they were told they couldn't do their own cards but had to use the PS2 standard cards (that had been heavily performance kneecapped by the communication group). The PS2 microchannel 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card. It was a significant trial for AWD to support real workstation performance (with the performance-kneecapped PS2 microchannel cards).

1988, IBM branch office asks me if I can help LLNL (national lab) get some serial stuff they are working with standardized, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec transfer, full-duplex, aggregate 200mbytes/sec. Then POK finally gets their stuff released (when it is already obsolete) with ES/9000 as ESCON (initially 10mbytes/sec, increasing to 17mbytes/sec). Then POK becomes involved in "FCS" and defines a heavy-weight protocol that significantly reduces the throughput, which is eventually released as FICON.

The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time, an FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS with higher throughput than 104 FICON). Note IBM docs recommended that SAPs (system assist processors that do the actual I/O) be kept to 70% CPU (which would be more like 1.5M IOPS). Also no CKD DASD has been made for decades, all being simulated on industry standard fixed-block devices.
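
The per-adapter arithmetic (a small sketch using only the figures quoted above):

# per-adapter arithmetic from the figures quoted above
z196_peak_iops = 2_000_000          # z196 "Peak I/O" benchmark
ficon_count = 104
print(z196_peak_iops / ficon_count) # ~19,231 IOPS per FICON, i.e. the ~20K quoted

fcs_iops = 1_000_000                # "over a million IOPS" claimed per E5-2600 FCS
print(2 * fcs_iops)                 # 2,000,000 -- two such FCS reach the whole 104-FICON peak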

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3270 Terminals

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3270 Terminals
Date: 13 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#47 IBM 3270 Terminals

Trivia: one of my hobbies after joining IBM was highly optimized operating systems for internal datacenters. In the early 80s, there were increasing studies showing quarter second response improved productivity. 3272/3277 had .086sec hardware response. Then 3274/3278 was introduced with lots of 3278 hardware moved back to the 3274 controller, cutting 3278 manufacturing costs and significantly driving up coax protocol chatter ... increasing hardware response to .3sec to .5sec depending on amount of data (impossible to achieve quarter second). Letters to the 3278 product administrator complaining about interactive computing got a response that the 3278 wasn't intended for interactive computing but data entry (sort of an electronic keypunch). 3272/3277 required .164sec system response (for a human to see quarter second). Fortunately I had numerous IBM systems in silicon valley with (90th percentile) .11sec system response. I don't believe any TSO users ever noticed 3278 issues, since they rarely ever saw even one sec system response. Also real 3270s were half-duplex; if you happened to hit a key at the same time the screen was being updated, it would lock the keyboard ... and you would need to stop and reset. YKT starts building FIFO boxes for 3277: unplug the keyboard from the head, plug in the FIFO box, and plug the keyboard into the FIFO (so it holds keystrokes when it senses the screen being updated). Later, IBM/PC 3277 emulation cards had 4-5 times the upload/download throughput of 3278 emulation cards.

After transferring from CSC to SJR in San Jose in the 70s, I got to wander around datacenters (IBM & non-IBM) in silicon valley, including disk bldg14/engineering and bldg15/product-test, across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned they had recently tried MVS, but it had 15min MTBF (in that environment) requiring re-ipl. I offer to rewrite I/O supervisor to be bullet-proof and never fail, allowing any amount of on-demand concurrent testing, greatly improving productivity. I then author an internal research report on the work and happen to mention the MVS 15min MTBF (bringing the wrath of the MVS organization down on my head).

Then 1980, IBM STL (since renamed SVL) was bursting at the seams and was moving 300 people (& 3270s) from the IMS (DBMS) group to an offsite bldg. They had tried remote 3270, but found the human factors totally unacceptable. I get called into doing channel-extender support allowing channel-attached 3270 controllers at the offsite bldg, resulting in no discernible difference in human factors between offsite and inside STL. A side-effect was that the channel-extender mainframes started getting 10-15% improved system throughput. STL was spreading 3270 controllers across all the channels with 3830/3330 DASD ... and it turns out channel-extenders had significantly lower channel busy (for the same amount of 3270 traffic) than direct channel-attached 3270 controllers (moving them to channel-extenders reduced the channel busy interference with DASD). There was then consideration of using channel-extenders for all 3270s (even for those inside STL). Then the hardware vendor tried to get IBM to release my support, but a group in POK gets it vetoed (they were playing with some serial stuff and were afraid that if it was in the market, it would be harder to justify releasing their stuff).

The hardware vendor then reverse engineers my support and releases it. 1986, the 3090 product administrator tracks me down. 3090 channels were designed to have an aggregate of 3-5 "channel errors" per year, aggregate across all machines. There was an industry service that collected mainframe EREP data (both IBM and clone) and published summaries. It showed 20 "channel errors" and the extra appeared to be customers running the channel extender support, and the 3090 product administrator wanted me to get the vendor to change. For some channel extender transmission errors, I reflected CSW channel control check. I do a little checking and determine that reflecting "IFCC" (interface control check) instead would result in essentially the same recovery/retry, and get the vendor to change their support.

1988, IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre-channel standard ("FCS", including some stuff I did in 1980, initially 1gbit, full-duplex, aggregate 200mbyte/sec.). Then the POK serial stuff is finally released with ES/9000 as ESCON (when it was already obsolete, initially 10mbyte/sec, later improved to 17mbyte/sec).

IBM AWD had done their own adapter cards for PC/RT, including the 4mbit token-ring card. Then for RS/6000 microchannel, they were told that they couldn't do their own (microchannel) cards, but had to use all PS2 cards (that had been severely performance kneecapped by the communication group). An example was the microchannel PS2 16mbit token-ring card having lower card throughput than the PC/RT 4mbit token-ring card (performance kneecapping RS/6000 for anything other than pure CPU intensive work). The new Almaden Research bldg had been heavily provisioned with wiring assuming 16mbit token-ring, but found 10mbit Ethernet (over the same wiring) had higher aggregate throughput and lower latency. Also $69 Ethernet cards had much higher throughput (8.5mbits/sec, same wiring) than $800 microchannel 16mbit token-ring cards (which were even worse than the PC/RT 4mbit token-ring cards). Folklore is that IBM wiring was invented to replace 3270 coax, because for large installations, the weight of 3270 coax was starting to exceed bldg load limits.

Also found that for the difference in cost of cards (300*$69=$20,700 vs 300*$800=$240,000), we could get several high-performance TCP/IP routers (each with a channel interface, 16 Ethernet LAN interfaces, T1 & T3 telco options, FDDI LAN options, more) ... and could spread 300 RS/6000s across 6*16=96 Ethernet LANs ... approx three RS/6000s per dedicated 10mbit Ethernet (each 10mbit LAN with higher aggregate throughput than 16mbit T/R). Now mid-80s, the communication group was also trying to prevent release of mainframe TCP/IP support. When they lost, they changed tactics and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).

internal IBM CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

other recent posts mentioning interactive response
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025c.html#17 IBM System/R
https://www.garlic.com/~lynn/2025c.html#42 SNA & TCP/IP

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3270 Terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3270 Terminals
Date: 14 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#47 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals

IBM FE had a diagnostic/service bootstrapping process starting with scoping individual components. After the implosion of Future System, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel (3033 started off remapping 168-3 logic to 20% faster chips and 3081 was a huge amount of warmed-over FS technology):
http://www.jfsowa.com/computer/memo125.htm
The 370 emulator minus the FS microcode was eventually sold in 1980 as the IBM 3081. The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectacularly superior in terms of performance compared to the amount of circuitry.
... snip ...

... packaging the huge amount of circuits for the 3081 (something like enough to build 16 168-3 machines) into a reasonable volume led to TCMs ... which no longer supported the bootstrap scoping service process ... so they had to add a service processor that was connected to a large number of probes built into the TCMs. The initial two-processor 3081D started off significantly slower than the Amdahl single processor. They then doubled the processor cache sizes, bringing the 3081K up to about the same aggregate MIPS as the Amdahl single processor (although MVS doc. said the 2-CPU support only got about 1.2-1.5 times the throughput of 1-CPU, i.e. a 3081K MVS had only about .6-.75 the throughput of Amdahl single processor MVS, even with the same aggregate MIPS).

Then for 3090, they moved the service processor to a 4331 running a modified version of VM370R6; before shipping to customers, the "3092" (service processor) becomes a pair of 4361s ... with all the service screens implemented with CMS IOS3270.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some recent service processor posts
https://www.garlic.com/~lynn/2025c.html#23 IBM 4361 & DUMPRX
https://www.garlic.com/~lynn/2024e.html#131 3081 (/3090) TCMs and Service Processor
https://www.garlic.com/~lynn/2024e.html#21 360/50 and CP-40
https://www.garlic.com/~lynn/2023f.html#28 IBM Reference Cards
https://www.garlic.com/~lynn/2023e.html#103 3090 & 3092
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#59 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022g.html#7 3880 DASD Controller
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022c.html#107 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2022.html#20 Service Processor
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021j.html#24 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History

--
virtualization experience starting Jan1968, online at home since Mar1970

Univ, 360/67, OS/360, Boeing, Boyd

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Univ, 360/67, OS/360, Boeing, Boyd
Date: 14 Jun, 2025
Blog: Facebook
I took a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for the 360/30. The univ. was getting a 360/67 for tss/360 to replace the 709/1401 (and the 360/30 was temporary until the 360/67 arrived). Within a year of taking the intro class, the 360/67 arrives and I'm hired fulltime responsible for OS/360 (tss/360 never came to production). Then before I graduate, I'm hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was possibly the largest in the world. After graduation, I join IBM (instead of staying with the Boeing CFO).

In early 80s, I'm introduced to John Boyd and sponsored his briefings at IBM. He had lots of stories, one was being very vocal that the electronics across the trail wouldn't work and possibly as punishment, he is put in command of spook base (about same time I'm at Boeing). He would say spook base had the largest air conditioned bldg in that part of the world ... some refs:
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
Boyd biography has "spook base" as a $2.5B "windfall" for IBM (ten times Renton). Other Boyd ref:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

Univ. shutdown the datacenter on weekends and I would have the place dedicated from 8am sat to 8am mon (made monday classes hard). I was given a lot of hardware and software manuals and got to design my own stand-alone monitor, device drivers, interrupt handlers, error retry/recovery, storage management, etc. (aka MPIO unit record front-end for 709, read->tape and tape->printer/punch). After a few weeks I had a 2000 card program. I then used an assembler option to assemble either the stand-alone monitor or a version to run under os/360 (the stand-alone monitor took 30mins to assemble, the OS/360 version took an hour, each DCB macro taking over 5mins).

Sometimes when I came in sat. morning, production had finished early and everything had been shutdown .... then sporadically the 360/30 wouldn't power up. Scouring documents and trial&error, I learned to put all controllers in CE-mode, power up the 360/30, individually power up the controllers, and then take the controllers out of CE-mode.

Student Fortran took under a second to run on 709 but initially over a minute w/OS360 (on 360/67 as 360/65). I then install HASP, cutting the time in half. I then start redoing STAGE2 SYSGEN to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Applying lots of PTFs would destroy the careful PDS member ordering and start driving student fortran towards 20secs (and I would have to redo a mini-STAGE2 SYSGEN to re-establish the careful ordering). Student Fortran never got better than 709 until I install UofWaterloo WATFOR.

Then CSC comes out to install CP67 (precursor to VM370, 3rd install after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my dedicated weekend time. I initially start rewriting lots of CP67 to improve running OS/360 in virtual machine. Test stream ran 322secs bare machine and initially 856secs in virtual machine (CP67 CPU 534secs); after a couple months I have reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a peak of 270).
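A minimal sketch (Python, illustrative only, not the actual CP67 code) of the ordered-seek idea: instead of servicing requests in FIFO arrival order, keep the queue sorted for a single sweep of the arm from its current cylinder, cutting total seek distance:

from dataclasses import dataclass

@dataclass
class PageRequest:
    cyl: int      # target cylinder
    slot: int     # rotational position (used for the transfers/revolution ordering)

def ordered_seek_insert(queue, req, arm_cyl):
    """Keep the queue ordered for one sweep: cylinders at/above the arm in
    ascending order first, then the remaining (lower) cylinders ascending."""
    queue.append(req)
    queue.sort(key=lambda r: (r.cyl < arm_cyl, r.cyl))
    return queue

# FIFO would service arrival order; ordered seek services one sweep instead
q = []
for c in (40, 5, 60, 12, 55):
    ordered_seek_insert(q, PageRequest(cyl=c, slot=0), arm_cyl=30)
print([r.cyl for r in q])   # -> [40, 55, 60, 5, 12]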

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HASP/ASP, JES2/JES3, and/or NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

mentioning 709, 1401, MPIO, 360/30, OS/360, CP/67, Boeing CFO, Renton
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OS/2

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/2
Date: 15 Jun, 2025
Blog: Facebook
60s, CSC came out to univ. to install CP67/CMS (3rd install after CSC itself and MIT Lincoln Labs) ... and I mostly played with it during my weekend dedicated time ... rewriting pathlengths for running OS/360 in virtual machines. Benchmark was originally 322secs on the bare machine, initially 858secs in virtual machine (CP67 CPU 534secs) ... after 6months got CP67 CPU down to 113secs for the benchmark.

Then I start on I/O optimization ... ordered seek for movable arm DASD and chained page requests maximizing transfers/rotation when no arm motion is required (got the fixed-head 2301 drum from about 70 4k transfers/sec to a peak of 270/sec). I then rewrite paging, page replacement, dynamic adaptive resource management, scheduling, dispatching, etc. Later I found that the original CP67 scheduling/dispatching code looked similar to MIT/CTSS and early Unix.

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). Later when I graduate, I join the science center (instead of staying with Boeing CFO).

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters; the branch office sales&marketing support HONE systems were the first (and long-time) customer, initially CP67/CMS, then VM370/CMS. CSC had also done a port of APL\360 to CMS as CMS\APL ... restructuring from swapped 16kbyte (sometimes 32kbyte) workspaces to large demand-paged mbyte workspaces and adding APIs for system services (like file I/O), enabling lots of real world applications.

Some 20yrs later (after, as undergraduate, rewriting lots of CP67, then in 74/75 adding a lot back into VM370), got email from the OS2 group, originally asking the Endicott VM group, then forwarded to the IBM Kingston VM group, then forwarded to me ... asking why VM does it so much better.

Date: 11/24/87 17:35:50
To: wheeler
FROM: ????
Dept ???, Bldg ??? Phone: ????, TieLine ????
SUBJECT: VM priority boost

got your name thru ??? ??? who works with me on OS/2. I'm looking for information on the (highly recommended) VM technique of boosting priority based on the amount of interaction a given user is bringing to the system. I'm being told that our OS/2 algorithm is inferior to VM's. Can you help me find out what it is, or refer me to someone else who may know?? Thanks for your help.

Regards, ???? (????? at BCRVMPC1)

... snip ... top of post, old email index

Date: Fri, 4 Dec 87 15:58:10
From: wheeler
Subject: os2 dispatching

fyi ... somebody in boca sent a message to endicott asking about how to do dispatch/scheduling (i.e. how does vm handle it) because os2 has several deficiencies that need fixing. VM Endicott forwarded it to VM IBM Kingston and VM Kingston forwarded it to me. I still haven't seen a description of OS2 yet so don't yet know about how to go about solving any problems.

... snip ... top of post, old email index

Date: Fri, 4 Dec 87 15:53:29
From: wheeler
To: somebody at bcrvmpc1 (i.e. internal vm network node in boca)
Subject: os2 dispatching

I've sent you a couple things that I wrote recently that relate to the subject of scheduling, dispatching, system management, etc. If you are interested in more detailed description of the VM stuff, I can send you some descriptions of things that I've done to enhance/fix what went into the base VM system ... i.e. what is there now, what its limitations are, and what further additions should be added.

... snip ... top of post, old email index

other trivia: before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP67/CMS at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

more trivia: IBM AWD did their own cards for PC/RT, including 4mbit token-ring. Then for RS/6000 microchannel, they were told that they couldn't do their own (microchannel) cards, but had to use all PS2 cards (that had been severely performance kneecapped by the communication group fighting off client/server and distributed computing). Example was the microchannel PS2 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card (performance kneecapping RS/6000 for anything other than pure CPU intensive work). The new Almaden Research bldg had been heavily provisioned with wiring assuming 16mbit token-ring, but found 10mbit Ethernet (over the same wiring) had higher aggregate throughput and lower latency. Also $69 Ethernet cards had much higher throughput (8.5mbits/sec, same wiring) than $800 microchannel 16mbit token-ring cards (which were even worse than the PC/RT 4mbit token-ring cards). Folklore is that IBM wiring was invented to replace 3270 coax, because for large installations, 3270 coax was starting to exceed bldg load limits.

Also found, for the difference in cost of cards (300*$69=$20,700 & 300*$800=$240,000), could get several high-performance TCP/IP routers (each with channel interface, 16 Ethernet LAN interfaces, T1 & T3 telco options, FDDI LAN options, more) ... could spread 300 RS/6000 across 6*16=96 Ethernet LANs ... approx three RS/6000 per dedicated 10mbit Ethernet (each 10mbit LAN with higher aggregate throughput than 16mbit T/R).
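The card-cost and LAN-splitting arithmetic above, spelled out (prices, counts and router figures are the ones quoted in the two preceding paragraphs):

stations = 300
enet_card, tr_card = 69, 800          # $69 Ethernet vs $800 microchannel 16mbit T/R
print("Ethernet cards:   $", stations * enet_card)    # $20,700
print("Token-ring cards: $", stations * tr_card)      # $240,000
print("Difference:       $", stations * (tr_card - enet_card))   # ~$219,300 toward routers

routers, lans_per_router = 6, 16
lans = routers * lans_per_router                      # 96 dedicated 10mbit LANs
print("RS/6000 per LAN: ~", round(stations / lans, 1))    # ~3 per 10mbit segment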

Now mid-80s, the communication group was also trying to prevent release of mainframe TCP/IP support. When they lost, they changed tactics and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between Cray and 4341, got sustained 4341 channel throughput using only modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM production system posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

archived posts mentioning OS/2 and BCRVM
https://www.garlic.com/~lynn/2024c.html#112 Multithreading
https://www.garlic.com/~lynn/2023g.html#36 Timeslice, Scheduling, Interdata
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#36 OS/2
https://www.garlic.com/~lynn/2021f.html#72 IBM OS/2
https://www.garlic.com/~lynn/2021.html#68 OS/2
https://www.garlic.com/~lynn/2007i.html#60 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System And Follow-on Mainframes

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Future System And Follow-on Mainframes
Date: 16 Jun, 2025
Blog: Facebook
1st part of 70s, IBM had "Future System", completely different from 370 and was going to completely replace it (internal politics during FS was killing off 370 efforts; the lack of new 370 during the period is credited with giving the 370 clone makers their market foothold). When FS implodes there is a mad rush getting stuff back into the 370 product pipelines, including kicking off the Q&D 303x and 3081 efforts in parallel.

For the 303x channel director they took a 158 engine with just the integrated channel microcode (and no 370 microcode). A 3031 was two 158 engines, one with just the integrated channel microcode and a 2nd with just the 370 microcode. A 3032 was a 168-3 redone to use the 303x channel director for external channels. A 3033 started out remapping 168-3 logic to 20% faster chips.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
The 370 emulator minus the FS microcode was eventually sold in 1980 as the IBM 3081. The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectacularly superior in terms of performance compared to the amount of circuitry.
... snip ...

... aka, the 3081 had the equivalent circuits of 16 168-3s (packaging so many circuits motivating TCMs). With the FS implosion, I was asked to help with a 16-CPU multiprocessor implementation and we con the 3033 processor engineers into working on it in their spare time. Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-CPU multiprocessor support (POK doesn't ship a 16-CPU machine until after the turn of the century). At the time MVS documentation had 2-CPU support only getting 1.2-1.5 times the throughput of a 1-CPU (of the same model), aka MVS multiprocessor overhead was that large. Then the head of POK asks some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions.

The initial 2-CPU 3081D had lower aggregate MIPS than the 1-CPU Amdahl. The processor caches were doubled in size for the 3081K, bringing aggregate MIPS up to about the same as the Amdahl 1-CPU (however, MVS 3081K throughput was only about .6-.75 that of the 1-CPU Amdahl; same aggregate MIPS, but the Amdahl 1-CPU didn't have MVS SMP overhead).

3081 originally was only going to be multiprocessor systems ... however IBM's ACP/TPF didn't have multiprocessor support ... and there was concern that the whole ACP/TPF market moves to the latest Amdahl 1-CPU system. Initially they thought they would just not install the 2nd CPU in a 3081 .... however that CPU was in the middle of the frame, and removing it would make the box top-heavy and prone to tipping over. They then had to rewire the box, allowing the single CPU to be installed in the middle of the box (for the 3083).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

Recent posts mentioning ACP/TPF, 3081, and 3083
https://www.garlic.com/~lynn/2025.html#121 Clone 370 System Makers
https://www.garlic.com/~lynn/2025.html#101 Clone 370 Mainframes
https://www.garlic.com/~lynn/2025.html#53 Canned Software and OCO-Wars
https://www.garlic.com/~lynn/2024g.html#84 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024f.html#50 IBM 3081 & TCM
https://www.garlic.com/~lynn/2024e.html#100 360, 370, post-370, multiprocessor
https://www.garlic.com/~lynn/2024e.html#92 IBM TPF
https://www.garlic.com/~lynn/2024.html#20 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall
https://www.garlic.com/~lynn/2023f.html#87 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023f.html#86 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023c.html#77 IBM Big Blue, True Blue, Bleed Blue
https://www.garlic.com/~lynn/2023b.html#98 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022d.html#31 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2021j.html#66 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021b.html#23 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#74 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Innovation

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Innovation
Date: 17 Jun, 2025
Blog: Facebook
trivia: some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS. Others went to the IBM cambridge science center on the 4th flr and did virtual machines (initially wanted to modify a 360/50 with virtual memory, but all the extra systems were going to FAA/ATC and had to settle for a 360/40 to modify, doing CP40/CMS ... which morphs into CP67/CMS when the 360/67 standard with virtual memory became available, precursor to vm370/cms), lots of online & performance apps, and invented GML in 1969 (morphs into ISO SGML a decade later and, after another decade, HTML at CERN). Co-worker
https://en.wikipedia.org/wiki/Edson_Hendricks
responsible for the CP67-based scientific center wide-area network, which evolves into the corporate internal network (technology also used for the corporate sponsored univ. BITNET). Ref by one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

Network larger than arpanet/internet from just about the beginning until sometime mid/late 80s (about the time it was forced to convert to SNA/VTAM). Newspaper article about Ed's losing battle to move to TCP/IP:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
and more at Ed's archived website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

First commercial RDBMS release was on MULTICS, 1976
https://en.wikipedia.org/wiki/Multics_Relational_Data_Store

Original SQL/relational was done at SJR on a VM/370 145 as System/R. I worked on it with Jim Gray and Vera Watson after transferring to SJR ... then helped with the tech transfer to Endicott for SQL/DS ("under the radar" while the company was preoccupied with the next great DBMS, "EAGLE"). When "EAGLE" finally implodes, there was a request for how fast System/R could be ported to MVS (eventually released as DB2, originally for decision support only).

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc. posts
https://www.garlic.com/~lynn/submain.html#sgml
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

Why I've Dropped In

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why I've Dropped In
Newsgroups: comp.arch
Date: Tue, 17 Jun 2025 16:47:55 -1000
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
Yes, but as I have argued before, this was a mistake, and in any event base registers became obsolete when virtual memory became available (though, of course, IBM kept it for backwards compatibility).

OS/360 "relocatable" ... included address constants in executable images that had to be modified when first loaded into real storage (which continued after move to virtual storage).

The initial decision to add virtual memory to all 370s was based on the fact that OS/360 "MVT" storage management was so bad that (concurrently loaded) region sizes had to be specified four times larger than actually used ... so a typical 1mbyte (real storage) 370/165 only ran four concurrently executing regions, insufficient to keep a 165 busy and justified. Running MVT in a (single) 16mbyte virtual address space, aka VS2/SVS (sort of like running MVT in a CP67 16mbyte virtual machine), allowed the number of concurrently running regions to be increased by a factor of four (modulo the 4bit storage protection keys required for isolating each region) with little or no paging.
https://www.garlic.com/~lynn/2011d.html#73
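A rough sketch of that region arithmetic (Python; the 4x over-specification and the 1mbyte 165 are from the paragraph above, the 0.25mbyte specified region size is an assumed illustration):

real_storage_mb = 1            # typical 370/165
region_spec_mb = 0.25          # assumed specified region size (illustrative)
overspecify = 4                # regions specified ~4x larger than actually used

mvt_regions = int(real_storage_mb / region_spec_mb)       # -> 4 concurrent regions
# VS2/SVS: same MVT in a 16mbyte virtual address space; roughly 4x the regions fit
# before real storage (the actually-used quarter of each region) fills up, capped
# at 15 by the 4-bit storage protect keys (key 0 reserved for the kernel).
svs_regions = min(mvt_regions * overspecify, 15)           # -> 15
print(mvt_regions, svs_regions)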

As systems got larger they needed to run more than 15 concurrent regions (storage protect key=0 for kernel, 1-15 for regions). As a result they move to VS2/MVS ... a separate 16mbyte virtual address space for each region (to eliminate storage protect key 15 limit on concurrently executing regions). However since OS/360 APIs were heavily pointer passing, they map an 8mbyte kernel image into every virtual address space (allowing pointer passing kernel calls to use passed pointer directly) ... leaving 8mbyte for each region.

However, subsystems were also mapped into their own, separate 16mbyte virtual address spaces. For (pointer passing) application calls to a subsystem, a one megabyte "common segment area" ("CSA") was mapped into every 16mbyte virtual address space for pointer-passing API calls to subsystems ... leaving 7mbytes for every application.

However, by the later half of the 70s and the 3033 processor, since the total common segment API data space was somewhat proportional to the number of subsystems and the number of concurrently executing regions ... the one mbyte "common SEGMENT area" was becoming a 5-6mbyte "common SYSTEM area", leaving only 2-3mbytes for applications ... and frequently threatening to become 8mbyte (leaving zero bytes for applications).
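The address-space arithmetic above, spelled out (all numbers are the ones in the preceding paragraphs):

ADDRESS_SPACE_MB = 16          # MVS virtual address space
KERNEL_IMAGE_MB = 8            # kernel image mapped into every address space

for csa_mb in (1, 5, 6, 8):    # CSA growth described above
    app_mb = ADDRESS_SPACE_MB - KERNEL_IMAGE_MB - csa_mb
    print(f"CSA {csa_mb}mbyte -> {app_mb}mbyte left for the application")
# 1 -> 7, 5 -> 3, 6 -> 2, 8 -> 0 ("zero bytes left for applications")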

This was part of the desperate need to migrate from MVS to 370/XA and MVS/XA with 31-bit addressing as well as "access registers" ... where a call to a subsystem switches the caller's address space pointer to the secondary address space and loads the called subsystem's address space pointer into the primary address space ... allowing the subsystem to directly address the caller's API data in the (secondary address space) private area (not needing to be placed in a "CSA"). The subsystem then returns to the caller ... and the caller's address space pointer is switched back from secondary to primary.
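A toy model (Python, purely illustrative, not the actual hardware/MVS mechanism) of the primary/secondary switch just described:

class CpuAddressing:
    """Toy primary/secondary address-space registers."""
    def __init__(self, primary):
        self.primary, self.secondary = primary, None

def call_subsystem(cpu, subsystem_space):
    # caller's space becomes secondary, subsystem's becomes primary,
    # so the subsystem can still reach the caller's parameter data
    cpu.secondary = cpu.primary
    cpu.primary = subsystem_space
    return cpu

def return_to_caller(cpu):
    cpu.primary, cpu.secondary = cpu.secondary, None
    return cpu

cpu = CpuAddressing(primary="application-space")
call_subsystem(cpu, "subsystem-space")
print(cpu.primary, cpu.secondary)   # subsystem-space application-space
return_to_caller(cpu)
print(cpu.primary, cpu.secondary)   # application-space None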

some posts mentioning MVT, VS2/SVS, VS2/MVS, MVS/XA, CSA
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2025.html#47 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#11 what's a segment, 80286 protected mode
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2022c.html#69 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2018.html#92 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2012n.html#21 8-bit bytes and byte-addressed machines

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Innovation

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Innovation
Date: 18 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#58 IBM Innovation

topic drift ... I was blamed for online computer conferencing in late 70s and early 80s on the IBM internal network; it really took off spring '81 after I distributed a trip report of a visit to Jim at Tandem. Only about 300 directly participated but claims are 25,000 were reading (also folklore was that when the corporate executive committee was told, 5of6 wanted to fire me).

From IBMJargon:
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

... semi-related
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System And Follow-on Mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Future System And Follow-on Mainframes
Date: 19 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#57 IBM Future System And Follow-on Mainframes

3033 photos (gone 404, but live on at wayback machine)
https://web.archive.org/web/20190105033420/https://www.ibm.com/ibm/history/exhibits/3033/3033_album.html

3033 intro
https://web.archive.org/web/20190208104809/https://www.ibm.com/ibm/history/exhibits/3033/3033_intro.html

3033 reference room
https://web.archive.org/web/20190105075505/https://www.ibm.com/ibm/history/exhibits/3033/3033_room.html

past posts referencing IBM 3033 URLs
https://www.garlic.com/~lynn/2023g.html#65 Vintage Mainframe
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023.html#92 IBM 4341
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#0 IBM's 3033
https://www.garlic.com/~lynn/2017c.html#30 The ICL 2900
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
https://www.garlic.com/~lynn/2014m.html#105 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2014h.html#6 Demonstrating Moore's law
https://www.garlic.com/~lynn/2014g.html#103 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2012n.html#36 390 vector instruction set reuse, was 8-bit bytes
https://www.garlic.com/~lynn/2009g.html#70 Mainframe articles
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Future System And Follow-on Mainframes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Future System And Follow-on Mainframes
Date: 19 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#57 IBM Future System And Follow-on Mainframes
https://www.garlic.com/~lynn/2025c.html#61 IBM Future System And Follow-on Mainframes

Other 3033 trivia: when I 1st transfer out to SJR, I get to wander around silicon valley datacenters, including disk bldg14/engineering and bldg15/product test across the street. They were running 7x24, pre-scheduled, stand-alone mainframe testing and had mentioned that they had recently tried MVS, but MVS had a 15min mean-time-between-failure (in that environment) requiring manual re-ipl. I offer to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. I then write an (internal only) research report on the work and happen to mention the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head.

Bldg15 got the 1st engineering 3033 (outside POK processor engineering) for disk I/O testing. Turns out the testing only took a percent or two of the CPU, so we scrounge up a 3830 controller and a 3330 string, setting up our own private online service (including running 3270 coax underneath the street to my office in research). At the time, air-bearing simulation (part of the design of thin-film disk heads) was getting a couple turn-arounds a month on the SJR 370/195. We set it up on the 3033 (which had less than half the MIPS rate of the 195) and they were getting several turn-arounds a day.

posts getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

posts mentioning air-bearing simulation on bldg15 3033:
https://www.garlic.com/~lynn/2025b.html#112 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#25 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#29 IBM 3090
https://www.garlic.com/~lynn/2024g.html#58 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#38 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#5 IBM (Empty) Suits
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#58 IBM 3031, 3032, 3033
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#9 3880 DASD Controller
https://www.garlic.com/~lynn/2022c.html#74 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022.html#64 370/195
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2018.html#41 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2007e.html#43 FBA rant

--
virtualization experience starting Jan1968, online at home since Mar1970

mainframe vs mini, old and slow base and bounds, Why I've Dropped In

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: mainframe vs mini, old and slow base and bounds, Why I've Dropped In
Newsgroups: comp.arch
Date: Sun, 22 Jun 2025 12:15:41 -1000
antispam@fricas.org (Waldek Hebisch) writes:
Even more fuzzy were models keeping microcode in core: IIUC in principle determined user could take over microcode store and run user programs directly on the hardware. That way machine would be more like a mini with rather inconvenient instruction set, but much faster than using 360 instruction set.

re:
https://www.garlic.com/~lynn/2025c.html#59 Why I've Dropped In

Boeblingen (Germany) 115/125 was even more divergent. They had a nine-position memory bus for microprocessors. For the 115, all the microprocessors were the same, one running 370 microcode (avg ten native instructions per 370 instruction, for about 80KIPS of 370) and the others running (integrated) I/O "controller" microcode. The 125 was the same, but the microprocessor running the 370 microcode was 50% faster .... getting about 120KIPS of 370 (1.2MIPS native microprocessor).
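The 115/125 emulation arithmetic, spelled out (the 10:1 instruction ratio, 80KIPS, and 50% faster engine are the numbers quoted above):

native_per_370 = 10                               # avg native instructions per 370 instruction
m115_370_kips = 80                                # 115: ~80 KIPS of 370
m115_native_kips = m115_370_kips * native_per_370     # ~800 KIPS native engine

m125_native_kips = m115_native_kips * 1.5         # 125 engine is 50% faster
m125_370_kips = m125_native_kips / native_per_370     # ~120 KIPS of 370
print(m115_native_kips, m125_native_kips, m125_370_kips)   # 800 1200.0 120.0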

I get con'ed into doing a design/implementation where up to five (125) 370 microprocessors could be run in an SMP configuration.

At the same time, Endicott asks me to help with doing ECPS for the 138&148 370 machines: identify the 6kbytes of highest-executed vm370 (virtual machine) kernel pathlengths for moving into native microcode (at a 10:1 performance increase). Archived (a.f.c.) post with the initial analysis (6kbytes accounted for 79.55% of kernel execution):
https://www.garlic.com/~lynn/94.html#21
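A sketch of the selection idea (Python; the routine names and numbers below are made up for illustration, the real analysis is in the archived post above): sort kernel paths by measured CPU use and take the top ones until the 6kbyte microcode budget is spent:

def pick_for_microcode(routines, budget_bytes=6 * 1024):
    """routines: list of (name, size_bytes, pct_of_kernel_cpu)."""
    chosen, used, covered = [], 0, 0.0
    for name, size, pct in sorted(routines, key=lambda r: -r[2]):
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
            covered += pct
    return chosen, used, covered

sample = [("dispatch", 1200, 20.0), ("untranslate-ccw", 2500, 30.0),
          ("page-lookup", 900, 15.0), ("free-storage", 800, 10.0),
          ("spool-io", 3000, 4.0)]          # hypothetical numbers
print(pick_for_microcode(sample))           # top routines, bytes used, % of kernel CPU covered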

I was also going to include a superset of the 138/148 ECPS work in the 5-CPU 370/125 SMP (in part because there was more available microcode space). Then Endicott complains that the 5-CPU 370/125 would overlap 370/148 throughput at better price/performance; in the corporate escalation, I had to argue both sides, and Endicott wins (and the 5-CPU 370/125 SMP work was canceled).

This was all in the mid-70s aftermath of Future System implosion and the mad rush to get stuff back into the 370 product pipelines ... at the high-end kicking off 3033&3081 efforts in parallel. Head of (mainframe high-end) POK also manages to convince corporate to kill the vm370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

The 370 extensions for the high-end (370/XA) had no provisions for supporting virtual machine operation. Some of the old VM370 group did primitive virtual machine support and the (3081) "SIE" instruction (for moving into & out of virtual machine mode) in support of MVS/XA development ... never intended for customer release or production. Further aggravating the situation, the 3081 lacked the microcode space for the "SIE" instruction and its microcode had to be swapped in when entering and exiting virtual machine mode.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
VAMPS 5-CPU SMP posts
https://www.garlic.com/~lynn/submain.html#bounce
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Vintage Mainframe

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Vintage Mainframe
Date: 22 Jun, 2025
Blog: Facebook
Sophomore year, took a two credit hr intro to fortran/computers; at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30. Univ was getting a 360/67 for tss/360 to replace 709/1401 ... and temporarily a 360/30 replaced the 1401 pending the 360/67. Univ. shutdown the datacenter on weekends and I had the place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a bunch of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc and within a few weeks had a 2000 card assembler program. Periodically when I came in sat. morning, production had finished early and everything was powered off and dark. Sometimes the 360/30 wouldn't power up and after a lot of reading manuals and trial and error, found I could put all the controllers into "CE mode", power on the 360/30, then power on the individual controllers (taking them out of CE mode).

Then within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to fruition). Student fortran ran under a second on 709, initially over a minute on os/360. I install HASP, cutting the time in half. I then start redoing stage2 sysgen, carefully placing datasets and pds members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs (still nearly all 3-step job scheduling; also PTFs replacing PDS members would have student jobs creeping up towards 20secs and I would have to do a mini-stage2 to get the performance back). Student Fortran never got better than 709 until I installed UofWaterloo WATFOR.
https://en.wikipedia.org/wiki/WATFIV

Then CSC comes out to install (virtual machine) CP67 (precursor to vm370), 3rd install after CSC itself and MIT Lincoln Labs. I mostly get to play with it during my dedicated weekend time. I initially rewrite lots of the code for running OS/360 in virtual machine; the test stream ran 322secs on bare hardware and 856secs in virtual machine (CP67 CPU 534secs); after a couple months I have reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a peak of 270).
https://www.wikiwand.com/en/articles/History_of_CP/CMS

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter was possibly the largest in the world (360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room, joke that Boeing was ordering 360/65s like other companies ordered keypunches). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff). Renton did have one 360/75 and when it ran classified work, there was black rope around the 75 area, heavy black felt draped over the console lights and 1403 printers, and guards at the corners.

After graduating, I joined CSC (instead of staying with Boeing CFO) and one of my hobbies was enhanced production operating systems for internal datacenters (one of the 1st and long-time customers was the online sales&marketing support system HONE ... initially CP67 and then moving to VM370, datacenters initially just US and then spreading world-wide).

Grenoble science center took the internal CP67 and modified it to conform to 60s academic literature about working set dispatching and local LRU page replacement. CSC and Grenoble had similar interactive workloads, but my CSC CP67 with 75-80 users and 104 pageable pages (768kbyte 360/67) had better throughput and interactive response than Grenoble did with 35 users and 155 pageable pages (1mbyte 360/67) ... aka half the users and 50% more pageable pages.

some related
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock
dynamic adaptive resource management & scheduling pages
https://www.garlic.com/~lynn/subtopic.html#fairshare

posts mentioning 709, 1401, MPIO, 360/67, student fortran, CP67/CMS, Boeing CFO, and Renton
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Workstation

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Workstation
Date: 23 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#52 IBM 370 Workstation

After leaving IBM, did some consulting with Fundamental Software; FAA was using FLEX-ES (on Sequent) for 360 emulation. Gone 404, but lives on at wayback machine:
https://web.archive.org/web/20241009084843/http://www.funsoft.com/
https://web.archive.org/web/20240911032748/http://www.funsoft.com/index-technical.html
and with Steve Chen (CTO at Sequent), before IBM bought them and shut it down
https://en.wikipedia.org/wiki/Sequent_Computer_Systems

past posts mentioning funsoft.com
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2025.html#99 FAA And Simulated IBM Mainframe
https://www.garlic.com/~lynn/2025.html#20 Virtual Machine History
https://www.garlic.com/~lynn/2024f.html#10 Emulating vintage computers
https://www.garlic.com/~lynn/2024f.html#9 Emulating vintage computers
https://www.garlic.com/~lynn/2023d.html#34 IBM Mainframe Emulation
https://www.garlic.com/~lynn/2021i.html#31 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021i.html#20 FAA Mainframe
https://www.garlic.com/~lynn/2011c.html#93 Irrational desire to author fundamental interfaces
https://www.garlic.com/~lynn/2010e.html#42 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010.html#27 Oldest Instruction Set still in daily use?
https://www.garlic.com/~lynn/2009c.html#21 IBM tried to kill VM?
https://www.garlic.com/~lynn/2005k.html#8 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2004e.html#32 The attack of the killer mainframes
https://www.garlic.com/~lynn/2000g.html#6 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000.html#17 I'm overwhelmed
https://www.garlic.com/~lynn/2000.html#11 I'm overwhelmed
https://www.garlic.com/~lynn/99.html#197 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Workstation

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Workstation
Date: 23 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#52 IBM 370 Workstation
https://www.garlic.com/~lynn/2025c.html#65 IBM 370 Workstation

Note: a precursor to A74/7437 was 4341s. I was supplying enhanced bullet-proof systems to the disk engineering and product test labs (bldg14&15) in the late 70s. Bldg15 got the 1st engineering 3033 outside the POK processor engineering lab ... then an engineering 4341 in late 78. Jan1979, a branch office found out and con'ed me into doing a 4341 benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). A small cluster of five 4341s had higher throughput than a 3033, much lower cost, much better price/performance, smaller footprint and better environmentals, power and cooling. Archived posts with old email about the Endicott mid-range starting to move up and eat into the IBM high-end:
https://www.garlic.com/~lynn/2024.html#email810423b
https://www.garlic.com/~lynn/2024.html#email810512

getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
some of the HA/CMP posts mentioning cluster scale-up getting transferred and announced as IBM supercomputing
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370 Workstation

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370 Workstation
Date: 23 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#52 IBM 370 Workstation
https://www.garlic.com/~lynn/2025c.html#65 IBM 370 Workstation
https://www.garlic.com/~lynn/2025c.html#66 IBM 370 Workstation

Jim Gray and I would host friday's after work at local watering holes in south san jose, and sometimes get non-IBMs ... email to somebody in Endicott
Date: 02/23/79 09:13:33
From: wheeler

...

Also know several people who work for 2pi who have been very active in this area in conjunction with NCSS. NCSS supplies an enhanced CP/67 converted to 370 (done by a good part of the original CP/67 design implementation team), performance is much better than VM for CMS activity (they have ignored virtual operating systems). You have probably seen several advertisements for the NCSS 3200 (which is a 2pi machine).

... snip ...
top of post, old email index

posts mentioning 2pi and ncss
https://www.garlic.com/~lynn/2023g.html#69 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2017c.html#10 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2015.html#74 Ancient computers in use today
https://www.garlic.com/~lynn/2013l.html#62 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2012e.html#46 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2011p.html#56 Are prefix opcodes better than variable length?
https://www.garlic.com/~lynn/2011g.html#25 Mainframe technology in 2011 and beyond; who is going to run these Mainframes?
https://www.garlic.com/~lynn/2009p.html#9 Status of Arpanet/Internet in 1976?
https://www.garlic.com/~lynn/2009p.html#4 Status of Arpanet/Internet in 1976?
https://www.garlic.com/~lynn/2006k.html#35 PDP-1
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2003i.html#15 two pi, four phase, 370 clone

--
virtualization experience starting Jan1968, online at home since Mar1970

Sun Microsystems

From: Lynn Wheeler <lynn@garlic.com>
Subject: Sun Microsystems
Date: 25 Jun, 2025
Blog: Facebook
The stanford people approached the IBM Palo Alto Scientific Center about IBM producing a workstation they had developed. PASC said that they had to set up a review and invited a few internal organizations to listen to the presentation by the stanford people. Afterwards all the internal organizations said they were working on something much better, and IBM declined.

Much later, in the mid-90s after leaving IBM, listened to somebody tell the story about funding the first production run ... they meet in a hangar at San Jose Airport, he walks down the line shaking the hand of each employee, they leave, and then he leaves with all the machines. past ref
https://www.garlic.com/~lynn/2023b.html#11 Open Software Foundation

other refs:
https://www.garlic.com/~lynn/2024g.html#33 SUN Workstation Tidbit
https://www.garlic.com/~lynn/2024e.html#105 IBM 801/RISC
https://www.garlic.com/~lynn/2023c.html#11 IBM Downfall
https://www.garlic.com/~lynn/2023.html#40 IBM AIX
https://www.garlic.com/~lynn/2022c.html#30 Unix work-alike
https://www.garlic.com/~lynn/2021i.html#100 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021c.html#90 Silicon Valley
https://www.garlic.com/~lynn/2021b.html#19 IBM Recruiting
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2017k.html#33 Bad History
https://www.garlic.com/~lynn/2017i.html#15 EasyLink email ad
https://www.garlic.com/~lynn/2017f.html#86 IBM Goes to War with Oracle: IT Customers Praise Result
https://www.garlic.com/~lynn/2014g.html#98 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2013j.html#58 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2010d.html#80 Senior Java Developer vs. MVS Systems Programmer

--
virtualization experience starting Jan1968, online at home since Mar1970

Tandem Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Tandem Computers
Date: 25 Jun, 2025
Blog: Facebook
I had worked with Jim Gray and Vera Watson on (original sql/relational) System/R, then fall 1980 Jim leaves for Tandem. In the late 70s and early 80s, I had been blamed for online computer conferencing on the IBM internal network (larger than arpanet/internet from just about the beginning until sometime late 80s, about the time it was forced to convert to SNA/VTAM). It really took off spring 1981 when I distributed a trip report of a visit to Jim at Tandem. Only about 300 directly participated but claims are 25,000 were reading (folklore is when the corporate executive committee was told, 5of6 wanted to fire me). From IBM Jargon:

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

also study by Jim about systems failing:
https://www.garlic.com/~lynn/grayft84.pdf

Last product I did at IBM was HA/CMP. HA/6000 had been initially approved for NYTimes to move their newspaper system (ATEX) off of VAXCluster to RS/6000. We start running the project out of the IBM Los Gatos lab (on the west coast) and subcontract a lot of the work to CLaM Associates (in Cambridge). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXCluster support in the same portable source base as their UNIX support; I do distributed lock manager support with VAXCluster semantics to ease the port, along with several performance enhancements, as sketched below. Then the (mainframe) DB2 group started complaining that if I was allowed to continue, it would be at least five years ahead of them. Also, the IBM S/88 (re-branded Stratus) Product Administrator started taking us around to their customers and has me write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain they couldn't meet the requirements).
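For reference, a sketch of VAXCluster-style lock-manager semantics (Python; the six lock modes and their compatibility matrix, illustrative only and not the HA/CMP code):

MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]
# COMPAT[a][b]: can a new lock in mode a be granted alongside a holder in mode b?
COMPAT = {
    "NL": {"NL": 1, "CR": 1, "CW": 1, "PR": 1, "PW": 1, "EX": 1},
    "CR": {"NL": 1, "CR": 1, "CW": 1, "PR": 1, "PW": 1, "EX": 0},
    "CW": {"NL": 1, "CR": 1, "CW": 1, "PR": 0, "PW": 0, "EX": 0},
    "PR": {"NL": 1, "CR": 1, "CW": 0, "PR": 1, "PW": 0, "EX": 0},
    "PW": {"NL": 1, "CR": 1, "CW": 0, "PR": 0, "PW": 0, "EX": 0},
    "EX": {"NL": 1, "CR": 0, "CW": 0, "PR": 0, "PW": 0, "EX": 0},
}

def can_grant(requested, holders):
    """True if the requested mode is compatible with all current holders."""
    return all(COMPAT[requested][h] for h in holders)

print(can_grant("PR", ["CR", "PR"]))   # True: protected read alongside readers
print(can_grant("EX", ["CR"]))         # False: exclusive conflicts with any reader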

Early Jan1992, in a cluster scale-up meeting with the Oracle CEO, IBM/AWD executive Hester tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. I was also working with IBM FSD and convince them to go with cluster scale-up for government supercomputer bids ... and they inform the IBM Supercomputer group. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we couldn't work on anything with more than four systems (we leave IBM a few months later).

Some concern about HA/CMP eating mainframes (1993 benchmark, number of program iterations compared to industry reference platform)
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS


aka, at the time RIOS didn't have the bus&cache for multiprocessor operation. The executive we reported to for HA/CMP goes over to head up Somerset/AIM for the single-chip power/pc that uses the Motorola 88k bus/cache design (and supports shared-memory multiprocessor).

Jan1999, Compaq/Tandem/Atalla sponsored secure transaction conference for me at Tandem; write ups by participants
https://www.garlic.com/~lynn/aepay3.htm#riskm
https://www.garlic.com/~lynn/aepay3.htm#riskaads

Also the same month, I was asked to help prevent the coming economic mess (we failed). They briefed me that some investment bankers had walked away "clean" from the 80s S&L crisis, were then running Internet IPO mills (invest a few million, hype, IPO for a couple billion; the companies needed to fail to leave the field clear for the next round), and predicted they would next get into securitized loans/mortgages.

I also did a secure transaction chip, and a TD to agency DDI was doing a panel discussion in the trusted computing track and asked me to participate (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Series/1 PU4/PU5 Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: Series/1 PU4/PU5 Support
Date: 25 Jun, 2025
Blog: Facebook
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and the (online sales&marketing support) HONE systems were the first (and long-time) customer (first CP67, then VM370). Early 80s, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite), and lots of conflicts with the communication group (in the 60s, IBM had the 2701 telecommunication controller that supported T1; then with the 70s move to SNA/VTAM and associated issues, controllers were capped at 56kbit/sec). Mid-80s, I was also asked to take a "baby bell" VTAM/NCP (PU4/PU5) emulation done on Series/1 and turn it out as an IBM Type 1 product ... along with starting a port to RS/6000 ("RIOS", which hadn't been announced yet).

I did a comparison between the Series/1 implementation and the 3725 that I presented at the Fall 1986 SNA Architecture Review Board meeting in Raleigh. Part of the presentation is in this old archived post
https://www.garlic.com/~lynn/99.html#67
also part of baby bell employee presentation given at the spring '86 COMMON user group meeting: session 43U, Series/1 As A Front End Processor by John Erickson
https://www.garlic.com/~lynn/99.html#70

My organization and I started getting lots of criticism from the communication group saying the presentation was totally invalid. I pointed out that the Series/1 data was taken from live Bell operations while the 3725 numbers were taken from the Communication Group's own 3725 Configurator on HONE. Since the S/1 data was from live operation, any errors could only be in the communication group's 3725 Configurator on HONE; I said I would be glad to update the comparison as soon as they corrected any such errors.

The communication group was somewhat simulating T1 trunks by having multiplexors run multiple 56kbit/sec links over a T1. They even produced a report for the corporate executive committee that customers wouldn't be interested in (real) T1s until sometime in the 90s. They showed the number of customer "Fat Pipes" (parallel 56kbit/sec links treated as a single logical link) for 2, 3, etc, parallel links, dropping to zero by 7 parallel links. What they didn't know (or didn't want to tell corporate) was that the typical T1 tariff was dropping to about the same as 5 or 6 56kbit links. A trivial HSDT survey found 200 customers with full T1 links, where they had just transitioned to full T1 with non-IBM hardware and software (FSD had also done the S/1 ZIRPEL T1 card for gov. bids).

Eventually the communication group ships the 3737 (massive amount of Motorola 68k processors and memory) that simulated a local VTAM CTCA host system, immediately ACKing transmission receipt to the sending host before transmitting over T1 to the remote 3737, which then reversed the process to the remote VTAM (somewhat akin to the baby bell S/1 implementation, but with much less feature/function/performance; the S/1 implementation also included support for non-IBM protocols and non-IBM systems). Old archived posts with old email about 3737
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2011g.html#email881005
related 3725 description
https://www.garlic.com/~lynn/2018f.html#email870725

Several of the other IBMers were well acquainted with communication group practices and attempted to wall-off everything that they might do; what happened next to tank the effort can only be described as truth is stranger than fiction. By 1988, I still had HSDT, but Donofrio approves HA/6000, originally for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I then rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing cluster scale-up with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scale-up with Oracle, Sybase, Ingres and Informix (that had VAXCluster support in the same source base with Unix) ... aka 128-system clusters (16BIPS aggregate).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

recent S/1 pu4/pu5 & 3725 posts
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2021b.html#74 IMS Stories
https://www.garlic.com/~lynn/2016d.html#42 Old Computing
https://www.garlic.com/~lynn/2016d.html#41 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2013d.html#57 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2010f.html#2 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#83 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2001n.html#9 NCP

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Networking and SNA 1974

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Networking and SNA 1974
Date: 26 Jun, 2025
Blog: Facebook
re:
https://www.youtube.com/watch?v=OzxHU7KcpNM

For a time, the person responsible for AWP164 (which turns into APPN) and I reported to the same executive, and I would periodically chide him to come over and work on real networking (TCP/IP) .... since the SNA organization would never cooperate. When it came time to announce APPN, the SNA organization vetoed it. Then there was a period while the APPN announcement letter was carefully rewritten to make sure that there was absolutely no implication of any relationship between APPN and SNA.

In the early 80s, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite), and lots of conflict with the communication group (in the 60s, IBM had the 2701 that supported T1/1.544mbits/sec; with the 70s transition to SNA/VTAM and numerous issues, controllers were capped at 56kbits/sec).

HSDT had also been working with the NSF director and we were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.

The communication group was also fighting off release of mainframe TCP/IP support. When they lost, they changed their tactic: since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then add support for RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

A univ. study in the late 80s found that VTAM LU6.2 pathlength was something like 160k instructions, while a typical (BSD 4.3 tahoe/reno) UNIX TCP pathlength was 5k instructions.

Late 80s, I was a member of (SGI) Greg Chesson's XTP TAB (technical advisory board) ... the communication group tried hard to block my participation. There were several military projects involved and the gov. was pushing for "standardization", so we took it to ANSI X3S3.3 (the ISO chartered standards group for OSI level 3&4 standards) as "high speed protocol". Eventually they told us that ISO required standards to conform to the OSI model. XTP failed because 1) it supported the internetworking layer (which doesn't exist in the OSI model), 2) it went directly from transport to the LAN MAC interface (bypassing the level 4/3 interface) and 3) it supported the LAN MAC interface (which doesn't exist in the OSI model ... sitting somewhere in the middle of level 3, with both physical layer and network layer characteristics). IBM was claiming that it would support (ISO) OSI ... but there was a joke that while the (TCP/IP) IETF standards body had a requirement that there be at least two interoperable implementations to progress in the standards process .... ISO didn't even have a requirement that a standard be implementable.

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

some recent AWP164/APPN posts
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#0 IBM APPN
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024g.html#40 We all made IBM 'Great'
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023g.html#18 Vintage X.25
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#54 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#34 IBM 37x5 Boxes

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM RS/6000

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM RS/6000
Date: 27 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000

trivia: POK then eventually (after a decade) gets their fiber stuff announced with ES/9000 as ESCON (when it is already obsolete), initially 10mbytes/sec. Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces throughput, eventually announced as FICON. The latest public benchmark I've found was the z196 "Peak I/O" benchmark getting 2M IOPS using 104 FICON (about 20K IOPS/FICON). About the same time, a FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommend SAPs (system assist processors that do actual I/O) be kept to 70% CPU ... or about 1.5M IOPS. Also, no CKD DASD have been made for decades, all being simulated on industry standard fixed-block disks (even 3380 CKD were on their way to fixed-block; it can be seen in records/track calculations where record size has to be rounded up to a fixed cell size).
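
As a sanity check, the arithmetic behind those figures (a minimal Python sketch using only the numbers quoted above; the variable names are mine):

  # back-of-envelope check of the I/O figures quoted above (illustrative only)
  z196_peak_iops = 2_000_000      # z196 "Peak I/O" benchmark
  ficon_count = 104               # FICON channels used in that benchmark
  sap_cpu_cap = 0.70              # IBM pubs recommend SAPs be kept to ~70% CPU

  print(z196_peak_iops / ficon_count)   # ~19,200 ... the "about 20K IOPS/FICON"
  print(z196_peak_iops * sap_cpu_cap)   # ~1,400,000 ... the "about 1.5M IOPS" SAP cap
  # versus a single E5-2600 FCS claiming over 1M IOPS (two such FCS > 104 FICON)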

getting to play disk engineer in bldgs 14&5
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

ILIAD predated ROMP; it was part of a project to move a wide variety of internal microcoded processors (370, controllers, as/400, etc) all to 801/risc. For various reasons they all floundered. ROMP originally was going to be the DISPLAYWRITER follow-on. When that got canceled, they decided to retarget it to the UNIX workstation market and got the company that had done PC/IX for the IBM/PC ... to do one for ROMP ... and they needed to figure out what to do with all the PL.8 programmers.

I had been in San Jose Research ... but for various transgressions was transferred to Yorktown, left to live in San Jose with various offices around different bldgs, but had to commute to YKT a couple times a month (monday in San Jose, monday SFO redeye to JFK, bright and early in YKT ... and then Tuesday after work John liked to go drinking ... sometimes I wouldn't check into the hotel until after midnight. Then friday afternoon JFK to SFO).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

a few recent posts mentioning Iliad, ROMP, Displaywriter, PC/IX
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025.html#125 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#88 Wang Terminals (Re: old pharts, Multics vs Unix)
https://www.garlic.com/~lynn/2025.html#46 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#83 IBM PC/RT
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#18 The joy of RISC
https://www.garlic.com/~lynn/2024e.html#105 IBM 801/RISC
https://www.garlic.com/~lynn/2024e.html#40 Instruction Tracing
https://www.garlic.com/~lynn/2024d.html#84 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#23 Obscure Systems in my Past
https://www.garlic.com/~lynn/2024d.html#14 801/RISC
https://www.garlic.com/~lynn/2024c.html#32 UNIX & IBM AIX
https://www.garlic.com/~lynn/2024c.html#1 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#55 IBM Token-Ring
https://www.garlic.com/~lynn/2024.html#98 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
https://www.garlic.com/~lynn/2024.html#91 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#72 IBM AIX
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#17 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#43 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023e.html#84 memory speeds, Solving the Floating-Point Conundrum
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023.html#39 IBM AIX
https://www.garlic.com/~lynn/2022h.html#103 IBM 360
https://www.garlic.com/~lynn/2022f.html#73 IBM/PC
https://www.garlic.com/~lynn/2022d.html#82 ROMP
https://www.garlic.com/~lynn/2022d.html#79 ROMP
https://www.garlic.com/~lynn/2021k.html#133 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#64 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021k.html#27 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021j.html#49 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#41 not a 360 either, was Design a better 16 or 32 bit processor
https://www.garlic.com/~lynn/2021h.html#99 Why the IBM PC Used an Intel 8088
https://www.garlic.com/~lynn/2021d.html#83 IBM AIX
https://www.garlic.com/~lynn/2021d.html#47 Cloud Computing
https://www.garlic.com/~lynn/2021b.html#49 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#48 Holy wars of the past - how did they turn out?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Networking and SNA 1974

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Networking and SNA 1974
Date: 27 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#71 IBM Networking and SNA 1974

A senior SE did an internal paper/talk (case study called "How We Put It Together") on SNA/VTAM/NCP complexity ... tightly integrated and proprietary ... a customer needed to upgrade shop floor controllers from leased line to dial up. That required a new controller software release, which required a new 3705 NCP release, which required a new VTAM release, which required a new MVS release. Such a cut-over effort required a weekend for simultaneous, coordinated updates ... which hit at least one gotcha each time ... requiring (repeated) backout of all updates/changes.

from long ago and far away:
The paper references experiences within several complex IBM large-account, multiple-CPU environments utilizing a wide variety of SNA products (VTAM; NCP; 3705; 3270; 3600; 3630; 3790; 8100 DPCX, DPPX; SSS; DSX; HCF; and NCCF). This account environment is increasingly representative as recent and anticipated price decreases of IBM hardware make multi-host, distributed systems attractive to a growing number of small and medium accounts. It is shown that the problems stem from many sources: PTF's, documentation, user interfaces, parameter defaults, etc. Each element has its own anomalies which must be compensated for by the customer prior to effecting a workable environment.
... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM RS/6000

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM RS/6000
Date: 29 Jun, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#72 IBM RS/6000

SUN trivia: The Stanford people originally approached the IBM Palo Alto Scientific Center about IBM producing a workstation they had developed. PASC said that they had to set up a review and invited a few internal organizations to listen to the Stanford presentation. Afterwards, the internal orgs all said they were working on something much better ... and IBM declined. The Stanford people then founded (24Feb1982) SUN.

9333 trivia: PC/RT did their own adapter cards (pc/rt 16bit bus), including a 4mbit token-ring card. Then for RS/6000 (w/microchannel), AWD was told that they couldn't do their own cards but had to use the standard PS2 (microchannel) cards (heavily performance-kneecapped by the communication group). One example was the microchannel 16mbit Token-Ring card ... which had much lower card throughput than the PC/RT 4mbit T/R card. It was then found that a $69 10mbit Ethernet card had enormously higher throughput than the $800 PS2 16mbit Token-Ring card (over identical wiring).

The 9333 had (its own) microchannel card that ran SCSI protocol over 80mbit/sec serial copper. I did some benchmarks with identical SCSI disks, where 9333 could have 4-5 times the throughput of the PS2 microchannel SCSI card.

After having worked on FCS, I was hoping we could morph 9333 into an interoperable 1/8th-speed FCS .... instead it morphs into 160mbit serial SSA (FCS for high-end HA/CMP and interoperable 9333 for the mid-range).
https://en.wikipedia.org/wiki/Serial_Storage_Architecture

Then we left IBM after HA/CMP cluster scale-up (16-system mid92 and 128-system ye92) was transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four systems (and found out later that 9333 morphs into SSA).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Fibre-Channel Standard and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

past posts mentioning 9333, SSA, FCS, etc
https://www.garlic.com/~lynn/2024g.html#57 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2024e.html#90 Mainframe Processor and I/O
https://www.garlic.com/~lynn/2024e.html#84 IBM GPD/AdStaR Disk Division
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#79 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2022e.html#47 Best dumb terminal for serial connections
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2021k.html#127 SSA
https://www.garlic.com/~lynn/2021g.html#1 IBM ESCON Experience
https://www.garlic.com/~lynn/2019b.html#60 S/360
https://www.garlic.com/~lynn/2019b.html#57 HA/CMP, HA/6000, Harrier/9333, STK Iceberg & Adstar Seastar
https://www.garlic.com/~lynn/2017b.html#75 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016b.html#69 Fibre Channel is still alive and kicking
https://www.garlic.com/~lynn/2016.html#19 Fibre Chanel Vs FICON
https://www.garlic.com/~lynn/2013m.html#99 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#96 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2012o.html#22 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012k.html#80 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#77 ESCON
https://www.garlic.com/~lynn/2012k.html#69 ESCON
https://www.garlic.com/~lynn/2012j.html#13 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011f.html#46 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010j.html#62 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010.html#44 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#31 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009q.html#32 Mainframe running 1,500 Linux servers?
https://www.garlic.com/~lynn/2008p.html#43 Barbless
https://www.garlic.com/~lynn/2006w.html#20 cluster-in-a-rack
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#35 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2002j.html#15 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/95.html#13 SSA

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS Capture Ratio

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS Capture Ratio
Date: 29 Jun, 2025
Blog: Facebook
MVS also had "capture ratio" .... typically all MVS accounted for CPU was much less than total CPU. They got total CPU by taking total elapsed time minus measured "wait state" aka total CPU = elapsed-wait. For accounting/billing purposes, it would take individual "accounted CPU" prorated by capture ratio. I've seen some internal shops where it actually ran 50% or less ... some early preliminary investigation seemed to be related to VTAM

There were many MVS internal datacenters that had been filled to the gills with 370/168s & 3033s ... and, trying to get more processing cycles out to departments, they started looking at deploying lots of 4341s ... and trying to figure out the number of 4341s needed by taking MIPS times accounted CPU (not taking into account the uncaptured CPU). It became easier when it looked like a large portion of the applications could be moved over to VM/4341 (w/o the non-accounted-for MVS CPU) ... aka something like five VM/4341s had much higher aggregate throughput than a 3033, with much lower cost, floor space, environmentals, etc (significantly better price/performance).

modulo: 360s&60s rental/lease (rather than sales), IBM billed by the "system meter" that ran anytime any CPU and/or channel was busy ... and everything had to be dormant for at least 400ms before the "system meter" stopped. Long after IBM had switched from rent/lease to sales, MVS still had a system timer event that went off every 400ms (making sure that system meter never stopped).

Early CP/67 (precursor to VM370) work in the 60s was switching to 7x24 availability w/o needing any operators ... lots of integrity work ... but also special terminal channel programs that let channel programs go idle (and letting system meter stop), but instant on when any characters were arriving.

Posts mentioning "Capture Ratio"
https://www.garlic.com/~lynn/2024b.html#27 HA/CMP
https://www.garlic.com/~lynn/2023f.html#103 Microcode Development and Writing to Floppies
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#21 Departmental/distributed 4300s
https://www.garlic.com/~lynn/2021c.html#88 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2017i.html#73 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017d.html#51 CPU Timerons/Seconds vs Wall-clock Time
https://www.garlic.com/~lynn/2015f.html#68 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2014b.html#85 CPU time
https://www.garlic.com/~lynn/2014b.html#82 CPU time
https://www.garlic.com/~lynn/2014b.html#80 CPU time
https://www.garlic.com/~lynn/2014b.html#78 CPU time
https://www.garlic.com/~lynn/2013d.html#14 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#8 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012j.html#71 Help with elementary CPU speed question
https://www.garlic.com/~lynn/2012h.html#70 How many cost a cpu second?
https://www.garlic.com/~lynn/2010m.html#39 CPU time variance
https://www.garlic.com/~lynn/2010e.html#76 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#33 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2010d.html#66 LPARs: More or Less?
https://www.garlic.com/~lynn/2008d.html#72 Price of CPU seconds
https://www.garlic.com/~lynn/2008.html#42 Inaccurate CPU% reported by RMF and TMON
https://www.garlic.com/~lynn/2007t.html#23 SMF Under VM
https://www.garlic.com/~lynn/2007g.html#82 IBM to the PCM market
https://www.garlic.com/~lynn/2006v.html#19 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005m.html#16 CPU time and system load

a few recent mentioning "system meter" and IBM rent/lease billiing
https://www.garlic.com/~lynn/2025b.html#123 VM370/CMS and MVS/TSO
https://www.garlic.com/~lynn/2025b.html#83 Mainfame System Meter
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025.html#129 The Paging Game
https://www.garlic.com/~lynn/2024g.html#100 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#99 Terminals
https://www.garlic.com/~lynn/2024g.html#94 CP/67 Multics vs Unix
https://www.garlic.com/~lynn/2024g.html#59 Cloud Megadatacenters
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024g.html#23 IBM Move From Leased To Sales
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#61 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024c.html#116 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OCO-wars

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OCO-wars
Date: 30 Jun, 2025
Blog: Facebook
23June1969 unbundling announce: starting to charge for (application) software (they managed to make the case that kernel software was still free), SE services, maint.

SE training used to be as part of a large SE group at customer sites; however with unbundling, they couldn't figure out how NOT to charge for trainee SEs at the customer.

First part of the 70s was the Future System effort, totally different from 370 and going to completely replace it (internal politics was killing off 370 efforts; the limited new 370 is credited with giving 370 system clone makers their market foothold). One of the final nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 apps were redone for an FS machine made out of the fastest available hardware technology, they would have the throughput of a 370/145 (about a 30 times slowdown)
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

After the FS implosion there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. Also (possibly with the rise of clone 370s), there was a decision to transition to charging for kernel software, starting with new kernel add-on code. One of my hobbies after graduating and joining IBM was enhanced production operating systems for internal datacenters (branch office sales&marketing support HONE was 1st and long-time customer) and some of my code was the initial guinea pig for kernel charging (and I was required to spend some time with lawyers and planners on kernel software charging practices).

In the first part of the 80s, the transition to charging for all kernel software was complete and the OCO-wars started (customers complaining about "object-code ONLY"). Note TYMSHARE had started making their online CMS computer conferencing available for free to the (customer user group) SHARE starting in Aug1976 as VMSHARE ... archives here:
http://vm.marist.edu/~vmshare
at the moment the above is 404 (supposed to be back up tomorrow, 1jul2025); archive still at the wayback machine (... but search doesn't work)
https://web.archive.org/web/20150912012745/http://vm.marist.edu/~vmshare/
some OCO-wars discussion here
https://www.garlic.com/~lynn/2024g.html#37 IBM Mainframe User Group SHARE

23june1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

more mention of OCO-wars & vmshare
https://www.garlic.com/~lynn/2024g.html#45 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024c.html#103 CP67 & VM370 Source Maintenance
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2023.html#68 IBM and OSS
https://www.garlic.com/~lynn/2022e.html#7 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2017g.html#23 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017.html#59 The ICL 2900
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015d.html#59 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014m.html#35 BBC News - Microsoft fixes '19-year-old' bug with emergency patch
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013o.html#45 the nonsuckage of source, was MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013m.html#55 'Free Unix!': The world-changing proclamation made 30 years ago today
https://www.garlic.com/~lynn/2013l.html#66 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2012j.html#31 How smart do you need to be to be really good with Assembler?
https://www.garlic.com/~lynn/2012j.html#30 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#20 Operating System, what is it?
https://www.garlic.com/~lynn/2011o.html#33 Data Areas?
https://www.garlic.com/~lynn/2007u.html#8 Open z/Architecture or Not
https://www.garlic.com/~lynn/2007u.html#6 Open z/Architecture or Not
https://www.garlic.com/~lynn/2007k.html#15 Data Areas Manuals to be dropped

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 01 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
https://www.garlic.com/~lynn/2025c.html#13 IBM 4341
https://www.garlic.com/~lynn/2025c.html#14 IBM 4341

370-XA was mostly features to compensate for MVS shortcomings (originally referred to as "811" for the Nov1978 publication date of most of the architecture and specification documents) ... remember, the head of POK had managed to convince corporate to kill VM/370, shut down the development group and transfer all the people to POK for MVS/XA; Endicott managed to save the VM/370 product mission, but had to recreate a development group from scratch.

The equivalent for Endicott was "E" architecture ... single virtual address space embedded in the microcode for DOS/VS and VS1.

VM/4341 throughput and performance was so great that they were being bought for cluster operation (five VM/4341s significantly beating a 3033, and national labs getting them for compute farms, sort of the leading edge of the coming cluster supercomputing tsunami). Jan1979 (before 4341 FCS), I was asked to do a (60s CDC6600 fortran) benchmark for a national lab that was looking at getting 70 for a compute farm (an early engineering E5 with restricted cycle time was still about the same as a CDC6600 and easily beat a 370/158). Then VM/4341+3370 didn't require datacenter infrastructure and large corporations were ordering hundreds at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami) ... inside IBM, departmental conference rooms were disappearing ... so many were being converted to house distributed departmental VM/4341s. MVS looked at the exploding distributed market and wanted part of it ... however the only new CKD was the (datacenter) 3380 ... eventually they came out with the 3375 (CKD emulated on 3370). However it didn't do MVS much good; the distributed market was scores of distributed VM/4341 systems per support person, while MVS was still scores of staff (support&operations) per system.

getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

posts mentioning vm/4341, cluster supercomputing tsunami and distributed computing tsunami
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#44 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#38 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#26 Virtual Machine History
https://www.garlic.com/~lynn/2024g.html#81 IBM 4300 and 3370FBA
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024e.html#46 Netscape
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024c.html#107 architectural goals, Byte Addressability And Beyond
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#18 IBM Left Behind
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022.html#124 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2020.html#38 Early mainframe security
https://www.garlic.com/~lynn/2019d.html#107 IBM HONE
https://www.garlic.com/~lynn/2019c.html#42 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#66 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2018c.html#80 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2015g.html#4 3380 was actually FBA?
https://www.garlic.com/~lynn/2014m.html#57 Why you need batch cloud computing
https://www.garlic.com/~lynn/2013c.html#75 Still not convinced about the superiority of mainframe security vs distributed?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 01 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
https://www.garlic.com/~lynn/2025c.html#13 IBM 4341
https://www.garlic.com/~lynn/2025c.html#14 IBM 4341
https://www.garlic.com/~lynn/2025c.html#77 IBM 4341

MVS I/O request drive/queue, interrupt, & channel redrive pathlengths were enormous ... XA offloaded much of it to hardware which could be implemented with dedicated real-time processors.

When I transferred to SJR, I got to wander around silicon valley datacenters (both IBM & non-IBM), including disk bldg14/engineering & bldg15/product test across the street. They were running 7x24, prescheduled, mainframe stand-alone disk I/O testing and had mentioned they had tried MVS for concurrent testing ... but it had 15min MTBF (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of concurrent testing, greatly improving productivity. As an aside, the pathlength was about 1/10th of MVS (& "never fail") ... getting throughput close to dedicated 370/XA I/O processing.

Also, in the morph of CP67->VM370, they dropped the "CHFREE" macro (invoked immediately after it was determined that UC SENSE wasn't needed), which had drastically cut the latency before checking for channel redrive ... putting it back into the I/O supervisor also helped increase total I/O throughput.

Note: since POK had got VM370 "killed" (modulo what Endicott saved for the midrange) ... there was presumed to be no need for a 370/XA instruction to enter virtual machine mode. Internally, as part of MVS/XA development, a vastly simplified (internal "ONLY") VMTOOL virtual machine facility was developed, along with the SIE instruction to enter/exit virtual machine mode. However, because of limited 3081 microcode space, and since SIE was just for MVS/XA development test ... the SIE microcode to enter/exit virtual machine mode had to be "paged".

Later, customers weren't converting from MVS to MVS/XA as planned ... Amdahl was having more success with HYPERVISOR (multiple domain, done in "macrocode", 370-like instructions running in microcode mode) being able to run MVS and MVS/XA concurrently. Eventually IBM releases VMTOOL as VM/MA (migration aid) and then VM/SF (system facility) ... but still not implemented for performance/throughput (and IBM's LPAR and PR/SM weren't released until almost a decade later for 3090).

... related: 1988, an IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit transfer, full-duplex, aggregate 200mbyte/sec. Then POK engineers get some serial stuff (they had been playing with for at least a decade) released with ES/9000 as ESCON (when it was already obsolete), initially 10mbyte/sec (later upgraded to 17mbyte/sec).

Then some POK engineers become involved with FCS and define a heavy-weight protocol (for FCS) that drastically cuts the throughput, eventually released as FICON. The most recent(?) public benchmark was the z196 "Peak I/O" getting 2M IOPS using 104 FICON (20K IOPS/FICON). About the same time, a FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs say SAPs (system assist processors that do actual I/O) should be kept to 70% CPU ... or about 1.5M IOPS. Also no real CKD has been made for decades, all being emulated on industry standard fixed-block disks.

getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

some recent posts mentioning vmtool sie (& amdahl) hypervisor macrocode
https://www.garlic.com/~lynn/2025b.html#46 POK High-End and Endicott Mid-range
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/360
Date: 04 Jul, 2025
Blog: Facebook
Early last decade a customer asked if I could track down the decision to add virtual memory to all 370s, and I found a staff member to the executive making the decision; basically MVT storage management was so bad that regions had to be specified four times larger than used ... as a result there were insufficient concurrently running regions to keep a typical 1mbyte 370/165 busy and justified. Going to running MVT in a 16mbyte virtual memory allowed the number of regions to be increased by a factor of four times (capped at 15 by the 4bit storage protect keys) with little or no paging, aka VS2/SVS (similar to running MVT in a CP67 16mbyte virtual machine). Ludlow was doing the initial implementation on a 360/67 (pending availability of engineering 370s with virtual memory support) ... a little bit of code for setting up the 16mbyte virtual memory tables and simple paging. The biggest task was EXCP/SVC0; similar to channel programs passed to CP67, the channel needed real addresses while all the passed channel programs had virtual addresses. A channel program copy had to be made, replacing virtual addresses with real ... and Ludlow borrows CP67 CCWTRANS to craft into EXCP/SVC0.
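
A minimal sketch of that copy/translate idea (Python, purely illustrative; the CCW fields, page size and page-table lookup are assumptions for the sketch, not the actual CCWTRANS/EXCP code, which also has to fix/pin pages and handle data chaining, TICs, IDALs, etc):

  PAGE_SIZE = 4096   # assumed page size for the sketch

  def translate(vaddr, page_table):
      # virtual address -> real address via a simple page table (dict: virtual page -> real frame)
      page, offset = divmod(vaddr, PAGE_SIZE)
      return page_table[page] * PAGE_SIZE + offset   # missing entry would mean "bring the page in first"

  def copy_channel_program(ccws, page_table):
      # the channel needs real addresses: return a copy of the caller's CCWs with
      # each virtual data address replaced by the corresponding real address
      return [dict(ccw, addr=translate(ccw["addr"], page_table)) for ccw in ccws]

  page_table = {0x5: 0x23}                                          # virtual page 5 -> real frame 0x23
  vccws = [{"cmd": 0x02, "addr": 0x5008, "count": 80, "flags": 0}]  # read 80 bytes at virtual 0x5008
  print(copy_channel_program(vccws, page_table))                    # addr becomes 0x23008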

The problem was that as systems got bigger, an ever increasing number of concurrently running regions was needed ... more than 15 ... so the move to VS2/MVS, giving each region its own separate 16mbyte virtual address space (using a separate virtual address space for protection, in place of storage protect keys). However the strong OS/360 API convention of pointer passing (rather than argument passing) resulted in placing an 8mbyte image of the MVS kernel into every 16mbyte virtual address space (leaving eight for program execution) ... so kernel calls could directly access the caller's virtual addresses.

However, subsystems were also given their own separate 16mbyte virtual address spaces ... and the subsystem APIs also needed pointer-passing argument addressing ... and thus was invented the 1mbyte common segment ("CSA" or common segment area) in every address space for the allocation of space for API arguments (for both user programs and subsystems). However, common segment space requirements were somewhat proportional to the number of subsystems and the number of concurrently executing programs ... and CSA quickly morphs into the multi-megabyte Common System Area. By 3033, it was frequently 5-6mbytes (leaving 2-3mbytes for applications) and threatening to become 8mbytes (leaving zero for user programs). This was part of the mad rush to 370/XA, 31bit addressing, access registers and program call/return.

MVS/XA builds a system table of subsystems that contains each subsystem's address space pointer. An application does a program call; the hardware moves the application's address space pointer to the secondary and places the subsystem's address space pointer in the primary (the subsystem now has addressing to both the subsystem primary address space and the calling application's secondary address space). The OS/360 heritage had been heavily constrained by the pointer-passing API convention assuming that called programs always had addressing to the calling parameters (and an increasingly large amount of the 16mbyte virtual addressing was being consumed compensating for the pointer-passing API and being able to address parameters).
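
A toy model of that program-call flow (Python; the names are illustrative assumptions, not actual MVS/XA control blocks):

  # program call: the subsystem's address space becomes primary, the caller's becomes
  # secondary, so the subsystem can still address the caller's pointer-passed parameters
  subsystem_table = {"SUBSYS_A": "asid_subsys_a"}   # hypothetical subsystem -> address space id

  def program_call(subsystem, caller_asid):
      primary = subsystem_table[subsystem]   # subsystem address space pointer -> primary
      secondary = caller_asid                # calling application's address space -> secondary
      return primary, secondary

  def program_return(caller_asid):
      # on return, the caller's address space is primary again
      return caller_asid, caller_asid

  print(program_call("SUBSYS_A", "asid_app1"))   # ('asid_subsys_a', 'asid_app1')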

trivia: os/360 was also heavily I/O oriented (CKD DASD & offloading lots of stuff to I/O, compensating for limited real storage and processing). I started pontificating in the mid-70s that increasing numbers of concurrently executing programs were being used to compensate for the I/O bottleneck limiting throughput (the resource trade-off was inverting). In the early 80s, I wrote a tome that in the time since 360 announce, disk relative system throughput had declined by an order of magnitude (disk I/O throughput got 3-5 times faster while systems increased 40-50 times). An IBM disk division executive took exception and assigned the division performance group to refute the claim. After a few weeks, they came back and basically said I had slightly understated the problem. Their analysis was then respun for a presentation on how to configure filesystems for increased system throughput (16Aug1984, SHARE 63, B874).
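
The arithmetic behind the order-of-magnitude claim, using just the factors quoted above:

  # relative disk system throughput decline since 360 announce
  system_speedup = (40, 50)   # systems got 40-50 times faster
  disk_speedup = (3, 5)       # disk I/O throughput got 3-5 times faster

  print(system_speedup[0] / disk_speedup[1])   # 40/5  =  8x decline
  print(system_speedup[1] / disk_speedup[0])   # 50/3 ~ 17x decline
  # i.e. roughly an order of magnitude, which the disk division performance group confirmed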

other trivia: when I 1st joined IBM, I was con'ed into helping the 370/195 group with multi-threading ... adding a 2nd I-stream simulating a 2-CPU multiprocessor. The 195 had out-of-order execution to help feed the execution units while waiting on memory ... but didn't have branch prediction, so conditional branches drained the pipeline and most codes ran at half MIP-rate. Going to two I-streams (each operating at half speed) had a chance of keeping the execution units fully utilized. The effort was canceled with the decision to add virtual memory to all 370s (it was felt too hard to retrofit virtual memory to the 195). This account of Amdahl winning the battle to make ACS 360-compatible, then IBM killing ACS/360 (folklore is that executives felt it would advance the state of the art too fast and IBM would lose control of the market), also mentions multi-threading
https://people.computing.clemson.edu/~mark/acs_end.html
also mentions some ACS/360 features not showing up until more than 20yrs later with ES/9000.

more trivia: claims that current (cache-miss) memory latency, when measured in count of processor cycles is similar to 60s disk latency when measured in count of 60s processor cycles ... i.e. memory is the new disk ... and lots of systems have gone to hardware architecture to compensate for memory latency (out-of-order execution, branch prediction, speculative execution, hardware multi-threading, etc) ... giving processor execution something to do while waiting on memory.

a few posts posts mentioning adding virtual memory to all 370s (may also mention 195, multithreading, acs/360)
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2014d.html#54 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CICS, 3-tier

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CICS, 3-tier
Date: 04 Jul, 2025
Blog: Facebook
I took a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO (reader/punch/printer frontend for 709) for the 360/30. The univ was getting a 360/67 for tss/360, replacing 709/1401 ... getting a 360/30 temporarily replacing the 1401 pending arrival of the 360/67. The univ shutdown the datacenter on weekends and I would have the place dedicated, although 48hrs w/o sleep made monday classes hard. I was given a pile of hardware&software manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, etc and within a few weeks had a 2000 card program.

Within a year of taking the intro class, the 360/67 arrives and I was hired fulltime responsible for os/360 (tss/360 never came to production, so it ran as a 360/65). Student fortran ran under a second on the 709; with os/360 it ran over a minute. I install HASP and that cuts the time in half. I then redo sysgen stage2, carefully placing datasets and pds members to optimize disk arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install UofWaterloo WATFOR.

The univ. library got an ONR grant to do an online catalog and used part of the money to get a 2321 datacell. It was also selected as betasite for the CICS program product (after unbundling and charging for software) and CICS support was added to my tasks. At first CICS wouldn't come up, and playing around I found CICS had some hardcoded, undocumented BDAM options while the library had built BDAM datasets with a different set of options.

various CICS ... gone 404, but lives on at wayback machine:
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm
and
https://web.archive.org/web/20060325095234/http://www.yelavich.com/history/ev196901.htm
https://web.archive.org/web/20060325095346/http://www.yelavich.com/history/ev197901.htm
https://web.archive.org/web/20060325095552/http://www.yelavich.com/history/ev196803.htm
https://web.archive.org/web/20060325095613/http://www.yelavich.com/history/ev200401.htm
https://web.archive.org/web/20070322221728/http://www.yelavich.com/history/ev199203.htm
https://web.archive.org/web/20071021041229/http://www.yelavich.com/history/ev198001.htm
https://web.archive.org/web/20081201133432/http://www.yelavich.com/history/ev197003.htm
https://web.archive.org/web/20090106064214/http://www.yelavich.com/history/ev197001.htm
https://web.archive.org/web/20090107054344/http://www.yelavich.com/history/ev200402.htm

CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CICS, 3-tier

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CICS, 3-tier
Date: 05 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#80 IBM CICS, 3-tier

not quite 56yrs, my wife was co-author of IBM AWP39, Peer-to-Peer Networking (about the same time SNA appeared). Later she was co-author of a response to a gov. request for a super secure large campus environment, where she included a 3-tier networking architecture. We were then including 3-tier, TCP/IP and ethernet in customer executive presentations and started seeing all sorts of attacks, misinformation claims, innuendos and IBM FUD from the communication group and SAA forces.

The agency then requested her and four others in to present. After my wife's presentation (with 3tier, while all the others were 2tier), they suspended everything while they rethought the whole infrastructure.

3 tier, middle layer, saa posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM HONE
Date: 05 Jul, 2025
Blog: Facebook
Not Descartes but Franklin; after graduating and joining IBM science center, one of my hobbies was enhanced production operating systems for internal datacenters, and online sales&marketing support HONE was 1st (and long-time) customer. One of my 1st IBM overseas trips was for HONE; the 1st HONE install outside the US was in Paris, La Defense, the brand new "Tour Franklin" bldg, still brown dirt, not yet landscaped.
https://en.wikipedia.org/wiki/Tour_Franklin
https://en.wikipedia.org/wiki/La_D%C3%A9fense

they put me up in Hotel Opera (now Le Grand) ... RER station to La Defense was just around the corner.

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

posts mentioning doing early 70s HONE install in Paris
https://www.garlic.com/~lynn/2024e.html#19 HONE, APL, IBM 5100
https://www.garlic.com/~lynn/2023f.html#77 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
https://www.garlic.com/~lynn/2019.html#68 23june1969 unbundling
https://www.garlic.com/~lynn/2018d.html#40 Online History
https://www.garlic.com/~lynn/2017g.html#15 HONE Systems
https://www.garlic.com/~lynn/2015h.html#86 Old HASP
https://www.garlic.com/~lynn/2015c.html#24 30 yr old email
https://www.garlic.com/~lynn/2012i.html#26 Top Ten Reasons Why Large Companies Fail To Keep Their Best Talent
https://www.garlic.com/~lynn/2010b.html#98 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2010.html#20 360 programs on a z/10
https://www.garlic.com/~lynn/2008r.html#40 Paris
https://www.garlic.com/~lynn/2007s.html#47 In The US, Email Is Only For Old People
https://www.garlic.com/~lynn/2007s.html#33 Age of IBM VM
https://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate
https://www.garlic.com/~lynn/2007g.html#48 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007b.html#55 IBMLink 2000 Finding ESO levels
https://www.garlic.com/~lynn/2006p.html#35 Metroliner telephone article
https://www.garlic.com/~lynn/2006o.html#6 Article on Painted Post, NY
https://www.garlic.com/~lynn/2006k.html#34 PDP-1
https://www.garlic.com/~lynn/2005o.html#34 Not enough parallelism in programming
https://www.garlic.com/~lynn/2005j.html#29 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005.html#13 Amusing acronym
https://www.garlic.com/~lynn/2004o.html#31 NEC drives
https://www.garlic.com/~lynn/2004n.html#37 passing of iverson
https://www.garlic.com/~lynn/2004d.html#25 System/360 40th Anniversary
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004b.html#58 Oldest running code
https://www.garlic.com/~lynn/2002h.html#67 history of CMS
https://www.garlic.com/~lynn/2002c.html#30 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/99.html#149 OS/360 (and descendants) VM system?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM HONE
Date: 06 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#82 IBM HONE

... periodically reposted

Learson tried (& failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy, pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

FS was completely different from 370 and was going to completely replace it (during FS, internal politics was killing off 370 efforts; the limited new 370 is credited with giving 370 system clone makers their market foothold). One of the final nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 apps were redone for an FS machine made out of the fastest available hardware technology, they would have throughput of a 370/145 (about 30 times slowdown). Claim was that if any other computer company had a loss the magnitude of FS, they would have been bankrupt and out of business.
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html

F/S implosion, from 1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive
... snip ...

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. In 89/90, the Marine Corps Commandant leverages Boyd for makeover of the corps (at a time when IBM was desperately in need of a makeover).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Before I had graduated, I had been brought into the Boeing CFO office to help with the creation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit. I thought the Renton datacenter possibly largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (joke that Boeing was getting 360/65s like other companies got keypunches). When I graduate, I join IBM science center (instead of staying with Boeing CFO).

Boyd had lots of stories, including being very vocal that the electronics across the trail wouldn't work; possibly as punishment he is put in command of "spook base" (about the same time I'm at Boeing). Boyd biography has it that "spook base" was a $2.5B "windfall" for IBM (ten times Renton, and helping get through the FS debacle).
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

Then (20 yrs after Learson tried&failed to block destruction of Watson culture/legacy) IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company (take-off on "baby bell" breakup decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup and uses some of the same techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Boyd conferences continued to be sponsored at (Quantico) Marine Corps University after Boyd passes in 1997 (USAF pretty much had disowned him by then and it was the Marines at Arlington).

other trivia: AMEX and KKR were in competition for (private equity) LBO of RJR and KKR "wins". Then KKR runs into problems and hires away the AMEX president to help (who later is hired by IBM board to try and save IBM)
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

Gerstner then leaves IBM and becomes CEO of a KKR private-equity competitor

Barbarians at the Capitol: Private Equity, Public Enemy (2007)
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/
Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a Washington-based global private equity firm whose 2006 revenues of $87 billion were just a few billion below ibm's. Carlyle has boasted George H.W. Bush, George W. Bush, and former Secretary of State James Baker III on its employee roster
... snip ...

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM SNA

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM SNA
Date: 07 Jul, 2025
Blog: Facebook
OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

Co-worker responsible for the science center CP-67 wide-area network (non-SNA), account by one of the 1969 GML inventors at science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

which grows into the corporate internal network (larger than arpanet/internet until sometime mid/late 80s, about the time internal network was forced to convert to SNA) and technology also used for the corporate sponsored univ. BITNET.

Edson (passed aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

mid-80s, the communication group was fighting off release of mainframe TCP/IP and when they lost, they changed their tactic. Since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then add RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor; a 500 times improvement in bytes moved per instruction executed.

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

2nd half of 80s, I was on Chessin's XTP TAB (which the communication group fought to block). Since there were a number of gov. projects involved, XTP (as HSP) was taken to the ISO chartered ANSI X3S3.3 (OSI level 3&4) for standardization. They eventually told us that ISO had a requirement that standards could only be done for protocols that conform to the OSI Model; XTP/HSP didn't because 1) it supported internetworking, which doesn't exist in OSI, 2) it bypassed the layer 3/4 interface going directly to the LAN MAC interface, and 3) it supported the LAN/MAC interface, which doesn't exist in OSI. Joke was that (internet) IETF standards required two interoperable implementations before final standard, while ISO didn't even require a standard to be implementable.

XTPHSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM SNA

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM SNA
Date: 07 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#84 IBM SNA

Late 80s, a senior disk engineer got a talk scheduled at the communication group's world-wide, internal, annual conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing the mainframe to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions, but they were constantly vetoed by the communication group (with their corporate ownership of everything that crossed datacenter walls). One of the disk division's partial work-arounds was investing in distributed computing startups that would use IBM disks ... and the disk executive would periodically ask us to drop by his investments to see if we could help. However, the communication group's datacenter stranglehold wasn't just disks, and a couple years later, IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" (take-off on "baby bell" breakup a decade earlier) in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left the company, but get call from the bowels of (corp hdqtrs) Armonk asking us to help with the corporate breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup (but it wasn't long before the disk division was "divested").

communication group fiercely fighting off non-SNA
https://www.garlic.com/~lynn/subnetwork.html#terminal
getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM SNA

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM SNA
Date: 07 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#84 IBM SNA
https://www.garlic.com/~lynn/2025c.html#85 IBM SNA

Early 80s, I got the HSDT project: T1 and faster computer links, and lots of battles with the communication group (in the 60s, IBM had the 2701 controller that supported T1, but the 70s move to SNA and associated issues seemed to cap controllers at 56kbits/sec).

HSDT was working with the NSF Director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

The Rise And Fall Of Unix

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Rise And Fall Of Unix
Newsgroups: alt.folklore.computers
Date: Mon, 07 Jul 2025 19:01:27 -1000
Peter Flass <Peter@Iron-Spring.com> writes:
This drove me nuts. I may have this wrong because it's 45+ years ago, but I think BTAM received data LSB first, and I had to translate, or else the documentation showed the characters LSB first, and I had to mentally translate all the doc.

I had taken a 2 credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO for 360/30. Univ. was getting 360/67 for tss/360 (replacing 709/1401) and got a 360/30 temporarily until the 360/67 was available. They gave me a pile of software and hardware manuals and (since they shutdown the datacenter on weekends) I had the datacenter dedicated (although 48hrs w/o sleep made monday classes hard); got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and had a 2000 card implementation within a few weeks.

360/67 arrived within a year of taking the intro class, and I was hired fulltime for os/360 (tss/360 never came to production). Student fortran ran under a second on the 709 but over a minute on os/360 (360/67 running as 360/65). I add HASP and it cuts the time in half. I then redo STAGE2 sysgen to carefully place datasets and PDS members to optimize disk arm seek and multitrack search, cutting another 2/3rds to 12.9secs. Never got better than the 709 until I install UofWaterloo WATFOR.

CSC then comes out to install CP67 (3rd after CSC itself and MIT Lincoln Labs). It had 2741 and 1052 terminal support with automagic terminal type and used SAD CCW to change port terminal type scanner. Univ. had some number of (tty33&tty35) ascii terminals so I added ASCII terminal support, borrowing BTAM BCD<->ASCII translate tables.

I then wanted a single dial-in phone number for all terminal types; didn't quite work, the IBM controller could change the terminal type port scanner ... but had hard-wired port line speeds.

This kicked off a univ. project to build an IBM clone controller: build a mainframe channel interface card for an Interdata/3 programmed to emulate the IBM controller (with the addition that it supported auto-baud). We initially didn't look at the IBM controller spec closely enough and when terminal data 1st arrived from the clone in mainframe memory, it was all garbage. We found that for incoming terminal data, the leading bit was placed in the low-order bit position of each byte ... so data arrived in mainframe memory with the bits within each byte reversed.

Wasn't so obvious with 1052&2741 terminals that used tilt-rotate codes (not actual bcd ... or ascii).
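To make the symptom concrete, here is a minimal C sketch of undoing the per-byte bit reversal described above (my illustration only, not how it was actually handled at the time):

/* illustrative only: undo the per-byte bit reversal described above */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint8_t reverse_bits(uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        r = (uint8_t)((r << 1) | (b & 1));  /* move low-order bit of b into r */
        b >>= 1;
    }
    return r;
}

int main(void)
{
    /* ASCII "HI" (0x48 0x49) as it would appear after arriving bit-reversed */
    uint8_t garbled[] = { 0x12, 0x92 };
    for (size_t i = 0; i < sizeof garbled; i++)
        putchar(reverse_bits(garbled[i]));   /* prints "HI" */
    putchar('\n');
    return 0;
}

In practice the reversal could just as well be folded into the incoming translate tables (a 256-entry table indexed by the garbled byte), so no separate per-bit loop is needed.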

Later, upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and then Perkin-Elmer) sold it as an IBM clone controller (and four of us get written up for some part of the IBM clone controller business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

other trivia: account about the biggest computer "goof" ever, 360s originally were going to be ASCII machines, but the ASCII unit record gear wasn't ready ... so they were going to start shipping with old BCD gear (with EBCDIC) and move later
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

360 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM SNA

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM SNA
Date: 08 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#84 IBM SNA
https://www.garlic.com/~lynn/2025c.html#85 IBM SNA
https://www.garlic.com/~lynn/2025c.html#86 IBM SNA

Lots of misinformation and IBM FUD. IBM AWD did their own cards for PC/RT (16bit pc/at bus), including 4mbit token-ring card. Then for RS/6000 (microchannel), AWD was told they couldn't do their own cards, but had to use (communication group heavily performance kneecapped) PS2 microchannel cards. The PS2 microchannel 16mbit token-ring card had lower card throughput than PC/RT 4mbit token-ring card (jokes about PC/RT 4mbit TR server having higher throughput than RS/6000 16mbit TR server).

The "new" Almaden research center was heavily provisioned with IBM CAT wiring, presuming 16mbit TR, however found that 10mbit Ethernet (running over same wiring) had higher aggregate throughput and lower latency, and (of course) $69 10mbit Ethernet cards had much higher throughput than $800 16mbit Token-ring cards.

300 (RS/6000) $69 10mbit Ethernet cards: $20,700, with significantly higher card throughput, LAN throughput and lower latency than 300 $800 16mbit token-ring cards: $240,000. For the difference ($219,300) could get several high-performance TCP/IP routers with mainframe (not just IBM) channel interfaces, 16 high-performance 10mbit ethernet interfaces, along with options for FDDI lan, telco T1 & T3 interfaces, other stuff ... and could configure as few as three RS/6000s sharing a 10mbit ethernet LAN.

My wife had been con'ed into writing IBM response to gov. agency for extremely highly secure, large campus operation and introduced 3tier network operation. We were then out giving 3tier, tcp/ip, ethernet presentations to customer executives (and taking all sorts of attacks from the communication group and SAA forces).

trivia: early 90s, IBM hires a silicon valley contractor to implement TCP/IP support directly in VTAM. What he demoed was TCP running much faster than LU6.2. He was then told that "everybody knows" a "proper" TCP/IP implementation is much slower than LU6.2 ... and they were only paying for a "proper" implementation. Note: late 80s, a univ. examined mainframe LU6.2 VTAM and claimed a 160,000 instruction pathlength ... compared to a typical (BSD4.3 reno/tahoe) UNIX TCP implementation of only 5,000 instructions.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
3tier networking posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

recent posts mentioning $69 Ethernet and $800 Token-ring cards
https://www.garlic.com/~lynn/2025c.html#74 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#56 IBM OS/2
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#42 SNA & TCP/IP
https://www.garlic.com/~lynn/2025c.html#41 SNA & TCP/IP
https://www.garlic.com/~lynn/2025c.html#34 TCP/IP, Ethernet, Token-Ring
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#134 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#77 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#12 IBM APPN
https://www.garlic.com/~lynn/2024g.html#101 IBM Token-Ring versus Ethernet
https://www.garlic.com/~lynn/2024g.html#18 PS2 Microchannel
https://www.garlic.com/~lynn/2024f.html#42 IBM/PC
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#138 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#102 Rise and Fall IBM/PC
https://www.garlic.com/~lynn/2024e.html#81 IBM/PC
https://www.garlic.com/~lynn/2024e.html#71 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#64 RS/6000, PowerPC, AS/400
https://www.garlic.com/~lynn/2024e.html#56 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024b.html#52 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#47 OS2
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#97 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#7 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

Open-Source Operating System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Open-Source Operating System
Date: 08 Jul, 2025
Blog: Facebook
23jun1969 unbundling announcement, starting to charge for (application) software (but managed to make the case that kernel software should still be free), SE services, maint .... CP67/CMS & VM370 (before and after 23jun1969) distributed full source by default (other software products more commonly only on request), and maintenance also included full source updates.

First part of 70s, IBM had Future System completely different from 370 and was going to completely replace it (internal politics was killing off 370 efforts and the lack of new 370 during the period is credited with giving the clone 370 system makers their market foothold). When FS imploded, there was mad rush to get stuff back into the 370 product pipelines .... and the rise of the clone 370 system makers also motivated start charging for kernel software.

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters ... and pieces were chosen as guinea pig for kernel software charging; initially new addons, transitioning to full kernel charging by the early 80s. After full kernel charging, the announcement was made that software would be "Object Code Only", kicking off the OCO-wars with customers.

Note: TYMSHARE offered their VM370/CMS based online computer conferencing to the (mainframe user group) SHARE, starting AUG1976 ... archives here:
http://vm.marist.edu/~vmshare
where there are some OCO-war discussions, example
http://vm.marist.edu/~vmshare/browse.cgi?fn=OCO&ft=PROB&args=object-code-only#hit

trivia: After decision to add virtual memory to all 370s, there was decision to produce VM370 product. Initial morph of CP67->VM370 dropped and/or simplified a lot of stuff (including multiprocessor support). I started putting stuff back in with VM370R2 ("CSC/VM" for internal datacenters). Then for VM370R3-base, I put multiprocessor support back in, initially for the internal online sales&marketing support HONE systems.

after transferring to SJR, I got to wander around silicon valley datacenters and would drop into TYMSHARE (and/or see them at the monthly user group BAYBUNCH meetings hosted at Stanford SLAC). I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on internal network and systems (biggest problem was lawyers worried that internal employees could be contaminated if exposed directly to unfiltered customer opinions).

23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundling
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
online sales&marketing support HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

4th Generation Programming Language

From: Lynn Wheeler <lynn@garlic.com>
Subject: 4th Generation Programming Language
Date: 09 Jul, 2025
Blog: Facebook
4th gen programming language
https://en.wikipedia.org/wiki/Fourth-generation_programming_language

NCSS was 60s commercial CP/67 spin-off from IBM cambridge science center that had done virtual machines and CP/67 (precursor to VM/370). Mathematica made Ramis available through NCSS.
https://en.wikipedia.org/wiki/Ramis_software

When Mathematica makes Ramis available to TYMSHARE for their VM370-based commercial online service, NCSS does Nomad, their own version https://en.wikipedia.org/wiki/Nomad_software
https://www.computerhistory.org/collections/catalog/102658182
and then there was follow-on FOCUS from IBI
https://en.wikipedia.org/wiki/FOCUS
Information Builders's FOCUS product began as an alternate product to Mathematica's RAMIS, the first Fourth-generation programming language (4GL). Key developers/programmers of RAMIS, some stayed with Mathematica others left to form the company that became Information Builders, known for its FOCUS product
... snip ...

IDC was another 60s commercial CP/67 spin-off of IBM CSC, some mention "first financial language"
https://archive.computerhistory.org/resources/access/text/2015/09/102702884-05-01-acc.pdf
Then one of the IDC/FFL people joins with Bricklin to do Visicalc
https://apennings.com/how-it-came-to-rule-the-world/digital-monetarism/visicalc-and-the-rise-of-the-pc-spreadsheet/
https://en.wikipedia.org/wiki/VisiCalc

trivia: SQL (& RDBMS) was originally done on VM370/CMS aka System/R at IBM SJR,
https://en.wikipedia.org/wiki/IBM_System_R
later tech transfer to Endicott for SQL/DS and nearly decade after start of System/R, tech transfer to STL for DB2.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Commercial Online Virtual Machine Systems
https://www.garlic.com/~lynn/submain.html#online
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

posts mentioning NCSS, Ramis, Nomad, Focus
https://www.garlic.com/~lynn/2025.html#131 The joy of FORTRAN
https://www.garlic.com/~lynn/2024g.html#9 4th Generation Programming Language
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2023g.html#64 Mainframe Cobol, 3rd&4th Generation Languages
https://www.garlic.com/~lynn/2023.html#13 NCSS and Dun & Bradstreet
https://www.garlic.com/~lynn/2022f.html#116 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021g.html#23 report writer alternatives
https://www.garlic.com/~lynn/2021f.html#67 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2019d.html#16 The amount of software running on traditional servers is set to almost halve in the next 3 years amid the shift to the cloud, and it's great news for the data center business
https://www.garlic.com/~lynn/2019d.html#4 IBM Midrange today?
https://www.garlic.com/~lynn/2018c.html#85 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2016e.html#107 some computer and online history
https://www.garlic.com/~lynn/2015h.html#27 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2014i.html#32 Speed of computers--wave equation for the copper atom? (curiosity)
https://www.garlic.com/~lynn/2014e.html#34 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2013f.html#63 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#57 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2013c.html#56 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#30 General Mills computer
https://www.garlic.com/~lynn/2012e.html#84 Time to competency for new software language?
https://www.garlic.com/~lynn/2012d.html#51 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#1 Deja Cloud?
https://www.garlic.com/~lynn/2011m.html#69 "Best" versus "worst" programming language you've used?
https://www.garlic.com/~lynn/2010q.html#63 VMSHARE Archives
https://www.garlic.com/~lynn/2010e.html#55 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#54 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object
https://www.garlic.com/~lynn/2006k.html#37 PDP-1
https://www.garlic.com/~lynn/2006k.html#35 PDP-1
https://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
https://www.garlic.com/~lynn/2003d.html#15 CA-RAMIS

--
virtualization experience starting Jan1968, online at home since Mar1970

FCS, ESCON, FICON

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: FCS, ESCON, FICON
Date: 11 Jul, 2025
Blog: Facebook
1980, STL (now renamed SVL) was bursting at the seams and was moving 300 people (& 3270s) from the IMS group to an offsite bldg (with service back to the STL datacenter). They had tried "remote" 3270, but found the human factors totally unacceptable. I get con'ed into doing channel-extender support, allowing channel attached 3270 controllers at the offsite bldg (with no perceptible difference in human factors between offsite and in STL). Then the hardware vendor tried to get IBM to release my support, but there was a group in POK working on some fibre stuff that got it vetoed (afraid that if it was released, it would make it harder to get their stuff released).

Other trivia: the high-end 370s with all the 3270 controllers moved to channel-extenders saw total system throughput increase 10-15%. STL had been spreading the 3270 controllers across all the channels shared with 3830/3330 DASD. It turns out the channel-extenders had significantly lower channel busy for the same amount of 3270 activity (than directly attached 3270 controllers) ... and STL considered moving *ALL* 3270 controllers to channel-extenders.

Then in 1988, IBM branch office asks me to help LLNL (national lab) standardize some serial stuff they were working with, which quickly becomes fibre-channel standard ("FCS", including some stuff I had done in 1980, initially 1gbit/sec transfer, full-duplex, aggregate 200mbyte/sec).

Eventually, the POK group gets their stuff released with ES/9000 as ESCON (when it is already obsolete), initially 10mbyte/sec (later upgraded to 17mbyte/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that drastically reduces throughput ... which eventually is released as FICON. 2010 z196 "Peak I/O" benchmark got 2M IOPS using 104 FICON (about 20K IOPS/FICON, running over 104 FCS). About the same time an FCS was released for E5-2600 server blades claiming over million IOPS (two such native FCS having higher throughput than 104 FICON).

Note, IBM pubs claim SAPs (system assist processors that do actual I/O) should be kept to 70% CPU ... or about 1.5M IOPS. Also, no CKD DASD has been made for decades, all being simulated on industry standard fixed-block disks.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

FCS, ESCON, FICON

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: FCS, ESCON, FICON
Date: 11 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#91 FCS, ESCON, FICON

Other trivia: when I transferred from IBM CSC (east coast) to IBM SJR (west coast), I got to wander around datacenters in silicon valley, including disk bldg14/engineering and bldg15/product test ... across the street. They were running 7x24, prescheduled, stand-alone testing ... and mentioned that they had recently tried MVS (but it had 15min mean-time-between-failure, requiring manual re-ipl, in that environment). I offer to rewrite the I/O supervisor, making it bullet proof and never fail, allowing any amount of ondemand, concurrent testing in that environment, greatly improving productivity. I then do an (internal only) research report on all the work ... happening to mention the MVS 15min MTBF ... which brings down the wrath of the MVS organization on my head.

A couple years later ... not long before 3880&3380 were about to ship, FE had 57 simulated hardware errors they considered likely to occur. MVS was still crashing in all 57 cases, and in 2/3rds of the cases there was no indication of what caused the crash.

Bldg15 got early engineering models for disk testing ... got an engineering 3033, 1st outside POK processor engineering. Testing only took a percent or two of the 3033 CPU .... so we scrounge up a 3830 and a 3330 string and set up our own private online service.

After Future System imploded and before transferring to SJR, I got asked to help with a 370 16-CPU multiprocessor and we had con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was really great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-CPU support (POK doesn't ship a 16-CPU machine until after the turn of the century). At the time, MVS docs said 2-CPU support was only 1.2-1.5 times the throughput of a 1-CPU machine (inefficient/heavy-weight multiprocessor overhead, i.e. each processor effectively delivering only 60-75% of single-CPU throughput). Head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers, "heads down and no distractions" (the 3033 processor engineers would still cover for me sneaking back into POK).

posts getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

mentions not long before 3880/3380 ship, MVS still failing
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2024e.html#35 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#92 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2023g.html#105 VM Mascot
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2022.html#35 Error Handling
https://www.garlic.com/~lynn/2021k.html#59 IBM Mainframe
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2018d.html#86 3380 failures
https://www.garlic.com/~lynn/2017g.html#61 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017c.html#14 Check out Massive Amazon cloud service outage disrupts sites
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2014i.html#91 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2010e.html#30 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2009o.html#17 Broken hardware was Re: Broken Brancher

--
virtualization experience starting Jan1968, online at home since Mar1970

FCS, ESCON, FICON

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: FCS, ESCON, FICON
Date: 12 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#91 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON

Other 1988 trivia: last product we did at IBM, approved 1988 ... HA/6000 ... originally for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000 (run out of Los Gatos lab, bldg29). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scaleup with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which have VAXCluster support in the same source base with UNIX). Was working with Hursley 9333s and hoping they could upgrade to interoperate with FCS (planning for HA/CMP high-end).

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid-92 and 128-system clusters ye-92. Mid Jan1992, presentations with FSD convince them to use HA/CMP cluster scaleup for gov. supercomputer bids. Late Jan1992, cluster scaleup is transferred to be announced as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than 4-systems (we leave IBM a few months later).

Some concern that cluster scaleup would eat the mainframe .... 1993 MIPS benchmark (industry standard, number of program iterations compared to reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS


The executive we had been reporting to goes over to head up Somerset/AIM (apple, ibm, motorola) ... single chip power/pc with the Motorola 88k bus enabling shared-memory, tightly-coupled, multiprocessor system implementations

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

FCS, ESCON, FICON

From: Lynn Wheeler <lynn@garlic.com>
Subject: FCS, ESCON, FICON
Date: 13 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#91 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#93 FCS, ESCON, FICON

FICON ... overview
https://en.wikipedia.org/wiki/FICON
IBM System/Fibre Channel
https://www.wikiwand.com/en/articles/IBM_System/Fibre_Channel
Fibre Channel
https://www.wikiwand.com/en/articles/Fibre_Channel
FICON is a protocol that transports ESCON commands, used by IBM mainframe computers, over Fibre Channel. Fibre Channel can be used to transport data from storage systems that use solid-state flash memory storage medium by transporting NVMe protocol commands.
... snip ...

Evolution of the System z Channel
https://web.archive.org/web/20170829213251/https://share.confex.com/share/117/webprogram/Handout/Session9931/9934pdhj%20v0.pdf

The above mentions zHPF, a little more similar to what I had done in 1980 and also to the original (1990) native FCS; early documents claimed something like a 30% throughput improvement
https://share.confex.com/share/125/webprogram/Handout/Session17576/zHPF_presentation_SHARE_Orlando.pdf
... pg39 claims increase in 4k IOs/sec for z196 from 20,000/sec per FCS to 52,000/sec and then 92,000/sec.
https://web.archive.org/web/20160611154808/https://share.confex.com/share/116/webprogram/Handout/Session8759/zHPF.pdf

still below the 2010 native FCS (for E5-2600 server blades) claiming over a million, but nearly a five times improvement (from 20K/sec to 92K/sec)

FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

posts mentioning zHPF
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
https://www.garlic.com/~lynn/2025b.html#111 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#110 System Throughput and Availability
https://www.garlic.com/~lynn/2025.html#81 IBM Bus&TAG Cables
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2018f.html#21 IBM today
https://www.garlic.com/~lynn/2017j.html#88 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017j.html#3 Somewhat Interesting Mainframe Article
https://www.garlic.com/~lynn/2017i.html#59 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#94 Migration off Mainframe to other platform
https://www.garlic.com/~lynn/2017d.html#88 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#16 System z: I/O Interoperability Evolution - From Bus & Tag to FICON
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016g.html#28 Computer hard drives have shrunk like crazy over the last 60 years -- here's a look back
https://www.garlic.com/~lynn/2016c.html#61 Can commodity hardware actually emulate the power of a mainframe?
https://www.garlic.com/~lynn/2016c.html#28 CeBIT and mainframes
https://www.garlic.com/~lynn/2016c.html#24 CeBIT and mainframes
https://www.garlic.com/~lynn/2015.html#40 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2015.html#39 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2014h.html#72 ancient terminals, was The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014g.html#12 Is end of mainframe near ?
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012o.html#25 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012o.html#6 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#51 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012n.html#44 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#19 How to get a tape's DSCB
https://www.garlic.com/~lynn/2012n.html#9 How do you feel about the fact that today India has more IBM employees than any of the other countries in the world including the USA.?
https://www.garlic.com/~lynn/2012m.html#43 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#28 I.B.M. Mainframe Evolves to Serve the Digital World
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#5 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#4 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#3 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee

--
virtualization experience starting Jan1968, online at home since Mar1970

FCS, ESCON, FICON

From: Lynn Wheeler <lynn@garlic.com>
Subject: FCS, ESCON, FICON
Date: 14 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#91 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#93 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#94 FCS, ESCON, FICON

long ago and far away, took a two credit hr intro to fortran/computers; at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30. Univ was getting 360/67 for tss/360 to replace 709/1401 ... and temporarily a 360/30 replaced the 1401 pending the 360/67. Univ. shutdown the datacenter on weekends and I had the place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a bunch of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc and within a few weeks had a 2000 card assembler program. Within a year of taking the intro class, the 360/67 arrives and I'm hired fulltime responsible for OS/360 (tss/360 never came to production fruition) and continued to have my 48hr dedicated weekend window.

Later, CSC comes out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs). It comes with 1052 & 2741 terminal support with automagic terminal type that switches the terminal type port scanner. Univ. has ascii 33s&35s and I add ascii terminal support integrated with automagic terminal type. I then want a single dial-in phone number ("hunt group") for all terminal types, but it didn't quite work; IBM had taken a short-cut and hard-wired port line speeds. We start a univ. clone controller project, build a mainframe channel interface board for an Interdata/3 programmed to emulate the IBM controller with the addition that it can do auto-baud. It is then upgraded with an Interdata/4 for the channel interface and a cluster of Interdata/3s for port interfaces ... four of us then are written up for some part of the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

Initial test of the Interdata/3 and (multiplexor) channel interface board resulted in red-lighting the 360/67 ... turns out the channel interface board was holding memory bus access for too long. 360/67 high-speed loc. 80 timer ... if the timer "tic'ed" and went to update loc. 80 memory while the memory bus was held, it would delay the memory update until the memory bus was free ... if the memory bus was still held when the next timer tic happened, it would "red-light" the processor.
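A minimal C sketch of that red-light rule (purely my illustration; the actual behavior was in 360/67 hardware, and the structure here is made up for clarity):

/* illustrative model only: a timer tic that finds the memory bus held
   defers its loc.80 update; if the bus is still held at the next tic,
   the processor "red-lights" (check-stops) */
#include <stdbool.h>
#include <stdio.h>

struct model360_67 {
    bool bus_held;        /* e.g. channel interface board holding the memory bus */
    bool update_pending;  /* deferred loc.80 timer update */
    bool red_light;       /* processor check-stopped */
};

static void timer_tic(struct model360_67 *m)
{
    if (m->red_light)
        return;
    if (m->bus_held) {
        if (m->update_pending)
            m->red_light = true;      /* two tics with the bus held */
        else
            m->update_pending = true; /* defer the loc.80 update */
    } else {
        m->update_pending = false;    /* bus free: update completes */
    }
}

int main(void)
{
    struct model360_67 m = { .bus_held = true };  /* board holds the bus too long */
    timer_tic(&m);   /* first tic: update deferred */
    timer_tic(&m);   /* second tic, bus still held: red light */
    printf("red_light = %d\n", m.red_light);
    return 0;
}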

clone controller (plug compatible) posts
https://www.garlic.com/~lynn/submain.html#360pcm
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

5-CPU 370/125

From: Lynn Wheeler <lynn@garlic.com>
Subject: 5-CPU 370/125
Date: 15 Jul, 2025
Blog: Facebook
In the wake of the FS implosion there was a mad rush to get stuff back into the 370 product pipelines ... and the 125 group asks me if I could do a 5-CPU multiprocessor. The 115&125 had a nine-position memory bus ... with all the installed 115 microprocessors the same but with different microcode .... including the microprocessor with the 370 microcode. The 125 was the same, except the microprocessor running the 370 microcode was 50% faster. They wanted to make up to five of the microprocessors the faster CPU ... all running 370 microcode.

At the same time Endicott cons me into helping with the 138/148 microcode assist ("ECPS") .... I would also have similar ECPS running on the 125s. Then Endicott complains that the 5-CPU 125 would overlap the 148 performance (at better price/performance) .... and gets the 5-CPU project canceled.

archived post w/preliminary 138/148 ECPS study
https://www.garlic.com/~lynn/94.html#21

Trivia: a few years later at an ACM SIGOPS conference, the i432 people gave a talk. For the 5-CPU 125, I was putting a lot of the multiprocessor support into microcode .... the i432 people said they had done something similar (somewhat masking how many processors were actually running) but it was all in silicon ... so any glitches required new replacement chips in order to fix.

posts mentioning multiprocessor 125
https://www.garlic.com/~lynn/submain.html#bounce
SMP, tightly coupled, shared memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mention ACM SIGOPS, i432 & 370/125
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2019c.html#33 IBM Future System
https://www.garlic.com/~lynn/2017g.html#28 Eliminating the systems programmer was Re: IBM cuts contractor bil ling by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#61 Typesetting

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT Link Encryptors

From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT Link Encryptors
Date: 15 Jul, 2025
Blog: Facebook
Early 80s, got the HSDT project, T1 and faster computer links (satellite and terrestrial) and battles with the communication group (60s, IBM had the 2701 that supported T1; the 70s transition to SNA/VTAM and various issues capped controllers at 56kbit/sec links). A co-worker at CSC was responsible for the CP67-based science center wide-area network (which morphs into the IBM internal network, technology also used for the corporate sponsored univ BITNET). Tidbit from one of the 1969 inventors of GML at the science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

We then transfer out to SJR in the 2nd half of the 70s. Corporate requirement was that all internal corporate network links had to be encrypted (some amount of gov. resistance, especially when links crossed national boundaries) and I hated what I had to pay for T1 link encryptors; faster link encryptors were almost impossible to find. Second half of 80s, got involved in doing our own link encryptors; objective was less than $100 to build and able to handle at least 3mbyte/sec (not mbit). Then the corporate crypto group said that it drastically weakened DES and couldn't be used. It took me three months to convince them that, rather than weaker than DES, it was much stronger. It was a hollow victory ... I was then told that there was only one entity allowed to use such crypto; I could make as many as I wanted, but they all had to be shipped to them.

Nearly a decade after leaving IBM, did a high security crypto chip; the top TD to the agency DDI was running a panel at IDF in the secure computing track and asked me to do a talk ... gone 404, but lives on at the wayback machine.
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml

some HSDT link encryptor posts
https://www.garlic.com/~lynn/2024d.html#75 Joe Biden Kicked Off the Encryption Wars
https://www.garlic.com/~lynn/2023f.html#79 Vintage Mainframe XT/370
https://www.garlic.com/~lynn/2023b.html#5 IBM 370
https://www.garlic.com/~lynn/2022g.html#17 Early Internet
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2019e.html#86 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2017d.html#10 Encryp-xit: Europe will go all in for crypto backdoors in June
https://www.garlic.com/~lynn/2017c.html#69 ComputerWorld Says: Cobol plays major role in U.S. government breaches
https://www.garlic.com/~lynn/2017b.html#44 More on Mannix and the computer
https://www.garlic.com/~lynn/2013l.html#23 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013g.html#31 The Vindication of Barb
https://www.garlic.com/~lynn/2006n.html#36 The very first text editor

--
virtualization experience starting Jan1968, online at home since Mar1970

5-CPU 370/125

From: Lynn Wheeler <lynn@garlic.com>
Subject: 5-CPU 370/125
Date: 16 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#96 5-CPU 370/125

I had transferred from CSC to SJR (west coast) and got to wander around datacenters in silicon valley, including disk engineering/bldg14 and disk product test/bldg15 across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned they had recently tried MVS, but it had 15min mean-time-between-failure (in that environment) requiring manual re-ipl. I offered to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. Bldg15 got the 1st engineering 3033 outside POK processor engineering and later an engineering 4341. Jan1979, the branch office heard I had the engineering 4341 and cons me into running a benchmark for a national lab that was looking at getting 70 for a compute farm (sort of leading edge of the coming cluster supercomputing tsunami).

playing disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

Eventually SJR had a cluster project where five 4341s ran rings around a 3033 ... at much better price/performance, much smaller physical footprint, less power and less cooling. The cluster operation used trotter/3088 with some hardware tweaks to make the CTCA protocol much faster ... able to do a cluster sync operation in well under a second. Then the communication group said that if it was ever to be released, it had to use VTAM ... which drove the same cluster sync operation from well under a second to over 30secs.

1988, a project was approved to do HA/6000, originally for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

when I start doing technical/scientific cluster scaleup with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS vendors (that have VAXCluster support in the same source base with UNIX .... Oracle, Sybase, Ingres, Informix). Was working with Hursley 9333s and hoping they could be upgraded to be interoperable with FCS for the HA/CMP high-end (also 1988, a branch office asked if I could help LLNL standardize some serial stuff they were working with, which quickly becomes the fibre-channel standard, initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec).

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid-92 and 128-system clusters ye-92. Mid Jan1992, presentations with FSD convince them to use HA/CMP cluster scaleup for gov. supercomputer bids. Late Jan1992, cluster scaleup is transferred to be announced as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than 4-systems (we leave IBM a few months later).

Some concern that cluster scaleup would eat the mainframe .... 1993 MIPS benchmark (industry standard, number of program iterations compared to reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS
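
Straightforward arithmetic on the benchmark numbers above (a quick sketch in Python, my calculation; the cluster sizes are the 16- and 128-system configurations mentioned earlier):

# aggregate MIPS of the planned HA/CMP clusters vs the 8-CPU ES/9000-982
es9000_982 = 408          # MIPS, 8 CPUs
rs6000_990 = 126          # MIPS per RS6000/990 system
for systems in (16, 128):
    cluster = systems * rs6000_990
    print(f"{systems:>3}-system cluster: {cluster:>6} MIPS "
          f"(~{cluster / es9000_982:.0f}x the ES/9000-982)")
# 16 systems ~ 2016 MIPS (~5x); 128 systems ~ 16128 MIPS (~40x)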


The executive we had been reporting to goes over to head up Somerset/AIM (apple, ibm, motorola) ... single-chip power/pc with the Motorola 88k bus enabling shared-memory, tightly-coupled, multiprocessor system implementations

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

trivia: late 70s/early 80s, I had been blamed for online computer conferencing ... old item from 1981 mentioned Endicott "bringing low/mid range up to high-end"
https://www.garlic.com/~lynn/2019c.html#email810423

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

and recent comments (in this group) about fibre-channel standard/FCS (& ESCON & FICON)
https://www.garlic.com/~lynn/2025c.html#91 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#92 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#93 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#94 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#95 FCS, ESCON, FICON

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some posts mentioning 3088/trotter and 4341 clusters
https://www.garlic.com/~lynn/2023e.html#1 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018c.html#80 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2015e.html#47 GRS Control Unit ( Was IBM mainframe operations in the 80s)
https://www.garlic.com/~lynn/2012g.html#34 Co-existance of z/OS and z/VM on same DASD farm
https://www.garlic.com/~lynn/2011p.html#100 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011j.html#0 program coding pads
https://www.garlic.com/~lynn/2011h.html#68 IBM Mainframe (1980's) on You tube
https://www.garlic.com/~lynn/2010f.html#14 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2008o.html#57 Virtual
https://www.garlic.com/~lynn/2008e.html#73 Convergent Technologies vs Sun
https://www.garlic.com/~lynn/2008d.html#64 Interesting ibm about the myths of the Mainframe
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2007j.html#71 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture

--
virtualization experience starting Jan1968, online at home since Mar1970

CICS, 370 TOD

From: Lynn Wheeler <lynn@garlic.com>
Subject: CICS, 370 TOD
Date: 17 Jul, 2025
Blog: Facebook
As undergraduate, the univ got a 360/67 replacing 709/1401 (originally for TSS/360, but ran as 360/65 with OS/360) and I was hired fulltime responsible for OS/360. Then the univ. library got an ONR grant to do an online catalog and used some of the money for a 2321 datacell ... and was also selected as betatest site for the original IBM CICS product ... and CICS support was added to my duties. Some history (gone 404, but lives on at wayback machine):
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

After graduating, I joined IBM CSC ... an early activity was a 3month task force looking at the 370 TOD spec .... which included specification that "0" was the 1st day of the century (then a long discussion on whether the 1st day of the century was 1/1/1900 or 1/1/1901; 370 MVT started out using 1/1/1970) ... the TOD period is approx. 143 years.

IBM TOD
https://en.wikipedia.org/wiki/Time_formatting_and_storage_bugs#Year_2042
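
A minimal sketch (my arithmetic, not from the post) of where the ~143-year figure comes from: the 370 TOD clock is a 64-bit value with bit 51 incremented every microsecond, so the register wraps after 2^52 microseconds; with the epoch at the start of the century, the wrap lands in 2042.

# back-of-the-envelope check of the ~143yr TOD period
SECONDS_PER_YEAR = 365.2425 * 24 * 3600        # average Gregorian year
wrap_usec = 2 ** 52                            # microseconds until bit 0 overflows
wrap_years = wrap_usec / 1e6 / SECONDS_PER_YEAR
print(f"TOD wrap: {wrap_years:.1f} years")     # ~142.7 years
print(f"1900 epoch + wrap falls in {int(1900 + wrap_years)}")   # 2042 (the "Year 2042" problem)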

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent posts mentioning univ 709, 1401, 360/67, os/360
https://www.garlic.com/~lynn/2025c.html#95 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#87 The Rise And Fall Of Unix
https://www.garlic.com/~lynn/2025c.html#80 IBM CICS, 3-tier
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025c.html#25 360 Card Boot
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025b.html#121 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025b.html#117 SHARE, MVT, MVS, TSO
https://www.garlic.com/~lynn/2025b.html#102 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#98 Heathkit
https://www.garlic.com/~lynn/2025b.html#85 An Ars Technica history of the Internet, part 1
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025b.html#38 IBM Computers in the 60s
https://www.garlic.com/~lynn/2025b.html#24 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#17 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#1 Large Datacenters
https://www.garlic.com/~lynn/2025.html#111 Computers, Online, And Internet Long Time Ago
https://www.garlic.com/~lynn/2025.html#102 Large IBM Customers
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2025.html#79 360/370 IPL
https://www.garlic.com/~lynn/2025.html#77 IBM Mainframe Terminals

--
virtualization experience starting Jan1968, online at home since Mar1970

When Big Blue Went to War

From: Lynn Wheeler <lynn@garlic.com>
Subject: When Big Blue Went to War
Date: 18 Jul, 2025
Blog: Facebook
When Big Blue Went to War: A History of the IBM Corporation's Mission in Southeast Asia During the Vietnam War (1965–1975)
https://www.amazon.com/When-Big-Blue-Went-War-ebook/dp/B07923TFH5/
loc192-99:
We four marketing reps, Mike, Dave, Jeff and me, in Honolulu (1240 Ala Moana Boulevard) qualified for IBM's prestigious 100 Percent Club during this period but our attainment was carefully engineered by mainland management so that we did not achieve much more than the required 100% of assigned sales quota and did not receive much in sales commissions. At the 1968 100 Percent Club recognition event at the Fontainebleau Hotel in Miami Beach, the four of us Hawaiian Reps sat in the audience and irritably watched as eight other "best of the best" IBM commercial marketing representatives from all over the United States receive recognition awards and big bonus money on stage. The combined sales achievement of the eight winners was considerably less than what we four had worked hard to achieve in the one small Honolulu branch office. Clearly, IBM was not interested in hearing accusations of war profiteering and they maintained that posture throughout the years of the company's wartime involvement.
... snip ...

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. I've frequently retold the story about John being very vocal that the electronics across the trail wouldn't work, and I guess as punishment he is put in command of "spook base" (about the same time I was at Boeing; as undergraduate I had been hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit, and I thought the Renton datacenter was the largest in the world); Boyd claimed "spook base" had the largest air-conditioned bldg in that part of the world. "Spook Base" ref (gone 404, but still lives on at the wayback machine):
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
Maintenance of air-conditioning filters and chiller pumps was always a high-priority for the facility Central Plant, but because of the 24-hour nature of operations, some important systems were run to failure rather than taken off-line to meet scheduled preventative maintenance requirements. For security reasons, only off-duty TFA personnel of rank E-5 and above were allowed to perform the housekeeping in the facility, where they constantly mopped floors and cleaned the consoles and work areas. Contract civilian IBM computer maintenance staff were constantly accessing the computer sub-floor area for equipment maintenance or cable routing, with the numerous systems upgrades, and the underfloor plenum areas remained much cleaner than the average data processing facility. Poisonous snakes still found a way in, causing some excitement, and staff were occasionally reprimanded for shooting rubber bands at the flies during the moments of boredom that is every soldier's fate.

also
https://en.wikipedia.org/wiki/Operation_Igloo_White

Tale told by both Boeing employees and the IBM Boeing account team: on 360 announce day, Boeing walks in and places an order with the IBM account rep that gives the rep the largest compensation that year (back in the days of straight commission, before quota). The next year, IBM establishes the quota system ... and before the end of January, Boeing places another order, making the rep's quota for the year. The rep's quota is then "adjusted" and he leaves a couple months later.

A Boyd biography has spook base a $2.5B "windfall" for IBM (60s dollars, ten times Boeing Renton). In 89/90, the commandant of the marine corps leverages Boyd for a makeover of the corps, at a time when IBM was also desperately in need of a makeover. A couple years later, IBM has one of the largest losses in the history of US companies ... and was being reorged into the 13 "baby blues" (take-off on the "baby bell" breakup a decade earlier) in preparation for breakup
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup and uses some of the same techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

posts mentioning "When Big Blue Went to War"
https://www.garlic.com/~lynn/2024g.html#76 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024b.html#10 Some NSFNET, Internet, and other networking background
https://www.garlic.com/~lynn/2023g.html#28 IBM FSD
https://www.garlic.com/~lynn/2023e.html#69 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2022f.html#11 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#36 IBM 23June1969 Unbundle
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021.html#49 IBM Quota
https://www.garlic.com/~lynn/2019e.html#77 Collins radio 1956
https://www.garlic.com/~lynn/2018e.html#96 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017g.html#47 The rise and fall of IBM
https://www.garlic.com/~lynn/2017.html#82 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#47 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2015f.html#36 Eric Holder, Wall Street Double Agent, Comes in From the Cold
https://www.garlic.com/~lynn/2014m.html#143 LEO
https://www.garlic.com/~lynn/2014m.html#131 Memo To WSJ: The CRomnibus Abomination Was Not "A Rare Bipartisan Success"
https://www.garlic.com/~lynn/2014l.html#50 IBM's Ginni Rometty Just Confessed To A Huge Failure -- It Might Be The Best Thing For The Company
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#97 Santa has a Mainframe!
https://www.garlic.com/~lynn/2013o.html#16 IBM Shrinks - Analysts Hate It
https://www.garlic.com/~lynn/2013n.html#17 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013k.html#57 The agency problem and how to create a criminogenic environment
https://www.garlic.com/~lynn/2013k.html#29 The agency problem and how to create a criminogenic environment
https://www.garlic.com/~lynn/2013g.html#6 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013e.html#79 As an IBM'er just like the Marines only a few good men and women make the cut,
https://www.garlic.com/~lynn/2009g.html#6 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2008q.html#40 TOPS-10

--
virtualization experience starting Jan1968, online at home since Mar1970

More 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: More 4341
Date: 18 Jul, 2025
Blog: Facebook
After Future System imploded, there was a mad rush to get stuff back into the 370 product pipelines (during FS, internal politics was killing off 370 efforts, which is credited with giving the 370 clone makers their market foothold). Endicott cons me into helping with the 138/148 microcode assist ("ECPS", also used for 4331/4341). I then transfer from the science center (in Cambridge) to research (bldg28, on the west coast) and got to wander around datacenters in silicon valley, including disk product test/bldg15 and disk engineering/bldg14 across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned they had recently tried MVS, but it had 15min MTBF (in that environment) requiring manual reboot. I offer to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of concurrent testing, greatly improving productivity. Bldg15 gets the 1st engineering 3033 outside POK processor engineering and since testing only took a percent or two of CPU, we scrounge up a 3830 & 3330 string for private online service. Then 1978, get an engineering 4341 I can play with ... Jan1979, a branch office hears about it and cons me into doing a benchmark for a national lab looking at getting 70 for a 4341 compute farm (sort of leading edge of the coming cluster supercomputing tsunami). Later get five VM/4341s tied in a shared-DASD cluster using 3088/trotter with some tweaks that easily outperforms a 3033, with much better price/performance&throughput, much lower price, power, cooling, floor space.

In the 80s, corporations start ordering hundreds of vm/4341s at a time for distribution out into departmental areas (sort of leading edge of the coming distributed computing tsunami) and inside IBM, departmental conference rooms become scarce with so many being converted to vm/4341 rooms. MVS, seeing the explosion in systems, wanted some of the market. However, the only new CKD DASD was the datacenter 3380; the only non-datacenter disk was the FBA 3370, which MVS didn't support. Eventually they come out with the 3375 (CKD emulation on the 3370) for MVS ... but it didn't do a lot of good; customers were looking at having scores of vm/4341s per support person ... not scores of support people per MVS/4341.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk

recent posts mentioning 4341:
https://www.garlic.com/~lynn/2025c.html#98 5-CPU 370/125
https://www.garlic.com/~lynn/2025c.html#84 IBM SNA
https://www.garlic.com/~lynn/2025c.html#77 IBM 4341
https://www.garlic.com/~lynn/2025c.html#75 MVS Capture Ratio
https://www.garlic.com/~lynn/2025c.html#71 IBM Networking and SNA 1974
https://www.garlic.com/~lynn/2025c.html#66 IBM 370 Workstation
https://www.garlic.com/~lynn/2025c.html#56 IBM OS/2
https://www.garlic.com/~lynn/2025c.html#53 IBM 3270 Terminals
https://www.garlic.com/~lynn/2025c.html#41 SNA & TCP/IP
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025c.html#34 TCP/IP, Ethernet, Token-Ring
https://www.garlic.com/~lynn/2025c.html#32 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#26 IBM Downfall
https://www.garlic.com/~lynn/2025c.html#22 IBM 8100
https://www.garlic.com/~lynn/2025c.html#15 Cluster Supercomputing
https://www.garlic.com/~lynn/2025c.html#12 IBM 4341
https://www.garlic.com/~lynn/2025c.html#10 IBM System/R
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025b.html#107 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#100 IBM Future System, 801/RISC, S/38, HA/CMP
https://www.garlic.com/~lynn/2025b.html#91 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#89 Packet network dean to retire
https://www.garlic.com/~lynn/2025b.html#82 IBM 3081
https://www.garlic.com/~lynn/2025b.html#81 IBM 3081
https://www.garlic.com/~lynn/2025b.html#72 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#67 IBM 23Jun1969 Unbundling and HONE
https://www.garlic.com/~lynn/2025b.html#65 Supercomputer Datacenters
https://www.garlic.com/~lynn/2025b.html#44 IBM 70s & 80s
https://www.garlic.com/~lynn/2025b.html#37 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#28 IBM WatchPad
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#21 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#18 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#16 IBM VM/CMS Mainframe
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2025.html#114 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2025.html#113 2301 Fixed-Head Drum
https://www.garlic.com/~lynn/2025.html#105 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#83 Online Social Media
https://www.garlic.com/~lynn/2025.html#77 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#54 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#43 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#38 Multics vs Unix
https://www.garlic.com/~lynn/2025.html#33 IBM ATM Protocol?
https://www.garlic.com/~lynn/2025.html#30 3270 Terminals
https://www.garlic.com/~lynn/2025.html#26 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#19 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#12 IBM APPN
https://www.garlic.com/~lynn/2025.html#7 Dataprocessing Innovation
https://www.garlic.com/~lynn/2025.html#1 IBM APPN

--
virtualization experience starting Jan1968, online at home since Mar1970

More 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: More 4341
Date: 19 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#102 More 4341

At the time, air-bearing simulation (part of the design of thin-film disk heads) was getting a couple turn-arounds a month on the SJR 370/195. We set it up on the bldg15 3033 (which had less than half the MIP rate of the 195) and they were getting several turn-arounds a day. I then offer the MVS group full FBA support ... but they come back that I don't have a business case: I needed something like $200M in incremental DASD sales ... to cover the $26M needed for MVS FBA training and documentation. Since IBM was selling every disk it could make, MVS FBA support would just translate into the same amount of disks sold; oh, and I wasn't allowed to use total life-time savings in the business case. The first thin-film head was the 3370
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

trivia: the 3380 was already transitioning to fixed-block, which can be seen in the records/track formulas where record size had to be rounded up to a multiple of the fixed "cell" size.
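
A hedged illustration of what that rounding means for capacity (the cell size, overhead and track size below are made-up numbers for illustration, not the actual 3380 formula):

import math

CELL = 32            # hypothetical cell size in bytes (assumption)
TRACK_CELLS = 1480   # hypothetical usable cells per track (assumption)

def records_per_track(record_bytes, overhead_cells=2):
    # each record consumes whole cells: overhead plus data rounded up
    cells = overhead_cells + math.ceil(record_bytes / CELL)
    return TRACK_CELLS // cells

for size in (80, 96, 100, 4096):
    print(size, records_per_track(size))
# an 80-byte and a 96-byte record cost the same number of cells -- the
# formula behaves like a fixed-block device, not byte-exact CKD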

posts mentioning getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Innovation

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Innovation
Date: 19 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#58 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#60 IBM Innovation

trivia: The univ got the 360/67 for TSS/360 replacing 709/1401, but TSS/360 never came to production, and as undergraduate when the 360/67 came in, I was hired fulltime responsible for OS/360. Then CSC comes out to install (virtual machine) CP67, 3rd install after CSC itself and MIT Lincoln Labs. CSC had wanted a 360/50 to modify for virtual memory, but all spare 360/50s were going to FAA/ATC, so they had to settle for a 360/40 ... and did CP/40. CP/40 morphs into CP67 when 360/67s become available standard with virtual memory. I got to mostly play with CP67 during my 48hr weekend dedicated window. Initially I rewrite lots of pathlengths for running OS/360 in a virtual machine ... test stream ran 322secs on bare hardware, but initially 856secs in virtual machine (CP67 CPU 534secs); after a couple months I have reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler and paging, changing to ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO single-page transfers, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a channel transfer peak of 270).
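
A minimal sketch (my illustration, not the original CP67 code) of the ordered seek queuing idea: instead of servicing pending disk requests in arrival (FIFO) order, sort them by cylinder and service them in sweeps of the arm, which cuts total arm travel:

def fifo_arm_travel(start_cyl, requests):
    travel, pos = 0, start_cyl
    for cyl in requests:                      # arrival order
        travel += abs(cyl - pos)
        pos = cyl
    return travel

def ordered_seek_travel(start_cyl, requests):
    travel, pos = 0, start_cyl
    up = sorted(c for c in requests if c >= start_cyl)                   # sweep up
    down = sorted((c for c in requests if c < start_cyl), reverse=True)  # then back down
    for cyl in up + down:
        travel += abs(cyl - pos)
        pos = cyl
    return travel

reqs = [183, 37, 122, 14, 130, 65, 67]
print(fifo_arm_travel(53, reqs), ordered_seek_travel(53, reqs))   # 652 vs 299 cylinders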

posts mentioning CSC
https://www.garlic.com/~lynn/subtopic.html#545tech

CP67 came with 1052 & 2741 terminal support (including dynamic terminal type identification, changing the port scanner type). The univ. had some number of (ASCII) TTY 33s & 35s, so I add ASCII support (integrated with terminal type identification). CSC would pick up most of my changes and ship them in the standard distribution. I then want to have a single dial-in phone number for all terminals ... but IBM had taken a short cut in the controller and hardwired line speed. This starts a univ. project to build our own terminal controller: build a 360 channel interface card for an Interdata/3 programmed to emulate an IBM 360 controller, with the addition of doing line auto-baud. Then the Interdata/3 is upgraded to an Interdata/4 for the channel interface, with a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin-Elmer) sells it as a 360 clone controller (and four of us are written up for some part of the clone controller business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

posts mentioning 360 plug compatible controller
https://www.garlic.com/~lynn/submain.html#360pcm

Before I graduate, I'm hired full time into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Boeing Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge that machine room to install a 360/67 for me to play with when I wasn't doing other stuff). BCS and a couple 60s spin-offs of CSC were involved in early cloud computing. When I graduate, I join CSC (instead of staying with the Boeing CFO); one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (and the online sales&marketing support HONE systems were one of my 1st and long-time customers).

When the decision was made to add virtual memory to all 370s, it was also decided to do the VM370 product. In the morph from CP67->VM370, lots of features were dropped or greatly simplified. 1974, on a VM370R2 base, I start adding lots of things back in for my internal CSC/VM. Then transitioning to a VM370R3 base, I also add in multiprocessor support, initially for the consolidated US HONE datacenter (in Palo Alto) so they can add a 2nd processor to each system.

CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

more trivia: I wanted to show that REX (before it was renamed REXX and released to customers) wasn't just another pretty scripting language. I chose to redo a large assembler application (problem & dump analysis tool), working half-time over 3 months, with 10 times the feature/function and 10 times the performance (sleight of hand to make interpreted REX run ten times faster than assembler). I finished early, so added automated code that searched for common failure signatures. I thought that it would be released to customers (since it was in use by nearly every internal datacenter and customer PSR), but for some reason it wasn't. Later I got a request whether the 3092 group (3090 service processor) could ship it.

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

recent posts mentioning Boeing CFO, BCS, Renton datacenter, Boeing Field, 747
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2022h.html#82 Boeing's last 747 to roll out of Washington state factory
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022b.html#10 Seattle Dataprocessing
https://www.garlic.com/~lynn/2022.html#120 Series/1 VTAM/NCP
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Innovation

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Innovation
Date: 19 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#58 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#60 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#103 IBM Innovation

After transferring from CSC to SJR, worked with Jim Gray and Vera Watson on the original sql/relational, System/R, and its tech transfer to Endicott for SQL/DS (under the radar while the company was preoccupied with "EAGLE"). Then when "EAGLE" implodes, a request is made for how fast System/R could be ported to MVS ... which eventually ships as DB2 (originally for decision support *ONLY*).

System/R Posts
https://www.garlic.com/~lynn/submain.html#systemr

The last product we did at IBM started out as HA/6000 in 1988 (run out of the Los Gatos lab, bldg29), originally for the NYTimes to move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scaleup with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS vendors (that have VAXCluster support in the same source base with UNIX .... Oracle, Sybase, Ingres, Informix). Was working with Hursley 9333s and hoping they could be upgraded to be interoperable with FCS (planning for the HA/CMP high-end).

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid-92 and 128-system clusters ye-92. Mid Jan1992, presentations with FSD convince them to use HA/CMP cluster scaleup for gov. supercomputer bids. Late Jan1992, cluster scaleup is transferred to be announced as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than 4-systems (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Some concern that cluster scaleup would eat the mainframe .... 1993 MIPS benchmark (industry standard, number of program iterations compared to reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS


The executive we had been reporting to goes over to head up Somerset/AIM (apple, ibm, motorola) ... single-chip power/pc with the Motorola 88k bus enabling shared-memory, tightly-coupled, multiprocessor system implementations

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Also, 1988, a branch office had asked if I could help LLNL with standardization of some serial stuff they were working with, which quickly became the fibre-channel standard ("FCS", initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). Then POK finally releases their fibre stuff as ESCON, when it was already obsolete (initially 10mbytes/sec, later upgraded to 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-duty protocol that significantly reduces throughput, eventually released as FICON. The 2010 z196 "Peak I/O" benchmark got 2M IOPS using 104 FICON (about 20K IOPS/FICON, running over 104 FCS). About the same time, an FCS was released for E5-2600 server blades claiming over a million IOPS (two such native FCS having higher throughput than 104 FICON).
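
The per-link arithmetic behind that comparison (my calculation from the figures above):

ficon_total_iops = 2_000_000       # z196 "Peak I/O" benchmark
ficon_links = 104
native_fcs_iops = 1_000_000        # "over a million IOPS" claim for one native FCS

per_ficon = ficon_total_iops / ficon_links
print(f"per FICON link: ~{per_ficon:,.0f} IOPS")                           # ~19,200
print(f"one native FCS ~= {native_fcs_iops / per_ficon:.0f} FICON links")  # ~52
print(f"two native FCS: {2 * native_fcs_iops:,} IOPS vs 104 FICON: {ficon_total_iops:,}")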

Note, IBM pubs recommend SAPs (system assist processors that do the actual I/O) be kept to 70% CPU ... or 1.5M IOPS. Also, no CKD DASD has been made for decades, all being simulated on industry-standard fixed-block disks.

FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

1990, GM had the C4 task force to completely remake their auto business and since they were planning on heavily leveraging IT, they asked for reps from IT companies ... and I was chosen to rep the IBM workstation division. They said the standard auto business took 7-8yrs to turn out a new model ... running two efforts in parallel offset 3-4yrs (to appear more timely). They said that with the 70s foreign auto quotas, the Japanese makers realized they could sell that many high-end models (with more profit) ... and in the process cut the elapsed time in half (to 3-4yrs) ... and in 1990 were in the process of cutting elapsed time in half again (18-24 months) ... which allowed them to adapt faster to changing market, customers, and technology.

Part of the US 7-8yr lag problem was that US auto companies had spun off their parts business, and with the 7-8yr lag, auto design and parts would get out of sync ... sometimes resulting in a requirement for significant auto redesign and further delays (poster child was the GM Corvette).

Offline, I would needle the IBM rep representing the mainframe business about what help he could offer, since the mainframe was on a similar design elapsed-time cycle.

Auto C4 task force posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Innovation

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Innovation
Date: 19 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#58 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#60 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#103 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#104 IBM Innovation

Two decades before the early-90s troubles, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying Watson's culture & legacy; pg160-163, 30yrs of management briefings 1958-1988
https://bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

Then 20yrs later, IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company (take-off on the "baby bell" breakup a decade earlier)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup and uses some of the same techniques used at RJR (gone 404, but lives on at wayback)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

... trivia: after leaving IBM, we did some work for a company that had us working with a boutique patent law firm; the claims were packaged as 50 patents, with a prediction it would be well over 100 before it was done. Then some executives looked at the filing fees (both US and non-US) and directed all the claims be repackaged for filing as nine patents. Then the US patent office came back and said that they were getting tired of humongous patents where the filing fee didn't even cover the cost of reading all the claims ... and directed that the claims be repackaged as at least 2-3 dozen patents.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3380 and 3880

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3380 and 3880
Date: 20 Jul, 2025
Blog: Facebook
... the original 3380 had 20 track spacings between data tracks ... they then cut the spacing in half for double the tracks/cylinders/capacity .... then cut it again for triple the tracks/cylinders/capacity. Then there were some games with a "fast" 3380 ... restricted to the same number of tracks/cylinders/capacity as the original 3380, but with 1/3rd the spacing ... max arm travel was only 1/3rd the distance.

The 3880 supported 3mbyte/sec data-streaming channels (relaxing the restriction requiring an end-to-end handshake for every byte transferred). The 3090 group assumed that the 3880 was going to be the same as the 3830 but at 3mbyte/sec ... configuring the number of channels for target throughput. However, the 3880 had a much slower processor ... and so for everything else (but data transfer) had much higher channel busy. When 3090 found out how bad the channel busy was, they realized that they would need a much larger number of channels to achieve target throughput; the increase in channels required another TCM ... and the 3090 group semi-facetiously claimed they would bill the 3880 group for the increased 3090 manufacturing cost. Eventually marketing respun the large increase in 3090 channels as it being a great I/O machine (as opposed to a countermeasure to the large 3880 channel busy).

Starting 1977, I was providing the systems for disk engineering and disk product test across the street ... and one weekend somebody swapped a 3880 for the 3830 (on the engineering 3033, 1st outside POK processor engineering, connecting a 3330 string) and then I got a call trying to blame me for the huge degradation in throughput. Eventually isolated it to the 3880 swap ... which kicked off efforts for system tweaks and 3880 microcode tweaks trying to compensate for the increase in channel busy.

posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning hobby providing enhanced production operating systems for internal datacenters (CP67L, CSC/VM, SJR/VM)
https://www.garlic.com/~lynn/submisc.html#cscvm

a few recent posts mentioning 3880 channel busy and 3090 needing many more channels, requiring extra TCM
https://www.garlic.com/~lynn/2025c.html#39 IBM 3090
https://www.garlic.com/~lynn/2025b.html#111 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#82 IBM 3081
https://www.garlic.com/~lynn/2025b.html#12 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025.html#86 Big Iron Throughput
https://www.garlic.com/~lynn/2025.html#29 IBM 3090
https://www.garlic.com/~lynn/2024f.html#46 IBM TCM
https://www.garlic.com/~lynn/2024f.html#5 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#35 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#95 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2024.html#80 RS/6000 Mainframe

posts mentioning monday irate call after 3880 swapped for 3830
https://www.garlic.com/~lynn/2024e.html#35 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022.html#14 Mainframe I/O
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017g.html#64 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2016h.html#50 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016d.html#42 Old Computing
https://www.garlic.com/~lynn/2016b.html#79 Asynchronous Interrupts
https://www.garlic.com/~lynn/2013n.html#56 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2012o.html#28 IBM mainframe evolves to serve the digital world
https://www.garlic.com/~lynn/2011p.html#120 Start Interpretive Execution
https://www.garlic.com/~lynn/2011p.html#19 Deja Cloud?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose Disk

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose Disk
Date: 21 Jul, 2025
Blog: Facebook
SJR was in bldg 28 on the main plant site until research started the move to ALM Dec85. I transferred from the Cambridge Science Center to SJR in 1977 (work with Jim Gray and Vera Watson on the original SQL/relational System/R, got to play disk engineer in disk engineering and product test across the street, bunch of other stuff) ... then early 80s was transferred to YKT (for numerous transgressions; folklore is 5of6 of the corporate executive committee wanted to fire me), but left to live in San Jose, kept an office in SJR (then ALM) along with part of a wing and basement lab in the Los Gatos lab ... but had to commute to YKT a couple times a month. The last product before leaving IBM was HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
in Los Gatos Lab

After transferring to SJR, got to wander around datacenters in silicon valley, including disk product test/bldg15 and disk engineering/bldg14 across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned they had recently tried MVS, but it had 15min MTBF (in that environment) requiring manual reboot. I offer to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of concurrent testing, greatly improving productivity. Bldg15 gets the 1st engineering 3033 outside POK processor engineering and since testing only took a percent or two of CPU, we scrounge up a 3830 & 3330 string for private online service.

At the time, air-bearing simulation (part of the design of thin-film disk heads) was getting a couple turn-arounds a month on the SJR 370/195. We set it up on the bldg15 3033 (which had less than half the MIP rate of the 195) and they were getting several turn-arounds a day. I then offer the MVS group full FBA support ... but they come back that I don't have a business case: I needed something like $200M in incremental DASD sales ... to cover the $26M needed for MVS FBA training and documentation. Since IBM was selling every disk it could make, MVS FBA support would just translate into the same amount of disks sold; oh, and I wasn't allowed to use total life-time savings in the business case. The first thin-film head was the 3370
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

trivia: the 3380 was already transitioning to fixed-block, which can be seen in the records/track formulas where record size had to be rounded up to a multiple of the fixed "cell" size. The original 3380 had 20 track spacings between each data track. That spacing was cut in half for double the tracks/cylinders/capacity ... then cut again for triple the tracks/cylinders/capacity. Then games with things like a "high performance" 3380, restricted to just the number of tracks of the original 3380 ... but the arm only had 1/3rd the distance to travel. Now no CKD DASD has been made for decades, all being simulated on industry-standard fixed-block devices.

Early 80s, also got the HSDT project (run out of the Los Gatos lab), T1 and faster computer links (terrestrial and satellite) and battles with the communication group (60s, IBM had the 2701 controller supporting T1, but the 70s transition to SNA/VTAM and issues capped controllers at 56kbits/sec). Local IBM San Jose had a T3 microwave Collins digital radio and ran T1 circuits to the main plant site. IBM also had a 10M, T3 C-band TDMA satellite system, and one of the LSG T1 circuits (to the plant site) connected to a T1 satellite circuit to Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (that had a whole boatload of Floating Point Systems boxes that supported 40mbyte/sec RAID disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
... snip ...

Also got a custom-designed Ku-band TDMA satellite system, initially with 4.5M dishes in Los Gatos and Yorktown and a 7M dish in Austin.

Had also been working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputing datacenters. Then congress cuts the budget, some other things happen and finally an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/360
Date: 21 Jul, 2025
Blog: Facebook
Early last decade, I was asked if I could track down the decision to add virtual memory to all 370s. Found somebody that was staff member to the executive making the decision. Turns out that (OS/360) MVT storage management was so bad that region sizes had to be specified four times larger than used. As a result, a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. Going to running MVT in a single 16mbyte virtual address space (sort of like running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrent regions to be increased by a factor of four with little or no paging (capped at 15 because of the 4bit storage protect keys). Ludlow was doing the initial VS2/SVS work using a 360/67 in POK (before any engineering 370s with virtual memory were working) and I would sometimes drop by. He had a relatively small amount of code to have MVT set up the page tables and do simple paging. The biggest task was (similar to what CP67 implemented) that the channel programs being passed to EXCP/SVC0 now all had virtual addresses ... while channels required real addresses ... so he borrows CP67's CCWTRANS to craft into EXCP/SVC0, creating a copy of the passed channel programs with the virtual addresses replaced by real.
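
A much-simplified sketch of the CCWTRANS idea (the structures and names here are hypothetical, not the actual CP67 or VS2 code): make a shadow copy of the caller's channel program with each CCW's virtual data address replaced by the real page-frame address, since the channel works only with real storage:

PAGE = 4096

def translate_ccws(virtual_ccws, page_table):
    """virtual_ccws: list of (opcode, virtual_addr, count);
    page_table: virtual page number -> real frame number.
    (The real code would also fix/pin the pages for the I/O and split
    transfers that cross page boundaries into data-chained CCWs.)"""
    real_ccws = []
    for op, vaddr, count in virtual_ccws:
        frame = page_table[vaddr // PAGE]          # look up the page frame
        raddr = frame * PAGE + (vaddr % PAGE)      # keep the byte offset
        real_ccws.append((op, raddr, count))       # shadow CCW with real address
    return real_ccws

# example: virtual page 5 happens to live in real frame 0x123
print(translate_ccws([(0x02, 5 * PAGE + 0x100, 80)], {5: 0x123}))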

As systems continued to get larger, ever increasing numbers of concurrently running regions/tasks were needed to keep the systems busy (and justified; aka more than the 15 capped by 4bit storage protect keys) ... and they move to giving each concurrently executing region its own 16mbyte virtual address space (VS2/MVS). The problem then was that OS/360 was a heavily pointer-passing API ... and so they map an 8mbyte image of the kernel into every virtual address space (leaving eight for the application) ... aka an SVC call would be executing in the caller's virtual address space and could use the passed virtual address directly. Then, because every subsystem was moved into its own separate virtual address space, a one mbyte area was mapped into every 16mbyte virtual address space (Common Segment Area or "CSA", leaving seven mbytes for applications) where application calls to subsystems could pass calling parameters. However, space requirements in CSA were somewhat proportional to the number of concurrent regions and number of subsystems; CSA quickly exceeds 1mbyte and becomes the "Common System Area" ... by the 3033 time-frame it was frequently 5-6 mbytes (leaving 2-3 mbytes for the application) ... and threatening to become eight (leaving zero bytes for running an application). This was part of the desperate need to have customers migrate to MVS/XA.
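
Quick arithmetic on the address-space squeeze described above (the numbers just follow the text: 16mbyte address space, 8mbyte kernel image, growing CSA):

TOTAL_MB, KERNEL_MB = 16, 8
for csa_mb in (1, 4, 5, 6, 7, 8):
    app_mb = TOTAL_MB - KERNEL_MB - csa_mb
    print(f"CSA {csa_mb}mbyte -> {app_mb}mbyte left for the application")
# by 3033 time (CSA 5-6mbytes) only 2-3mbytes remain; at 8mbytes nothing
# is left -- hence the push to get customers to MVS/XA (31-bit)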

trivia: I had taken a two-credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO (tape<->reader/printer/punch) in 360 Assembler for the 360/30. The univ was getting a 360/67 for TSS/360 replacing 709/1401 and temporarily got a 360/30 replacing the 1401 pending arrival of the 360/67. The univ. shut down the datacenter on weekends and I would have the whole place dedicated (although 48hrs w/o sleep made Monday classes hard). I was given a pile of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc, and after a few weeks had a 2000 assembler statement (cards) program. Within a year of taking the intro class, the 360/67 arrives and I'm hired fulltime responsible for OS/360 (used as a 360/65, TSS/360 not coming to production) ... and I continued to have my dedicated 48hr weekend time.

The 709 (tape->tape) ran student fortran in less than a second. Initially with OS/360 MFT (360/65), it ran over a minute. I add HASP and it cuts the time in half. I then start redoing stage2 sysgen, carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ of Waterloo WATFOR (on the 360/65, it processed student fortran at about 20,000 cards/min or 333 cards/sec ... student fortran jobs typically would run 30-60 cards).

Other trivia: after graduating I join IBM; in the mid-70s I started pointing out that disk I/O was increasingly becoming the bottleneck. Then early 80s, wrote a tome that between OS/360 announce and the early 80s, disk relative system throughput had declined by an order of magnitude (disks got 3-5 times faster but systems had gotten 40-50 times faster). An IBM disk division executive took exception and assigned the performance group to refute the claim ... after a couple weeks they basically came back and said I had slightly understated the problem. They then respin the analysis into how to configure disks to improve system throughput, for a SHARE presentation (16Aug1984, SHARE 63, B874).
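
The "order of magnitude" claim as arithmetic (my calculation from the figures in the paragraph above):

# disks got 3-5x faster while systems got 40-50x faster, so disk
# throughput relative to the system dropped to roughly a tenth
for disk_x, sys_x in ((3, 40), (4, 45), (5, 50)):
    rel = disk_x / sys_x
    print(f"disks {disk_x}x, systems {sys_x}x -> relative {rel:.2f} (~1/{round(1/rel)})")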

There have been recent articles that current memory latency (like cache misses) when measured in count of processor cycles, is similar to 60s disk latency when measured in count of 60s processor cycles (memory is the new disk).

posts mentioning Ludlow, CCWTRANS, SVS, MVS,
https://www.garlic.com/~lynn/2025c.html#79 IBM System/360
https://www.garlic.com/~lynn/2025b.html#95 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025.html#15 Dataprocessing Innovation
https://www.garlic.com/~lynn/2024g.html#106 IBM 370 Virtual Storage
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2014d.html#54 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2011o.html#92 Question regarding PSW correction after translation exceptions on old IBM hardware
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?

other posts mentioning MPIO, SYSGEN, WATFOR
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024c.html#117 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#29 Univ. Maryland 7094
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2017f.html#36 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2015.html#51 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2013h.html#4 The cloud is killing traditional hardware and software

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose Disk

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose Disk
Date: 21 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#107 IBM San Jose Disk

other trivia: after leaving IBM, I was brought into a small client/server startup as consultant. Two former Oracle employees (who had been in the Ellison/Hester HA/CMP meeting) were there, responsible for something called a "commerce server", and wanted to do payment transactions. The startup had also invented something they called SSL/HTTPS that they wanted to use. The result is now frequently called "e-commerce". I had complete responsibility for everything between webservers and financial industry payment networks.

payment network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

recent posts mentioning Ellison/Hester HA/CMP meeting
https://www.garlic.com/~lynn/2025c.html#104 IBM Innovation
https://www.garlic.com/~lynn/2025c.html#98 5-CPU 370/125
https://www.garlic.com/~lynn/2025c.html#93 FCS, ESCON, FICON
https://www.garlic.com/~lynn/2025c.html#69 Tandem Computers
https://www.garlic.com/~lynn/2025c.html#50 IBM RS/6000
https://www.garlic.com/~lynn/2025c.html#48 IBM Technology
https://www.garlic.com/~lynn/2025c.html#40 IBM & DEC DBMS
https://www.garlic.com/~lynn/2025c.html#37 IBM Mainframe
https://www.garlic.com/~lynn/2025c.html#24 IBM AIX
https://www.garlic.com/~lynn/2025c.html#15 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2025b.html#104 IBM S/88
https://www.garlic.com/~lynn/2025b.html#92 IBM AdStar
https://www.garlic.com/~lynn/2025b.html#72 Cluster Supercomputing
https://www.garlic.com/~lynn/2025b.html#41 AIM, Apple, IBM, Motorola
https://www.garlic.com/~lynn/2025b.html#36 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2025b.html#32 Forget About Cloud Computing. On-Premises Is All the Rage Again
https://www.garlic.com/~lynn/2025b.html#30 Some Career Highlights
https://www.garlic.com/~lynn/2025b.html#26 IBM 3880, 3380, Data-streaming
https://www.garlic.com/~lynn/2025b.html#22 IBM San Jose and Santa Teresa Lab
https://www.garlic.com/~lynn/2025b.html#5 RDBMS, SQL/DS, DB2, HA/CMP
https://www.garlic.com/~lynn/2025b.html#2 Why VAX Was the Ultimate CISC and Not RISC
https://www.garlic.com/~lynn/2025b.html#0 Financial Engineering

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/360
Date: 22 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360

... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP67/CMS at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS (folklore is that some of the Bell Labs people return home and do UNIX, a simplified MULTICS). Other CTSS/7094 people go to the IBM Science Center (CSC) on the 4th flr and do virtual machines, the science center wide-area network (that morphs into the IBM internal network, larger than arpanet/internet from just about the beginning until sometime mid/late 80s, about the same time it was forced to adopt SNA/VTAM; technology also used for the corporate sponsored univ BITNET), invented GML (precursor to ISO SGML and HTML), lots of other stuff.

CSC had wanted a 360/50 to modify with virtual memory, but all the extra 50s were going to the FAA/ATC program, so they had to settle for a 360/40 and do CP40/CMS. When the 360/67 standard with virtual memory became available, CP40/CMS morphs into CP67/CMS. Later, with the decision to add virtual memory to all 370s, there is a decision to do the VM370/CMS product ... in the morph from CP67/CMS to VM370/CMS, a lot of features were greatly simplified or dropped (like multiprocessor support). After FS implodes,
http://www.jfsowa.com/computer/memo125.htm

there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. The head of POK also convinces corporate to kill the VM370 product, shutdown the VM370 development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch)

Last product we did at IBM was approved 1988 as HA/6000, originally for NYTimes to move their newspaper system (ATEX) from DEC VAXCluster to RS/6000 (run out of Los Gatos lab, bldg29). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scaleup with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS vendors (that have VAXCluster support in the same source base with UNIX .... Oracle, Sybase, Ingres, Informix). Was working with Hursley 9333s and hoping to upgrade them to be interoperable with FCS (planning for HA/CMP high-end).

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid-92 and 128-system clusters ye-92. Mid Jan1992, presentations with FSD convince them to use HA/CMP cluster scaleup for gov. supercomputer bids. Late Jan1992, cluster scaleup is transferred to be announced as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than 4-systems (we leave IBM a few months later).

Some concern that cluster scaleup would eat the mainframe .... 1993 MIPS benchmark (industry standard, number of program iterations compared to reference platform):

• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS

The executive we had been reporting to goes over to head up Somerset/AIM (apple, ibm, motorola) ... single chip power/pc with Motorola 88k bus enabling shared-memory, tightly-coupled, multiprocessor system implementations.

Second half of the 90s, i86 processor vendors do a hardware layer that translates i86 instructions into RISC micro-ops for actual execution, largely negating the throughput difference with RISC. Industry standard benchmark, number of program iterations compared to industry reference platform, 1999:

• IBM PowerPC 440: 1,000MIPS
• Pentium3: 2,054MIPS (twice PowerPC 440)

Also in 1988, the IBM branch office asks me to help LLNL (national lab) standardize some serial stuff that they are working with, which quickly becomes the fibre-channel standard ("FCS", initially 1gbit transfer, full-duplex, aggregate 200mbyte/sec). POK finally releases their serial stuff (when it is already obsolete) as ESCON, initially 10mbyte/sec, later improved to 17mbyte/sec. Then some POK engineers become involved with FCS and define a heavy duty protocol that significantly reduces throughput, eventually released as FICON. 2010 z196 "Peak I/O" benchmark got 2M IOPS using 104 FICON (about 20K IOPS/FICON, running over 104 FCS). About the same time an FCS was released for E5-2600 server blades claiming over a million IOPS (two such native FCS having higher throughput than 104 FICON).

Note, IBM pubs claim SAPs (system assist processors that do actual I/O) should be kept to 70% CPU ... or 1.5M IOPS. Also, no CKD DASD has been made for decades, all simulated on industry standard fixed-block disk.

A max. configured (80-core) z196 was rated at 50BIPS, while (same year) E5-2600 server blades (16-core) were rated at 500BIPS (ten times z196) ... aka industry standard benchmark, number of program iterations compared to industry reference platform (aka not the actual instructions).
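
The arithmetic behind those comparisons, in Python, using only the figures quoted above (the per-core ratio at the end is derived here, not quoted in the original):

# Arithmetic behind the comparisons above, using only figures quoted in the text.
z196_peak_iops = 2_000_000        # 2010 z196 "Peak I/O" benchmark
ficon_count    = 104
print(z196_peak_iops / ficon_count)          # ~19,230 IOPS per FICON ("about 20K")

native_fcs_iops = 1_000_000       # E5-2600-era FCS claim: "over a million" IOPS each
print(z196_peak_iops / native_fcs_iops)      # 2.0 -- two native FCS cover the 104-FICON peak

z196_bips, z196_cores = 50, 80    # max-configured z196
e5_bips,   e5_cores   = 500, 16   # same-year E5-2600 server blade
print(e5_bips / z196_bips)                              # 10.0x aggregate
print((e5_bips / e5_cores) / (z196_bips / z196_cores))  # 50.0x per core (derived)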

The majority of cloud, cluster supercomputer, and cluster server systems in large megadatacenters are all Linux ... large megadatacenters with half a million or more server blades ... each blade with a BIPS/TIPS rating ten times or more that of a max. configured mainframe. A large cloud operation will have scores of megadatacenters around the world and enormous automation.

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts doing enhanced systems for internal datacenters, cp67l, csc/vm, sjr/vm
https://www.garlic.com/~lynn/submisc.html#cscvm
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM OS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM OS/360
Date: 22 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#108 IBM OS/360
https://www.garlic.com/~lynn/2025c.html#110 IBM OS/360

Early 70s, IBM started the Future System project (completely different from 370 and was going to completely replace it). During FS, internal politics was killing off 370 efforts, and the lack of new 370s during the FS period is credited with giving the clone 370 makers (including Amdahl) their market foothold (I continued to work on 360/370 all during FS, including periodically ridiculing what they were doing).
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm

One of the last nails in the FS coffin was the IBM Houston Science Center analysis that if 370/195 applications were redone for an FS machine built with the fastest available technology, they would have the throughput of a 370/145 (about a 30 times slowdown).

When FS implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. Also, the head of POK manages to convince corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch). Some of the people were then doing a simplified virtual machine facility (VMTOOL) supporting MVS/XA development (never intended for release to customers).

After MVS/XA, customers weren't converting as fast as IBM needed (and MVS desperately needed XA). The 1st 370/XA machine was the 3081D (308x systems were intended to be multiprocessor only, no single processor systems). However the 3081D 2-CPU was slower than Amdahl's single processor ... and Amdahl was having more success in moving customers to MVS/XA because they had the microcode HYPERVISOR (multiple domain) that could run MVS and MVS/XA concurrently. IBM then responds by doubling the 3081 processor cache, bringing the 3081K up to about the same aggregate MIPS as the Amdahl single processor ... and releasing VMTOOL as VM/MA (migration aid) and then VM/SF (system facility). Note (because of heavy MVS & MVS/XA multiprocessor overhead), IBM docs say that MVS 2-CPU support only has 1.2-1.5 times the throughput of the same single processor (i.e. with the 3081K at the same aggregate MIPS as the Amdahl single CPU, MVS & MVS/XA on the 3081K only see about .6-.75 the throughput of the Amdahl single processor).

Then POK proposes a new, couple-hundred-person VM/XA development group to bring VMTOOL up to the same performance/feature/function as VM/370. Endicott's counter was that an (internal) sysprog (in Rochester) had already added full XA support to VM370 (for some reason, POK prevails).

After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters (and the online sales&marketing support HONE systems were my first and long-time customer). 1974, I start adding performance/feature/function into a VM370R2 base for my internal CSC/VM. Then 1975, upgrading to a VM370R3 base, I add multiprocessor support back in, initially for the HONE complex (and the consolidated US HONE operation in Palo Alto starts with adding a 2nd processor to all their systems). With very short pathlengths and some programming hacks, those US HONE systems were getting twice the throughput of the previous single processor systems.

I then get talked into helping with a 16-CPU 370 multiprocessor that everybody thought was really great ... and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Then somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-CPU support (POK doesn't ship a 16-CPU multiprocessor until after the turn of the century). The head of POK then invites some of us to never visit POK again, and directs the 3033 processor engineers, heads down and no distractions.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM system posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Virtual Memory (360/67 and 370)

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Virtual Memory (360/67 and 370)
Date: 22 Jul, 2025
Blog: Facebook
360/67 simplex/1cpu was pretty much a 360/65 with virtual memory (and both 24bit and 32bit addressing modes).

360/67 multiprocessor had a channel controller and multiported memory (for CPUs and channels) ... and all CPUs could access all channels. By comparison, 360/65MP was two CPUs sharing the same memory bus ... simulating MP I/O with dual channel controllers at the same channel addresses ... more in the 360/67 functional characteristics at bitsavers
http://bitsavers.org/pdf/ibm/360/functional_characteristics/GA27-2719-2_360-67_funcChar.pdf

MP required each processor's page zero to be at a different real address/location

some installations got a half-duplex configuration ... with only one CPU ... which could have higher aggregate thruput because of the multi-ported memory

Right after joining IBM ... I was conned into helping with hyperthreading the 370/195. The pipeline did out-of-order execution, but conditional branches drained the pipeline and most codes ran at half speed ... the hope was that two i-streams could keep the system busy. It was canceled with the decision to add virtual memory to all 370s. See "Sidebar: Multithreading" at this website
https://people.computing.clemson.edu/~mark/acs_end.html
... Amdahl had won the fight to make ACS 360-compatible; folklore is that it was killed because of concern it would advance the state of the art too fast and IBM would lose control of the market ... and Amdahl leaves IBM

note: IBM docs say MVT/MVS 2-CPU support only got 1.2-1.5 times the throughput of 1-CPU because of their multiprocessor overhead (so a hyperthreaded MVT/195 couldn't make the target throughput)

In early 370 virtual memory meetings, the 165 engineers started complaining that if they had to do the full 370 virtual memory architecture, the virtual memory announce would have to slip 6 months. Eventually the decision was made to drop back to the 165 subset ... and all other hardware (and any software) also had to retrench to the 165 subset

I did put CP67 multiprocessor support back into VM370 ... originally so US HONE could add a 2nd processor to all their 168s ... and some coding hacks could get twice the throughput of a single CPU

To some extent 85/165/168/3033/3090 were all the same design with tweaks. 165 microcode averaged 2.1 machine cycles per 370 instruction, improved to an average of 1.6 cycles for the 168 and to one cycle for the 3033. The 3033 started out as 168 logic remapped to 20% faster chips. The 168-1 had main memory that was 4-5 times faster than the 165. The 168-3 doubled the size of the processor cache ... but used the 2k bit to index cache lines. VS1 under VM370, moved from a 168-1 to a 168-3, saw a huge performance hit (it ran with half the cache in VS1 2k page mode, and any switching between 2k & 4k mode would flush the cache).
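
Just the relative instruction rate implied by those cycles-per-instruction numbers (a Python illustration assuming, for comparison only, an equal clock rate):

# Relative 370 instruction rate implied by the average cycles-per-instruction
# figures above, assuming (for illustration only) the same clock rate.
cycles_per_insn = {"165": 2.1, "168": 1.6, "3033": 1.0}
base = cycles_per_insn["165"]
for machine, cpi in cycles_per_insn.items():
    print(f"{machine}: {base / cpi:.2f}x the 165 instruction rate at equal clock")
# 165: 1.00x, 168: 1.31x, 3033: 2.10x -- before counting the 3033's ~20% faster chips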

Because the 165-subset 370 virtual memory dropped segment protect, CMS shared segments had to drop back to the CP67 games that played with storage protect keys (which VM Assist didn't support). Internally I had done a page mapped filesystem for CP67/CMS (3-5 times the throughput of the standard filesystem) along with a lot of shared segment enhancements ... which I then moved to VM370/CMS for internal systems.

For VM370R3, they did a game to allow VM Assist to be used with CMS ... and sold 168 VM Assist hardware to some number of CMS customers. The game was to not protect shared segments ... but every time a different user was dispatched ... scan all shared pages to see if any had been changed (and if so, unshare them and flag the pages as not in memory, so any reference would bring in an unmodified version). CMS originally had only one shared segment ... or 16 pages, so the scanning overhead was much less than the VM Assist benefit.

However, they then also decided to pick up a small subset of my changes (just some of the shared segments stuff, but using DMKSNT) for "DCSS" ... which greatly increased the number of shared segments ... resulting in the scanning overhead becoming much greater than the VM Assist benefit. For internal systems, I continued not using VM Assist for CMS users with shared segments. Business people said they couldn't do that for customers, because the customers had already paid for the VM Assist hardware changes.
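
A hypothetical sketch in Python (not the actual VM370 code; the cost constants are made up) of why the trade-off flipped once DCSS multiplied the number of shared pages:

# Hypothetical illustration of the VM Assist trade-off described above: the
# scan cost at every dispatch grows linearly with the number of shared pages,
# while the assist benefit per dispatch does not. Constants are assumptions.
SCAN_COST_PER_PAGE = 1.0            # assumed units of CPU per shared page checked
ASSIST_BENEFIT_PER_DISPATCH = 40.0  # assumed units of CPU saved by VM Assist per dispatch

def net_benefit(shared_pages):
    return ASSIST_BENEFIT_PER_DISPATCH - SCAN_COST_PER_PAGE * shared_pages

for pages in (16, 64, 128):   # one 64k shared segment = 16 pages; DCSS allowed many more
    print(pages, "shared pages -> net benefit", net_benefit(pages))
# With only 16 pages the assist wins; with many DCSS pages the scanning dominates.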

It got much worse when VM370 multiprocessor support was released to customers. Now they had to have processor-specific sets of shared pages and also update the page table pointer any time a user ran on a different processor (if two users were concurrently running with the same set of shared pages and one user modified them ... the other user could see the modification before the scan & unsharing occurred).

SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

some posts mentioning virtual memory and multi/hyper threading
https://www.garlic.com/~lynn/2025c.html#21 Is Parallel Programming Hard, And, If So, What Can You Do About It?
https://www.garlic.com/~lynn/2025b.html#108 System Throughput and Availability
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2021k.html#46 Transaction Memory
https://www.garlic.com/~lynn/2021h.html#51 OoO S/360 descendants
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2019d.html#62 IBM 370/195
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2017h.html#61 computer component reliability, 1951
https://www.garlic.com/~lynn/2017g.html#39 360/95
https://www.garlic.com/~lynn/2017c.html#26 Multitasking, together with OS operations
https://www.garlic.com/~lynn/2017.html#3 Is multiprocessing better then multithreading?
https://www.garlic.com/~lynn/2016h.html#45 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016h.html#7 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2015.html#43 z13 "new"(?) characteristics from RedBook
https://www.garlic.com/~lynn/2014m.html#105 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2014l.html#81 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2014j.html#99 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2013m.html#51 50,000 x86 operating system on single mainframe
https://www.garlic.com/~lynn/2012l.html#73 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012d.html#73 Execution Velocity
https://www.garlic.com/~lynn/2009s.html#6 "Portable" data centers
https://www.garlic.com/~lynn/2009r.html#59 "Portable" data centers
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VNET/RSCS

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VNET/RSCS
Date: 23 Jul, 2025
Blog: Facebook
MIT CTSS/7094 had a form of email.
https://multicians.org/thvv/mail-history.html

Then some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS. Others went to the IBM Science Center on the 4th flr and did virtual machines (1st modifying a 360/40 w/virtual memory and doing CP40/CMS, which morphs into CP67/CMS when the 360/67 standard with virtual memory becomes available), the science center wide-area network (that grows into the corporate internal network, larger than arpanet/internet from the beginning until sometime mid/late 80s; technology also used for the corporate sponsored univ BITNET), invented GML in 1969 (precursor to SGML and HTML), lots of performance tools, etc. Later, when the decision was made to add virtual memory to all 370s, there was a project that morphed CP67 into VM370 (although lots of stuff was initially simplified or dropped).

PROFS started out picking up internal apps and wrapping 3270 menus around them (for the less computer literate). They picked up a very early version of VMSG for the email client. When the VMSG author tried to offer them a much enhanced version of VMSG, the PROFS group tried to have him separated from the company. The whole thing quieted down when he demonstrated that every VMSG (and PROFS email) had his initials in a non-displayed field. After that he only shared his source with me and one other person.

I had been blamed for online computer conferencing on the IBM internal network in the late 70s and early 80s; it really took off in the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. Only about 300 directly contributed, but claims were that 25,000 were reading (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). From IBMJargon:
https://web.archive.org/web/20241204163110/https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

Before Jim left SJR for Tandem, we would have discussions about apps for getting corporation employees to use computers ... one of the things we came up with was an online directory. Jim would spend one person-week doing an app that looked up phone numbers in specially formatted CMS files, and I would write procedures for collecting softcopy phone files from IBM locations and reformatting them into phone lookup files (the app had to be able to do a lookup among tens of thousands of employees in much less time than it took somebody to pick up a hard copy phone book and find the number).
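
A minimal sketch in Python (not the actual CMS app; the entries are made-up examples) of why such a lookup easily beats reaching for the paper book: keep the reformatted records sorted by name and binary-search them.

# Binary search over sorted, fixed-format directory records: ~log2(N) probes,
# sub-second even for tens of thousands of entries.
import bisect

directory = sorted([
    ("GRAY, JIM",     "408-555-0101"),   # made-up example entries
    ("WATSON, VERA",  "408-555-0102"),
    ("WHEELER, LYNN", "408-555-0103"),
])
names = [name for name, _ in directory]

def lookup(name):
    i = bisect.bisect_left(names, name)
    if i < len(names) and names[i] == name:
        return directory[i][1]
    return None

print(lookup("WHEELER, LYNN"))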

comments from one of the 1969 CSC GML inventors about the scientific center wide-area network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

The Pisa Scientific Center had enhanced CP67 with SPM (which was later added to the internal VM370; sort of a superset of the combination of SMSG, IUCV, and VMCF). The customer RSCS included SPM support, even though the customer VM370 didn't. The person responsible for REXX did a multi-user spacewar game using SPM between (3270 CMS) clients and the server ... and since SPM was supported by RSCS, clients could be anywhere on the internal network. Trivia: very early, client bots appeared beating human players ... and then the server was enhanced to increase client energy use non-linearly as the interval between moves decreased below human reaction time.
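
A hypothetical sketch in Python of that kind of non-linear energy penalty (the constants and the quadratic curve are assumptions for illustration, not the actual server rule):

# As a client's interval between moves drops below human reaction time,
# make each move cost non-linearly more energy (constants are made up).
HUMAN_REACTION_SEC = 0.25
BASE_ENERGY_COST   = 1.0

def move_cost(interval_sec):
    if interval_sec >= HUMAN_REACTION_SEC:
        return BASE_ENERGY_COST
    # quadratic penalty as the move rate exceeds human speed
    return BASE_ENERGY_COST * (HUMAN_REACTION_SEC / interval_sec) ** 2

for interval in (0.5, 0.25, 0.05, 0.01):
    print(f"{interval:.2f}s between moves -> energy cost {move_cost(interval):.1f}")
# a bot issuing moves every 10ms pays ~625x per move, burning energy far faster than a human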

A co-worker at CSC was responsible for VNET/RSCS (he passed Aug2020), the CP67-based scientific center wide-area network that morphs into the corporate internal network
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

newspaper article about some of Edson's IBM TCP/IP battles:
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VNET/RSCS

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VNET/RSCS
Date: 23 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS

The IBM AWD workstation division had done their own 4mbit token-ring card for the PC/RT (which had a 16bit PC/AT bus). Then for the RS/6000 microchannel they were told they couldn't do their own microchannel cards, but had to use the (communication group heavily performance-kneecapped) PS2 cards (for example, the PS2 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card ... the joke was that a PC/RT 4mbit T/R server would have higher throughput than an RS/6000 16mbit T/R server).

The new Almaden Research center had been heavily provisioned with IBM CAT wiring (assuming use for 16mbit token-ring). However, they found that 10mbit ethernet (over the same wiring) had higher aggregate throughput (8.5mbit) and lower latency than 16mbit token-ring. The $69 10mbit Ethernet cards (also clocking 8.5mbit/sec) easily outperformed the $800 16mbit Token-Ring cards ... and for a 300 RS/6000 configuration the price difference was ($240,000 - $20,700 =) $219,300; with that they could easily afford several high performance TCP/IP routers, which included an IBM mainframe channel interface (as well as non-IBM channel options), 16 10mbit Ethernet LAN interfaces, T1 & T3 telco options, and 100mbit FDDI options (as few as three RS/6000s per 10mbit Ethernet LAN).

AWD had also tweaked the early ESCON specification (initially 10mbytes/sec, later 17mbytes/sec) to be full-duplex and 40+mbyte/sec for the RS/6000 "SLA". The problem was it was incompatible with everything else but another RS/6000 SLA. One of the high-performance TCP/IP router vendors was convinced to add a SLA option ... enabling high-performance RS/6000 TCP/IP servers.

trivia: mid-80s, the communication group had been fighting off release of IBM mainframe TCP/IP, but when they lost ... they changed their tactic and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to ship through them ... what shipped got 44kbyte/sec aggregate using nearly a whole 3090 processor. I then added RFC1044 support and in some tuning tests at Cray Research, between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).

more trivia: a 1988 ACM SIGCOMM article about an extensive study of a 30 station 10mbit Ethernet ... showing regular use getting 8.5mbits/sec, and with all stations in a tight low-level device driver loop constantly transmitting minimum-sized Ethernet packets, effective LAN throughput dropping off to 8mbits/sec.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

some recent posts mentioning 1988 ACM SIGCOMM study
https://www.garlic.com/~lynn/2025b.html#10 IBM Token-Ring
https://www.garlic.com/~lynn/2025.html#106 Giant Steps for IBM?
https://www.garlic.com/~lynn/2025.html#95 IBM Token-Ring
https://www.garlic.com/~lynn/2024f.html#27 The Fall Of OS/2
https://www.garlic.com/~lynn/2024e.html#52 IBM Token-Ring, Ethernet, FCS
https://www.garlic.com/~lynn/2024d.html#71 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#58 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#19 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#80 Channel I/O
https://www.garlic.com/~lynn/2022b.html#67 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021b.html#17 IBM Kneecapping products

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM VNET/RSCS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM VNET/RSCS
Date: 24 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS

As an undergraduate I had taken a 2 credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in 360 assembler for 360/30. The univ was getting a 360/67 for tss/360, replacing a 709/1401 combination (the univ. got a 360/30 temporarily replacing the 1401 pending arrival of the 360/67). The 360/67 arrives within a year of taking the intro class and I was hired fulltime responsible for OS/360 (tss/360 didn't come to production so it ran as a 360/65). Student Fortran had run in under a second on the 709, but initially over a minute on OS/360. I install HASP, which cut the time in half. Then I redo stage2 sysgen, carefully placing datasets and pds members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR.

Then CSC comes out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it during my dedicated weekend 48hrs. I start out rewriting pathlengths for OS/360 running in a virtual machine. The test stream ran 322secs on the real machine, initially 856secs in a virtual machine (CP67 CPU 534secs); after a couple months I have reduced CP67 CPU from 534secs to 113secs. I then start rewriting the dispatcher, scheduler, and paging, adding ordered seek queuing (from FIFO) and multi-page transfer channel programs (from FIFO, optimized for transfers/revolution, getting the 2301 paging drum from 70-80 4k transfers/sec to a channel transfer peak of 270).
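
A sketch in Python (not the actual CP67 code; the slots-per-revolution figure is just an assumption for illustration) of the idea behind the drum change: order the queued page slots by rotational position and chain them into one channel program, instead of servicing the queue FIFO with one I/O per request.

# Order queued page slots by how soon they come under the heads, so several
# transfer on each revolution instead of (worst case) one per revolution.
SLOTS_PER_TRACK = 9   # assumed number of 4k page slots per revolution (illustration only)

def fifo_order(requests):
    return list(requests)   # one I/O per request, in arrival order

def rotational_order(requests, current_slot=0):
    # sort by rotational distance from the current position, then chain as one program
    return sorted(requests, key=lambda slot: (slot - current_slot) % SLOTS_PER_TRACK)

queue = [7, 2, 5, 0, 3]
print("FIFO    :", fifo_order(queue))        # may wait most of a revolution per page
print("chained :", rotational_order(queue))  # [0, 2, 3, 5, 7] -- serviced in one pass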

Before graduating, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (jokes about Boeing getting 360/65s like other companies acquired keypunches). Lots of politics between the Renton director and the CFO, who only had a 360/30 for payroll up at Boeing Field (although they enlarge the room for a 360/67 for me to play with when I wasn't doing other stuff).

When I graduate, instead of staying with the Boeing CFO, I join IBM CSC. One of my hobbies at IBM was enhanced production operating systems for internal datacenters, and the online sales&marketing support HONE systems were one of my 1st and long-time customers. CSC had also ported APL\360 to CP67 as CMS\APL and HONE began offering CMS\APL-based online aids to branch offices. I had also done automated benchmarking for CP67 ... implementing the autolog command to reboot between each benchmark (it was quickly picked up for production automated operations). CSC was getting production system monitoring data from most internal datacenters and the benchmark scripts were chosen to be similar to the real live data. Then with the morph of CP67->VM370 (and dropping of lots of features), I start working on moving lots of stuff into a VM370R2 base for my internal CSC/VM. First was automated benchmarking support (to have a baseline comparison), but VM370 would consistently crash ... unable to complete the benchmark series ... so I had to add CP67 kernel serialization and integrity features in order to get a VM370 performance baseline. Then for VM370R3-base CSC/VM, I add multiprocessor support, initially for US HONE so they could add a 2nd CPU to each 168 system.

The IBM 23Jun69 unbundling announce included starting to charge for (application) software, but they managed to make the case that kernel software should still be free. Then in the mid-70s (possibly because of the rise of 370 clone makers, including Amdahl), the decision was made to start charging for kernel software (starting with incremental new add-ons, eventually charging for all kernel software in the 80s), and some of my internal "dynamic adaptive scheduling and resource management" was chosen as the guinea pig. As part of preparing for release, I performed 2000 automated benchmarks (involving a vast array of workload and configuration combinations) that took three months elapsed time to complete. The company wanted me to do a new release every month tracking the VM370 monthly PLC distributions; I said I could only do one every three months (including a benchmark subset validating there wasn't any performance regression) ... since it was all just a hobby along with doing the internal enhanced CSC/VM.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource management & fairshare posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
ibm 23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
automated benchmark posts
https://www.garlic.com/~lynn/submain.html#benchmark

recent posts mentioning undergrad univ responsible for os/360 and then working for Boeing CFO:
https://www.garlic.com/~lynn/2025c.html#64 IBM Vintage Mainframe
https://www.garlic.com/~lynn/2025c.html#55 Univ, 360/67, OS/360, Boeing, Boyd
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 25 Jul, 2025
Blog: Facebook
some carry over from the PROFS post, internal network starting with the science center wide-area network (initially CP/67, then moved to VM/370)
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS

Then in the early 80s, I got the HSDT project (run out of Los Gatos lab), T1 and faster computer links (terrestrial and satellite) and battles with the communication group (in the 60s, IBM had the 2701 controller supporting T1, but the 70s transition to SNA/VTAM and related issues capped controllers at 56kbits/sec). Local IBM San Jose had a T3 microwave Collins digital radio and ran T1 circuits to the main plant site. IBM also had a 10M, T3 C-band TDMA satellite system and one of the LSG T1 circuits (to the plant site) connected to a T1 satellite circuit to Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (which had a whole boatload of Floating Point Systems boxes that supported 40mbyte/sec RAID disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.
... snip ...

Also got a custom designed Ku-band TDMA satellite system, initially with 4.5M dishes in Los Gatos and Yorktown and 7M dish in Austin.

I had also been working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputing datacenters. Then congress cuts the budget, some other things happen and finally an RFP is released (in part based on what we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, NSFnet becomes the NSFNET backbone, precursor to the modern internet.

The first webserver in the US was on the Stanford SLAC (CERN sister institution) VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

last product we did at IBM approved 1988 ... HA/6000 ... originally for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000 (run out of Los Gatos lab, bldg29). I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scaleup with national labs (LANL, LLNL, NCAR, etc) and commercial cluster scaleup with RDBMS vendors (that have VAXCluster support in the same source base with UNIX .... Oracle, Sybase, Ingres, Informix). Was working with Hursley 9333s and hoping to upgrade them to be interoperable with FCS (planning for HA/CMP high-end).

Early Jan1992, in a meeting with the Oracle CEO, IBM AWD executive Hester tells Ellison that we would have 16-system clusters mid-92 and 128-system clusters ye-92. Mid Jan1992, presentations with FSD convince them to use HA/CMP cluster scaleup for gov. supercomputer bids. Late Jan1992, cluster scaleup is transferred to be announced as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than 4-systems (we leave IBM a few months later).

Some concern that cluster scaleup would eat the mainframe .... 1993 MIPS benchmark (industry standard, number of program iterations compared to reference platform):
• ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
• RS6000/990 : 126MIPS


The executive we had been reporting to goes over to head up Somerset/AIM (apple, ibm, motorola) ... single chip power/pc with Motorola 88k bus enabling shared-memory, tightly-coupled, multiprocessor system implementations

Sometime after leaving IBM, I was brought into a small client/server startup as a consultant. Two former Oracle people (who were in the Ellison/Hester meeting) are there, responsible for something they call a "commerce server", and want to do payment transactions on the server. The startup had also invented this technology they call SSL/HTTPS that they want to use. The result is now frequently called e-commerce. I have responsibility for everything between webservers and the payment networks.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
payment network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 25 Jul, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#116 Internet
https://www.garlic.com/~lynn/2025c.html#115 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#114 IBM VNET/RSCS
https://www.garlic.com/~lynn/2025c.html#113 IBM VNET/RSCS

One of the 1st problems that cropped up was that, as webserver activity scaled up, HTTP&HTTPS were creating and then almost immediately closing TCP sessions; as a result the FINWAIT list was exploding to thousands of entries ... and webservers started spending 95+% of CPU time running the FINWAIT list. NETSCAPE gets a large multiprocessor SEQUENT server, which had fixed the FINWAIT scan overhead some time before in DYNIX. Most webserver platforms were running RENO/TAHOE 4.3 TCP, and it took several more months before they started shipping fixes for the FINWAIT scan problem.
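
An illustration in Python (not the actual BSD/DYNIX code; the lingering time is an assumption) of why a linear scan of the FINWAIT list collapses under HTTP's open/close-per-request pattern:

# Work per new connection grows with the number of recently-closed connections
# still lingering in FINWAIT, so total scan cost grows roughly quadratically
# with the connection rate -- the 95+% CPU symptom described above.
def scan_cost(connections_per_sec, finwait_lifetime_sec=60):   # 60s lifetime is assumed
    lingering = connections_per_sec * finwait_lifetime_sec     # entries still on the list
    return connections_per_sec * lingering                     # list entries touched per second

for rate in (1, 10, 100):   # HTTP connections opened/closed per second
    print(f"{rate}/sec -> ~{scan_cost(rate):,} list entries scanned per second")
# 1/sec -> 60; 10/sec -> 6,000; 100/sec -> 600,000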

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

lots of posts mentioning FINWAIT scan problem
https://www.garlic.com/~lynn/2025b.html#97 Open Networking with OSI
https://www.garlic.com/~lynn/2024g.html#71 Netscape Ecommerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#18 MOSAIC
https://www.garlic.com/~lynn/2023d.html#57 How the Net Was Won
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022f.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2021k.html#80 OSI Model
https://www.garlic.com/~lynn/2021h.html#86 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021f.html#29 Quic gives the internet's data transmission foundation a needed speedup
https://www.garlic.com/~lynn/2019.html#74 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2018d.html#63 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017i.html#45 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017c.html#54 The ICL 2900
https://www.garlic.com/~lynn/2017c.html#52 The ICL 2900
https://www.garlic.com/~lynn/2016e.html#127 Early Networking
https://www.garlic.com/~lynn/2016e.html#43 How the internet was invented
https://www.garlic.com/~lynn/2015h.html#113 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015g.html#96 TCP joke
https://www.garlic.com/~lynn/2015f.html#71 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#25 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2015d.html#50 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#2 Knowledge Center Outage May 3rd
https://www.garlic.com/~lynn/2014j.html#76 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014h.html#26 There Is Still Hope
https://www.garlic.com/~lynn/2014g.html#13 Is it time for a revolution to replace TLS?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013i.html#48 Google takes on Internet Standards with TCP Proposals, SPDY standardization
https://www.garlic.com/~lynn/2013i.html#46 OT: "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2013h.html#8 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#83 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012i.html#15 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012e.html#89 False Start's sad demise: Google abandons noble attempt to make SSL less painful
https://www.garlic.com/~lynn/2012d.html#20 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2011n.html#6 Founders of SSL Call Game Over?
https://www.garlic.com/~lynn/2011g.html#11 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2010p.html#9 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
https://www.garlic.com/~lynn/2010m.html#51 Has there been a change in US banking regulations recently?
https://www.garlic.com/~lynn/2010b.html#62 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009n.html#44 Follow up
https://www.garlic.com/~lynn/2009i.html#76 Tiny-traffic DoS attack spotlights Apache flaw
https://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
https://www.garlic.com/~lynn/2008p.html#36 Making tea
https://www.garlic.com/~lynn/2008m.html#28 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2007j.html#38 Problem with TCP connection close
https://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS?
https://www.garlic.com/~lynn/2006m.html#37 Curiosity
https://www.garlic.com/~lynn/2006k.html#2 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh
https://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
https://www.garlic.com/~lynn/2006d.html#21 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005o.html#13 RFC 2616 change proposal to increase speed
https://www.garlic.com/~lynn/2005g.html#42 TCP channel half closed
https://www.garlic.com/~lynn/2005c.html#70 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004m.html#46 Shipwrecks
https://www.garlic.com/~lynn/2003h.html#50 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003e.html#33 A Speculative question
https://www.garlic.com/~lynn/2002q.html#12 Possible to have 5,000 sockets open concurrently?
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002i.html#39 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002.html#3 The demise of compaq
https://www.garlic.com/~lynn/2000c.html#52 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/99.html#164 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#1 Early tcp development?

--
virtualization experience starting Jan1968, online at home since Mar1970

Library Catalog

From: Lynn Wheeler <lynn@garlic.com>
Subject: Library Catalog
Date: 25 Jul, 2025
Blog: Facebook
I had taken a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO (reader/punch/printer frontend for the 709) for the 360/30. The univ was getting a 360/67 for tss/360 replacing the 709/1401 ... getting a 360/30 temporarily replacing the 1401 pending arrival of the 360/67. The univ. shutdown the datacenter on weekends and I would have the place dedicated, although 48hrs w/o sleep made monday classes hard. I was given a pile of hardware&software manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, etc and within a few weeks had a 2000 card program. The 360/67 arrives within a year of taking the intro class and I was hired fulltime responsible for OS/360. Student Fortran had run under a second on the 709, but initially over a minute on the 360/67 (as 360/65 w/OS360). I install HASP and it cuts the time in half. I then start redoing STAGE2 SYSGEN carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student Fortran never got better than the 709 until after I install Univ. of Waterloo WATFOR.

The univ. library gets an ONR grant to do an online catalog and part of the money goes to getting a 2321 datacell. The effort was also selected by IBM for the CICS product betatest, and the library project and CICS support were added to my tasks.

After graduating, I join the IBM Cambridge Scientific Center ... and then transfer to SJR on the west coast ... and work with Jim Gray and Vera Watson on the original SQL/relational, System/R ... and am also involved with the tech transfer to Endicott for SQL/DS (under the radar, while the company was preoccupied with the next great DBMS, "EAGLE"). When "EAGLE" implodes, there is a request for how fast System/R can be ported to MVS ... which is eventually released as DB2 (originally for decision support *ONLY*).

After leaving IBM in the early 90s, I was asked into NIH NLM
https://www.nlm.nih.gov/

... and a couple of the people that had done their original implementation (about the same time I was supporting the univ. catalog effort) were still there. Both were similar IBM BDAM dataset implementations (and NLM's was still in use in the 90s).

CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#bdam
System/R (original sql/relational) posts
https://www.garlic.com/~lynn/submain.html#systemr

posts mentioning NIH NLM:
https://www.garlic.com/~lynn/2024c.html#96 Online Library Catalog
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2022d.html#74 WAIS. Z39.50
https://www.garlic.com/~lynn/2022c.html#104 Scientists have established a link between brain damage and religious fundamentalism
https://www.garlic.com/~lynn/2022c.html#39 After IBM
https://www.garlic.com/~lynn/2022.html#38 IBM CICS
https://www.garlic.com/~lynn/2021e.html#72 Politically polarized brains share an intolerance of uncertainty
https://www.garlic.com/~lynn/2021b.html#98 Extremist Brains Perform Poorly at Complex Mental Tasks, Study Reveals
https://www.garlic.com/~lynn/2019c.html#28 CICS Turns 50 Monday, July 8
https://www.garlic.com/~lynn/2019b.html#31 How corporate America invented 'Christian America' to fight the New Deal
https://www.garlic.com/~lynn/2018c.html#13 Graph database on z/OS?
https://www.garlic.com/~lynn/2018b.html#55 Brain size of human ancestors evolved gradually over 3 million years
https://www.garlic.com/~lynn/2018b.html#54 Brain size of human ancestors evolved gradually over 3 million years
https://www.garlic.com/~lynn/2017i.html#4 EasyLink email ad
https://www.garlic.com/~lynn/2017h.html#48 endless medical arguments, Disregard post (another screwup)
https://www.garlic.com/~lynn/2017g.html#57 Stopping the Internet of noise
https://www.garlic.com/~lynn/2017f.html#34 The head of the Census Bureau just quit, and the consequences are huge
https://www.garlic.com/~lynn/2017f.html#14 Fast OODA-Loops increase Maneuverability
https://www.garlic.com/~lynn/2017e.html#88 I quit this NG
https://www.garlic.com/~lynn/2015b.html#64 Do we really?
https://www.garlic.com/~lynn/2015b.html#63 Do we really?
https://www.garlic.com/~lynn/2014d.html#55 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2011.html#39 The FreeWill instruction
https://www.garlic.com/~lynn/2009m.html#88 Continous Systems Modelling Package
https://www.garlic.com/~lynn/2008m.html#74 Speculation ONLY
https://www.garlic.com/~lynn/2005j.html#47 Where should the type information be?
https://www.garlic.com/~lynn/2005j.html#45 Where should the type information be?
https://www.garlic.com/~lynn/2005d.html#57 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2004n.html#47 Shipwrecks
https://www.garlic.com/~lynn/2004l.html#52 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004e.html#53 c.d.theory glossary (repost)
https://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
https://www.garlic.com/~lynn/aadsm15.htm#15 Resolving an identifier into a meaning

--
virtualization experience starting Jan1968, online at home since Mar1970

