List of Archived Posts

2024 Newsgroup Postings (04/15 - )

Amdahl and IBM ACS
Disk & TCP/IP I/O
ReBoot Hill Revisited
ReBoot Hill Revisited
Cobol
Cobol
Testing
Testing
AI-controlled F-16
Boeing and the Dark Age of American Manufacturing
AI-controlled F-16
370 Multiprocessor
370 Multiprocessor
Boeing and the Dark Age of American Manufacturing
Bemer, ASCII, Brooks and Mythical Man Month
360&370 Unix (and other history)
CTSS, Multics, CP67/CMS
IBM Millicode
CP40/CMS
IBM Millicode
IBM Millicode
TDM Computer Links
FOILS
CP40/CMS
TDM Computer Links
Tymshare & Ann Hardy
The Last Thing This Supreme Court Could Do to Shock Us
PDP1 Spacewar
Wondering Why DEC Is The Most Popular
Wondering Why DEC Is The Most Popular
GML and W3C
HONE &/or APL
UNIX & IBM AIX
Old adage "Nobody ever got fired for buying IBM"
Old adage "Nobody ever got fired for buying IBM"
The man reinventing economics with chaos theory and complexity science
Old adage "Nobody ever got fired for buying IBM"
Planet Mainframe Profile
Joseph Stiglitz is still walking the road to freedom
Big oil spent decades sowing doubt about fossil fuel dangers, experts testify
CMS RED, XEDIT, IOS3270, FULIST, BROWSE
Congratulations Lynne
Netscape
TYMSHARE, VMSHARE, ADVENTURE
IBM Mainframe LAN Support
IBM Mainframe LAN Support
Big oil spent decades sowing doubt about fossil fuel dangers, experts testify
IBM Mainframe LAN Support
IBM Mainframe LAN Support

Amdahl and IBM ACS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amdahl and IBM ACS
Date: 15 Apr, 2024
Blog: Facebook

Note Amdahl wins the battle to make ACS 360-compatible ... folklore is
that executives then shut down the operation because they were afraid
that it would advance the state of the art too fast and IBM would
lose control of the market ... shortly later Amdahl leaves
IBM. The following lists some ACS/360 features that show up more than
20yrs later in the 90s with ES/9000

https://people.computing.clemson.edu/~mark/acs_end.html
ACS
https://people.computing.clemson.edu/~mark/acs.html
https://people.computing.clemson.edu/~mark/acs_legacy.html

some recent posts mentioning Amdahl and end of ACS
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#91 7Apr1964 - 360 Announce
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall
https://www.garlic.com/~lynn/2023g.html#44 Amdahl CPUs
https://www.garlic.com/~lynn/2023g.html#23 Vintage 3081 and Water Cooling
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#80 Vintage Mainframe 3081D
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#69 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023d.html#94 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk & TCP/IP I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk & TCP/IP I/O
Date: 15 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#116 Disk & TCP/IP I/O

135/145, 138/148, 4331/4341 were conventional microprocessors with
microcode to emulate 370 instructions, avg 10 native instructions per
370 instruction. I got con'ed into helping with ECPS, originally for
138/148 ... old archive post with the initial analysis of kernel
pathlengths for selecting what to microcode. I was told 138/148 had 6k
bytes of microcode space and 370 kernel instructions would translate
into native microcode on an approx. byte-for-byte basis (the
highest-executing 6kbytes of 370 kernel pathlengths accounted for
approx. 80% of kernel execution) ...
https://www.garlic.com/~lynn/94.html#21
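
To give a flavor of that selection analysis, here is a minimal sketch
in C: a greedy pick of the hottest kernel paths (by CPU time per byte
of code) until the 6kbyte microcode budget is filled. The profile
numbers and path names are invented for illustration; the real study
worked from measured kernel pathlength data (see the archived post
above).

  #include <stdio.h>
  #include <stdlib.h>

  /* hypothetical kernel-path profile: bytes of 370 code and fraction
     of total kernel CPU time */
  struct path { const char *name; int bytes; double cpu; };

  static int by_density(const void *a, const void *b) {
      const struct path *p = a, *q = b;
      double da = p->cpu / p->bytes, db = q->cpu / q->bytes;
      return (da < db) - (da > db);       /* descending CPU-per-byte */
  }

  int main(void) {
      struct path paths[] = {             /* illustrative numbers only */
          {"dispatch",     900, 0.22}, {"page-fault", 1200, 0.18},
          {"vio/ccw",     2100, 0.25}, {"free-storage", 600, 0.10},
          {"privop-sim",  1500, 0.08}, {"misc",        4000, 0.17},
      };
      int n = sizeof paths / sizeof *paths;
      int budget = 6144, used = 0;        /* "6k bytes" assumed = 6144 */
      double covered = 0;
      qsort(paths, n, sizeof *paths, by_density);
      for (int i = 0; i < n && used + paths[i].bytes <= budget; i++) {
          used += paths[i].bytes;         /* ~byte-for-byte to microcode */
          covered += paths[i].cpu;
          printf("move %-12s %5d bytes, cum %3.0f%% of kernel CPU\n",
                 paths[i].name, paths[i].bytes, covered * 100);
      }
      return 0;
  }

With these made-up numbers the greedy pick covers ~75% of kernel CPU
before the budget is exhausted, the same shape as the ~80% result
described above.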

around 1980, there was an effort to move a variety of IBM internal
microprocessors to 801/risc ... low&mid-range 370s (Iliad 801 for
4361&4381), s38->as/400, controllers, etc. I got roped into helping
with a white paper that VLSI technology had advanced to the point
that it was possible to implement nearly all 370 instructions
directly in silicon (as well as covering the other proposed 801
solutions); those 801 efforts floundered ... with some number of
801/RISC engineers leaving for RISC projects at other vendors.

Note 801/ROMP was supposed to be for the next generation displaywriter
... when that got canceled, they decided to pivot to the unix
workstation market and got the company that had done the AT&T Unix
port to IBM/PC for PC/IX to do one for ROMP ... which becomes AIX (and
the PC/RT). The follow-on chip set was RIOS for RS/6000. Then AIM is
formed (Apple, IBM, Motorola) and the executive we reported to for
HA/CMP went over to head up Somerset (single chip power/pc effort),
which included adopting some features from the Motorola 88k RISC
processor.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

tome about being frequently told that I had no career, no raises, no
promotions and about all the people that wanted to see me fired
... including 5of6 of the corporate executive committee, being blamed
for doing online computer conferencing in the late 70s and early 80s
on the IBM internal network (larger than arpanet/internet from just
about the beginning until mid/late 80s).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

ReBoot Hill Revisited

From: Lynn Wheeler <lynn@garlic.com>
Subject: ReBoot Hill Revisited
Date: 16 Apr, 2024
Blog: Facebook

ReBoot Hill Revisited
https://planetmainframe.com/2016/03/reboot-hill-revisited/

Learson tried (and failed) to block the bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... further complicated by the failure of "Future System"
https://www.amazon.com/Computer-Wars-Future-Global-Technology/dp/0812923006/

"and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive."

... snip ...

future system refs:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

... and 20yrs later, IBM has one of the largest losses in the history
of US corporations and it looked like it might be the end; IBM being
re-orged into the 13 "baby blues" in preparation for breaking up the
company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get a call from the bowels of
Armonk asking if we could help with the company breakup. Before we get
started, the board brings in the former AMEX president as CEO, who
(somewhat) reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Mid-90s, the financial industry was expanding globally and spending
billions on redoing batch cobol overnight settlement (some of it
originating from the 60s) ... the combination of increased business
and globalization shortening the overnight window meant settlement
was not getting done in the time available. They were moving to
straight-through financial processing on large numbers of parallel
"killer micros". Some of us tried to point out that the standard
parallelization libraries being used had a hundred times the overhead
of batch cobol ... and were ignored ... until some major pilots went
down in throughput flames.

After the turn of the century I was helping somebody that did a
high-level financial processing language that translated
specifications into (parallelizable) fine-grain SQL statements for
execution. Also in the late 90s, i86 processor makers had gone to a
hardware layer that translated i86 into RISC micro-ops, largely
negating the throughput difference between i86 and RISC.


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     z900 processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC)

2003 max. configured z990, 32 processor aggregate 9BIPS (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

In the same period, major (non-mainframe) RDBMS vendors (including
IBM) had done significant optimization work on parallelizing
(non-mainframe) RDBMS cluster operation. In 2003, had demo'ed a
six-system parallel RDBMS cluster, each system a four-processor
Pentium4 multiprocessor (each Pentium4 the equivalent of a
max-configured z990, each system the equivalent of four max-configured
z990s, the six of them equivalent to 24 max-configured z990s ... or
232.8BIPS, aggregate more than the current max-configured z16). Using
the financial processing language, implemented equivalent
"straight-through" processing of several existing major production
(overnight batch window) systems with throughput greatly exceeding any
existing requirement. This was taken to major financial industry
meetings, initially with great acceptance ... then brick
wall. Eventually were told that executives still bore the scars of the
90s attempts, and it would be a long time before it was tried again.

some recent posts mentioning "straight-through" processing
implementation
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2023g.html#12 Vintage Future System
https://www.garlic.com/~lynn/2022g.html#69 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#3 Final Rules of Thumb on How Computing Affects Organizations and People
https://www.garlic.com/~lynn/2021k.html#123 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros

Turn of the century, IBM mainframe hardware sales had dropped to a few
percent of total revenue (compared to over half in the 80s). In the
z12 time-frame, it was down to a couple percent (and still dropping),
but the mainframe group was 25% of total revenue (and 40% of profit)
... nearly all software & services.

I/O trivia: 1980 I was con'ed into doing channel-extender support for
STL (since renamed SVL) that was moving 300 people from the IMS group
to an offsite bldg with service back to the STL datacenter. They had
tried "remote 3270", but found the human factors unacceptable.
Channel-extender allowed placing channel-attached 3270 controllers at
the offsite bldg with no perceptible difference in human factors
between offsite and inside STL (and some tweaks with channel-extender
increased system throughput by 10-15%, prompting the suggestion that
all their systems should use channel-extender). Then some POK
engineers playing with some serial stuff blocked the release of the
support to customers.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

Later in 1988, the IBM branch office asks if I could help LLNL
(national lab) get some serial stuff they were playing with,
standardized. It quickly becomes the "fibre-channel" standard ("FCS",
including some stuff I had done in 1980), initially 1gbit/sec,
full-duplex, aggregate 200mbyte/sec. Then the POK stuff (after more
than a decade) finally gets released with ES/9000 as ESCON (when it is
already obsolete), 17mbytes/sec.

Then some POK engineers get involved in FCS and define a heavy-weight
protocol that significantly cuts the native throughput, which
eventually ships as FICON (running over FCS). The latest public
benchmark I can find is z196 "Peak I/O" getting 2M IOPS with 104
FICON. About the same time, an FCS was announced for E5-2600 server
blades claiming over a million IOPS (two such FCS having higher
throughput than 104 FICON). Also IBM pubs recommend limiting SAPs
(system assist processors that actually do I/O) to 70% CPU ... which
would be around 1.5M IOPS. Further complicating are CKD DASD, which
haven't been made for decades, needing to be simulated on
industry-standard fixed-block disks.

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
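
A quick back-of-envelope check of those I/O numbers (only the
2M-IOPS/104-FICON and over-1M-IOPS-per-FCS figures are from the above;
the rest is simple arithmetic):

  #include <stdio.h>
  int main(void) {
      double ficon = 2e6 / 104;   /* z196 peak: 2M IOPS over 104 FICON */
      double fcs   = 1e6;         /* E5-2600 era FCS: over 1M IOPS each */
      printf("per FICON ~%.0fK IOPS vs per FCS %.0fK IOPS (~%.0fx)\n",
             ficon / 1e3, fcs / 1e3, fcs / ficon);
      printf("two FCS: %.1fM+ IOPS vs 104 FICON: 2.0M IOPS\n",
             2 * fcs / 1e6);
      printf("capping SAPs at 70%%: ~%.1fM IOPS\n", 2e6 * 0.7 / 1e6);
      return 0;
  }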

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022

2010 max configured z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)
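
Those aggregate BIPS figures make the comparison in the next paragraph
easy to sanity-check (a trivial sketch using only the numbers above):

  #include <stdio.h>
  int main(void) {
      double z196 = 50, z16 = 222, blade = 500;  /* aggregate BIPS */
      printf("2010 blade / max z196: %.0fx\n", blade / z196); /* ten times */
      printf("2010 blade / max z16:  %.2fx\n", blade / z16);  /* >2x */
      return 0;
  }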

2010 E5-2600 server blade ten times max-configured z196 and still more
than twice current max-configured z16 (current generation server
blades closer to 40 times max-configured z16).

reference to some discussion about performance technologies
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2022h.html#116 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2019e.html#102 MIPS chart for all IBM hardware model
https://www.garlic.com/~lynn/2016f.html#91 ABO Automatic Binary Optimizer
https://www.garlic.com/~lynn/2016e.html#38 How the internet was invented
https://www.garlic.com/~lynn/2014m.html#164 Slushware
https://www.garlic.com/~lynn/2014l.html#90 What's the difference between doing performance in a mainframe environment versus doing in others
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer
https://www.garlic.com/~lynn/2014c.html#96 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2013i.html#33 DRAM is the new Bulk Core
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's?

--
virtualization experience starting Jan1968, online at home since Mar1970

ReBoot Hill Revisited

From: Lynn Wheeler <lynn@garlic.com>
Subject: ReBoot Hill Revisited
Date: 16 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited

Attractive Alternatives to Mainframes Are Breaking Their Decades-Old
Hold on Wall Street
https://web.archive.org/web/20120125090143/http://www.wallstreetandtech.com/operations/197007742

... before we left IBM (before our ha/cmp cluster scale-up was
transferred for announce as IBM supercomputer for technical/scientific
*only* and we were told we couldn't work on anything with more than
four processors), we did a number of calls on NYSE and SIAC ... part
of it was their need for more processor power ... and HA/CMP would be
capable of 128-processor RS/6000 clusters doing commercial RDBMS as
well as technical/scientific.


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS (128*126MIPS = 16BIPS)

Hardware reliability had been increasing and service outages were
increasingly shifting to environmental (earthquakes, hurricanes,
floods) ... we were doing replicated systems and I had coined the
terms disaster survivability and geographic survivability when out
marketing. The IBM (rebranded) S/88 product administrator was taking
us into their customers. They had also gotten me to write a section
for the corporate continuous availability strategy document (but it
got pulled when both Rochester/AS400 and POK/mainframe complained
that they couldn't meet the objectives).

We had been brought into NYSE and SIAC; they had a datacenter very
carefully located in NYC in a building that was supplied from multiple
water, power, and telco sources that traveled different routes past
the building. NYSE/SIAC was taken out when a transformer exploded in
the basement, contaminating the bldg with PCBs.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Cobol

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cobol
Date: 17 Apr, 2024
Blog: Facebook

Turn of the century, I was brought into a large financial outsourcing
datacenter that handled over half of all (issuing/consumer) credit
card accounts in the US (real-time auths, statementing, call-centers,
etc) ... it had 40+ max-configured IBM mainframe systems (constant
rolling upgrades, none older than 18months), all running the same
450K-statement cobol application (the number needed to finish batch
settlement in the overnight window). They had a large group supporting
performance care and feeding for a couple decades ... but had possibly
gotten a little myopic.

I offered to use some different performance analysis techniques (from
the IBM science center in the 70s) ... and was able to identify a 14%
improvement (including finding a large complex operation that was
using three times the expected processing; turns out it was being
invoked three different times instead of just once) ... representing
savings of six max-configured mainframes (at the time, going rate
around $30M each). They had other datacenters that handled 70% of all
acquiring (merchant) credit card processing.
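
The savings arithmetic roughly checks out (a sketch; the 43-system
count is an assumption inside the "40+" above):

  #include <stdio.h>
  int main(void) {
      int systems = 43;            /* "40+ max configured" -- assumed 43 */
      double improvement = 0.14;   /* identified 14% */
      printf("%.1f systems freed, ~$%.0fM at ~$30M each\n",
             systems * improvement, systems * improvement * 30);
      return 0;
  }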

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

past posts mentioning financial outsourcing and 450k statement cobol
application handling over half of all issuing/consumer credit card
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2024.html#26 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2023g.html#87 Mainframe Performance Analysis
https://www.garlic.com/~lynn/2023g.html#50 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023c.html#99 Account Transaction Update
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#62 Cobol
https://www.garlic.com/~lynn/2018f.html#13 IBM today
https://www.garlic.com/~lynn/2018d.html#43 How IBM Was Left Behind
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017k.html#57 When did the home computer die?
https://www.garlic.com/~lynn/2017h.html#18 IBM RAS
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2013h.html#42 The Mainframe is "Alive and Kicking"
https://www.garlic.com/~lynn/2013b.html#45 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?

--
virtualization experience starting Jan1968, online at home since Mar1970

Cobol

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cobol
Date: 17 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#4 Cobol

the financial services company had once been a unit of AMEX, but in
1992 it was spun off in the largest IPO up until that time ... the
same time that IBM looked to be about at its end, having one of the
largest losses in the history of US corporations and being reorged
into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the company breakup. Before we get
started, the board brings in the former president of Amex (that the
financial services company had previously reported to) as CEO, who
(somewhat) reverses the breakup (although it wasn't long before the
disk division is gone)

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Testing
Date: 17 Apr, 2024
Blog: Facebook

The IBM 23Jun1969 unbundling announcement started to charge for
(application) software (IBM managed to make the case that kernel
software could still be free), system engineers (SE), maintenance,
etc.

after graduation I joined the IBM science center and one of my hobbies
was enhanced production operating systems for internal datacenters.

with the decision to add virtual memory to all 370s (basically MVT
storage management was so bad that regions were specified four times
larger than used, and a 1mbyte 370/165 typically only ran four
concurrent regions, insufficient to keep the system busy and justify
its cost; going to running MVT with a 16mbyte address space ...
similar to running MVT in a 16mbyte virtual machine ... aka VS2/SVS,
would allow the number of concurrently running regions to be increased
by a factor of four times ... with little or no paging), the first
thing was enhancing CP67 to optionally support 370 virtual machines
with 370 virtual memory ... and modifying a CP67 to run on the 370
virtual memory architecture. This was in regular production use for a
year before the 1st engineering 370 with virtual memory was
operational (in fact the CP67-370 was used as part of validating the
engineering 370). Then there was a decision to release a VM370
product, and in the morph from CP67->VM370, a lot of features were
dropped or simplified.

I had also done an automated benchmarking process ... run a specified
script giving the number of simulated users with specified execution
profiles (as part of automated benchmarking I had also done the
"autolog" command, which also came to be used for automating lots of
standard production operation), with automated system reboot between
each benchmark. With more internal datacenters installing VM370, early
1974 I started migrating lots of CP67 features to VM370 Release 2 ...
initially I found the automated benchmarks were consistently crashing
VM370 ... so the next thing I migrated was the CP67 kernel
synchronization&serialization ... in order to complete a full set of
benchmarks w/o VM370 constantly crashing. Towards the end of 1974, I
had a VM370 R2-based production "CSC/VM" (for internal datacenters).
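
In the spirit of that automated process, a hypothetical driver loop
(all command names except "autolog" are invented for the sketch):

  #include <stdio.h>
  #include <stdlib.h>

  struct bench { int users; const char *profile; };

  int main(void) {
      struct bench runs[] = {{20, "interactive"}, {40, "mixed"},
                             {80, "batch"}};
      char cmd[128];
      for (int i = 0; i < (int)(sizeof runs / sizeof *runs); i++) {
          system("reboot_test_system");    /* fresh reboot per benchmark */
          for (int u = 0; u < runs[i].users; u++) {
              snprintf(cmd, sizeof cmd, "autolog SIM%03d %s",
                       u, runs[i].profile); /* log on a simulated user */
              system(cmd);
          }
          snprintf(cmd, sizeof cmd, "collect_results bench%02d", i);
          system(cmd);                     /* gather the measurements */
      }
      return 0;
  }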

Also in the period, IBM took a sharp swerve with the Future System
effort ... which was completely different from 370 and was going to
completely replace 370. Internal politics during the FS period was
also killing off 370 efforts, and the lack of new IBM 370s during the
period is credited with giving the clone 370 makers their market
foothold. When FS finally implodes, there is a mad rush to get stuff
back into the 370 product pipeline, including kicking off the
quick&dirty 3033&3081 efforts in parallel. some more detail
http://www.jfsowa.com/computer/memo125.htm

With the demise of FS (and the rise of 370 clone makers), it was
decided to start the transition to kernel software charging ...
beginning with new kernel code "add-ons" (the transition complete in
the 1st half of the 80s) ... and much of my internal "CSC/VM" was
selected as the guinea pig (I also get to spend lots of time with
business planners and lawyers on kernel software charging practices).

As part of my release for kernel software add-on charging (some focus
on the dynamic adaptive resource manager & scheduler that I had done
as an undergraduate), there were 2000 automated validation benchmarks
that took 3 months elapsed time to run. The science center had years
of system activity monitoring data for a large number of different
systems ... and created a multiple-dimension system activity
specification (uniform distribution of different combinations of
number of users, amounts of real storage available, paging, working
set sizes, file I/O, CPU intensive, etc) with several benchmarks
outside normally observed activity ... for the 1st 1000 benchmarks.
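
A sketch of generating such a specification; the dimension values are
illustrative, not the science center's measured distributions:

  #include <stdio.h>

  int main(void) {
      int users[]   = {10, 40, 160};    /* number of simulated users */
      double wset[] = {0.5, 1.0, 2.0};  /* working set / real storage */
      double io[]   = {0.2, 0.5, 0.8};  /* fraction of time in file I/O */
      int n = 0;
      for (int u = 0; u < 3; u++)       /* uniform grid of combinations */
          for (int w = 0; w < 3; w++)
              for (int i = 0; i < 3; i++, n++)
                  printf("bench %2d: %3d users, wset %.1fx, io %.1f\n",
                         n, users[u], wset[w], io[i]);
      /* plus a few deliberately outside observed activity, e.g.: */
      printf("bench %2d: 500 users, wset 4.0x, io 0.9 (outlier)\n", n);
      return 0;
  }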

Also done at the science center was an APL-based analytical system
model. This was made available on the world-wide, online
sales&marketing HONE system as the Performance Predictor; branch
people could enter customer configuration and workload profile data
and ask "what-if" questions about what happens with configuration
and/or workload changes. The US HONE systems had been consolidated in
silicon valley, resulting in the largest loosely-coupled, shared-DASD
complex, with fall-over and load-balancing ... where a modified
version of the APL-based model made the load-balancing decisions.

Another modified version of the APL-based model would predict the
result of each of the 1st 1000 benchmarks, and the prediction was then
checked against the actual results (somewhat validating both the model
and my dynamic adaptive implementation). The APL-based model was then
modified to specify the benchmark profile for each of the 2nd 1000
benchmarks, looking at the results of all benchmarks run so far
... searching for possible anomalies.
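
The validation loop amounts to something like the following sketch
(threshold and values invented for illustration):

  #include <stdio.h>
  #include <math.h>

  int main(void) {
      double predicted[] = {0.82, 1.10, 2.40, 0.95};  /* model output */
      double actual[]    = {0.80, 1.15, 3.10, 0.96};  /* measured */
      for (int i = 0; i < 4; i++) {
          double err = fabs(predicted[i] - actual[i]) / actual[i];
          printf("bench %d: error %4.1f%%%s\n", i, err * 100,
                 err > 0.15 ? "  <-- anomaly, probe nearby profiles" : "");
      }
      return 0;
  }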

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
23Jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management and scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging, page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone

some recent performance predictor specific posts
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#7 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history

--
virtualization experience starting Jan1968, online at home since Mar1970

Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Testing
Date: 18 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#6 Testing

other trivia: the last product we did at IBM was HA/CMP. It started
out as HA/6000 for the NYTimes to move their newspaper system (ATEX)
off VAXCluster to RS/6000. I rename it HA/CMP when I start doing
technical/scientific cluster scale-up with national labs and
commercial cluster scale-up with RDBMS vendors (Oracle, Sybase,
Informix, Ingres ... that had both VAXCluster and UNIX support in the
same source base). Lots of studies on why things fail. In part,
commodity hardware was becoming increasingly reliable and service
outages were starting to increasingly shift to other factors like
earthquakes, floods, hurricanes, etc ... so had to include replicated
systems at different locations (less likely to be subject to common
events) ... out marketing I coined the terms disaster survivability
and geographic survivability. The IBM S/88 product administrator
started taking us around to their customers and also had me write a
section for the corporate continuous availability strategy document
(but it got pulled when both Rochester/AS400 and POK/mainframe
complained they couldn't meet the objectives).

Early Jan1992, in a meeting with Oracle, IBM AWD/Hester told the
Oracle CEO that IBM would have 16-processor HA/CMP clusters by mid92
and 128-processor HA/CMP clusters by ye92. I was then briefing IBM
(gov) FSD about HA/CMP and they apparently told the Kingston
supercomputer group that they were going with HA/CMP for gov
customers. Then, end of Jan92, we were told that cluster scale-up was
being transferred to Kingston for announce as IBM supercomputer (for
technical/scientific *ONLY*) and that we couldn't work with anything
that had more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

.. when I transferred to SJR in the 70s, I got to wander around IBM
& non-IBM datacenters including disk engineering (bldg14) and disk
product test (bldg15) across the street. They were doing prescheduled,
around the clock, stand-alone mainframe testing (they said they had
recently tried MVS, but MVS had 15min mean-time-between-failures
... requiring manual re-ipl ... in that environment). I offered to
rewrite the I/O supervisor to make it bullet proof and never fail,
allowing any amount of on-demand, concurrent testing and improving
productivity ... the downside was they would increasingly blame me
for problems and I had to spend increasing amounts of time playing
disk engineer, diagnosing their hardware problems. Engineering &
Product Test were completely separated; the departments didn't report
to common management until the executive level ... and members didn't
have badge access to each others' machine rooms and bldgs (since I
provided the mainframe systems for both bldgs, my badge was enabled
for access to both; I assume, not being in the disk division, I
wasn't subject to the separation rules).

getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
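
The "bullet proof" idea reduces to bounded retry plus fencing a
failing device, instead of crashing the system; a hedged sketch, with
all names, statuses, and the simulated flaky hardware invented:

  #include <stdio.h>
  #include <stdlib.h>

  enum io_status { IO_OK, IO_UNIT_CHECK, IO_TIMEOUT };

  static enum io_status start_io(int dev) {   /* stand-in for channel I/O */
      return rand() % 3 ? IO_OK : IO_UNIT_CHECK;  /* simulated errors */
  }

  static int robust_io(int dev) {
      for (int attempt = 0; attempt < 5; attempt++) {
          enum io_status s = start_io(dev);
          if (s == IO_OK) return 0;
          printf("dev %03x error %d, attempt %d:"
                 " log sense data, retry\n",  /* record, don't re-ipl */
                 dev, s, attempt + 1);
      }
      printf("dev %03x fenced offline; testing continues elsewhere\n",
             dev);
      return -1;
  }

  int main(void) {
      for (int dev = 0x140; dev < 0x144; dev++)  /* concurrent devices */
          robust_io(dev);
      return 0;
  }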

repost from over in another FACEBOOK group

The Birth OF SQL
https://www.youtube.com/watch?v=z8L202FlmD4&si=FHDLe1v_QZNUHZwM

.. when I transferred to SJR in the 70s, they were doing the original
SQL/relational, "System/R", on a vm370 370/145 there ... worked with
Jim Gray and Vera Watson. Some amount of conflict with STL and the
mainstream DBMS "IMS" ... the company was then working on the next
great DBMS, "EAGLE" ... and we were able to do tech transfer (under
the "radar") to Endicott for SQL/DS. Then when "EAGLE" implodes, there
is a request for how fast "System/R" could be ported from VM/370 to
MVS ... which eventually ships as DB2, originally for decision-support
*only*.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

AI-controlled F-16

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI-controlled F-16
Date: 20 Apr, 2024
Blog: Facebook

Following AESA radar first flight on F-16, Aselsan eyes 5th-gen
https://breakingdefense.com/2024/03/following-aesa-radar-first-flight-on-f-16-aselsan-eyes-5th-gen-aircraft-integration/
US Air Force Secretary to fly in AI-piloted F16 to demonstrate safety
https://interestingengineering.com/military/usaf-to-fly-ai-controlled-f16
US Air Force Secretary to fly in AI-controlled F-16
https://www.theregister.com/2024/04/10/usaf_ai_f16_tests/
US Air Force says AI-controlled F-16 has fought humans
https://www.theregister.com/2024/04/18/darpa_f16_flight/

I was introduced to John Boyd in the early 80s and would sponsor his
briefings. He was largely responsible for LWF ... he would say he used
his E-M theory on the original F15 design (supposedly started out as
F-111 follow-on with swing wing), showing that the weight of the pivot
more than offset the advantage of swing wing.
https://en.wikipedia.org/wiki/Lightweight_Fighter_program
and then YF16 and YF17
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon#Lightweight_Fighter_program

In the late 1960s, Boyd gathered a group of like-minded innovators who
became known as the Fighter Mafia, and in 1969, they secured
Department of Defense funding for General Dynamics and Northrop to
study design concepts based on the theory.[13][14]

... snip ...

YF16 with relaxed stability, requiring "fly-by-wire" fast enough for
the flight control surfaces
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon#Relaxed_stability_and_fly-by-wire
https://en.wikipedia.org/wiki/Relaxed_stability
https://fightson.net/150/general-dynamics-f-16-fighting-falcon/

The F-16 is the first production fighter aircraft intentionally
designed to be slightly aerodynamically unstable, also known as
"relaxed static stability" (RSS), to improve manoeuvrability. Most
aircraft are designed with positive static stability, which induces
aircraft to return to straight and level flight attitude if the pilot
releases the controls; this reduces manoeuvrability as the inherent
stability has to be overcome. Aircraft with negative stability are
designed to deviate from controlled flight and thus be more
maneuverable. At supersonic speeds the F-16 gains stability
(eventually positive) due to aerodynamic changes.

... snip ...
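
The mechanism can be seen in a toy discrete-time loop (gains and
growth rate invented, nothing like the actual F-16 control law): left
alone, the pitch deviation grows 15% per step; with fast feedback
applied every step, it decays back toward level flight.

  #include <stdio.h>

  int main(void) {
      double x = 1.0, prev = 1.0;  /* pitch deviation, arbitrary units */
      double kp = 0.9, kd = 0.3;   /* proportional/derivative gains */
      for (int t = 0; t < 8; t++) {
          double u = -kp * x - kd * (x - prev); /* fly-by-wire correction */
          double next = 1.15 * x + u;  /* unstable airframe + control */
          prev = x;
          x = next;
          printf("step %d: deviation % .4f\n", t + 1, x);
      }
      return 0;
  }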

misc. other
http://www.aviation-history.com/airmen/boyd.htm
https://www.nytimes.com/2003/03/09/books/40-second-man.html
https://www.nytimes.com/1997/03/13/us/col-john-boyd-is-dead-at-70-advanced-air-combat-tactics.html
https://www.usni.org/magazines/proceedings/1997/july/genghis-john

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

Around 2010, there were online social media claims that the F-35 was
stealth and would replace F-15s, F-16s, F-18s, EA-18s, and A-10s.
Later in the decade, I found some analysis that showed it was less
stealthy than claimed and saw the claims changed to "low observable".
https://www.ausairpower.net/APA-2009-01.html
http://www.ausairpower.net/jsf.html
http://www.ausairpower.net/APA-JSF-Analysis.html

Then I found an online 2011 radar tutorial that made claims about the
processing power needed for real-time recognition of low-observable
F-35 radar signatures (more than was available at the time ... however
that fall, articles appeared about self-driving cars claiming the
processing power being used was 100 times the 2011 claims needed for
real-time F-35 radar signatures). Then within a year, articles
appeared announcing that new radar jamming pods were being delivered
for EA-18s to handle frequencies that could be used to target F-35s.

Posts mentioning F-35 "stealth" and 2011 radar tutorial
https://www.garlic.com/~lynn/2022f.html#9 China VSLI Foundry
https://www.garlic.com/~lynn/2022e.html#101 The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them
https://www.garlic.com/~lynn/2019e.html#53 Stealthy no more? A German radar vendor says it tracked the F-35 jet in 2018 -- from a pony farm
https://www.garlic.com/~lynn/2019d.html#104 F-35
https://www.garlic.com/~lynn/2018f.html#83 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018b.html#86 Lawmakers to Military: Don't Buy Another 'Money Pit' Like F-35
https://www.garlic.com/~lynn/2017i.html#78 F-35 Multi-Role

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing and the Dark Age of American Manufacturing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing and the Dark Age of American Manufacturing
Date: 21 Apr, 2024
Blog: Facebook

Boeing and the Dark Age of American Manufacturing. Somewhere along the
line, the plane maker lost interest in making its own planes. Can it
rediscover its engineering soul?
https://www.theatlantic.com/ideas/archive/2024/04/boeing-corporate-america-manufacturing/678137/

I took a two-credit-hr intro to fortran/computers and at the end of
the semester was hired to rewrite 1401 MPIO in 360 assembler for the
360/30 ... the univ. was getting a 360/67 replacing the 709/1401; a
360/30 temporarily replaced the 1401 (getting the 360/30 for 360
experience) pending delivery of the 360/67. The 360/67 arrives within
a year of my taking the intro class and I'm hired fulltime responsible
for os/360.

Then, before I graduate, I'm hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services ... I think the Renton datacenter was possibly the largest in
the world, with 360/65s arriving faster than they could be installed,
boxes constantly staged in hallways around the machine room. Lots of
politics between the Renton director and the CFO, who only had a
360/30 up at Boeing field for payroll, although they enlarge the room
for a 360/67 for me to play with when I'm not doing other stuff. 747#3
was flying the skies of Seattle getting FAA flt certification. There
was also a disaster plan to replicate Renton up at the new 747 plant
in Everett (Mt. Rainier heats up and the resulting mud slide takes out
Renton). When I graduate, I join the IBM science center instead of
staying with the Boeing CFO.

IBM science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

recent Boyd post
https://www.garlic.com/~lynn/2024c.html#8 AI-controlled F-16

Boyd told a story about being vocal that the electronics across the
trail wouldn't work ... he then is put in command of "spook base"
(about the same time I'm at Boeing).
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

Boyd's biography has "spook base" as a $2.5B windfall for IBM (ten
times Renton).

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

Did Stock Buybacks Knock the Bolts Out of Boeing?
https://lesleopold.substack.com/p/did-stock-buybacks-knock-the-bolts

Since 2013, the Boeing Corporation initiated seven annual stock
buybacks. Much of Boeing's stock is owned by large investment firms
which demand the company buy back its shares. When Boeing makes
repurchases, the price of its stock is jacked up, which is a quick and
easy way to move money into the investment firms' purse. Boeing's
management also enjoys the boost in price, since nearly all of their
executive compensation comes from stock incentives. When the stock
goes up via repurchases, they get richer, even though Boeing isn't
making any more money.

... snip ...

2016, one of "The Boeing Century" articles was about how the merger
with MD had nearly taken down Boeing and may yet still (the infusion
of military-industrial complex culture into a commercial operation)

The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout

Unlike Boeing, McDonnell Douglas was run by financiers rather than
engineers. And though Boeing was the buyer, McDonnell Douglas
executives somehow took power in what analysts started calling a
"reverse takeover." The joke in Seattle was, "McDonnell Douglas bought
Boeing with Boeing's money."

... snip ...

Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution

Sorscher had spent the early aughts campaigning to preserve the
company's estimable engineering legacy. He had mountains of evidence
to support his position, mostly acquired via Boeing's 1997 acquisition
of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft
plant in Long Beach and a CEO who liked to use what he called the
"Hollywood model" for dealing with engineers: Hire them for a few
months when project deadlines are nigh, fire them when you need to
make numbers. In 2000, Boeing's engineers staged a 40-day strike over
the McDonnell deal's fallout; while they won major material
concessions from management, they lost the culture war. They also
inherited a notoriously dysfunctional product line from the
corner-cutting market gurus at McDonnell.

... snip ...

Boeing's travails show what's wrong with modern
capitalism. Deregulation means a company once run by engineers is now
in the thrall of financiers and its stock remains high even as its
planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation

stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buybacks
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

Recent posts mentioning Boeing CFO, Boeing Computer Services, Renton
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying

some posts mentioning M/D financiers taking over Boeing
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021e.html#87 Congress demands records from Boeing to investigate lapses in production quality
https://www.garlic.com/~lynn/2021b.html#70 Boeing CEO Said Board Moved Quickly on MAX Safety; New Details Suggest Otherwise
https://www.garlic.com/~lynn/2021b.html#40 IBM & Boeing run by Financiers
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT:  Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019e.html#39 Crash Course
https://www.garlic.com/~lynn/2019e.html#33 Boeing's travails show what's wrong with modern capitalism
https://www.garlic.com/~lynn/2019d.html#39 The Roots of Boeing's 737 Max Crisis: A Regulator Relaxes Its Oversight
https://www.garlic.com/~lynn/2019d.html#20 The Coming Boeing Bailout?

--
virtualization experience starting Jan1968, online at home since Mar1970

AI-controlled F-16

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI-controlled F-16
Date: 21 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#8 AI-controlled F-16
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing

The USAF Pairs Piloted Jets With AI Drones. Has AI spawned the
ultimate "loyal wingman"--or just the next smart weapon?
https://spectrum.ieee.org/military-drones-us-air-force

2021 post/article mentioning loyal wingman/Valkyrie
https://www.garlic.com/~lynn/2021j.html#67 A Mini F-35?: Don't Go Crazy Over the Air Force's Stealth XQ-58A Valkyrie
A Mini F-35?: Don't Go Crazy Over the Air Force's Stealth XQ-58A
Valkyrie
https://nationalinterest.org/blog/buzz/mini-f-35-dont-go-crazy-over-air-forces-stealth-xq-58a-valkyrie-46527

While the Air Force refused to disclose specifics of the XQ-58A, the
drone is billed as having long range and a "high subsonic" speed. It
is designed to be "runway independent," which suggests it will be
flown from rough airstrips and forward bases. Still more clues can be
found in a $40.8 million Air Force contract awarded to Kratos in 2016
under the Low-Cost Attritable Strike Unmanned Aerial System
Demonstration program. That contract called for a drone with a top
speed of Mach 0.9 (691 miles per hour), a 1,500-mile combat radius
carrying a 500-pound payload, the capability to carry two GBU-39 small
diameter bombs, and costing $2 million apiece when in mass production
(an F-35 costs around $100 million).

... snip ...

... at one point, the F-35 price was so unreasonable they started
quoting the plane w/o engine and a separate price for the engine.

I was introduced to John Boyd in the early 80s and would sponsor his
briefings. One of Boyd's stories was being asked to review the USAF's
newest air-to-air missile before Vietnam. They showed him a film where
the missile hit flares on a drone every time. He asked them to rewind
the film and then, just before the missile hits, had them stop the
film and asked them what kind of guidance. They eventually say
heat-seeking; he then asks what kind of heat-seeking and gets them to
eventually say "pin-point". He then asks them where the hottest part
of a jet plane is. They answer the engine ... he says wrong, it is the
plume some 30yds behind the plane ... aka the missile will be lucky to
hit 10% of the time (they gather up all their material and
leave). Roll forward to Vietnam and Boyd is proved correct. At some
point the USAF commanding general in Vietnam has all the fighters
grounded until the USAF missiles are replaced with Navy Sidewinders
(which have better than twice the hit rate). The general lasts 3
months before he is called on the carpet back in the Pentagon for
violating a cardinal (USAF) Pentagon rule: cutting the (USAF) budget
(by not using USAF missiles) and, what was much worse, increasing the
Navy budget.

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Multiprocessor

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Multiprocessor
Date: 21 Apr, 2024
Blog: Facebook

Charlie had invented compare&swap when doing CP67 multiprocessor
fine-grain locking support at the science center. When we tried to get
the 370 architecture owners to include compare&swap for 370, they
said that the POK favorite-son operating system owners (MVT, then
SVS&MVS) said the (360) "test&set" instruction was more than
sufficient; if compare&swap was to be justified, we had to come up
with justifications that weren't multiprocessor specific; thus were
born the examples for application multithreading/multiprogramming use
(like DBMS).
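
Those justifications are the now-familiar compare-and-swap retry loop;
a minimal sketch using C11 atomics standing in for the 370 CS
instruction (an illustration, not the original example material):

  #include <stdio.h>
  #include <stdatomic.h>
  #include <pthread.h>

  static _Atomic long balance = 1000;   /* shared state, no lock */

  static void *post_credit(void *arg) {
      long old, upd;
      do {                     /* classic compare-and-swap retry loop */
          old = atomic_load(&balance);
          upd = old + (long)(size_t)arg;
      } while (!atomic_compare_exchange_weak(&balance, &old, upd));
      return NULL;
  }

  int main(void) {
      pthread_t t[4];
      for (long i = 0; i < 4; i++)
          pthread_create(&t[i], NULL, post_credit,
                         (void *)(size_t)(i + 1));
      for (int i = 0; i < 4; i++)
          pthread_join(t[i], NULL);
      printf("balance %ld\n", atomic_load(&balance)); /* 1010 */
      return 0;
  }

The update succeeds only if no other thread changed the value in
between; on interference it simply recomputes and retries ... useful
on a single processor (multiprogramming) as well as on
multiprocessors, which was exactly the point of the exercise.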

SMP, multiprocessor, tightly-coupled, and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

A decade ago, I was asked to track down the decision to add virtual
memory to all 370s; basically MVT storage management was so bad that
regions had to be specified four times larger than used, so a 1mbyte
370/165 typically ran only four concurrent regions ... insufficient to
keep the system busy and justify its cost. Going to a 16mbyte virtual
address space ("SVS", similar to running MVT in a CP67 16mbyte virtual
machine) could increase concurrently running regions by a factor of
four times, with little or no paging. The 370 virtual memory decision
also resulted in doing VM370, and in the morph of CP67->VM370, they
simplified and/or dropped lots of features (including multiprocessor
support).

archived posts with pieces of email exchange
https://www.garlic.com/~lynn/2011d.html#73

One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (the online sales&marketing
support US HONE systems were a long-time customer from CP67 days,
which evolves into world-wide VM370). As internal datacenters were
migrating to VM370, in 1974 I started moving a lot of the missing CP67
features to a release2-based VM370 production "CSC/VM" ... which
included the kernel re-organization for multiprocessing ... but not
the actual multiprocessor support.

The US HONE datacenters were consolidated in silicon valley with the
largest loosely-coupled shared DASD configuration including
load-balancing and fall-over support. Then I added multiprocessor
support to Release3-based VM370 "CSC/VM", initially for US HONE so
they could add a second processor for eight tightly-coupled systems in
a loosely-coupled configuration. I did some tricks with highly
optimized multiprocessor pathlengths coupled with some processor cache
affinity tricks (improving cache-hit and processor throughput
offsetting multiprocessor pathlengths) showing twice the throughput of
a single processor (this was at the time when MVS documentation was
giving MVS multiprocessor throughput as 1.2-1.5 times the throughput of
a single processor).

CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#csc/vm
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

trivia: when facebook 1st moves into silicon valley, it is into a new
bldg built next door to the former US HONE datacenter.

other trivia: around 2010, I made some joke about "from the annals of
releasing no software before its time" when z/VM finally released
similar loosely-coupled support.

more trivia: after "future system" imploded (it was going to replace
all 370s, and the lack of new 370s during the period is credited with
giving the 370 clone makers their market foothold)
http://www.jfsowa.com/computer/memo125.htm
I got roped into helping with a 16-processor tightly-coupled,
multiprocessor 370 ... and we con the 3033 processor engineers into
working on it in their spare time (a lot more interesting than remapping
168 logic to 20% faster chips). Everybody thought it was great until
somebody tells the head of POK that it could be decades before the POK
favorite son operating system (MVS) had effective 16-processor support
(with 2-processor only 1.2-1.5 times throughput of single processor
and if not careful, multiprocessor overhead growing non-linear with
increase in processors). The head of POK then directs that some of us
never visit POK again and that the 3033 processor engineers keep
concentrated on 3033 (... and POK doesn't ship a 16-processor system
until after the turn of the century).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Multiprocessor

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Multiprocessor
Date: 22 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#11 370 Multiprocessor

3033 started out as 168 logic remapped to 20% faster chips (and
somewhat more circuits/chip) ... the 303x channel director was a 158
engine with just the integrated channel microcode (for six channels)
and w/o the 370 microcode ... to get the full 16 channels would
require three channel director boxes.

A 3031 was two 158 engines... one with only the 370 microcode and a
2nd with just the integrated channel microcode.

A 3032 was 168 using the channel director box for external channels.

Trivia: the (original) 168 external channels were actually faster than
the 303x channel director box (i.e. 158 engine with just the
integrated channel microcode)

final(?) trivia: compare-and-swap was chosen because "CAS" were
Charlie's initials

360 had 2301&2303 "drums" ... 2305-1 & 2305-2 were fixed-head
disks. 2301 was similar to 2303 ... same capacity but read/write with
four heads in parallel ... 1/4 the number of tracks, each track 4
times larger, 4 times the transfer rate.

2305-1: 5.4mbytes, avg rotational delay 2.5msecs, 3mbyte/sec transfer.
Most were 2305-2: 11.2mbytes, avg rotational delay 5msecs,
1.5mbyte/sec transfer.

2305-1 had the same number of heads as 2305-2 but the heads were
paired, offset 180 degrees, reading/writing simultaneously,
transferring on a 2-byte channel. A record had only to rotate an
average of 1/4 revolution before coming under one of the (offset)
heads.
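
The arithmetic behind those two averages (a minimal sketch in C; the
10msec rotation time is my inference from the two quoted averages, not
a quoted spec):

#include <stdio.h>

int main(void)
{
    double rev_ms = 10.0;   /* assumed full-rotation time */

    /* 2305-2: one head per track, so a record is on average half
       a revolution away when the I/O starts */
    printf("2305-2 avg rotational delay: %.1fmsecs\n", rev_ms / 2);

    /* 2305-1: heads paired 180 degrees apart; whichever head of
       the pair reaches the record first transfers, so the average
       wait drops to a quarter revolution */
    printf("2305-1 avg rotational delay: %.1fmsecs\n", rev_ms / 4);
    return 0;
}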

URL was still there in 2023 ... but is now gone ("404") ... easiest to
just go to the wayback machine
https://web.archive.org/web/20230821125023/https://www.ibm.com/ibm/history/exhibits/storage/storage_2305.html

By 1980, there was no follow-on product. For internal datacenters, IBM
then contracted with a vendor for what they called the "1655",
electronic disks that would emulate a 2305 ... but with no rotational
delay. One of the issues was that while IBM had fixed-block disks, the
company favorite son batch operating system never supported anything
other than CKD DASD ... so for their use it had to simulate an
existing CKD 2305 running over 1.5mbyte/sec I/O channels. However for
other IBM systems
that supported FBA ... 1655s could be configured as fixed-block disk
running on 3mbyte/sec I/O channels ... similar to SSD ... but had
standard electronic memory that wasn't persistent w/o power.

posts mentioning DASD, CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

past posts mentioning 2301, 2305, and 1655
https://www.garlic.com/~lynn/2022e.html#41 Wall Street's Plot to Seize the White House
https://www.garlic.com/~lynn/2012c.html#1 Spontaneous conduction: The music man with no written plan
https://www.garlic.com/~lynn/2011c.html#48 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010q.html#67 ibm 2321 (data cell)
https://www.garlic.com/~lynn/2008s.html#39 The Internet's 100 Oldest Dot-Com Domains
https://www.garlic.com/~lynn/2008n.html#93 How did http get a port number as low as 80?
https://www.garlic.com/~lynn/2004c.html#5 PSW Sampling
https://www.garlic.com/~lynn/2003p.html#46 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2003n.html#52 Call-gate-like mechanism
https://www.garlic.com/~lynn/2003n.html#50 Call-gate-like mechanism
https://www.garlic.com/~lynn/2003m.html#35 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003j.html#58 atomic memory-operation question
https://www.garlic.com/~lynn/2003j.html#6 A Dark Day
https://www.garlic.com/~lynn/2003j.html#5 A Dark Day
https://www.garlic.com/~lynn/2003h.html#14 IBM system 370
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing and the Dark Age of American Manufacturing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing and the Dark Age of American Manufacturing
Date: 22 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing

Boeing's problems were as bad as you thought. Experts and
whistleblowers testified before Congress today. The upshot? "It was
all about money."
https://www.vox.com/money/2024/4/17/24133324/boeing-senate-hearings-whistleblower-sam-salehpour-congress

Boeing went under the magnifying glass at not one, but two Senate
hearings today examining allegations of deep-seated safety issues
plaguing the once-revered plane manufacturer. Witnesses, including two
whistleblowers, painted a disturbing picture of a company that cut
corners, ignored problems, and threatened employees who spoke up.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Bemer, ASCII, Brooks and Mythical Man Month

From: Lynn Wheeler <lynn@garlic.com>
Subject: Bemer, ASCII, Brooks and Mythical Man Month
Date: 24 Apr, 2024
Blog: Facebook

360s were supposed to be ASCII machines but the ASCII unit record gear
wasn't ready ... so they were (supposedly) going to temporarily use
the (old) BCD unit record gear with EBCDIC ... "the biggest computer
goof ever"
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

Unfortunately, the software for the 360 was constructed by thousands
of programmers, with great and unexpected difficulties, and with
considerable lack of controls. As a result, the nearly $300 million
worth of software (at first delivery!) was filled with coding that
depended upon the EBCDIC representation to work, and would not work
with any other! Dr. Frederick Brooks, one of the chief designers of
the IBM 360, informed me that IBM indeed made an estimate of how much
it would cost to provide a reworked set of software to run under
ASCII. The figure was $5 million, actually negligible compared to the
base cost. However, IBM (present-day note: Read "Learson") made the
decision not to take that action, and from this time the worldwide
position of IBM hardened to "any code as long as it is ours".

... snip ...

https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

above attributes it to Learson ... however, it was also Learson who
was trying to block the bureaucrats, careerists (and MBAs) from
destroying the Watson Legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
So by the early 90s, it was looking like it was nearly over; in 1992
IBM has one of the largest losses in the history of US corporations
and was being re-orged into the 13 "baby blues" in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of Amex, who
(mostly) reverses the breakup (although it wasn't long before the disk
division is gone).

posts mentioning ASCII & Mythical Man Month
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2014g.html#99 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT

--
virtualization experience starting Jan1968, online at home since Mar1970

360&370 Unix (and other history)

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360&370 Unix (and other history)
Date: 24 Apr, 2024
Blog: Facebook

Trivia: Story was that both Amdahl & IBM field support claimed they
wouldn't support customer machines w/o industrial strength EREP
... adding it to UNIX would have been several times the effort of the
direct UNIX port to 370 itself. SSUP was a stripped-down TSS/360 with
just hardware and device support ... and EREP. Amdahl UTS and other
IBM UNIX 370 efforts ran in VM/370 (leveraging its EREP).

possibly more than you asked for

Took a two credit-hr intro to fortran/computers and at the end of the
semester was hired to rewrite 1401 MPIO in assembler for the
360/30. The univ was replacing its 709/1401 with a 360/67 for tss/360
... temporarily the 1401 was replaced with a 360/30 (pending
availability of the 360/67; the 360/30 was for starting to get
familiar with 360, and also had microcode 1401 emulation). The univ
shut down the datacenter on weekends and I would have
it dedicated, although 48hrs w/o sleep made Monday classes hard. They
gave me a bunch of hardware and software manuals and I got to design
and implement my own monitor, device drivers, interrupt handlers,
storage management, error recovery, etc. and within a few weeks had a
2000 card assembler program.

Then within a year of the intro class, the 360/67 comes in and I'm
hired fulltime, responsible for OS/360 (tss/360 never really came to
production, so it ran as a 360/65; I continue to have my 48hr
dedicated datacenter time on weekends). Student fortran had run under
a second on
709, initially on os/360 ran over a minute. I install HASP and it cuts
the time in half. I then start redoing OS/360 STAGE2 SYSGEN, carefully
placing datasets and PDS members to optimize arm seek and multi-track
search, cutting another 2/3rds to 12.9secs. Never got better than 709
until I install Univ. of Waterloo WATFOR.

CSC had come out to install CP67/CMS (precursor to vm370, 3rd
installation after CSC itself and MIT Lincoln Labs) and I mostly
played with it in my weekend dedicated time. Early on, the IBM TSS/360
SE was around for a time and we created a synthetic benchmark of
fortran edit, compile, & execute. Unmodified CP67/CMS ran 35 simulated
users
with better response and throughput than TSS/360 did with four
simulated users.

Initially for CP67, I mostly worked on rewriting pathlengths for
running os/360 in a virtual machine. The OS/360 test ran 322 secs on
the "bare machine", initially 856secs in a virtual machine (CP67 CPU
534secs),
after a few months, got CP67 CPU down to 113secs (from 534secs). I
then redid I/O for paging (chained requests for optimized transfer per
revolution) and for all disk optimized ordered arm seek; new optimized
page replacement algorithm, and dynamic adaptive resource management
and scheduling.
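
A minimal sketch of the ordered arm seek idea (my illustration in C,
not the CP67 code): keep the disk request queue sorted by cylinder so
the arm services requests in one sweep across the pack instead of FIFO
thrashing back and forth:

/* one disk I/O request, queued by target cylinder */
struct req {
    int cyl;
    struct req *next;
};

/* insert so the queue stays in ascending cylinder order; the arm
   then services the queue in a single sweep */
void enqueue_ordered(struct req **head, struct req *r)
{
    struct req **pp = head;
    while (*pp && (*pp)->cyl <= r->cyl)
        pp = &(*pp)->next;
    r->next = *pp;
    *pp = r;
}

(A full elevator algorithm also tracks the sweep direction and
reverses at the last request; this single-direction version just shows
the ordering.)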

CP67 came with 2741&1052 terminal support with automagic terminal type
recognition (SAD CCW to switch the port's terminal type scanner). The
univ. had some number of TTY/ASCII terminals and I integrated ASCII
terminal support with the automagic terminal type recognition (trivia:
ASCII terminal support had come in a "HEATHKIT" box for install in the
IBM telecommunication controller). I then wanted a single dialup
telephone number ("hunt group") for all terminals. Didn't quite work;
while the terminal type scanner could be changed dynamically, IBM had
taken a short cut and hardwired the port line speed.

This kicks off a univ project to do a clone controller: build a
channel interface board for an Interdata/3 programmed to simulate the
IBM telecommunication controller, with the addition that it could do
dynamic line speed. Later it was upgraded to an Interdata/4 for the
channel interface with a cluster of Interdata/3s for port
interfaces. Interdata (and later Perkin-Elmer) were selling it as a
clone controller and four of us are written up for (some part of) the
clone controller business. Around the turn of the century I ran into a
descendant at a large datacenter that was handling the majority of
point-of-sale dialup credit card machines east of the Mississippi.

some more CSC & CP67/CMS history
http://www.leeandmelindavarian.com/Melinda#VMHist
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

plug compatible 360 controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Then before I graduate I'm hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services ... I think the Renton datacenter was possibly the largest in
the world, with 360/65s arriving faster than they could be installed,
boxes constantly staged in hallways around the machine room. Lots of
politics between the Renton director and the CFO, who only had a
360/30 up at Boeing field for payroll, although they enlarge the room
for a 360/67 for me to play with when I'm not doing other
stuff. 747#3 was flying the skies of Seattle getting FAA flt
certification. There was also a disaster plan to replicate Renton up
at the new 747 plant in Everett (Mt. Rainier heats up and the
resulting mud slide takes out
Renton). When I graduate, I join IBM science center instead of staying
with Boeing CFO.

Charlie had invented the compare&swap instruction (mnemonic chosen
because "CAS" were his initials) when he was doing CP67 fine-grain
multiprocessor locking at the science center. When we tried to get the
370 architecture owners to include compare&swap for 370, they said
that the POK favorite son operating system owners (MVT, then SVS&MVS)
said the (360) "test&set" instruction was more than sufficient; if
compare&swap was to be justified, we had to come up with
justifications that weren't multiprocessor specific; thus were born
the examples for
application multithreading/multiprogramming use (like DBMS).

A decade ago, I was asked to track down the decision to add virtual
memory to all 370s; basically MVT storage management was so bad that
regions had to be specified four times larger than used, so a 1mbyte
370/165 typically ran only four concurrent regions ... insufficient to
keep the system busy and justified. Going to a 16mbyte virtual address
space ("SVS", similar to running MVT in a CP67 16mbyte virtual
machine) could increase concurrently running regions by a factor of
four, with little or no paging. The 370 virtual memory decision also
resulted in doing VM370, and in the morph of CP67->VM370, they
simplified and/or dropped lots of features (including multiprocessing
support).

One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (the online sales&marketing support
US HONE systems were a long-time customer from CP67 days, eventually
evolving into world-wide VM370-based HONE). As internal datacenters were
migrating to VM370, in 1974 I started moving a lot of the CP67 missing
features to a release2-based VM370 production "CSC/VM" ... which
included kernel re-organization for multiprocessing ... but not the
actual multiprocessor support. The US HONE datacenters were
consolidated in silicon valley with the largest loosely-coupled shared
DASD configuration including load-balancing and fall-over
support.

Then I added multiprocessor support to Release3-based VM370 "CSC/VM",
initially for US HONE so they could add a second processor for eight
tightly-coupled systems in a loosely-coupled, shared-DASD
configuration. I did some tricks with highly optimized multiprocessor
pathlengths coupled with some processor cache affinity tricks
(improving cache-hit and processor throughput offsetting
multiprocessor pathlengths) showing twice the throughput of a single
processor (this was at the time when MVS documentation was giving MVS
multiprocessor throughput as 1.2-1.5 times the throughput of a single
processor).

trivia: when facebook 1st moves into silicon valley, it is into a new
bldg built next door to the former US HONE datacenter.

other trivia: around 2010, I made some joke about "from the annals of
releasing no software before its time" when z/VM finally released
similar loosely-coupled support.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

I had joined IBM Science Center not long before "Future System"
started (early 70s; completely different from 370 and was going to
completely replace it; the lack of new 370s during the period is
credited with giving the 370 clone makers their market foothold). I
continued to work on 360&370 all during the Future System period
... even periodically ridiculing FS (like speculating that they didn't
really know what they were doing, not exactly a career enhancing
activity). more background:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

when FS finally implodes, there is a mad rush to get stuff back into
the 370 product pipelines, including kicking off the quick&dirty
3033&3081 efforts in parallel.
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive"

... snip ...

In the wake of the FS implosion, I was also roped into an effort to do
a 16-processor, tightly-coupled, multiprocessor 370 and we con the
3033 processor engineers into working on it in their spare time (a lot
more interesting than remapping 168 logic to 20% faster chips);
everybody thought it was great until somebody tells the head of POK
that it could be decades before the POK favorite son operating system
(MVS) has effective 16-processor support (goes along with
documentation that 2-processor MVS only had 1.2-1.5 times the
throughput of a single processor). Then some of us were invited to
never visit POK again (and the 3033 processor engineers directed to
concentrate on 3033 and no more distractions). trivia: POK doesn't
ship a 16-processor machine until after the turn of the century.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

CTSS, Multics, CP67/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CTSS, Multics, CP67/CMS
Date: 24 Apr, 2024
Blog: Facebook

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

trivia: I was an undergraduate and the univ hired me fulltime,
responsible for OS/360 (360/67 originally for tss/360, but was being
run as a 360/65). Then CSC came out to install CP67/CMS (3rd
installation after CSC itself, and MIT Lincoln Labs). I mostly got to
play with it during my 48hr weekend dedicated time (univ. shut down
the datacenter on weekends). CSC had 1052&2741 support, but the
univ. had some number of TTY/ASCII terminals, so I added TTY/ASCII
support ... and CSC picked it up and distributed it with the standard
CP67 (as well as lots of my other stuff). I had done a hack with
one-byte values for TTY line input/output lengths. Tale of the MIT
Urban Lab running CP/67 (in the tech sq bldg across the quad from
545): somebody down at Harvard got an ascii device with 1200(?) char
line length ... they modified the field for the max. length ... but
didn't adjust my one-byte hack ... crashing the system 27 times in a
single day.
https://www.multicians.org/thvv/360-67.html

But on that day, a user at Harvard School of Public Health had
connected a plotter to a TTY line and was sending graphics to it, and
every time he did, the whole system crashed. (It is a tribute to the
CP/CMS recovery system that we could get 27 crashes in in a single
day; recovery was fast and automatic, on the order of 4-5
minutes. Multics was also crashing quite often at that time, but each
crash took an hour to recover because we salvaged the entire file
system. This unfavorable comparison was one reason that the Multics
team began development of the New Storage System.)

... snip ...
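
A minimal sketch of that failure mode (my illustration, with made-up
names): line lengths carried in one-byte fields are fine up to 255
chars, but silently wrong once a 1200-char device's maximum is patched
in without widening the field:

#include <stdio.h>

int main(void)
{
    unsigned char max_len;      /* the "one-byte hack" length field */
    int plotter_max = 1200;     /* the new device's line length */

    max_len = plotter_max;      /* silently truncates: 1200 % 256 = 176 */
    printf("stored max length: %d\n", max_len);

    /* downstream code then sizes buffers/copies with the truncated
       value while the device sends 1200 chars ... */
    return 0;
}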

I had done an automated benchmarking system where I could specify
different configurations, types of workloads, number of users, etc
... and then reboot between benchmarks. When I 1st started the
migration from CP67 to VM370, the 1st thing I did was the automated
benchmarking ... but found that VM370 would crash several times before
completing the standard set of benchmarks. As a result, the next thing
I had to migrate to VM370 was the CP67 kernel serialization mechanism,
so VM370 could finish the standard set of benchmarks.

There was some friendly rivalry between 4th and 5th flrs ... one area
was the federal gov. ... Multics had an installation at USAFDS in the
Pentagon
https://www.multicians.org/site-afdsc.html

In the 2nd half of the 70s, I had transferred out to IBM Research in
San Jose, and in spring 1979 got a call that a couple of people from
USAFDS wanted to come out to talk about getting 20 VM/4341s
... however by the time
they got around to coming out the following fall, it had increased to
210 VM/4341s.

a reference to "borrowing" mainframe EREP rather than adding it to UNIX:
https://www.garlic.com/~lynn/2024c.html#4 Bemer, ASCII, Brooks and Mythical Man Month
https://www.garlic.com/~lynn/2024c.html#5 360&370 Unix (and other history)

above also refs adding CP67 multiprocessing to VM370 ... but just
before I did it, somehow AT&T Longlines was able to get a copy of my
CSC/VM with full source ... and over the following years, migrated it
to the newest processors and propagated it to multiple AT&T
datacenters. Roll-forward to the new IBM 3081, which was originally
intended to be multiprocessor *only*; the IBM AT&T corporate marketing
rep tracks me down to help AT&T with this archaic CSC/VM system
(afraid that AT&T would migrate everything to the latest Amdahl
machines ... which had a faster single processor with almost the
throughput of the two-processor 3081).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 24 Apr, 2024
Blog: Facebook

IBM Millicode
https://www.researchgate.net/publication/224103049_Millicode_in_an_IBM_zSeries_processor
https://public.dhe.ibm.com/eserver/zseries/zos/racf/pdf/ny_metro_naspa_2012_10_what_and_why_of_system_z_millicode.pdf

IBM high-end machines use horizontal microcode, which is really
difficult and time-consuming to program. After the Future System
implosion
http://www.jfsowa.com/computer/memo125.htm

Endicott cons me into helping with the ECPS microcode assist for the
138/148 (low&mid range 370s) that used vertical microcode ... basically
microprocessor machine language. Then in the early 80s, I got permission
to give ECPS presentations at user group meetings, including monthly
BAYBUNCH hosted by Stanford SLAC. Afterwards the Amdahl people would
grill me for more information. They said that they had developed
"MACROCODE" (370-like instructions running in microcode mode for their
high-end horizontal microcode machine) during IBM's 3033 period, to
quickly respond to the trivial new (horizontal) microcode functions
IBM kept shipping that were required for MVS to run. At the time they
were in the process of implementing "HYPERVISOR" (a subset of virtual
machine functions running w/o VM370). IBM wasn't able to respond with
LPAR&PR/SM until nearly the end of the decade with 3090.

Similar, but different: late last century, the i86 vendors went to a
hardware layer that translated i86 instructions into RISC micro-ops
for actual execution ... largely negating the throughput advantage of
RISC processors (MIPS below are from the industry standard benchmark
program that counts the number of iterations compared to a 1MIP
reference platform).


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     IBM z900 mainframe processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC 440)

2003 max. configured IBM mainframe z990, 32 processor aggregate 9BIPS
     (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configured IBM mainframe z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 XEON server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)
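
The "MIPS" above are throughput numbers from that iteration-counting
methodology, not literal instruction counts ... a minimal sketch (my
illustration; the reference rate is a made-up calibration value):

#include <stdio.h>
#include <time.h>

static volatile long sink;   /* keeps the kernel from being optimized away */

int main(void)
{
    /* iterations/sec as once measured on the 1MIP reference
       platform -- illustrative, not a real calibration */
    const double reference_rate = 1.0e6;

    long iters = 0;
    clock_t end = clock() + CLOCKS_PER_SEC;    /* run ~1 cpu-second */
    while (clock() < end) {
        for (int i = 0; i < 1000; i++)
            sink++;                            /* the fixed benchmark kernel */
        iters += 1000;
    }

    /* "MIPS" = iteration rate relative to the reference platform */
    printf("%.1f MIPS\n", iters / reference_rate);
    return 0;
}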

360/370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

CP40/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP40/CMS
Date: 25 Apr, 2024
Blog: Facebook

IBM CP-40
https://en.m.wikipedia.org/wiki/IBM_CP-40

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

paper about CP40/CMS ... some amount taken from CTSS
https://www.garlic.com/~lynn/cp40seas1982.txt
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

the science center wanted a 360/50 to modify with virtual memory, but
all the spare 360/50s were going to the FAA ATC project ... and so
they had to settle for a 360/40. When the 360/67 (standard with
virtual memory) becomes available, CP40 morphs into CP67.

some more details (the univ. I was at becomes the 3rd installation,
after CSC itself and MIT Lincoln Labs)
https://www.garlic.com/~lynn/2024c.html#16 CTSS, Multics, CP67/CMS
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#39 Tonight's tradeoff
https://www.garlic.com/~lynn/2024.html#49 Card Sequence Numbers
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally

last product we did at IBM was HA/CMP ... it originally was HA/6000,
for the NYTimes to move their newspaper system (ATEX) off VAXCluster
to RS/6000; I rename it HA/CMP when we start doing
technical/scientific cluster scale-up with national labs and
commercial cluster scale-up with RDBMS vendors (Oracle, Sybase,
Informix, Ingres) that had both VAXCluster and Unix in the same source
base. I did an enhanced distributed lock manager with VAXCluster API
semantics to simplify their HA/CMP support. Disclaimer: When I
transferred to IBM Research, I
got roped into doing some work with Jim Gray and Vera Watson on the
original SQL/relational implementation ("System/R") and then helping
with tech transfer to Endicott for SQL/DS ... "under the radar", while
the corporation was preoccupied with the next great DBMS,
"EAGLE". Then when "EAGLE" implodes, there was request for how fast
could System/R be ported to MVS ... which eventually ships as DB2,
originally for decision-support only.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

Part of HA/CMP was studying how things fail ... and at one point I was
brought in to the latest ATC modernization effort. Turns out it
involved fault-tolerant triple-redundant hardware with guidelines that
since all failures would be masked ... the software didn't have to
worry about such things. However, it turns out that there were some
"business/operational rules" that could have failures ... and the
software effort had to be reset to handle non-hardware related
failures. We then got into the habit of dropping in on staff person in
the office of IBM FSD President.

First part of Jan1992, had an Oracle meeting and IBM AWD/Hester told
the Oracle CEO that we would have 16-processor clusters by mid92 and
128-processor clusters by ye92 ... and during Jan1992 was keeping FSD
apprised of HA/CMP status and work with national labs. Apparently
during Jan, FSD told Kingston supercomputer project that FSD was going
with HA/CMP for gov. accounts. Then end of Jan, cluster scale-up was
transferred to Kingston for announce as IBM supercomputer (for
technical/scientific *only*) and we were told that we couldn't work on
anything with more than four processors ... we leave IBM a few months
later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

more trivia: never dealt with Fox while in IBM; FAA ATC, The Brawl in
IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514

Two mid air collisions 1956 and 1960 make this FAA procurement
special. The computer selected will be in the critical loop of making
sure that there are no more mid-air collisions. Many in IBM want to
not bid. A marketing manager with but 7 years in IBM and less than one
year as a manager is the proposal manager. IBM is in midstep in coming
up with the new line of computers - the 360. Chaos sucks into the fray
many executives- especially the next chairman, and also the IBM
president. A fire house in Poughkeepsie N Y is home to the technical
and marketing team for 60 very cold and long days. Finance and legal
get into the fray after that.

... snip ...

Executive Qualities
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794

After 20 years in IBM, 7 as a divisional Vice President, Joe Fox had
his standard management presentation -to IBM and CIA groups -
published in 1976 -entitled EXECUTIVE QUALITIES. It had 9 printings
and was translated into Spanish -and has been offered continuously for
sale as a used book on Amazon.com. It is now reprinted -verbatim- and
available from Createspace, Inc - for $15 per copy. The book presents
a total of 22 traits and qualities and their role in real life
situations- and their resolution- encountered during Mr. Fox's 20
years with IBM and with major computer customers, both government and
commercial. The presentation and the book followed a focus and use of
quotations to Identify and characterize the role of the traits and
qualities. Over 400 quotations enliven the text - and synthesize many
complex ideas.

... snip ...

... but after leaving IBM, had a project with Fox and his company that
also had some other former FSD FAA people.

other trivia: doing HA/CMP we started out reporting to an executive
who later went over to head up Somerset ... the single-chip RISC
design effort for AIM (apple, ibm, motorola); some amount of motorola
88k RISC features were incorporated into power/pc.

trivia: CPS (run under OS/360 ... similar to APL\360; CPS included a
microcode assist on the 360/50) was handled by the Boston Programming
Center, which was on the 3rd flr, below the Cambridge Scientific
Center on the 4th flr (and Multics on the 5th flr). With the decision
to do CP67->VM/370, some of the science center people went to the 3rd
flr, taking over the Boston Programming Center for the VM/370
development group. When the development group outgrew their half of
the 3rd flr (there was a gov. agency that the bldg register listed as
a law firm in the other half), they moved out to the empty SBC bldg at
Burlington mall (off 128; SBC had been spun off to another computer
company in a legal matter).

Note: after the Future System implosion there was a mad rush to get
stuff back into the 370 product pipelines, including kicking off the
quick and dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm

the head of POK also managed to convince corporate to kill the vm370
product, shut down the development group and transfer all the people
to POK for MVS/XA (presumably claiming that otherwise MVS/XA wouldn't
able to ship on time in the 80s). Eventually, Endicott managed to save
the VM/370 product mission (for low-end and mid-range), but had to
recreate a development group from scratch.

they weren't going to tell the people about the shutdown until the
very last minute, to minimize the number that might be able to escape
into the boston area ... however the information managed to leak and
several managed to escape (including to the infant DEC VMS effort;
the joke was that the head of POK was a major contributor to
VMS). They did a hunt for the source of the leak; fortunately for me,
nobody gave the source up.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 25 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode

In 1980 I was con'ed into doing channel-extender support for STL
(since renamed SVL), which was moving 300 people from the IMS DBMS
group to an offsite bldg with service back to the STL datacenter. They
had tried "remote 3270", but found the human factors
unacceptable. Channel-extender allowed
placing channel-attached 3270 controllers at the offsite bldg with no
perceptible difference in human factors between offsite and inside STL
(although some tweaks with channel-extender increased system
throughput by 10-15%, prompting suggestion that all their systems
should use channel-extender, aka they had spread 3270 controllers
across all the channels with DASD and "slow" 3270 controller channel
busy was interfering with DASD I/O, channel-extender boxes were much
faster and reduced channel busy for same amount of 3270 transfer).

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

Then some POK engineers playing with some serial stuff blocked the
release of the support to customers. Later in 1988, the IBM branch
office asks if I could help LLNL (national lab) get some serial stuff
they were playing with, standardized. It quickly becomes the
"fibre-channel" standard ("FCS", including some stuff I had done in
1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec. Then
the POK stuff (after more than a decade) finally gets released with
ES/9000 as ESCON (when it is already obsolete), 17mbytes/sec. Then
some POK engineers get
involved in FCS and define a heavy weight protocol that significantly
cuts the native throughput, which eventually ships as FICON (running
over FCS). The latest public benchmark I can find is z196 "Peak I/O"
getting 2M IOPS with 104 FICON. About the same time, an FCS was
announced for E5-2600 server blades claiming over a million IOPS (two
such FCS having higher throughput than the 104 FICON). Also IBM pubs
recommend limiting SAPs (system assist processors that actually do the
I/O) to 70% CPU ... which would be around 1.5M IOPS. Further
complicating things are CKD DASD, which haven't been made for decades,
needing to be simulated on industry standard fixed-block disks.

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

trivia: channel-attached 3272/3277 had .086sec hardware response
... this was in the days of studies showing improved productivity with
quarter second response, so to get interactive .25sec, system response
had to be no more than .164sec (several of my internal enhanced
systems were getting .11sec interactive system response). For the
3278, they moved lots of electronics back into the controller, so
protocol chatter drove hardware response to .3-.5sec (somewhat
dependent on the amount of data), making quarter second impossible. A
complaint to the 3278
product administrator got a response that 3278 wasn't for interactive
computing but "data entry" (aka electronic keypunch). Later IBM/PC
3277 emulation cards had 4-5 times the upload/download throughput of
3278 cards. Note MVS/TSO users never noticed since their system
response was rarely even 1sec (so any change from 3272/3277 to
3274/3278 wasn't noticed).

other trivia: When I transfer to San Jose Research, I get to wander
around (IBM and non-IBM) datacenters in silicon valley, including disk
engineering (bldg14) and disk product test (bldg15) across the
street. They were running prescheduled, around the clock, stand-alone
mainframe testing. They mentioned that they had recently tried MVS,
but it had 15min mean-time-between-failure (in that environment). I
offer to rewrite the I/O supervisor to make it bullet-proof and never
fail, enabling any amount of on-demand, concurrent testing, greatly
improving productivity (downside was they started blaming me for any
problems, and I had to spend increasing amount of time playing disk
engineer shooting hardware issues). The engineers were complaining
that bean-counting/accountants had forced the 3880 to have
inexpensive, slow microprocessor (compared to 3830, 3880 had special
hardware path for 3380 3mbyte/sec transfers, but everything else was
much slower, significantly increasing channel busy).

Roll forward to 3090, which had initially configured the number of
channels to achieve target throughput, assuming the 3880 was the same
as the 3830 but with the addition of 3mbyte/sec transfer. When they
found out how bad the 3880 channel busy really was, they realized they
would have to significantly increase the number of channels (to
achieve target throughput), which required an additional TCM (the 3090
group was semi-facetiously claiming they would bill the 3880 group for
the increase in 3090 manufacturing cost). Eventually marketing respun
the significant increase in the number of channels as the 3090 being a
wonderful I/O machine (rather than a countermeasure to the 3880
channel busy increase).

I wrote an (IBM internal) research report about the work for the disk
division and happened to mention the MVS 15min MTBF ... bringing down
the wrath of the MVS organization on my head.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 25 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode

Shortly after joining IBM ... I got roped into helping on a project
for multithreading the 370/195. The 195 had a 64-instruction pipeline
and supported out-of-order execution .... but didn't have speculative
execution or branch prediction, and so conditional branches drained
the pipeline ... so most codes ran the 195 at half
throughput. Multi-threading is mentioned in this webpage about the end
of ACS/360
https://people.computing.clemson.edu/~mark/acs_end.html

aka Amdahl had won the battle to make ACS 360 compatible ... but then
the folklore is that executives were worried that it would advance the
state-of-the-art too fast and IBM would lose control of the market
... and killed the project (Amdahl leaves IBM shortly after).

195 multithreading would simulate two-processor multiprocessing (two
instruction streams, two sets of registers, etc) ... the two
instruction streams, each running the processor at half throughput,
would (possibly) result in keeping the 195 fully busy ... modulo that
the MVT 65/MP support was at least as bad as the MVS two-processor
support (only 1.2-1.5 times the throughput of a single
processor). Then the decision was made to add virtual memory to all
370s (as countermeasure to the bad/poor MVT storage management) and it
was decided to stop all new work on 370/195 (considered too much
effort to add virtual memory to the 195).

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

archived post with pieces of email exchange about decision to add
virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73

... trivia: the original 3380 had 20 track spacings between each data
track; they then cut the spacings in half for double the tracks (&
capacity) and then cut the spacing again for triple the tracks (&
capacity). The father of 801/RISC wanted me to help him with a "wide
disk head" .... disks are formatted with 16 closely spaced data tracks
with a servo track between. A "wide" disk head would transmit 16 data
tracks in parallel, following servo tracks on each side. The problem
was that was a 50mbyte/sec transfer rate (16 tracks in parallel at
roughly the 3380's 3mbyte/sec each) while IBM (mainframe) channels
were still 3mbytes/sec. It wasn't until a couple years later that I
was involved with "FCS" and could do 100mbyte/sec concurrently in each
direction ... but that was FCS for RS/6000 (it wasn't until much later
for IBM mainframe).

posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

TDM Computer Links

From: Lynn Wheeler <lynn@garlic.com>
Subject: TDM Computer Links
Date: 25 Apr, 2024
Blog: Facebook

I was blamed for online computer conferencing in the late 70s and
early 80s on the internal network (larger than the arpanet/internet
from just about the beginning until sometime mid/late 80s)
... folklore is that when the corporate executive committee was told,
5of6 wanted to fire me. One of the outcomes was officially sanctioned
and moderated online forums. In the early 80s, I got the HSDT project
... T1 and faster computer links (both terrestrial and
satellite/TDMA&broadcast). Mid-80s, HSDT was having some custom
hardware built on the other side of the Pacific. On the Friday before
leaving for a visit, I got an email announcement about a new online
forum about computer links from the communication group:

low-speed: 9.6kbits/sec,
medium speed: 19.2kbits/sec,
high-speed: 56kbits/sec,
very high-speed: 1.5mbits/sec

Monday morning, on the wall of a conference room on the other side of
the Pacific, were these definitions:

low-speed: <20mbits/sec,
medium speed: 100mbits/sec,
high-speed: 200mbits-300mbits/sec,
very high-speed: >600mbits/sec

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

FOILS

From: Lynn Wheeler <lynn@garlic.com>
Subject: FOILS
Date: 25 Apr, 2024
Blog: Facebook

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
was redone for CP67/CMS as "SCRIPT"

GML was invented in 1969 at the science center ("G", "M", "L" are the
initials of the 3 inventors' last names) and GML tag processing was
added to SCRIPT ... ref by one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

Edson was responsible for the CP67 wide-area network which grows into
the corporate network (larger than the arpanet/internet from just
about the beginning until sometime mid/late 80s) ... also used for the
corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed internet)
references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

... and back to "foils", from IBM Jargon:

foil - n. Viewgraph, transparency, viewfoil - a thin sheet or leaf of
transparent plastic material used for overhead projection of
illustrations (visual aids). Only the term Foil is widely used in
IBM. It is the most popular of the three presentation media (slides,
foils, and flipcharts) except at Corporate HQ, where even in the 1980s
flipcharts are favoured. In Poughkeepsie, social status is gained by
owning one of the new, very compact, and very expensive foil
projectors that make it easier to hold meetings almost anywhere and at
any time. The origins of this word have been obscured by the use of
lower case. The original usage was FOIL which, of course, was an
acronym. Further research has discovered that the acronym originally
stood for Foil Over Incandescent Light. This therefore seems to be
IBM's first attempt at a recursive language.

... snip ...

Overhead projector
https://en.wikipedia.org/wiki/Overhead_projector
Transparency (projection)
https://en.wikipedia.org/wiki/Transparency_(projection)

:frontm.
:titlep.
:title.GML for Foils
:date.August 24, 1984
:author.xxx1
:author.xxx2
:author.xxx3
:author.xxx4
:address.
:aline.T.J. Watson Research Center
:aline.P.O. Box 218
:aline.Yorktown Heights, New York
:aline.&rbl.
:aline.San Jose Research Lab
:aline.5600 Cottle Road
:aline.San Jose, California
:eaddress.
:etitlep.
:logo.
:preface.
:p.This manual describes a method of producing foils automatically
using DCF Release 3 or SCRIPT3I. The foil package will run with the
following GML implementations:
:ul.
:li.ISIL 3.0
:li.GML Starter Set, Release 3
:eul.
:note.This package is an :q.export:eq. version of the foil support
available at Yorktown and San Jose Research as part of our floor
GML. Yorktown users should contact xxx4 for local
documentation. Documentation for San Jose users is available in the
document stockroom.
.*
:p.Any editor can be used to create the foils. Preliminary proofing
can be done at the terminal with final output to one of the printers
supported by the various implementations:
:ul compact.
:li.APS-5
:li.4250
:li.Sherpa
:li.Phoenix
:li.6670
:li.3800
:li.1403
:eul.
:note.:hp2.The FOIL package is distributed and maintained only through
the IBMTEXT conference disk. This project is not part of our real
job. We will enhance it and fix bona fide bugs as time permits. Please
report bugs only via FOIL BUGS on the IBMTEXT disk.:ehp2.

... snip ...

trivia: the 6670 was sort of an IBM Copier3 with a computer link. San
Jose Research then modified the 6670 for all-points-addressable output
(6670APA, later adding a postscript engine), which becomes Sherpa.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

CP40/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP40/CMS
Date: 26 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS

... little drift ... Learson tried (and failed) to stop the
bureaucrats, careerists, and MBAs from destroying the Watson
culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

20yrs later, it appeared to be nearly the end of IBM ... IBM has one
of the largest losses in the history of US corporations and was being
reorganized into the 13 "baby blues" in preparation for breaking up
the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board hires the former president of AMEX as CEO, who (somewhat)
reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner

for other drift, a series of "z/VM 50th" postings (50 yrs since VM/370
1972)
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

--
virtualization experience starting Jan1968, online at home since Mar1970

TDM Computer Links

From: Lynn Wheeler <lynn@garlic.com>
Subject: TDM Computer Links
Date: 26 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#21 TDM Computer Links

communication group ... i.e. SNA communication products division

the communication group mainframe products were cap'ed at 56kbit/sec
links ... although they had support for "fat pipes" that could treat
multiple parallel links as a single logical link. About the same time
as the announcement of the new communication link forum ... they
prepared an analysis for the corporate executive committee that
customers weren't looking for T1 support until sometime in the
90s. They surveyed "fat pipe" users, showing that use of "fat pipes"
for more than six parallel (56kbit) links had dropped to zero. What
they didn't know (or didn't want to tell the corporate executive
committee) was that the telco tariff for a T1 link was about the same
as for six 56kbit links. A trivial HSDT survey found 200 customers
that had gone to full T1 with non-IBM controllers and software.
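
The arithmetic the analysis left out (T1 at 1.544mbits/sec is the
standard figure):

\[ 6 \times 56\,\text{kbits/sec} = 336\,\text{kbits/sec} \qquad \text{vs} \qquad \text{T1} = 1544\,\text{kbits/sec} \approx 4.6\times \]

i.e. for about the same tariff as six parallel links, a full T1 gave
roughly 4.6 times the bandwidth ... so past six links customers simply
moved to (non-IBM) T1 gear, and "fat pipe" use beyond six links showed
up as zero.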

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some recent posts mentioning "fat pipe"
https://www.garlic.com/~lynn/2024b.html#112 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM
https://www.garlic.com/~lynn/2024.html#70 IBM AIX

posts mentioning when I was an undergraduate in the 60s and the univ
hired me fulltime, responsible for os/360
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021h.html#65 CSC, Virtual Machines, Internet

I'm not sure when I became aware of the name Grace Hopper. While I was
at the univ, the library had gotten an ONR (office of naval research)
https://www.nre.navy.mil/

grant to do an online catalog ... and they used some of the money to
get an IBM 2321 (datacell). Other trivia: the library online catalog
was
also selected as betatest for the original CICS program product
... and CICS support was added to my tasks. First problem was CICS
wouldn't come up. Eventually figured out that CICS code had some
undocumented hardcoded BDAM options and the library had built the BDAM
files with a different set of options.

cics & bdam posts
https://www.garlic.com/~lynn/submain.html#cics

some recent posts mentioning ONR grant, univ library online catalog,
cics betatest
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024.html#69 NIH National Library Of Medicine
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#24 Video terminals
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023.html#108 IBM CICS

--
virtualization experience starting Jan1968, online at home since Mar1970

Tymshare & Ann Hardy

From: Lynn Wheeler <lynn@garlic.com>
Subject: Tymshare & Ann Hardy
Date: 27 Apr, 2024
Blog: Facebook

Tymshare & Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89

Ann Hardy is a crucial figure in the story of Tymshare and
time-sharing. She began programming in the 1950s, developing software
for the IBM Stretch supercomputer. Frustrated at the lack of
opportunity and pay inequality for women at IBM -- at one point she
discovered she was paid less than half of what the lowest-paid man
reporting to her was paid -- Hardy left to study at the University of
California, Berkeley, and then joined the Lawrence Livermore National
Laboratory in 1962. At the lab, one of her projects involved an early
and surprisingly successful time-sharing operating system.

... snip ...

If Discrimination, Then Branch: Ann Hardy's Contributions to Computing
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/

Much more Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167

Ann rose up to become Vice President of the Integrated Systems
Division at Tymshare, from 1976 to 1984, which did online airline
reservations, home banking, and other applications. When Tymshare was
acquired by McDonnell-Douglas in 1984, Ann's position as a female VP
became untenable, and was eased out of the company by being encouraged
to spin out Gnosis, a secure, capabilities-based operating system
developed at Tymshare. Ann founded Key Logic, with funding from Gene
Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl
mainframes. After closing Key Logic, Ann became a consultant, leading
to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

Gnosis/KeyKOS trivia: After M/D bought Tymshare, I was brought in to
review Gnosis as part of the spinoff to Key Logic (note the following
mentions Augment and Doug Engelbart while at Tymshare)
http://cap-lore.com/CapTheory/upenn/Gnosis/Gnosis.html

The GNOSIS write-up also mentions the SHARE LSRAD study. I had scanned
my copy for putting up on bitsavers
http://www.bitsavers.org/pdf/ibm/share/The_LSRAD_Report_Dec79.pdf
... trivia: note the year it was published; the gov. had increased the
duration of copyright, so I had to spend some time finding somebody in
SHARE that would approve putting it up on bitsavers.

In 1976, Tymshare also started offering their CMS-based online
computer conferencing system to the (IBM mainframe) user group, SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE, archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE for a monthly tape dump of all VMSHARE (and
later also PCSHARE) files for putting up on the internal network and
systems. On one visit to TYMSHARE they demo'ed a new game (ADVENTURE)
that somebody had found on the Stanford SAIL PDP10 system and ported
to VM370/CMS ... I got a copy and started making it available (also)
on internal networks/systems.

virtual machine based commercial online companies
https://www.garlic.com/~lynn/submain.html#online

Posts mentioning GNOSIS and/or Tymshare:
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#97 Fortran
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022g.html#92 TYMSHARE
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021j.html#71 book review:  Broad Band:  The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021h.html#98 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2019d.html#27 Someone Else's Computer: The Prehistory of Cloud Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

The Last Thing This Supreme Court Could Do to Shock Us

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Last Thing This Supreme Court Could Do to Shock Us
Date: 27 Apr, 2024
Blog: Facebook

The Last Thing This Supreme Court Could Do to Shock Us. There will be
no more self-soothing after this.
https://slate.com/news-and-politics/2024/04/supreme-court-immunity-arguments-which-way-now.html

For three long years, Supreme Court watchers mollified themselves (and
others) with vague promises that when the rubber hit the road, even
the ultraconservative Federalist Society justices of the Roberts court
would put democracy before party whenever they were finally confronted
with the legal effort to hold Donald Trump accountable for Jan. 6.

... snip ...

... "fake news" dates back to at least founding of the country, both
Jefferson and Burr biographies, Hamilton and Federalists are portrayed
as masters of "fake news". Also portrayed that Hamilton believed
himself to be an honorable man, but also that in political and other
conflicts, he apparently believed that the ends justified the
means. Jefferson constantly battling for separation of church & state
and individual freedom, Thomas Jefferson: The Art of Power,
https://www.amazon.com/Thomas-Jefferson-Power-Jon-Meacham-ebook/dp/B0089EHKE8/
loc6457-59:

For Federalists, Jefferson was a dangerous infidel. The Gazette of the
United States told voters to choose GOD AND A RELIGIOUS PRESIDENT or
impiously declare for "JEFFERSON-AND NO GOD."

... snip ...

.... Jefferson was targeted as the prime mover behind the separation
of church and state. Also, Hamilton/Federalists wanted a supreme
monarch (above the law), loc5584-88:

The battles seemed endless, victory elusive. James Monroe fed
Jefferson's worries, saying he was concerned that America was being
"torn to pieces as we are, by a malignant monarchy faction." 34 A
rumor reached Jefferson that Alexander Hamilton and the Federalists
Rufus King and William Smith "had secured an asylum to themselves in
England" should the Jefferson faction prevail in the government.

... snip ...

posts mention Federalist Society and/or Heritage Foundation
https://www.garlic.com/~lynn/2023d.html#99 Right-Wing Think Tank's Climate 'Battle Plan' Wages 'War Against Our Children's Future'
https://www.garlic.com/~lynn/2023d.html#41 The Architect of the Radical Right
https://www.garlic.com/~lynn/2023c.html#51 What is the Federalist Society and What Do They Want From Our Courts?
https://www.garlic.com/~lynn/2022g.html#37 GOP unveils 'Commitment to America'
https://www.garlic.com/~lynn/2022g.html#14 It Didn't Start with Trump: The Decades-Long Saga of How the GOP Went Crazy
https://www.garlic.com/~lynn/2022d.html#4 Alito's Plan to Repeal Roe--and Other 20th Century Civil Rights
https://www.garlic.com/~lynn/2022c.html#118 The Death of Neoliberalism Has Been Greatly Exaggerated
https://www.garlic.com/~lynn/2022.html#107 The Cult of Trump is actually comprised of MANY other Christian cults
https://www.garlic.com/~lynn/2021f.html#63 'A perfect storm': Airmen, F-22s struggle at Eglin nearly three years after Hurricane Michael
https://www.garlic.com/~lynn/2021e.html#88 The Bunker: More Rot in the Ranks
https://www.garlic.com/~lynn/2020.html#6 Onward, Christian fascists
https://www.garlic.com/~lynn/2020.html#5 Book:  Kochland : the secret history of Koch Industries and corporate power in America
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2020.html#3 Meet the Economist Behind the One Percent's Stealth Takeover of America
https://www.garlic.com/~lynn/2019e.html#127 The Barr Presidency
https://www.garlic.com/~lynn/2019d.html#97 David Koch Was the Ultimate Climate Change Denier
https://www.garlic.com/~lynn/2019c.html#66 The Forever War Is So Normalized That Opposing It Is "Isolationism"
https://www.garlic.com/~lynn/2019.html#34 The Rise of Leninist Personnel Policies
https://www.garlic.com/~lynn/2012c.html#56 Update on the F35 Debate
https://www.garlic.com/~lynn/2012b.html#75 The Winds of Reform
https://www.garlic.com/~lynn/2012.html#41 The Heritage Foundation, Then and Now

--
virtualization experience starting Jan1968, online at home since Mar1970

PDP1 Spacewar

From: Lynn Wheeler <lynn@garlic.com>
Subject: PDP1 Spacewar
Date: 27 Apr, 2024
Blog: Facebook

In the 60s, the person responsible for the internal network ported
PDP1 space war
https://www.computerhistory.org/pdp-1/08ec3f1cf55d5bffeb31ff6e3741058a/
https://en.wikipedia.org/wiki/Spacewar%21
to CSC's 2250M4 (included 1130)
https://en.wikipedia.org/wiki/IBM_2250
i.e. had 1130 as display controller
http://www.ibm1130.net/functional/DisplayUnit.html

I would bring my kids in on weekends and they would play

other drift: from one of the inventors of GML at the science center in 1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... the science center "wide area network" then morphs into the
corporate network (larger than the arpanet/internet from just about
the beginning until sometime mid/late 80s); the technology was also
used for the corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

past posts specifically mentioning pdp1 and 1130/2250 spacewar
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#52 IBM Vintage 1130
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2022g.html#23 IBM APL
https://www.garlic.com/~lynn/2022f.html#118 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022.html#63 Calma, 3277GA, 2250-4
https://www.garlic.com/~lynn/2021k.html#47 IBM CSC, CMS\APL, IBM 2250, IBM 3277GA
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2018f.html#72 Jean Sammet — Designer of COBOL – A Computer of One's Own – Medium
https://www.garlic.com/~lynn/2018f.html#59 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2014j.html#103 ? How programs in c language drew graphics directly to screen in old days without X or Framebuffer?
https://www.garlic.com/~lynn/2014g.html#77 Spacewar Oral History Research Project
https://www.garlic.com/~lynn/2013g.html#72 DEC and the Bell System?
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012f.html#6 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011o.html#21 The "IBM Displays" Memory Lane (Was: TSO SCREENSIZE)
https://www.garlic.com/~lynn/2011n.html#9 Colossal Cave Adventure
https://www.garlic.com/~lynn/2011g.html#45 My first mainframe experience
https://www.garlic.com/~lynn/2010d.html#74 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
https://www.garlic.com/~lynn/2004d.html#45 who were the original fortran installations?
https://www.garlic.com/~lynn/2003m.html#14 Seven of Nine
https://www.garlic.com/~lynn/2003f.html#39 1130 Games WAS Re: Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003d.html#38 The PDP-1 - games machine?
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2001f.html#13 5-player Spacewar?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information

--
virtualization experience starting Jan1968, online at home since Mar1970

Wondering Why DEC Is The Most Popular

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wondering Why DEC Is  The Most Popular ...
Newsgroups: alt.folklore.computers
Date: Mon, 29 Apr 2024 12:39:41 -1000

Lawrence D'Oliveiro <ldo@nz.invalid> writes:

Looking at the software-docs collection at Bitsavers
<http://bitsavers.trailing-edge.com/pdf/>, there is over half a
terabyte of files there.

Inside IBM: Lessons of a Corporate Culture in Action
https://www.amazon.com/Inside-IBM-Lessons-Corporate-Culture-ebook/dp/B0C8BV1HM3/

Inside IBM: Lessons of a Corporate Culture in Action
https://www.jstor.org/stable/10.7312/cort21300
CHAPTER 11 GRAY LITERATURE IN IBM'S INFORMATION ECOSYSTEM (pp. 317-358)
https://www.jstor.org/stable/10.7312/cort21300.15

It was said within IBM in the 1970s and 1980s that the company was the
world's second-largest publisher after the U.S. Government Printing
Office (GPO), as measured by the number of pages printed. It might have
been an urban myth because there are no extant statistics to document
how much IBM published, but a look at a KWIC (Key Word in Context) index
of its publications from that period reveals it occupied four to five
linear feet. Each page in it had two columns of brief citations printed
in font sizes normally reserved for endnotes in academic publications.

... I remember hearing the claim in the 80s ... however it was the
total number of pages printed ... as opposed to the total number of
pages from unique documents

--
virtualization experience starting Jan1968, online at home since Mar1970

Wondering Why DEC Is The Most Popular

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wondering Why DEC Is  The Most Popular ...
Newsgroups: alt.folklore.computers
Date: Mon, 29 Apr 2024 13:34:24 -1000

re:
https://www.garlic.com/~lynn/2024c.html#28 Wondering Why DEC Is  The Most Popular ...

note VAX sold into the same mid-range market as IBM 4300s and in about
the same numbers for small-number orders ... however some large
corporations had multi-hundred vm/4300s orders for placing out in
departmental areas (sort of the leading edge of the coming distributed
computing tsunami). IBM was expecting that 4361/4381 order volume would
continue like the 4331/4341 orders ... however, as can be seen in the
VAX numbers, by the mid-80s the mid-range market was starting to move
to workstations and large PC servers.

a.f.c. repost from 2002:
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

more drift ... from a 1988 IDC report:

            VAX INVENTORY
            -------------
SYSTEM       US       NON-US    TOTAL
--------- --------- --------- ---------
11/725         950       550     1,500
11/730       4,100     2,950     7,050
11/750      12,230     9,370    21,600
11/780      14,280     9,660    23,940
11/782         190       120       310
11/785       2,460     1,590     4,050
MVI          1,840       960     2,800
MVII        41,000    23,900    64,900
82XX         2,800     1,870     4,670
83XX           900       600     1,500
85XX         1,200       905     2,105
86XX         2,360     1,240     3,600
8700           400       270       670
8800           300       200       500
--------  --------  --------
TOTAL       85,010    54,185   139,195

              VAX SHIPMENTS
              -------------
                                           NO. OF VAX
YEAR         US       NON-US    TOTAL    MODELS SHIPPED
--------- --------- --------- ---------  --------------
1978          312        78       390          1
1979          627       313       940          1
1980        1,512     1,038     2,550          2
1981        1,979     1,726     3,705          2
1982        4,129     2,794     6,923          4
1983        6,178     4,384    10,562          5
1984       11,703     8,227    19,930          7
1985       17,600     7,300    24,900          8
1986       19,190    12,840    32,030         12
1987       21,780    15,485    37,265         12
--------  --------  --------
TOTAL       85,010    54,185   139,195

                 VAX SHIPMENTS - NON US
                 ----------------------
             1978-
SYSTEM       1984      1985      1986      1987     TOTAL
--------   --------  --------  --------  --------  --------
11/725         450       100         0         0       550
11/730       2,350       600         0         0     2,950
11/750       7,040     1,700       430       200     9,370
11/780       7,700     1,500       270       190     9,660
11/782         120         0         0         0       120
11/785          40     1,100       350       100     1,590
MVI            860       100         0         0       960
MVII             0     1,900    10,000    12,000    23,900
82XX             0         0       725     1,145     1,870
83XX             0         0       200       400       600
85XX             0         0       305       600       905
86XX             0       300       470       470     1,240
8700             0         0        60       210       270
8800             0         0        30       170       200
--------  --------  --------  --------  --------
TOTAL       18,560     7,300    12,840    15,485    54,185

                        VAX SHIPMENTS - US
                        ------------------
             1978-
SYSTEM       1984      1985      1986      1987     TOTAL
--------   --------  --------  --------  --------  --------
11/725         650       300         0         0       950
11/730       3,200       900         0         0     4,100
11/750       9,300     2,200       560       170    12,230
11/780      11,500     2,200       400       180    14,280
11/782         190         0         0         0       190
11/785         260     1,600       500       100     2,460
MVI          1,340       500         0         0     1,840
MVII             0     9,000    15,000    17,000    41,000
82XX             0         0     1,150     1,650     2,800
83XX             0         0       300       600       900
85XX             0         0       420       780     1,200
86XX             0       900       730       730     2,360
8700             0         0        80       320       400
8800             0         0        50       250       300
--------  --------  --------  --------  --------
TOTAL       26,440    17,600    19,190    21,780    85,010

                 VAX SHIPMENTS - WORLD-WIDE
                 --------------------------
             1978-
SYSTEM       1984      1985      1986      1987     TOTAL
--------   --------  --------  --------  --------  --------
11/725       1,100       400         0         0     1,500
11/730       5,550     1,500         0         0     7,050
11/750      16,340     3,900       990       370    21,600
11/780      19,200     3,700       670       370    23,940
11/782         310         0         0         0       310
11/785         300     2,700       850       200     4,050
MVI          2,200       600         0         0     2,800
MVII             0    10,900    25,000    29,000    64,900
82XX             0         0     1,875     2,795     4,670
83XX             0         0       500     1,000     1,500
85XX             0         0       725     1,380     2,105
86XX             0     1,200     1,200     1,200     3,600
8700             0         0       140       530       670
8800             0         0        80       420       500
--------  --------  --------  --------  --------
TOTAL       45,000    24,900    32,030    37,265   139,195
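
... aside: the per-model TOTAL columns can be footed against the
printed grand totals; a minimal sanity-check sketch (Python, with the
world-wide TOTAL column typed in by hand from the table above):

wwtotal = {
    "11/725": 1500, "11/730": 7050, "11/750": 21600, "11/780": 23940,
    "11/782": 310, "11/785": 4050, "MVI": 2800, "MVII": 64900,
    "82XX": 4670, "83XX": 1500, "85XX": 2105, "86XX": 3600,
    "8700": 670, "8800": 500,
}
assert sum(wwtotal.values()) == 139195   # matches printed 139,195 total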

... also 1988

 6,500 clusters installed, from 14,000 DEC VAX sites:

Percentage of VAX processors clustered

15% - 1985
21% - 1986
26% - 1987

... IBM favorite son batch system (MVS) looked at the size of the
distributed vm/4341 market and wanted some of the business ... however
it required non-datacenter hardware ... and MVS was CKD DASD only,
never getting around to supporting FBA (fixed-block) disk ... and the
only new CKD DASD was large datacenter 3880/3380 (note there has been
no CKD DASD made for decades, all being simulated on industry standard
fixed-block disks). Eventually IBM came up with CKD simulation for the
3370 FBA as 3375 ... but it didn't do MVS much good. MVS was still
scores of staff per system, and the distributed computing market was
scores of systems per staff.

posts mentioning DASD, CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

GML and W3C

From: Lynn Wheeler <lynn@garlic.com>
Subject: GML and W3C
Date: 30 Apr, 2024
Blog: Facebook

Note GML was invented in 1969 at IBM science center in tech sq ... old
reference by one of the GML inventors about CSC "wide area network" ...
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... Edson was responsible for science center "wide area network" which
morphs into the corporate network (larger than arpanet/internet from
just about the beginning until sometime mid/late 80s), technology also
used for corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

a decade after GML was invented, it morphs into ISO standard SGML, and
after another decade morphs into HTML at CERN; later, the W3C offices
were a block or two from tech sq.

The Science Center was also noted for virtual machines: 1st CP40 (on a
360/40 with virtual memory hardware modifications), which morphs into
CP67 when 360/67s become available (standard with virtual memory)
... then VM370 (when the decision was made to add virtual memory to
all 370s). The first webserver in the US was on the Stanford SLAC
VM370 system
https://www.slac.stanford.edu/history/earlyweb/history.shtml
https://www.slac.stanford.edu/history/earlyweb/firstpages.shtml

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
gml, sgml, html posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

HONE &/or APL

From: Lynn Wheeler <lynn@garlic.com>
Subject: HONE &/or APL
Date: 30 Apr, 2024
Blog: Facebook

23jun1969 unbundling announcement: IBM started charging for
(application) software, SE services, maintenance, etc. SE training
used to include trainees being part of a large group at the customer
datacenter ... however, they couldn't figure out how *not* to charge
for SE trainee time. This kicked off the US CP67 "HONE" datacenters,
where US branch offices had online connections and could practice
with guest operating systems in virtual machines.

The science center had also ported APL\360 to CMS for CMS\APL,
increasing workspaces from 16kbytes (sometimes 32kb) to large virtual
memory. They had to redo storage management: every time APL executed
an assignment, it allocated a new storage location, quickly touching
every storage location in the workspace .... causing page thrashing in
a large demand-page virtual memory. They also did an API for system
services (like file i/o). The combination enabled a lot of real-world
applications, and HONE started to offer CMS\APL-based sales&marketing
support applications ... which came to dominate all HONE
activity. HONE moved to VM370 and HONE-clones started sprouting up all
over the world ... and then all the US HONE datacenters were
consolidated in Palo Alto (when FACEBOOK 1st moved into silicon
valley, it was into a new bldg built next door to the former US HONE
consolidated datacenter). World-wide HONE became the largest user of
APL.
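
... a toy model (not the actual APL\360 code; workspace size and slot
counts are made up for illustration) of allocate-on-assignment storage
management under demand paging -- every assignment takes a fresh slot,
so the allocation cursor sweeps, and touches, every page in the
workspace before garbage collection:

# toy model: allocate-on-assignment vs demand paging (Python)
WORKSPACE_PAGES = 256            # e.g. 1mbyte workspace / 4kbyte pages
SLOTS_PER_PAGE = 8
touched = set()
cursor = 0                       # allocation cursor, never reuses a slot
for assignment in range(WORKSPACE_PAGES * SLOTS_PER_PAGE):
    touched.add(cursor // SLOTS_PER_PAGE)   # each new page = a page fault
    cursor += 1
print(len(touched))              # all 256 pages touched -> page thrashing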

trivia: when I 1st joined IBM, one of my hobbies was enhanced
production operating systems for internal datacenters and HONE was a
long time customer. In the morph from CP67->VM370 they simplified
and/or dropped lots of stuff (including multiprocessor support). In
1974, I started migrating lots of stuff from CP67->VM/370 and soon had
a VM/370 release2 based production CSC/VM ... that included kernel
re-org for multiprocessor operation (but not multiprocessor support
itself). Consolidated US HONE VM370 was initially enhanced to the
largest 370 "loosely-coupled", shared DASD operation with fall-over
and load balancing across the complex. I then added multiprocessor
support to a release3-based CSC/VM, initially so US HONE could add a
2nd processor to each of the eight systems (for 16 processors total).

IBM 23jun69 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE &/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

UNIX & IBM AIX

From: Lynn Wheeler <lynn@garlic.com>
Subject: UNIX & IBM AIX
Date: 30 Apr, 2024
Blog: Facebook

trivia: feb post in facebook private group
https://www.garlic.com/~lynn/2024.html#103 Multicians

Chandersekaran sent out a request (copying you) asking for somebody to
teach CP internals which found its way to me ... my reply (from long
ago ... nearly 40yrs ago ... and far away):
https://www.garlic.com/~lynn/2024.html#email851114
https://www.garlic.com/~lynn/2024.html#email851114b
https://www.garlic.com/~lynn/2024.html#email851114c
also
https://www.garlic.com/~lynn/2024b.html#email851114
https://www.garlic.com/~lynn/2011b.html#email851114

as per the above, internal IBM politics shut down the effort with NSF
& the supercomputer centers.

Note that the 801/RISC ROMP chip was supposed to be for the
displaywriter follow-on. When that got canceled, the Austin group
decided to pivot to the unix workstation market and hired the company
that had done PC/IX to do a port for ROMP ... which becomes "AIX" for
the PC/RT.

The IBM Palo Alto group was doing a port of UCB BSD to the mainframe
(VM/370) with mods to do forking. The Palo Alto group was then
redirected to do BSD for the PC/RT instead ... which was released as
"AOS".

trivia: in spring of 1982, I had sponsored an IBM adtech conference
and had some of the UNIX projects present ... old archived post
https://www.garlic.com/~lynn/96.html#4a

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Old adage "Nobody ever got fired for buying IBM"

From: Lynn Wheeler <lynn@garlic.com>
Subject: Old adage "Nobody ever got fired for buying IBM"
Date: 01 May, 2024
Blog: Facebook

Early 80s, a co-worker at San Jose Research left IBM and was doing
lots of consulting in silicon valley ... including for the senior VP
of engineering at a large chip shop. He did a port of the AT&T
mainframe C-compiler to CMS, doing lots of bug fixes and enhancing
code optimization. He then ported a lot of UCB BSD chip apps to
CMS. One day an IBM marketing rep came through and asked him what he
was doing. He said: ethernet support, so their SGI graphical
workstations could use VM/CMS for backend processing. He was then told
that he should be doing token-ring support instead, or otherwise the
shop might not find their mainframe service as timely as in the
past. I then got an hour phone call filled with 4-letter words. The
next morning, the senior VP of engineering held a press conference
saying that they were replacing all their IBM mainframes with SUN
servers. IBM then had some number of task forces to analyze why
silicon valley wasn't using IBM mainframes, but they weren't allowed
to consider IBM marketing and token-ring.

Late 80s, a disk division senior engineer got a talk scheduled at the
communication group's annual, world-wide, internal conference,
supposedly on 3174 performance, but opened the talk with the statement
that the communication group was going to be responsible for the
demise of the disk division. The disk division was seeing a drop in
disk sales, with data fleeing mainframe datacenters to more
distributed-computing friendly platforms. The disk division had come
up with a number of solutions, but they were constantly being vetoed
by the communication group. The communication group had corporate
strategic responsibility for everything that crossed the datacenter
walls and was fiercely fighting off client/server and distributed
computing. The disk division software VP's partial countermeasure was
investing in distributed computing startups that would use IBM disks,
and he would ask us to periodically stop by his investments to see if
we could offer any help.

A few short years later, IBM has one of the largest losses in the
history of US corporations and was being re-orged into the 13 "baby
blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the company breakup. Before we get
started, the board brings in the former president of Amex as CEO, who
(somewhat) reverses the breakup (although it wasn't long before the
disk division is gone).

some other background: in 1972, (CEO) Learson tried (and failed) to
block the bureaucrats, careerists, and MBAs from destroying the Watson
culture/legacy (two decades later, IBM has its enormous loss and is
being prepared for breakup).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Turn of the century, IBM mainframe hardware revenue was a few percent
of IBM revenue and dropping. z12 era, IBM mainframe hardware revenue
was a couple percent of IBM revenue and still dropping ... but the
mainframe group was 25% of IBM revenue (and 40% of profit), nearly all
software and services.

trivia: 1st part of the 90s, IBM was "divesting" (not breaking up)
lots of stuff and was spinning off lots of its chip design software to
a major chip design software vendor. The problem was that the industry
standard platform was SUN. I got a contract to port a 50,000-statement
Pascal/VS chip design application to SUN (pascal; in retrospect it
would have been easier to rewrite it in "C"). I started to think that
SUN Pascal had never been used for anything other than educational
purposes. SUN hdqtrs was just up the road, so it was easy to drop in
... but they had outsourced their Pascal to an operation on the
opposite side of the world (in this case it was really rocket science;
I have a bill cap from a place called "space city") ... so problems
took a minimum of 24hr turn-around.

CPD having corporate strategic responsibility for everything crossing
datacenter wall posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts mentioning porting VLSI chip app to SUN
https://www.garlic.com/~lynn/2024.html#8 Niklaus Wirth 15feb1934 - 1jan2024
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2023c.html#75 IBM Los Gatos Lab
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#6 "In Defense of ALGOL"
https://www.garlic.com/~lynn/2022f.html#22 STL & other San Jose facilities
https://www.garlic.com/~lynn/2022f.html#13 COBOL and tricks
https://www.garlic.com/~lynn/2021j.html#24 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#47 vs/pascal
https://www.garlic.com/~lynn/2021g.html#31 IBM Programming Projects
https://www.garlic.com/~lynn/2021c.html#95 What's Fortran?!?!
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2021.html#37 IBM HA/CMP Product
https://www.garlic.com/~lynn/2017g.html#43 The most important invention from every state
https://www.garlic.com/~lynn/2014b.html#4 IBM Plans Big Spending for the Cloud ($1.2B)
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters

--
virtualization experience starting Jan1968, online at home since Mar1970

Old adage "Nobody ever got fired for buying IBM"

From: Lynn Wheeler <lynn@garlic.com>
Subject: Old adage "Nobody ever got fired for buying IBM"
Date: 01 May, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"

There have been a number of articles noting that cache-miss/memory
latency, when measured in count of processor cycles, is similar to 60s
disk I/O latency when measured in count of 60s processor cycles
(memory is the new disk). Relatedly, the justification for adding
virtual memory to all 370s was that they couldn't get enough MVT
regions running concurrently (overlapping processor use while waiting
for disk I/O) to get throughput up enough to justify the
370/165. some past posts mentioning the issues:
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023g.html#85 Vintage DASD
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2022h.html#116 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2019e.html#102 MIPS chart for all IBM hardware model
https://www.garlic.com/~lynn/2018f.html#12 IBM mainframe today
https://www.garlic.com/~lynn/2017h.html#61 computer component reliability, 1951

the equivalent for cache miss is out-of-order execution (branch
prediction, speculative execution, etc) ... being able to execute
other instructions while (preceding) instruction(s) wait on memory.
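
... a rough way to see the memory-latency effect on commodity hardware
(a sketch, not a rigorous benchmark -- interpreter overhead dilutes
the difference): walk the same buffer sequentially and then in random
order; the random walk defeats the caches/prefetchers so most loads
pay closer to full memory latency:

import random, time

N = 1 << 22                      # 4M entries, well past typical cache size
seq = list(range(N))
rnd = seq[:]
random.shuffle(rnd)
buf = [0] * N

def walk(order):
    start = time.perf_counter()
    total = 0
    for i in order:
        total += buf[i]          # one dependent load per iteration
    return time.perf_counter() - start

print("sequential:", walk(seq))
print("random:    ", walk(rnd))  # typically noticeably slower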

late last century, the i86 vendors went to a hardware layer that
translated i86 instructions into RISC micro-ops for actual execution
... largely negating the throughput advantage of RISC processors
(MIPS here from the industry standard benchmark program counting
iterations compared to a 1MIP reference platform):

1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     IBM z900 mainframe processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC 440)

2003 max. configured IBM mainframe z990, 32 processor aggregate 9BIPS
     (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configure IBM mainframe z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 XEON server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)
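
... the per-processor figures above are just aggregate throughput
divided by processor count, e.g. (Python):

print(9000 / 32)    # z990: 9BIPS / 32 processors  -> ~281 MIPS each
print(50000 / 80)   # z196: 50BIPS / 80 processors -> 625 MIPS each
print(500 / 16)     # E5-2600 blade: 500BIPS / 16  -> ~31 BIPS each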

... snip ...

max configured z196 went for $30M; IBM base list price for an E5-2600
blade was $1815. This century, large cloud operations have been
claiming that they assemble their own server blades for 1/3rd the
price of brand name server blades ($603, or $1.2/BIPS, compared to
z196 $600,000/BIPS). Then there was press that i86 server chip makers
were shipping at least half their product directly to cloud operations
... and IBM sells off its i86 server business. A large cloud operation
will have a dozen or more megadatacenters around the world, each
megadatacenter with half million or more blade servers, each blade
server with ten times (or more) the processing of a max. configured
mainframe.

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

trivia: 1980, STL (since renamed SVL) was bursting at the seams and
was moving 300 people from the IMS group to an offsite bldg (with
dataprocessing back to the STL machine room). They had tried "remote
3270" but found the human factors unacceptable. I get con'ed into
doing channel extender support, placing channel attached 3270
controllers at the offsite bldg with no perceptible human factors
difference between offsite and in STL. There was desire to make the
support available to customers, but there was a group in POK playing
with some serial stuff that was afraid it would make it harder to get
their stuff released, and got it vetoed. Also, STL had been placing
3270 controllers across system channels shared with DASD ... placing
3270 controllers on channel-extenders (which had much lower channel
busy) improved system throughput by 10-15% (there was some discussion
of using channel-extender for all 3270 controllers, for all systems).

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

1988, an IBM branch office asked me if I could help LLNL (national
lab) standardize some serial stuff that they were playing with, which
quickly becomes the fibre-channel standard ("FCS", including some
stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate
200mbyte/sec. Then POK gets their serial stuff released with ES/9000
as ESCON (when it is already obsolete), 17mbyte/sec. Later some POK
engineers become involved with FCS and define a heavy-weight protocol
that significantly reduces native throughput, which eventually ships
as FICON. Latest public FICON benchmark I can find is z196 "Peak I/O"
getting 2M IOPS using 104 FICON. About the same time a FCS was
announced for E5-2600 blades claiming over million IOPS (two such FCS
having higher throughput than 104 FICON). Also, IBM pubs recommend
that SAPs (system assist processors that do actual I/O) be held to 70%
CPU ... which would be more like 1.5M IOPS. Also no CKD DASD have been
made for decades, all being simulated on industry standard fixed-block
disks.

FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
CKD DASD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

The man reinventing economics with chaos theory and complexity science

From: Lynn Wheeler <lynn@garlic.com>
Subject: The man reinventing economics with chaos theory and complexity science
Date: 01 May, 2024
Blog: Facebook

The man reinventing economics with chaos theory and complexity
science. Traditional economics makes ludicrous assumptions and poor
predictions. Now an alternative approach using big data and
psychological insights is proving far more accurate
https://www.newscientist.com/article/mg26234870-200-the-man-reinventing-economics-with-chaos-theory-and-complexity-science/

Fecalnomics
https://www.counterpunch.org/2021/02/17/fecalnomics/

Fecalnomics is the study of poor decision-making. The concept of
"fecalnomics" originated with a review I wrote of the book, Thinking:
Fast and Slow, in which Nobel economist Daniel Kahneman shows how
monkeys throwing feces are more accurate than human stock pickers over
the long toss.

... snip ...

something of a takeoff on Freakonomics
https://en.wikipedia.org/wiki/Freakonomics
http://freakonomics.com/
https://www.amazon.com/Freakonomics-Rev-Ed-Economist-Everything-ebook/dp/B000MAH66Y/

The (MIS)Behavior Of Markets (Mandelbrot & Hudson)
https://www.amazon.com/The-Misbehavior-Markets-Turbulence-ebook/dp/B004PYDBEO
although
https://en.wikipedia.org/wiki/Benoit_Mandelbrot
from above: Mandelbrot left IBM in 1987, after 35 years and 12 days,
when IBM decided to end pure research

Mandelbrot's description of the period from the 60s through the last
decade was of continuing to use the same computations even when they
are repeatedly shown to be wrong. Some of Mandelbrot's references are
similar to this (by a nobel prize winner in economics) Thinking Fast and Slow
https://www.amazon.com/Thinking-Fast-and-Slow-ebook/dp/B00555X8OA
pg212/loc3854-60:

"Since then, my questions about the stock market have hardened into a
larger puzzle: a major industry appears to be built largely on an
illusion of skill. Billions of shares are traded every day, with many
people buying each stock and others selling it to them"

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity

--
virtualization experience starting Jan1968, online at home since Mar1970

Old adage "Nobody ever got fired for buying IBM"

From: Lynn Wheeler <lynn@garlic.com>
Subject: Old adage "Nobody ever got fired for buying IBM"
Date: 02 May, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#33 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#34 Old adage "Nobody ever got fired for buying IBM"

Note: linux took over large cloud megadatacenters and cluster
supercomputing (big overlap in paradigm and technology) because they
needed full, unrestricted source to adapt to the huge changes of
mega-cluster operation (somewhat later, as mega-cluster operation
started to mature and settle down a little, some of the proprietary
software vendors tried to emulate it).

.. a large cloud operation will have a dozen or more megadatacenters
around the world, each with half million or more server blades, each
blade 10-40 times the processing power of a max configured IBM
mainframe. Cloud operations had so radically reduced their system
costs ... that power and cooling were increasingly becoming the major
cost. For on-demand interactive use, peak requirements can be ten
times (or more) avg use ... requiring enormous over-provisioning
... so they put enormous pressure on chip makers for power (& cooling)
to drop to zero when idle, but with "instant on" when needed for
on-demand interactive.
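
... back-of-envelope (assuming the 10x peak:avg figure above; purely
illustrative, not any operator's actual numbers):

PEAK_TO_AVG = 10
avg_utilization = 1 / PEAK_TO_AVG    # provision for peak -> ~10% avg use
print(f"{avg_utilization:.0%}")
# if idle power drops to ~zero, energy cost tracks the ~10% actually
# used rather than the 100% provisioned for on-demand peaks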

More than a decade ago, there were articles that it was possible to
use a credit card at a large cloud operator to (remotely) spin up a
"supercomputer" (ranking in the top 40 in the world) for a couple of
"off-shift" hrs. A typical megadatacenter will have something like
70-80 total staff (enormous automation).

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

Large corporations with tens of thousands of 3270s could get IBM/PCs
with 3270 emulation for lower cost, doing both mainframe terminal
emulation and some local processing in a single desktop footprint.

.... some history of PC market
http://arstechnica.com/articles/culture/total-share.ars
http://arstechnica.com/articles/culture/total-share.ars/3
http://arstechnica.com/articles/culture/total-share.ars/4
http://arstechnica.com/articles/culture/total-share.ars/5

My brother was a regional Apple marketing rep (largest physical area
CONUS) ... and when he came into town for hdqtrs meetings, I could be
invited to the business dinners ... I would get to argue MAC design
with the Apple people (even before MAC was announced) ... they were
pretty much immune to the argument about having a single (business)
desktop footprint.

... other trivia: he figured out how to remotely dial into the IBM
S/38 that ran the business, to track manufacturing and delivery
schedules

--
virtualization experience starting Jan1968, online at home since Mar1970

Planet Mainframe Profile

From: Lynn Wheeler <lynn@garlic.com>
Subject: Planet Mainframe Profile
Date: 02 May, 2024
Blog: Facebook

I was told they were going to do this, but it just appeared
https://planetmainframe.com/influential-mainframers-2024/lynn-wheeler/
They used the same picture.

... they may have picked up the "construction" ref from one of my
archived posts
https://www.garlic.com/~lynn/2024b.html#44

then there is also the mainframe hall of fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
and knights of VM
http://mvmua.org/knights.html

Greater IBM Connections Member Profile 4/2/2009, gone 404
https://www.garlic.com/~lynn/ibmconnect.html

mar2005, systems mag, a little garbled, at wayback
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

other IBM history
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

past posts mentioning systems mag article
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023f.html#3 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2022h.html#61 Retirement
https://www.garlic.com/~lynn/2022e.html#17 VM Workshop
https://www.garlic.com/~lynn/2022c.html#40 After IBM
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021h.html#105 Mainframe Hall of Fame
https://www.garlic.com/~lynn/2021e.html#24 IBM Internal Network
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2017g.html#8 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#105 The IBM 7094 and CTSS
https://www.garlic.com/~lynn/2016c.html#61 Can commodity hardware actually emulate the power of a mainframe?
https://www.garlic.com/~lynn/2016c.html#25 Globalization Worker Negotiation
https://www.garlic.com/~lynn/2015g.html#80 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?
https://www.garlic.com/~lynn/2014d.html#42 Computer museums
https://www.garlic.com/~lynn/2013l.html#60 Retirement Heist
https://www.garlic.com/~lynn/2013k.html#29 The agency problem and how to create a criminogenic environment
https://www.garlic.com/~lynn/2013k.html#28 Flag bloat
https://www.garlic.com/~lynn/2013k.html#2 IBM Relevancy in the IT World
https://www.garlic.com/~lynn/2013h.html#87 IBM going ahead with more U.S. job cuts today
https://www.garlic.com/~lynn/2013h.html#77 IBM going ahead with more U.S. job cuts today
https://www.garlic.com/~lynn/2013f.html#61 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#49 As an IBM'er just like the Marines only a few good men and women make the cut,
https://www.garlic.com/~lynn/2013e.html#79 As an IBM'er just like the Marines only a few good men and women make the cut,
https://www.garlic.com/~lynn/2013.html#74 mainframe "selling" points
https://www.garlic.com/~lynn/2012p.html#60 Today in TIME Tech History: Piston-less Power (1959), IBM's Decline (1992), TiVo (1998) and More
https://www.garlic.com/~lynn/2012o.html#32 Does the IBM System z Mainframe rely on Obscurity or is it Security by Design?
https://www.garlic.com/~lynn/2012k.html#34 History--punched card transmission over telegraph lines
https://www.garlic.com/~lynn/2012g.html#87 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#82 How do you feel about the fact that today India has more IBM employees than US?
https://www.garlic.com/~lynn/2012.html#57 The Myth of Work-Life Balance
https://www.garlic.com/~lynn/2011p.html#12 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011c.html#68 IBM and the Computer Revolution
https://www.garlic.com/~lynn/2010q.html#60 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#30 IBM Historic computing
https://www.garlic.com/~lynn/2010o.html#62 They always think we don't understand
https://www.garlic.com/~lynn/2010l.html#36 Great things happened in 1973
https://www.garlic.com/~lynn/2008p.html#53 Query: Mainframers look forward and back
https://www.garlic.com/~lynn/2008j.html#28 We're losing the battle
https://www.garlic.com/~lynn/2008b.html#66 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2008b.html#65 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2006q.html#26 garlic.com
https://www.garlic.com/~lynn/2006i.html#11 Google is full
https://www.garlic.com/~lynn/2006c.html#43 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005h.html#19 Blowing My Own Horn
https://www.garlic.com/~lynn/2005e.html#14 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005e.html#9 Making History

--
virtualization experience starting Jan1968, online at home since Mar1970

Joseph Stiglitz is still walking the road to freedom

From: Lynn Wheeler <lynn@garlic.com>
Subject: Joseph Stiglitz is still walking the road to freedom
Date: 03 May, 2024
Blog: Facebook

Joseph Stiglitz is still walking the road to freedom. The veteran
economist warned in 2003 of the problems that led to the 2008
crash. Today we are in a better place, he says, but dangers still lurk
https://www.thetimes.co.uk/article/joseph-stiglitz-on-the-threat-of-fake-capitalism-and-freedom-rhetoric-dmd9f2bcp

... Jan1999 I was asked to help prevent the coming economic mess. I
was told that some investment bankers had "walked away clean" from the
S&L Crisis, were then running "IPO Mills" (invest a few million, hype,
IPO for a couple billion; needed to fail to leave the field clear for
the next round), and were predicted next to get into securitized
mortgages. I worked on improving the integrity of securitized mortgage
supporting documents. They were then paying rating agencies for
triple-A ratings when the agencies knew they weren't worth triple-A
(from Oct2008 congressional hearings) ... enabling no-documentation,
liar loans/mortgages, selling over $27T into the bond market
2001-2008.

They then found they could build securitized mortgages designed to
fail (creating an enormous market/demand for bad mortgages), pay for
triple-A ratings, sell them into the bond market, and take out CDS
gambling bets that they would fail. The largest holder of the CDS
gambling bets was AIG, which was negotiating to pay off at 50 cents on
the dollar when the SECTREAS stepped in, had them sign a document that
they couldn't sue those making the gambling bets, and had them take
TARP funds to pay off at 100 cents on the dollar. The largest
recipient of TARP funds was AIG, and the largest recipient of the
face-value payoffs was the firm previously headed by the SECTREAS.

Jan2009, I was asked to HTML'ize the Pecora Hearings (30s senate
hearings into the '29 crash) with lots of internal HREFs and URLs
comparing what happened this time with what happened then (there were
some comments that the new congress might have an appetite to do
something). I worked on it for a while and then got a call saying it
wouldn't be needed after all (with comments that capitol hill was
totally buried under enormous mountains of wallstreet cash).

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
regulatory "capture" posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
too-big-to-fail, too-big-to-prosecute, too-big-to-jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
glass-steagall and/or pecora hearing posts
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Big oil spent decades sowing doubt about fossil fuel dangers, experts testify

From: Lynn Wheeler <lynn@garlic.com>
Subject: Big oil spent decades sowing doubt about fossil fuel dangers, experts testify
Date: 04 May, 2024
Blog: Facebook

Big oil spent decades sowing doubt about fossil fuel dangers, experts
testify | Oil and gas companies
https://www.theguardian.com/us-news/2024/may/01/big-oil-danger-disinformation-fossil-fuels

US Senate hearing reviewed report showing sector's shift from climate
denial to 'deception, disinformation and doublespeak'

... snip ...

Merchants of Doubt: How a Handful of Scientists Obscured the Truth on
Issues from Tobacco Smoke to Global Warming
https://en.wikipedia.org/wiki/Merchants_of_Doubt
Merchants of Doubt
https://www.merchantsofdoubt.org/
Merchants of Doubt
https://www.amazon.com/Merchants-Doubt-Handful-Scientists-Obscured/dp/1608193942
https://www.amazon.com/Merchants-Doubt-Handful-Scientists-Obscured/dp/1596916109

... also ... Confessions of an Economic Hit Man
https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man

Merchants of Doubt posts
https://www.garlic.com/~lynn/submisc.html#merchants.of.doubt
Capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
Griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia

posts specifically mentioning "big oil"
https://www.garlic.com/~lynn/2023e.html#96 Fracking Fallout: Is America's Drinking Water Safe?
https://www.garlic.com/~lynn/2023c.html#81 $209bn a year is what fossil fuel firms owe in climate reparations
https://www.garlic.com/~lynn/2023.html#35 Revealed: Exxon Made "Breathtakingly" Accurate Climate Predictions in 1970's and 80's
https://www.garlic.com/~lynn/2022g.html#89 Five fundamental reasons for high oil volatility
https://www.garlic.com/~lynn/2022g.html#21 'Wildfire of disinformation': how Chevron exploits a news desert
https://www.garlic.com/~lynn/2022f.html#16 The audacious PR plot that seeded doubt about climate change
https://www.garlic.com/~lynn/2022e.html#69 India Will Not Lift Windfall Tax On Oil Firms Until Crude Drops By $40
https://www.garlic.com/~lynn/2022d.html#96 Goldman Sachs predicts $140 oil as gas prices spike near $5 a gallon
https://www.garlic.com/~lynn/2022c.html#117 Documentary Explores How Big Oil Stalled Climate Action for Decades
https://www.garlic.com/~lynn/2021i.html#28 Big oil's 'wokewashing' is the new climate science denialism
https://www.garlic.com/~lynn/2021g.html#72 It's Time to Call Out Big Oil for What It Really Is
https://www.garlic.com/~lynn/2021g.html#16 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021g.html#13 NYT Ignores Two-Year House Arrest of Lawyer Who Took on Big Oil
https://www.garlic.com/~lynn/2021g.html#3 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021e.html#77 How climate change skepticism held a government captive
https://www.garlic.com/~lynn/2018d.html#112 NASA chief says he changed mind about climate change because he 'read a lot'
https://www.garlic.com/~lynn/2014m.html#27 LEO
https://www.garlic.com/~lynn/2013e.html#43 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012e.html#30 Senators Who Voted Against Ending Big Oil Tax Breaks Received Millions From Big Oil
https://www.garlic.com/~lynn/2012d.html#61 Why Republicans Aren't Mentioning the Real Cause of Rising Prices at the Gas Pump
https://www.garlic.com/~lynn/2007s.html#67 Newsweek article--baby boomers and computers

--
virtualization experience starting Jan1968, online at home since Mar1970

CMS RED, XEDIT, IOS3270, FULIST, BROWSE

From: Lynn Wheeler <lynn@garlic.com>
Subject: CMS RED, XEDIT, IOS3270, FULIST, BROWSE
Date: 04 May, 2024
Blog: Facebook

Old archived post
https://www.garlic.com/~lynn/2006u.html#26
with some email about "RED" and "XEDIT" editors:
https://www.garlic.com/~lynn/2006u.html#email790606
https://www.garlic.com/~lynn/2006u.html#email800311
https://www.garlic.com/~lynn/2006u.html#email800312
https://www.garlic.com/~lynn/2006u.html#email800429
https://www.garlic.com/~lynn/2006u.html#email800501

Part of the discussion with Endicott was about them releasing RED
(instead of XEDIT), because RED was much more mature, had more
feature/function, and was faster. Endicott's retort was that it was
the RED author's fault that RED was so much better than XEDIT ... and
so it should be his responsibility to bring XEDIT up to RED's
level. Note: after Future System imploded, the head of POK managed to
convince corporate to kill the VM370/CMS product, shutdown the
development group and move all the people to POK for MVS/XA (or I
guess the claim might have been that otherwise MVS/XA wouldn't ship on
time; Endicott eventually managed to save the VM370/CMS product
mission, but had to recreate a development group from scratch).

Another part was discussion about reworking RED for "R/O" shared
segments. I had done a page-mapped filesystem for CP67 and could load
a (1mbyte 360/67) SHARED SEGMENT direct from a CMS module in the
filesystem. Then, when CP67 was modified to run on 370s (well before
VM370), I modified it for 370 64kbyte shared segments. In 1974, I
moved a lot of CP67 feature/function to VM370/CMS Release 2 (including
full CMS page-mapped filesystem and shared segment support) as
DCSS. Then a very small subset of the shared segment support was added
to VM370 Release 3 (w/o the CMS page-mapped filesystem support)
... aka, up to then, VM370 shared segments were only available via the
"IPL" command.

CMS IOS3270, FULIST and BROWSE came from a sysprog at the EU Uithoorn
"HONE" datacenter; old archived email with Theo about FULIST
https://www.garlic.com/~lynn//2001f.html#email781011

trivia: one of my hobbies after joining IBM was enhanced operating
systems for internal datacenters, and HONE was a long time
customer. HONE was originally CP67 for US branch office SEs to dial in
and practice guest operating system skills running in virtual
machines. The science center had also done a port of APL\360 to
CP67/CMS for CMS\APL with lots of improvements ... and HONE started
offering CMS\APL-based sales&marketing support applications, which
came to dominate all HONE activity (guest operating system use just
dwindled away). US HONE moved from enhanced CP67/CMS to enhanced
VM370/CMS and all datacenters consolidated in silicon valley ... as
well as HONE datacenter clones cropping up all over the world (I had
been asked to do initial installs of a couple of the clones). The
early morph of CP67->VM370 simplified and/or dropped a lot of
feature/function (including multiprocessor support). For my release 2
work, I did the kernel reorg needed by multiprocessor support, but not
actual multiprocessor support itself. Then for a VM370R3 "CSC/VM", I
added multiprocessor support, initially so consolidated US HONE could
add a 2nd processor to each system.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE and/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
cms paged-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
posts discussing adcons in shared segments
https://www.garlic.com/~lynn/submain.html#adcon
SMP, multiprocessor, tightly-coupled, and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Congratulations Lynne

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Congratulations Lynne
Newsgroups: alt.folklore.computers
Date: Sat, 04 May 2024 16:33:17 -1000

Iron Spring Software <Peter_Flass@Yahoo.com> writes:

2024 Influential Mainframers - Lynn Wheeler
https://planetmainframe.com/influential-mainframers-2024/

"Lynn Wheeler has significantly shaped the world of mainframe
computing, most notably through his enhancements to z/VM's CP and CMS,
including the creation of the "Wheeler Scheduler." His pioneering work
earned him a spot in the founding class of the Knights of VM,
highlighting his influence in the mainframe community." ...

and
https://www.garlic.com/~lynn/2024c.html#37 Planet Mainframe Profile

... misc. other
in addition to Knights of VM
http://mvmua.org/knights.html
Greater IBM Connections Member Profile 4/2/2009, gone 404
https://www.garlic.com/~lynn/ibmconnect.html
mar2005, systems mag, a little garbled, at wayback
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

--
virtualization experience starting Jan1968, online at home since Mar1970

Netscape

From: Lynn Wheeler <lynn@garlic.com>
Subject: Netscape
Date: 05 May, 2024
Blog: Facebook

Late 80s, HA/6000 was originally approved for NYTimes to move their
newspaper system (ATEX) off VAXCluster to RS/6000; I rename it HA/CMP
when I start doing technical/scientific cluster scale-up with national
labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase,
Informix, Ingres ... that had VAXcluster and unix support in the same
source base). Early Jan1992, in a meeting with Oracle, AWD/Hester tells
the Oracle CEO that we would have HA/CMP 16-processor clusters mid92 and
128-processor clusters ye92. Then late Jan92, cluster scale-up is
transferred for announce as IBM Supercomputer (technical/scientific
*ONLY*) and we were told we couldn't work on anything with more than
four processors. We leave IBM a few months later.

Not long after, I'm brought in as a consultant to a small client/server
startup that had been formed by some people from NCSA
http://www.ncsa.illinois.edu/enabling/mosaic
... two of the former Oracle people (that were in the cluster scale-up
Oracle CEO meeting) are there, responsible for something called
"commerce server", and want to do payment transactions on the server;
the startup had also done some technology called "SSL" they want to use
... the result is frequently now called "electronic commerce". I had
complete authority for everything between the webservers and the
financial industry payment networks. Note NCSA complains about their
use of the name ... and they have to change it (trivia: what silicon
valley company provided the new name?).

other trivia: early 80s, I had the HSDT project, T1 and faster computer
links (both terrestrial and satellite), and was working with the NSF
director; we were supposed to get $20M to interconnect the NSF
Supercomputer centers. Then congress cuts the budget, some other things
happen and eventually an RFP is released. From the 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn//2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed). The NSF
director tried to help by writing the company a letter (3Apr1986, NSF
Director to IBM Chief Scientist and IBM Senior VP and director of
Research, copying the IBM CEO) with support from other gov. agencies
... but that just made the internal politics worse (as did claims that
what we already had operational was at least 5yrs ahead of the winning
bid, awarded 24Nov87). As regional networks connect in, it becomes the
NSFNET backbone, precursor to the modern internet.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

TYMSHARE, VMSHARE, ADVENTURE

From: Lynn Wheeler <lynn@garlic.com>
Subject: TYMSHARE, VMSHARE, ADVENTURE
Date: 05 May, 2024
Blog: Facebook

I would periodically drop in on Tymshare and/or see them at the monthly
user group meetings hosted by Stanford SLAC. In Aug1976, TYMSHARE
started offering their VM370/CMS-based online computer conferencing
system to the (IBM mainframe) SHARE user group as "VMSHARE" ... archives
here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE for a monthly tape dump of all VMSHARE (and
later also PCSHARE) files for putting up on internal network and
systems. On one visit to TYMSHARE, they demo'ed a new game (ADVENTURE)
that somebody had found on the Stanford SAIL PDP10 system and ported to
VM370/CMS ... I get a copy and started making it (also) available on
internal networks/systems. I would send source to anybody that could
demonstrate they had gotten all the points. Relatively shortly, versions
with lots more points appear, as well as PLI versions.

posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

posts mentioning tymshare, vmshare, and adventure
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2017j.html#26 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017h.html#11 The original Adventure / Adventureland game?
https://www.garlic.com/~lynn/2017f.html#67 Explore the groundbreaking Colossal Cave Adventure, 41 years on
https://www.garlic.com/~lynn/2017d.html#100 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#103 August 12, 1981, IBM Introduces Personal Computer
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2006n.html#3 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2005u.html#25 Fast action games on System/360+?
https://www.garlic.com/~lynn/2005k.html#18 Question about Dungeon game on the PDP

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe LAN Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe LAN Support
Date: 05 May, 2024
Blog: Facebook

There was a bus&tag interface card done for the pc/at in the mid-80s
(referred to as PCCA, aka PC channel attach) used for a number of
internal mainframe things. The IBM communication group was fighting off
client/server, distributed computing and the release of mainframe tcp/ip
support. When the release of TCP/IP got approved anyway, the
communication group changed their strategy: since they had corporate
strategic responsibility for everything that crossed datacenter walls,
it had to be released through them. What shipped got 44kbytes/sec
aggregate using nearly a whole 3090 processor, and the PCCA/8232 that
had been expected to be priced at $5k was $40k.

I then did the software changes to support RFC1044 and in some tuning
tests at Cray Research, between a Cray and an IBM 4341, got sustained
channel throughput using only a modest amount of 4341 CPU (something
like a 500 times improvement in bytes moved per instruction executed).
Of course it wasn't an 8232; it was a non-IBM channel-attached router
supporting up to 16 LAN interfaces, multiple T1&T3 telco interfaces, and
FDDI (about the same price as the 8232); later it also supported RS/6000
SLA ... an incompatible, enhanced, faster, full-duplex modification of
ESCON. The engineer responsible for SLA then wanted to do an 800mbit/sec
version, but I managed to con him into joining the FCS standards
committee instead (in 1988, the IBM branch office had con'ed me into
helping LLNL standardize some serial stuff they had been playing with,
which quickly becomes FCS, initially 1gbit/sec, full-duplex,
200mbyte/sec aggregate; when ESCON ships, it was already obsolete).
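
A back-of-envelope check of the bytes-per-instruction claim; the MIPS
and utilization figures below are rough period assumptions (not from
this post), so treat it as illustrative arithmetic only:

BASE_RATE  = 44_000        # bytes/sec, shipped 8232 support (above)
BASE_MIPS  = 10e6          # assumed ~10 MIPS 3090 processor, ~100% busy

TUNED_RATE = 1_000_000     # bytes/sec, assumed sustained channel rate
TUNED_MIPS = 1.2e6 * 0.25  # assumed ~1.2 MIPS 4341 at a "modest" ~25%

base_bpi  = BASE_RATE / BASE_MIPS    # bytes moved per instruction
tuned_bpi = TUNED_RATE / TUNED_MIPS

print(f"base : {base_bpi:.4f} bytes/instruction")
print(f"tuned: {tuned_bpi:.2f} bytes/instruction")
print(f"ratio: ~{tuned_bpi / base_bpi:.0f}x")  # same ballpark as the
                                               # ~500x figure above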

The communication group telco products had been capped at 56kbit/sec
and they prepared a report for the corporate executive committee that
customers wouldn't be interested in T1 until at least later in the 90s.
VTAM had "fat-pipe" support, treating multiple parallel 56kbit links as
a single logical link ... and they showed the number of customers with
fat-pipe had dropped to zero by seven parallel links (what they didn't
know, or didn't want to tell the executive committee, was that the
typical telco tariff for T1 was about the same as for between 5 and 7
56kbit links; arithmetic below). I had the HSDT project from the early
80s, T1 and faster computer links (both terrestrial and satellite), and
a trivial customer survey found 200 customers with T1 links (with
non-IBM software and hardware).
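
The tariff arithmetic spelled out (link rates are the standard ones;
the 5-to-7-link price range is from the report story above):

T1_BPS   = 1_544_000   # US T1
LINK_BPS = 56_000

for n in range(5, 8):  # the 5-to-7-link tariff range
    agg = n * LINK_BPS
    print(f"{n} x 56kbit = {agg/1e6:.3f} mbit/sec vs T1 "
          f"{T1_BPS/1e6:.3f} mbit/sec "
          f"({T1_BPS/agg:.1f}x the bandwidth for a similar tariff)")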

The communication group did finally come out with the 3737 in the late
80s; it had a boat load of M68k processors and memory and simulated a
CTCA-attached VTAM to the local mainframe VTAM ... immediately acking
receipt of RUs (spoofing the host VTAM, in order to keep the traffic
flowing) and then using non-SNA to the remote 3737 on the T1 link
... peaking out at 2mbit/sec, even on a short-haul terrestrial T1 link
(US full-duplex T1 is 3mbit/sec aggregate, EU full-duplex T1 4mbit/sec
aggregate).
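
A toy model of why the local ack spoofing mattered: windowed-protocol
throughput is capped at roughly window/round-trip-time, and spoofing
replaces the long-haul RTT with the channel RTT. The window size and
RTTs below are illustrative assumptions, not actual VTAM values:

def window_throughput(window_bytes, rtt_sec):
    """Rough cap for a windowed protocol: one window per round trip."""
    return window_bytes / rtt_sec

WINDOW = 8 * 1024  # hypothetical outstanding-data limit, bytes

# host VTAM waiting for end-to-end acks over a long-haul T1:
print(window_throughput(WINDOW, rtt_sec=0.060) * 8 / 1e6)  # ~1.1 mbit/sec

# 3737 acking locally over the channel (sub-millisecond RTT), then
# streaming to the remote 3737 with its own non-SNA protocol:
print(window_throughput(WINDOW, rtt_sec=0.001) * 8 / 1e6)  # ~65 mbit/sec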

trivia: AWD (workstation division) had done their own 4mbit t/r card
for the PC/RT (which had a pc/at 16bit bus). However, for the RS/6000
with microchannel, corporate told AWD that they couldn't do their own
microchannel cards but had to use the (communication group heavily
performance-kneecapped) PS2 microchannel cards. A simple example: the
PS2 microchannel 16mbit t/r card had lower card throughput than the
PC/RT 4mbit t/r card (making an RS/6000 16mbit t/r server slower than a
PC/RT 4mbit t/r server). Note: the heavy performance kneecapping of
microchannel cards wasn't just the t/r cards.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some recent posts mentioning 3737
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023d.html#120 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#77 IBM HSDT Technology
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#103 IBM ROLM
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021j.html#103 Who Knew ?
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#109 The Age of Battleships Is Dead and Long Gone
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe LAN Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe LAN Support
Date: 06 May, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support

A couple issues with the PCCA/8232 .... instead of releasing it with
TCP/IP router support, it was released as a LAN/MAC bridge ... which
required that all the IP->LAN/MAC work be done back in the host IP code.
The story I heard about the 8232 being $40k (instead of $5k) was that
the communication group contrived a forecast covering just the AT&T UNIX
+ TSS/370 SSUP market, which was just internal AT&T ... 14 sales total
... so all the upfront fixed IBM product costs were spread across just
14 units (instead of the large number of VM370 TCP/IP installations
... note the VM370 TCP/IP was also made available for MVS by
implementing VM370 diagnose-instruction function simulation ... since it
was already slow for VM370, the extra overhead contributed to MVS
complaints about how slow it was). Part of my getting VM370 TCP/IP to
run at sustained channel throughput using only a modest amount of 4341
CPU was supporting a (non-IBM) router box, aka RFC1044 (rather than a
LAN bridge box).
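
A sketch of the bridge-vs-router difference as seen from host code
(illustrative logic only, not the actual VM370 TCP/IP implementation;
all names and addresses are made up):

def next_hop_behind_bridge(dst_ip, arp_table):
    """Behind a LAN/MAC bridge, the host stack must resolve every
    destination IP to a LAN MAC itself -- per-destination ARP state,
    ARP traffic, and timeouts all live in host code."""
    return arp_table[dst_ip]  # miss => host must ARP and wait

def next_hop_behind_router(dst_ip, my_prefix, router_mac, arp_table):
    """Behind an IP router, the host needs one MAC for everything
    off-subnet; the router box does the per-LAN work."""
    if dst_ip.startswith(my_prefix):  # crude on-subnet test
        return arp_table[dst_ip]
    return router_mac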

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

PCCA &/or 8232 posts
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2021i.html#73 IBM MYTE
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2013m.html#9 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013i.html#62 Making mainframe technology hip again
https://www.garlic.com/~lynn/2013g.html#17 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2010n.html#27 z/OS, TCP/IP, and OSA
https://www.garlic.com/~lynn/2010c.html#25 Processes' memory
https://www.garlic.com/~lynn/2010c.html#24 Processes' memory
https://www.garlic.com/~lynn/2008l.html#20 IBM-MAIN longevity
https://www.garlic.com/~lynn/2006n.html#18 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2005u.html#49 Channel Distances
https://www.garlic.com/~lynn/2005t.html#48 FULIST
https://www.garlic.com/~lynn/2005t.html#45 FULIST
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2005r.html#17 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#2 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#35 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?

--
virtualization experience starting Jan1968, online at home since Mar1970

Big oil spent decades sowing doubt about fossil fuel dangers, experts testify

From: Lynn Wheeler <lynn@garlic.com>
Subject: Big oil spent decades sowing doubt about fossil fuel dangers, experts testify
Date: 06 May, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#39 Big oil spent decades sowing doubt about fossil fuel dangers, experts testify

An Oil Price-Fixing Conspiracy Caused 27% of All Inflation Increases
in 2021. The FTC just found evidence that American oil companies
colluded with the Saudi government to hike gas prices, costing the
average family $3,000 last year. The question is, what can we do about
it?
https://www.thebignewsletter.com/p/an-oil-price-fixing-conspiracy-caused

... griftopia ... commodity market secret letters allowing speculators
to play
http://www.amazon.com/Griftopia-Machines-Vampire-Breaking-America-ebook/dp/B003F3FJS2/
... the commodity market used to require players to have significant
holdings ... because speculators resulted in wild, irrational price
swings (betting on how prices would move and manipulating news to push
prices in the direction bet on, both up and down).

There were articles about US speculators being behind the enormous oil
(& gas) price spike in summer 2008. Then a member of congress releases
the speculation transactions that identified the corporations
responsible for the enormous oil (& gas) price spikes/swings. For some
reason, the press then pilloried & vilified the member of congress for
violating corporation privacy (& exposing the corporations preying on
the US public), rather than trying to hold the speculators accountable.

griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe LAN Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe LAN Support
Date: 06 May, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#45 IBM Mainframe LAN Support

Almaden research was heavily provisioned with CAT4, presumably for
16mbit token-ring ... but found 10mbit ethernet had higher aggregate
bandwidth and lower latency over the CAT4. That is besides the $69
10mbit ethernet cards having much higher card thruput (capable of
8.5mbit) than the $800 (heavily performance-kneecapped) 16mbit
token-ring cards.

Also, for 300 machines ... the price difference between the high
performance ethernet cards and the $800 (kneecapped) token-ring cards
... could get five high-end TCP/IP routers, each with 16 ethernet
interfaces (80 networks total), IBM channel interfaces and other
features ... and still be able to spread the 300 machines across the 80
networks (four machines/network) ... while traditional SNA token-ring
would tend to have all 300 sharing a single LAN (arithmetic below).
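
The arithmetic spelled out (card prices are the ones quoted above; the
per-router figure is just the price difference divided five ways):

MACHINES  = 300
TR_CARD   = 800   # kneecapped 16mbit token-ring card
ENET_CARD = 69    # 10mbit ethernet card (8.5mbit effective)

difference = MACHINES * (TR_CARD - ENET_CARD)
print(f"card-price difference: ${difference:,}")   # $219,300

ROUTERS      = 5
NETS_PER_RTR = 16
networks     = ROUTERS * NETS_PER_RTR              # 80 networks
print(f"implied router budget: ~${difference // ROUTERS:,} each")
print(f"{MACHINES} machines / {networks} networks = "
      f"{MACHINES / networks:.2f} machines per network")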

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning Almaden token-ring versus ethernet
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2014h.html#88 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2013m.html#7 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2011h.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe LAN Support

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe LAN Support
Date: 06 May, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#44 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#45 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support

... from long ago and far away; I don't remember the (physical)
internals of the Cambridge box ... although the requirements were a
PC/AT for each "cambridge channel attach box", and it supported standard
3088/trouter CTCA mode (but not "waitread" or 3088*/spider mode). I had
the HSDT project starting in the early 80s, T1 and faster computer links
(both terrestrial and satellite). With regard to part of the following,
Boulder had developed a channel emulation card (I think as part of
hardware/software testing of the 3800(?) printer).



Date: 08/21/85 12:19:55
From: wheeler

re: boulder channel attach versus pcca;

problem with YKT pcca is that it is 360 ctca ... not even
3088(/trouter). For HSDT you need two things a) dual 370 subchannel
addresses, one for input/reads -- the other for output/writes and b)
"waitread" function. Standard vendor box (for almost decade) has a
"waitread" operation and 3088*(/spider) now has a similar
function. Problem on existing CTC/3088(/trouter) is that input
operation requires that operating system wait for an attention
interrupt, operating system fields the attention and then schedules a
read operation. Waitread allows an outstanding read operation to always
be pending on the input channel. W/o waitread, the "latency" of
the software in fielding the attention interrupt and scheduling the
read operation takes longer than anything else.

Boulder channel attach is cheap (if you already have a 3088*/spider)
and they are ready to ship now. Software can be developed on that
basis pending upgrading the PCCA to 3088*(/spider) mode.

  
... snip ... top of post, old email index, NSFNET email



Date: 08/21/85 16:43:08
From: wheeler

re: channel attach cards; cc: hsdt; there have been some comments
about the "performance" of the various PC channel attach cards. One of
the areas was the YKT PCCA card is suppose to be "good" (better than
the others) is in hardware latency to start data transfer. However,
the effective thru-put of a card will also be dependent on its total
operation characteristic. Protocols that use the old 360 CTCA
protocol, with a single subchannel address, have effectively a very
long start-up latency for incoming data ... because there is first an
attention interrupt that has to be presented to the operating system,
the operating system has to field the interrupt and then put up a
read. Such software "start-up latency" appears to be much longer than
any of the various "hardware" latencies.

To address this problem you need support for both (a) pairs of
read/write subchannel addresses and (b) the equivalent of the standard
vendor box waitread CCW ... or the special 3088*/spider CCW op. The
YKT PCCA uses the old 360 CTCA protocol; single subchannel address, no
special op. The Cambridge channel attach supposedly supports
3088/trouter mode ... but doesn't have 3088* support. It would look
like the PC 370 channel simulators can be attached to a 3088* and you
get the support very inexpensively (assuming that you already have a
3088*).

For software development, 3088*/spider to PC via PC channel simulator
would appear to be the best bet ... pending the availability of a PC
control unit simulator that attaches to a 370 channel (i.e. upgrade
YKT PCCA to 3088*/spider mode).

... snip ... top of post, old email index, NSFNET email
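
A toy model of the startup-latency problem both memos describe: with a
single-subchannel 360 CTCA, every inbound transfer pays attention
interrupt + interrupt handling + read startup before data moves; with
dual subchannels and "waitread", a read is always already pending. All
times are made-up placeholder values (microseconds):

ATTN_INTERRUPT = 200   # present attention, OS fields the interrupt
SCHEDULE_READ  = 300   # OS schedules/starts the read channel program
DATA_TRANSFER  = 100   # actual transfer once the read is up

ctca_per_msg     = ATTN_INTERRUPT + SCHEDULE_READ + DATA_TRANSFER
waitread_per_msg = DATA_TRANSFER   # read already pending; no setup

print(f"old CTCA : {ctca_per_msg} us/message")
print(f"waitread : {waitread_per_msg} us/message "
      f"({ctca_per_msg / waitread_per_msg:.0f}x less per-message "
      f"software latency)")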



Date: 08/29/85 17:53:38
From: wheeler
To: cambridge

can i get (over network) copies of the cambridge control unit
documents? Can I be added to the computer conference on the same? Any
schedule on plans for supporting a 3088* interface to the host???


... snip ... top of post, old email index, NSFNET email

--
virtualization experience starting Jan1968, online at home since Mar1970

