List of Archived Posts

2024 Newsgroup Postings (04/15 - )

Amdahl and IBM ACS
Disk & TCP/IP I/O
ReBoot Hill Revisited
ReBoot Hill Revisited
Cobol
Cobol
Testing
Testing
AI-controlled F-16
Boeing and the Dark Age of American Manufacturing
AI-controlled F-16
370 Multiprocessor
370 Multiprocessor
Boeing and the Dark Age of American Manufacturing
Bemer, ASCII, Brooks and Mythical Man Month
360&370 Unix (and other history)
CTSS, Multics, CP67/CMS
IBM Millicode
CP40/CMS
IBM Millicode
IBM Millicode
TDM Computer Links
FOILS
CP40/CMS
TDM Computer Links
Tymshare & Ann Hardy
The Last Thing This Supreme Court Could Do to Shock Us
PDP1 Spacewar

Amdahl and IBM ACS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Amdahl and IBM ACS
Date: 15 Apr, 2024
Blog: Facebook

Note: Amdahl won the battle to make ACS 360-compatible ... folklore is
that executives then shut down the operation because they were afraid
it would advance the state of the art too fast and IBM would lose
control of the market ... shortly after, Amdahl leaves IBM. The
following lists some ACS/360 features that show up more than 20yrs
later in the 90s with ES/9000

https://people.computing.clemson.edu/~mark/acs_end.html
ACS
https://people.computing.clemson.edu/~mark/acs.html
https://people.computing.clemson.edu/~mark/acs_legacy.html

some recent posts mentioning Amdahl and end of ACS
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#91 7Apr1964 - 360 Announce
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall
https://www.garlic.com/~lynn/2023g.html#44 Amdahl CPUs
https://www.garlic.com/~lynn/2023g.html#23 Vintage 3081 and Water Cooling
https://www.garlic.com/~lynn/2023g.html#11 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#80 Vintage Mainframe 3081D
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#69 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023d.html#94 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk & TCP/IP I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk & TCP/IP I/O
Date: 15 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#116 Disk & TCP/IP I/O

135/145, 138/148, and 4331/4341 were conventional microprocessors with
microcode to emulate 370 instructions, avg 10 native instructions per
370 instruction. I got con'ed into helping with ECPS, originally for
138/148 ... old archive post with the initial analysis of kernel
pathlengths for selecting what to microcode (a sketch of the selection
arithmetic follows the link). I was told 138/148 had 6k bytes available
and that 370 kernel instructions would translate into native microcode
on an approx. byte-for-byte basis (the highest-executing 6k bytes of
370 pathlengths accounted for approx. 80% of kernel execution) ...
https://www.garlic.com/~lynn/94.html#21
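
A minimal sketch of that selection arithmetic (the profile numbers
here are invented for illustration; the real analysis is in the linked
post): rank kernel routines by CPU time and take them in descending
order until the 6k-byte microcode budget is spent, relying on the
roughly byte-for-byte 370-to-microcode translation.

#include <stdio.h>
#include <stdlib.h>

/* hypothetical kernel profile entries: routine size in bytes and
   percent of total kernel CPU time (numbers invented for
   illustration) */
struct prof { const char *name; int bytes; double cpu_pct; };

static int by_cpu_desc(const void *a, const void *b)
{
    const struct prof *x = a, *y = b;
    return (y->cpu_pct > x->cpu_pct) - (y->cpu_pct < x->cpu_pct);
}

int main(void)
{
    struct prof p[] = {
        { "dispatch",   900, 22.0 }, { "freestor",  700, 18.0 },
        { "pagefault", 1400, 15.0 }, { "iosched",  1200, 12.0 },
        { "untrans",    800,  8.0 }, { "vtimer",    600,  5.0 },
        { "spool",     1500,  3.0 }, { "misc",     2000,  2.0 },
    };
    int n = sizeof p / sizeof p[0], budget = 6 * 1024, used = 0;
    double covered = 0;

    qsort(p, n, sizeof p[0], by_cpu_desc);
    for (int i = 0; i < n; i++) {
        if (used + p[i].bytes > budget)     /* doesn't fit in 6k */
            continue;
        used += p[i].bytes;                 /* ~byte-for-byte in microcode */
        covered += p[i].cpu_pct;
        printf("%-10s %5d bytes %5.1f%%  (cum %5d bytes, %5.1f%%)\n",
               p[i].name, p[i].bytes, p[i].cpu_pct, used, covered);
    }
    return 0;
}

With a profile shaped like this one, a small fraction of kernel bytes
covers most of kernel execution time ... the ~80% result the post
describes.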

Around 1980, there was an effort to move a variety of IBM internal
microprocessors to 801/risc ... low&mid-range 370s (Iliad 801 for
4361&4381), s38->as/400, controllers, etc. I got roped into helping
with a white paper showing that VLSI technology had advanced to the
point that it was possible to implement nearly all 370 instructions
directly in silicon; the white paper also addressed the other proposed
801 solutions. Those 801 efforts floundered, with some number of
801/RISC engineers leaving for RISC projects at other vendors.

Note: 801/ROMP was supposed to be for the next-generation displaywriter
... when that got canceled, they decided to pivot to the unix
workstation market and got the company that had done the AT&T Unix
port to IBM/PC for PC/IX to do one for ROMP ... which becomes AIX (and
PC/RT). The follow-on chip set was RIOS for RS/6000. Then AIM (Apple,
IBM, Motorola) is formed, and the executive we reported to for HA/CMP
went over to head up Somerset (the single-chip power/pc effort), which
included adopting some features from the Motorola 88k RISC processor.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

a tome about being frequently told that I had no career, no raises, no
promotions, and about all the people that wanted to see me fired
... including 5of6 of the corporate executive committee; being blamed
for doing online computer conferencing in the late 70s and early 80s
on the IBM internal network (larger than the arpanet/internet from
just about the beginning until mid/late 80s).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

ReBoot Hill Revisited

From: Lynn Wheeler <lynn@garlic.com>
Subject: ReBoot Hill Revisited
Date: 16 Apr, 2024
Blog: Facebook

ReBoot Hill Revisited
https://planetmainframe.com/2016/03/reboot-hill-revisited/

Learson tried (and failed) to block the bureaucrats, careerists, and
MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... further complicated by the failure of "Future System"
https://www.amazon.com/Computer-Wars-Future-Global-Technology/dp/0812923006/

"and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive."

... snip ...

future system refs:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

... and 20yrs later, IBM has one of the largest losses in the history
of US corporations and it looked like it might be the end; IBM being
re-orged into the 13 "baby blues" in preparation for breaking up the
company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left the company, but get a call from the bowels of
Armonk asking if we could help with the company breakup. Before we get
started, the board brings in the former AMEX president as CEO, who
(somewhat) reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Mid-90s, the financial industry was expanding globally and spending
billions redoing batch cobol overnight settlement (some of it
originating from the 60s) ... the combination of increased business
and globalization shortening the overnight window meant settlement
wasn't getting done in the time available. They were going to
straight-through financial processing on large numbers of parallel
"killer micros". Some of us tried to point out that the standard
parallelization libraries being used had a hundred times the overhead
of batch cobol ... and were ignored ... until some major pilots went
down in throughput flames.

After the turn of the century I was helping somebody that had done a
high-level financial processing language that translated
specifications into (parallelizable) fine-grain SQL statements for
execution. Also in the late 90s, i86 processor makers had gone to a
hardware layer that translated i86 instructions into RISC micro-ops,
largely negating the throughput difference between i86 and RISC.


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     z900 processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC)

2003 max. configured z990, 32 processor aggregate 9BIPS (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

In the same period, major (non-mainframe) RDBMS vendors (including
IBM) had done significant optimization work on parallelizing RDBMS
cluster operation. In 2003, we demo'ed a six-system parallel RDBMS
cluster, each system a four-processor Pentium4 multiprocessor (each
Pentium4 the equivalent of a max-configured z990, each system the
equivalent of four max-configured z990s, the six together the
equivalent of 24 max-configured z990s ... or 232.8BIPS, an aggregate
more than the current max-configured z16). Using the financial
processing language, we implemented the equivalent "straight-through"
processing of several existing major production (overnight batch
window) systems, with throughput greatly exceeding any existing
requirement. This was taken to major financial industry meetings,
initially with great acceptance ... then brick wall. Eventually we
were told that executives still bore the scars of the 90s attempts,
and it would be a long time before it was tried again.

some recent posts mentioning "straight-through" processing implementation
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2023g.html#12 Vintage Future System
https://www.garlic.com/~lynn/2022g.html#69 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#3 Final Rules of Thumb on How Computing Affects Organizations and People
https://www.garlic.com/~lynn/2021k.html#123 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros

Turn of the century, IBM mainframe hardware sales had dropped to a few
percent of total revenue (compared to over half in the 80s). In the
z12 time-frame, it was down to a couple percent (and still dropping),
but the mainframe group was 25% of total revenue (and 40% of profit)
... nearly all software & services.

I/O trivia: in 1980 I was con'ed into doing channel-extender support
for STL (since renamed SVL), which was moving 300 people from the IMS
group to an offsite bldg with service back to the STL datacenter. They
had tried "remote 3270", but found the human factors unacceptable.
Channel-extender allowed placing channel-attached 3270 controllers at
the offsite bldg with no perceptible difference in human factors
between offsite and inside STL (and some tweaks with channel-extender
increased system throughput by 10-15%, prompting the suggestion that
all their systems should use channel-extender). Then some POK
engineers playing with some serial stuff blocked the release of the
support to customers.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

Later in 1988, the IBM branch office asked if I could help LLNL
(national lab) get some serial stuff they were playing with,
standardized. It quickly becomes the "fibre-channel" standard ("FCS",
including some stuff I had done in 1980), initially 1gbit/sec,
full-duplex, 200mbyte/sec aggregate. Then the POK stuff (after more
than a decade) finally gets released with ES/9000 as ESCON (when it is
already obsolete), 17mbytes/sec.

Then some POK engineers get involved in FCS and define a heavy-weight
protocol that significantly cuts the native throughput, which
eventually ships as FICON (running over FCS). The latest public
benchmark I can find is z196 "Peak I/O" getting 2M IOPS with 104
FICON. About the same time, an FCS was announced for E5-2600 server
blades claiming over a million IOPS (two such FCS having higher
throughput than 104 FICON). Also, IBM pubs recommend limiting SAPs
(system assist processors that actually do the I/O) to 70% CPU
... which would be around 1.5M IOPS. Further complicating things are
CKD DASD, which haven't been made for decades, needing to be simulated
on industry-standard fixed-block disks.

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022

2010 max configured z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)
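
The per-processor figures in these tables are just aggregate
throughput divided by processor count; a quick cross-check of a few
rows (numbers copied from the table above):

#include <stdio.h>

int main(void)
{
    /* rows copied from the z-series table above */
    struct { const char *name; int procs; double bips; } z[] = {
        { "z900", 16,  2.5 }, { "z990", 32,  9.0 },
        { "z10",  64, 30.0 }, { "z196", 80, 50.0 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-5s %3d procs %6.1f BIPS -> %4.0f MIPS/proc\n",
               z[i].name, z[i].procs, z[i].bips,
               z[i].bips * 1000.0 / z[i].procs);  /* BIPS/procs in MIPS */
    return 0;
}

which reproduces the 156, 281, 469, and 625 MIPS/processor figures.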

The 2010 E5-2600 server blade was ten times a max-configured z196 and
still more than twice the current max-configured z16 (the current
generation of server blades is closer to 40 times a max-configured
z16).

reference to some discussion about performance technologies
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2022h.html#116 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2019e.html#102 MIPS chart for all IBM hardware model
https://www.garlic.com/~lynn/2016f.html#91 ABO Automatic Binary Optimizer
https://www.garlic.com/~lynn/2016e.html#38 How the internet was invented
https://www.garlic.com/~lynn/2014m.html#164 Slushware
https://www.garlic.com/~lynn/2014l.html#90 What's the difference between doing performance in a mainframe environment versus doing in others
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer
https://www.garlic.com/~lynn/2014c.html#96 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2013i.html#33 DRAM is the new Bulk Core
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's?

--
virtualization experience starting Jan1968, online at home since Mar1970

ReBoot Hill Revisited

From: Lynn Wheeler <lynn@garlic.com>
Subject: ReBoot Hill Revisited
Date: 16 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited

Attractive Alternatives to Mainframes Are Breaking Their Decades-Old
Hold on Wall Street
https://web.archive.org/web/20120125090143/http://www.wallstreetandtech.com/operations/197007742

... before we left IBM (before our ha/cmp cluster scale-up was
transferred for announce as an IBM supercomputer for
technical/scientific *only* and we were told we couldn't work on
anything with more than four processors), we did a number of calls on
NYSE and SIAC ... part of it was their need for more processor power
... and HA/CMP would be capable of 128-processor RS/6000 clusters
doing commercial RDBMS as well as technical/scientific work.


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS (128*126MIPS = 16BIPS)

Hardware reliability had been increasing and service outages were
increasingly shifting to environmental factors (earthquakes,
hurricanes, floods); we were doing replicated systems and I had coined
the terms disaster survivability and geographic survivability when out
marketing. The IBM (rebranded) S/88 product administrator was taking
us into their customers. They had also gotten me to write a section
for the corporate continuous availability strategy document (but it
got pulled when both Rochester/AS400 and POK/mainframe complained that
they couldn't meet the objectives).

We had been brought into NYSE and SIAC; they had a datacenter very
carefully located in NYC in a building supplied by multiple water,
power, and telco sources that traveled different routes past the
building. NYSE/SIAC was taken out when a transformer exploded in the
basement, contaminating the bldg with PCBs.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Cobol

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cobol
Date: 17 Apr, 2024
Blog: Facebook

Turn of the century, I was brought into a large financial outsourcing
datacenter that handled over half of all (issuing/consumer) credit
card accounts in the US (real-time auths, statementing, call-centers,
etc) ... it had 40+ max-configured IBM mainframe systems (constant
rolling upgrades, none older than 18months), all running the same
450K-statement cobol application (the number of systems needed to
finish batch settlement in the overnight window). They had a large
group that had been supporting performance care and feeding for a
couple decades ... but possibly had gotten a little myopic.

I offered to use some different performance analysis techniques (from
the IBM science center in the 70s) ... and was able to identify a 14%
improvement (including finding a large, complex operation that was
using three times the expected processing; it turned out to be invoked
three different times instead of just once) ... representing the
savings of six max-configured mainframes (at the time, going rate
around $30M each). They had other datacenters that handled 70% of all
acquiring (merchant) credit card processing.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

past posts mentioning financial outsourcing and 450k statement cobol
application handling over half of all issuing/consumer credit card
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2024.html#26 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2023g.html#87 Mainframe Performance Analysis
https://www.garlic.com/~lynn/2023g.html#50 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023c.html#99 Account Transaction Update
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#68 How Gerstner Rebuilt IBM
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#49 IBM CEO
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#11 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#62 Cobol
https://www.garlic.com/~lynn/2018f.html#13 IBM today
https://www.garlic.com/~lynn/2018d.html#43 How IBM Was Left Behind
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017k.html#57 When did the home computer die?
https://www.garlic.com/~lynn/2017h.html#18 IBM RAS
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014b.html#83 CPU time
https://www.garlic.com/~lynn/2013h.html#42 The Mainframe is "Alive and Kicking"
https://www.garlic.com/~lynn/2013b.html#45 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
https://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
https://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?

--
virtualization experience starting Jan1968, online at home since Mar1970

Cobol

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cobol
Date: 17 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#4 Cobol

the financial services company had once been a unit of AMEX, but in
1992 it was spun off in the largest IPO up until that time ... the
same time that IBM looked about at its end, having one of the largest
losses in the history of US corporations and being reorged into the 13
"baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the company breakup. Before we get
started, the board brings in the former president of Amex (that the
financial services company had previously reported to) as CEO, who
(somewhat) reverses the breakup (although it wasn't long before the
disk division is gone)

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Testing
Date: 17 Apr, 2024
Blog: Facebook

IBM's 23jun1969 unbundling announcement started charging for
(application) software (IBM managed to make the case that kernel
software could still be free), system engineers (SE), maintenance, etc.

after graduation I joined the IBM science center and one of my hobbies
was enhanced production operating systems for internal datacenters.

with the decision to add virtual memory to all 370s (basically MVT
storage management was so bad that regions were specified four times
larger than used, and a 1mbyte 370/165 typically only ran four
concurrent regions, insufficient to keep the system busy and
justified; going to running MVT in a 16mbyte address space ... similar
to running MVT in a 16mbyte virtual machine, aka VS2/SVS ... would
allow the number of concurrently running regions to be increased by a
factor of four with little or no paging), the first thing done was
enhancing CP67 to optionally support 370 virtual machines with 370
virtual memory ... and modifying a CP67 to run on the 370 virtual
memory architecture (this was in regular production use for a year
before the 1st engineering 370 with virtual memory was operational; in
fact the CP67-370 was used as part of validating that engineering
machine). Then there was a decision to release a VM370 product, and in
the morph from CP67->VM370 a lot of features were dropped or
simplified.

I had also done an automated benchmarking process ... run a specified
script giving the number of simulated users with specified execution
profiles (as part of automated benchmarking I had also done the
"autolog" command, which also came to be used for automating lots of
standard production operation), with automated system reboot between
each benchmark. With more internal datacenters installing VM370, in
early 1974 I started migrating lots of CP67 features to VM370
Release 2 ... initially I found the VM370 automated benchmarking was
consistently crashing VM370 ... so the next thing I migrated was the
CP67 kernel synchronization&serialization ... in order to complete a
full set of benchmarks w/o VM370 constantly crashing. Towards the end
of 1974, I had a VM370 R2-based production "CSC/VM" (for internal
datacenters).

Also in the period, IBM took a sharp swerve with Future System
... which was completely different from 370 and was going to
completely replace it. Internal politics during the FS period were
also killing off 370 efforts, and the lack of new IBM 370s during the
period is credited with giving the clone 370 makers their market
foothold. When FS finally implodes, there is a mad rush to get stuff
back into the 370 product pipeline, including kicking off the
quick&dirty 3033&3081 efforts in parallel. some more detail
http://www.jfsowa.com/computer/memo125.htm

With the demise of FS (and the rise of the 370 clone makers), it was
decided to start the transition to kernel software charging
... beginning with new kernel code "add-ons" (the transition completed
in the 1st half of the 80s) ... and much of my internal "CSC/VM" was
selected as the guinea pig (I also got to spend lots of time with
business planners and lawyers on kernel software charging practices).

Part of my release for kernel software add-on charging (with some
focus on the dynamic adaptive resource manager & scheduler that I had
done as an undergraduate) was 2000 automated validation benchmarks
that took 3 months elapsed time to run. The science center had years
of system activity monitoring data for a large number of different
systems ... and created a multiple-dimension system activity
specification (a uniform distribution of different combinations of
number of users, amounts of real storage available, paging, working
set sizes, file I/O, CPU-intensive, etc) with several benchmarks
outside normally observed activity ... for the 1st 1000 benchmarks (a
small sketch of the idea follows).
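
A minimal sketch of generating such a specification (the dimension
values here are invented; the real ones came from years of monitoring
data): take the cross-product of a few activity dimensions, uniformly
covering the observed ranges, plus a handful of points deliberately
outside them.

#include <stdio.h>

int main(void)
{
    /* illustrative dimension values; the real specification came from
       years of system activity data across many systems */
    int    users[]   = { 20, 40, 80, 120 };     /* simulated users */
    double wset[]    = { 0.5, 1.0, 1.5 };       /* working set / real storage */
    double cpu_mix[] = { 0.25, 0.50, 0.75 };    /* fraction CPU-intensive */
    int n = 0;

    /* uniform coverage: cross-product of observed ranges */
    for (int u = 0; u < 4; u++)
        for (int w = 0; w < 3; w++)
            for (int c = 0; c < 3; c++)
                printf("bench %2d: %3d users, wset %.1fx, cpu %.2f\n",
                       ++n, users[u], wset[w], cpu_mix[c]);

    /* several benchmarks outside normally observed activity */
    printf("bench %2d: 300 users, wset 3.0x, cpu 0.95 (outlier)\n", ++n);
    printf("bench %2d:   5 users, wset 0.1x, cpu 0.05 (outlier)\n", ++n);
    return 0;
}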

Also done at the science center was an APL-based analytical system
model. This was made available on the world-wide, online
sales&marketing HONE system as the Performance Predictor; branch
people could enter customer configuration and workload profile data
and ask "what-if" questions about configuration and/or workload
changes. The US HONE systems had been consolidated in silicon valley,
resulting in the largest loosely-coupled, shared-DASD complex, with
fall-over and load-balancing ... where a modified version of the
APL-based model made the load-balancing decisions.

Another modified version of the APL-based model would predict the
result of each of the 1st 1000 benchmarks and then check the
prediction against the actual results (somewhat validating both the
model and my dynamic adaptive implementation). The APL-based model was
then modified to specify the benchmark profile for each of the 2nd
1000 benchmarks, looking at the results of all benchmarks run so far
... searching for possible anomalies (a toy example of the "what-if"
style of analytic model appears below).
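
The Predictor itself was an APL-based analytical model and its
internals aren't reproduced here; purely as a flavor of the "what-if"
style, a toy analytic sketch using the textbook open M/M/1 response
time formula R = S/(1 - U), with all numbers invented:

#include <stdio.h>

/* toy "what-if": response time R = S/(1-U), utilization
   U = arrival rate * service time; purely illustrative, not the
   actual HONE Performance Predictor model */
static double resp(double users, double think, double service)
{
    double lambda = users / think;      /* transactions per second */
    double util = lambda * service;
    return (util < 1.0) ? service / (1.0 - util) : -1.0;
}

int main(void)
{
    double think = 10.0, service = 0.05;    /* seconds; invented */
    for (int users = 50; users <= 200; users += 50) {
        double r = resp(users, think, service);
        if (r < 0) printf("%3d users: saturated\n", users);
        else       printf("%3d users: ~%.3f sec response\n", users, r);
    }
    /* "what-if": double the processor speed -> halve service time */
    printf("150 users, 2x CPU: ~%.3f sec response\n",
           resp(150, think, service / 2));
    return 0;
}

The "what-if" question is answered by re-evaluating the model with a
changed input (here, halving the service time), the same way branch
people could vary configuration or workload profiles.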

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management and scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging, page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone

some recent performance predictor specific posts
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#7 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history

--
virtualization experience starting Jan1968, online at home since Mar1970

Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Testing
Date: 18 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#6 Testing

other trivia: the last product we did at IBM was HA/CMP. It started
out as HA/6000, for the NYTimes to move their newspaper system (ATEX)
off VAXCluster to RS/6000. I renamed it HA/CMP when we started doing
technical/scientific cluster scale-up with national labs and
commercial cluster scale-up with RDBMS vendors (Oracle, Sybase,
Informix, Ingres ... which had both VAXCluster and UNIX support in the
same source base). Lots of studies on why things fail. In part,
commodity hardware was becoming more reliable and service outages were
increasingly shifting to other factors like earthquakes, floods,
hurricanes, etc ... so we had to include replicated systems at
different locations (less likely to be subject to common events)
... out marketing I coined the terms disaster survivability and
geographic survivability. The IBM S/88 product administrator started
taking us around to their customers and also had me write a section
for the corporate continuous availability strategy document (but it
got pulled when both Rochester/AS400 and POK/mainframe complained they
couldn't meet the objectives).

Early Jan1992, in a meeting with Oracle, IBM AWD/Hester told the
Oracle CEO that IBM would have 16-processor HA/CMP clusters by mid92
and 128-processor HA/CMP clusters by ye92. I was then briefing IBM
(gov) FSD about HA/CMP and they apparently told the Kingston
supercomputer group that they were going with HA/CMP for gov.
customers. Then, end of Jan92, we were told that cluster scale-up was
being transferred to Kingston for announce as an IBM supercomputer
(for technical/scientific *ONLY*) and that we couldn't work with
anything that had more than four processors (we leave IBM a few months
later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

.. when I transferred to SJR in the 70s, I got to wander around IBM
& non-IBM datacenters, including disk engineering (bldg14) and disk
product test (bldg15) across the street. They were doing prescheduled,
around-the-clock, stand-alone mainframe testing (they said they had
recently tried MVS, but MVS had a 15min mean-time-between-failures
... requiring manual re-ipl ... in that environment). I offered to
rewrite the I/O supervisor to make it bulletproof and never fail,
allowing any amount of on-demand, concurrent testing and improving
productivity ... the downside was that they increasingly blamed me for
problems and I had to spend increasing amounts of time playing disk
engineer, diagnosing their hardware problems. Engineering & Product
Test were completely separated; the departments didn't report to
common management until the executive level ... and members didn't
have badge access to each others' machine rooms and bldgs (since I
provided the mainframe systems for both bldgs, my badge was enabled
for access in both; I assume that, not being in the disk division, I
wasn't subject to the separation rules).

getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

repost from over in another FACEBOOK group

The Birth OF SQL
https://www.youtube.com/watch?v=z8L202FlmD4&si=FHDLe1v_QZNUHZwM

.. when I transferred to SJR in the 70s, they were doing the original
SQL/relational, "System/R", on a vm370 370/145 there ... worked with
Jim Gray and Vera Watson. Some amount of conflict with STL and the
mainstream DBMS "IMS" ... then the company was working on the next
great DBMS, "EAGLE" ... and we were able to do tech transfer (under
the radar) to Endicott for SQL/DS. Then, when "EAGLE" implodes, there
is a request for how fast "System/R" could be ported from VM/370 to
MVS ... which eventually ships as DB2, originally for decision support
*only*.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

AI-controlled F-16

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI-controlled F-16
Date: 20 Apr, 2024
Blog: Facebook

Following AESA radar first flight on F-16, Aselsan eyes 5th-gen
https://breakingdefense.com/2024/03/following-aesa-radar-first-flight-on-f-16-aselsan-eyes-5th-gen-aircraft-integration/
US Air Force Secretary to fly in AI-piloted F16 to demonstrate safety
https://interestingengineering.com/military/usaf-to-fly-ai-controlled-f16
US Air Force Secretary to fly in AI-controlled F-16
https://www.theregister.com/2024/04/10/usaf_ai_f16_tests/
US Air Force says AI-controlled F-16 has fought humans
https://www.theregister.com/2024/04/18/darpa_f16_flight/

I was introduced to John Boyd in the early 80s and would sponsor his
briefings. He was largely responsible for LWF ... he would say he used
his E-M theory on the original F15 design (which supposedly started
out as an F-111 follow-on with swing wing), showing that the weight of
the pivot more than offset the advantage of the swing wing.
https://en.wikipedia.org/wiki/Lightweight_Fighter_program
and then YF16 and YF17
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon#Lightweight_Fighter_program

In the late 1960s, Boyd gathered a group of like-minded innovators who
became known as the Fighter Mafia, and in 1969, they secured
Department of Defense funding for General Dynamics and Northrop to
study design concepts based on the theory.[13][14]

... snip ...

YF16 with relaxed stability requiring "fly-by-wire" that was fast
enough for flight control surfaces
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon#Relaxed_stability_and_fly-by-wire
https://en.wikipedia.org/wiki/Relaxed_stability
https://fightson.net/150/general-dynamics-f-16-fighting-falcon/

The F-16 is the first production fighter aircraft intentionally
designed to be slightly aerodynamically unstable, also known as
"relaxed static stability" (RSS), to improve manoeuvrability. Most
aircraft are designed with positive static stability, which induces
aircraft to return to straight and level flight attitude if the pilot
releases the controls; this reduces manoeuvrability as the inherent
stability has to be overcome. Aircraft with negative stability are
designed to deviate from controlled flight and thus be more
maneuverable. At supersonic speeds the F-16 gains stability
(eventually positive) due to aerodynamic changes.

... snip ...

misc. other
http://www.aviation-history.com/airmen/boyd.htm
https://www.nytimes.com/2003/03/09/books/40-second-man.html
https://www.nytimes.com/1997/03/13/us/col-john-boyd-is-dead-at-70-advanced-air-combat-tactics.html
https://www.usni.org/magazines/proceedings/1997/july/genghis-john

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

Around 2010, there were online social media claims that the F-35 was
stealth and would replace F-15s, F-16s, F-18s, EA-18s, and A10s. Later
in the decade, I found some analysis showing it was less stealthy than
claimed, and saw the claims changed to "low observable".
https://www.ausairpower.net/APA-2009-01.html
http://www.ausairpower.net/jsf.html
http://www.ausairpower.net/APA-JSF-Analysis.html

Then I found an online 2011 radar tutorial that made claims about the
processing power needed to do real-time recognition of low-observable
F-35 radar signatures (more than was available at the time ... however,
that fall, articles appeared about self-driving cars claiming the
processing power used was 100 times the 2011 claim for real-time F-35
radar signature recognition). Then, within a year, articles appeared
announcing that new radar jamming pods were being delivered for EA-18s
to handle frequencies that could be used to target F-35s.

Posts mentioning F-35 "stealth" and 2011 radar tutorial
https://www.garlic.com/~lynn/2022f.html#9 China VSLI Foundry
https://www.garlic.com/~lynn/2022e.html#101 The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them
https://www.garlic.com/~lynn/2019e.html#53 Stealthy no more? A German radar vendor says it tracked the F-35 jet in 2018 -- from a pony farm
https://www.garlic.com/~lynn/2019d.html#104 F-35
https://www.garlic.com/~lynn/2018f.html#83 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018b.html#86 Lawmakers to Military: Don't Buy Another 'Money Pit' Like F-35
https://www.garlic.com/~lynn/2017i.html#78 F-35 Multi-Role

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing and the Dark Age of American Manufacturing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing and the Dark Age of American Manufacturing
Date: 21 Apr, 2024
Blog: Facebook

Boeing and the Dark Age of American Manufacturing. Somewhere along the
line, the plane maker lost interest in making its own planes. Can it
rediscover its engineering soul?
https://www.theatlantic.com/ideas/archive/2024/04/boeing-corporate-america-manufacturing/678137/

I took a two-credit-hr intro to fortran/computers and at the end of
the semester was hired to rewrite 1401 MPIO in 360 assembler for the
360/30 ... the univ. was getting a 360/67 to replace the 709/1401; a
360/30 temporarily replaced the 1401 (getting the 360/30 for 360
experience) pending delivery of the 360/67. The 360/67 arrived within
a year of my taking the intro class and I was hired fulltime,
responsible for os/360.

Then, before I graduate, I'm hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services ... I think the Renton datacenter was possibly the largest in
the world, with 360/65s arriving faster than they could be installed,
boxes constantly staged in the hallways around the machine room. Lots
of politics between the Renton director and the CFO, who only had a
360/30 up at Boeing field for payroll (although they enlarge the room
for a 360/67 for me to play with when I'm not doing other stuff).
747#3 was flying the skies of Seattle getting FAA flt certification.
There was also a disaster plan to replicate Renton up at the new 747
plant in Everett (in case Mt. Rainier heats up and the resulting mud
slide takes out Renton). When I graduate, I join the IBM science
center instead of staying with the Boeing CFO.

IBM science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

recent Boyd post
https://www.garlic.com/~lynn/2024c.html#8 AI-controlled F-16

Boyd told a story about being vocal that the electronics across the
trail wouldn't work ... he is then put in command of "spook base"
(about the same time I'm at Boeing).
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

Boyd's biography has "spook base" as a $2.5B windfall for IBM (ten
times Renton).

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

Did Stock Buybacks Knock the Bolts Out of Boeing?
https://lesleopold.substack.com/p/did-stock-buybacks-knock-the-bolts

Since 2013, the Boeing Corporation initiated seven annual stock
buybacks. Much of Boeing's stock is owned by large investment firms
which demand the company buy back its shares. When Boeing makes
repurchases, the price of its stock is jacked up, which is a quick and
easy way to move money into the investment firms' purse. Boeing's
management also enjoys the boost in price, since nearly all of their
executive compensation comes from stock incentives. When the stock
goes up via repurchases, they get richer, even though Boeing isn't
making any more money.

... snip ...

In 2016, one of "The Boeing Century" articles was about how the
merger with MD had nearly taken down Boeing and may yet still (the
infusion of military-industrial-complex culture into a commercial
operation)

The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout

Unlike Boeing, McDonnell Douglas was run by financiers rather than
engineers. And though Boeing was the buyer, McDonnell Douglas
executives somehow took power in what analysts started calling a
"reverse takeover." The joke in Seattle was, "McDonnell Douglas bought
Boeing with Boeing's money."

... snip ...

Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution

Sorscher had spent the early aughts campaigning to preserve the
company's estimable engineering legacy. He had mountains of evidence
to support his position, mostly acquired via Boeing's 1997 acquisition
of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft
plant in Long Beach and a CEO who liked to use what he called the
"Hollywood model" for dealing with engineers: Hire them for a few
months when project deadlines are nigh, fire them when you need to
make numbers. In 2000, Boeing's engineers staged a 40-day strike over
the McDonnell deal's fallout; while they won major material
concessions from management, they lost the culture war. They also
inherited a notoriously dysfunctional product line from the
corner-cutting market gurus at McDonnell.

... snip ...

Boeing's travails show what's wrong with modern
capitalism. Deregulation means a company once run by engineers is now
in the thrall of financiers and its stock remains high even as its
planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation

stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buybacks
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

Recent posts mentioning Boeing CFO, Boeing Computer Services, Renton
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying

some posts mentioning M/D financiers taking over Boeing
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021e.html#87 Congress demands records from Boeing to investigate lapses in production quality
https://www.garlic.com/~lynn/2021b.html#70 Boeing CEO Said Board Moved Quickly on MAX Safety; New Details Suggest Otherwise
https://www.garlic.com/~lynn/2021b.html#40 IBM & Boeing run by Financiers
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT:  Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019e.html#39 Crash Course
https://www.garlic.com/~lynn/2019e.html#33 Boeing's travails show what's wrong with modern capitalism
https://www.garlic.com/~lynn/2019d.html#39 The Roots of Boeing's 737 Max Crisis: A Regulator Relaxes Its Oversight
https://www.garlic.com/~lynn/2019d.html#20 The Coming Boeing Bailout?

--
virtualization experience starting Jan1968, online at home since Mar1970

AI-controlled F-16

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI-controlled F-16
Date: 21 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#8 AI-controlled F-16
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing

The USAF Pairs Piloted Jets With AI Drones. Has AI spawned the
ultimate "loyal wingman"--or just the next smart weapon?
https://spectrum.ieee.org/military-drones-us-air-force

2021 post/article mentioning loyal wingman/Valkyrie
https://www.garlic.com/~lynn/2021j.html#67 A Mini F-35?: Don't Go Crazy Over the Air Force's Stealth XQ-58A Valkyrie
A Mini F-35?: Don't Go Crazy Over the Air Force's Stealth XQ-58A
Valkyrie
https://nationalinterest.org/blog/buzz/mini-f-35-dont-go-crazy-over-air-forces-stealth-xq-58a-valkyrie-46527

While the Air Force refused to disclose specifics of the XQ-58A, the
drone is billed as having long range and a "high subsonic" speed. It
is designed to be "runway independent," which suggests it will be
flown from rough airstrips and forward bases. Still more clues can be
found in a $40.8 million Air Force contract awarded to Kratos in 2016
under the Low-Cost Attritable Strike Unmanned Aerial System
Demonstration program. That contract called for a drone with a top
speed of Mach 0.9 (691 miles per hour), a 1,500-mile combat radius
carrying a 500-pound payload, the capability to carry two GBU-39 small
diameter bombs, and costing $2 million apiece when in mass production
(an F-35 costs around $100 million).

... snip ...

... at one point, the F-35 price was so unreasonable they started
quoting the plane w/o engine, with a separate price for the engine.

I was introduced to John Boyd in the early 80s and would sponsor his
briefings. One of Boyd's stories was about being asked to review the
USAF's newest air-to-air missile before Vietnam. They showed him a
film where the missile hit flares on a drone every time. He asked them
to rewind the film and then, just before the missile hits, had them
stop the film and asked what kind of guidance. They eventually say
heat-seeking; he then asks what kind of heat-seeking and gets them to
eventually say "pin-point". He then asks them where the hottest part
of a jet plane is. They answer the engine ... he says wrong, it is the
plume some 30yds behind the plane ... aka the missile would be lucky
to hit 10% of the time (they gather up all their material and
leave). Roll forward to Vietnam and Boyd is proved correct. At some
point the USAF commanding general in Vietnam has all the fighters
grounded until the USAF missiles are replaced with Navy Sidewinders
(which have better than twice the hit rate). The general lasts 3months
before he is called on the carpet back in the Pentagon for violating a
cardinal (USAF) Pentagon rule: cutting the (USAF) budget (by not using
USAF missiles) and, what was much worse, increasing the Navy budget.

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Multiprocessor

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Multiprocessor
Date: 21 Apr, 2024
Blog: Facebook

Charlie had invented compare&swap when doing CP67 multiprocessor
fine-grain locking support at the science center. When we tried to get
the 370 architecture owners to include compare&swap in 370, they said
that the POK favorite-son operating system owners (MVT, then SVS&MVS)
had said the (360) "test&set" instruction was more than sufficient; if
compare&swap was to be justified, we had to come up with
justifications that weren't multiprocessor-specific; thus were born
the examples for application multithreading/multiprogramming use (like
DBMS) ... a sketch of the pattern follows the link below.

SMP, multiprocessor, tightly-coupled, and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
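
A minimal sketch of the non-multiprocessor-specific pattern (C11
atomics standing in for the 370 CS instruction; not the actual
Principles of Operation example code): update a shared value safely
even when execution can be interrupted between fetching the old value
and storing the new one.

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long counter;

/* compare-and-swap update loop: valid for multithreaded applications
   even on a single processor, where an interrupt and redispatch can
   occur between the load and the store */
static void add_to_counter(long delta)
{
    long old = atomic_load(&counter);
    /* if anyone changed counter since the load, the CAS fails,
       refreshes 'old' with the current value, and we retry */
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;
}

int main(void)
{
    add_to_counter(5);
    add_to_counter(-2);
    printf("counter = %ld\n", atomic_load(&counter));
    return 0;
}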

A decade ago, I was asked to track down the decision to add virtual
memory to all 370s; basically MVT storage management was so bad that
regions had to be specified four times larger than used, so a 1mbyte
370/165 typically ran only four concurrent regions ... insufficient to
keep the system busy and justified. Going to a 16mbyte virtual address
space ("SVS", similar to running MVT in a CP67 16mbyte virtual
machine) could increase the number of concurrently running regions by
a factor of four, with little or no paging. The 370 virtual memory
decision also resulted in doing VM370, and in the morph of
CP67->VM370, they simplified and/or dropped lots of features
(including multiprocessor support).

archived posts with pieces of email exchange
https://www.garlic.com/~lynn/2011d.html#73

One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (the online sales&marketing support
US HONE systems were a long-time customer, from CP67 days, evolving
into world-wide VM370). As internal datacenters were migrating to
VM370, in 1974 I started moving a lot of the missing CP67 features to
a release2-based VM370 production "CSC/VM" ... which included the
kernel re-organization for multiprocessing ... but not the actual
multiprocessor support.

The US HONE datacenters were consolidated in silicon valley with the
largest loosely-coupled, shared-DASD configuration, including
load-balancing and fall-over support. Then I added multiprocessor
support to a Release3-based VM370 "CSC/VM", initially for US HONE so
they could add a second processor to each system (for eight
tightly-coupled systems in a loosely-coupled configuration). I did
some tricks with highly optimized multiprocessor pathlengths coupled
with some processor cache affinity tricks (improving cache-hit rates
and processor throughput, offsetting the multiprocessor pathlengths),
showing twice the throughput of a single processor (this at a time
when MVS documentation was giving MVS two-processor throughput as
1.2-1.5 times the throughput of a single processor).

CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#csc/vm
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

trivia: when facebook 1st moves into silicon valley, it is into a new
bldg built next door to the former US HONE datacenter.

other trivia: around 2010, I made some joke about "from the annals of
releasing no software before its time" when z/VM finally released
similar loosely-coupled support.

more trivia: after "future system" imploded (it was going to replace
all 370s, and the lack of new 370s during the period is credited with
giving 370 clone makers their market foothold)
http://www.jfsowa.com/computer/memo125.htm
I got roped into helping with a 16-processor tightly-coupled,
multiprocessor 370 ... and we con the 3033 processor engineers into
working on it in their spare time (a lot more interesting than remapping
168 logic to 20% faster chips). Everybody thought it was great until
somebody tells the head of POK that it could be decades before the POK
favorite son operating system (MVS) had effective 16-processor support
(with 2-processor MVS only 1.2-1.5 times the throughput of a single
processor and, if not careful, multiprocessor overhead growing
non-linearly with the number of processors). The head of POK then
directs that some of us never visit POK again and that the 3033
processor engineers keep concentrated on 3033 (... and POK doesn't
ship a 16-processor system until after the turn of the century).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Multiprocessor

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Multiprocessor
Date: 22 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#11 370 Multiprocessor

3033 started out as 168 logic remapped to 20% faster chips (with
somewhat more circuits/chip) ... the 303x channel director was a 158
engine with just the integrated channel microcode (for six channels)
and w/o the 370 microcode ... to get the full 16 channels required
three channel director boxes.

A 3031 was two 158 engines... one with only the 370 microcode and a
2nd with just the integrated channel microcode.

A 3032 was 168 using the channel director box for external channels.

Trivia: the (original) 168 external channels were actually faster than
the 303x channel director box (i.e. 158 engine with just the
integrated channel microcode)

final(?) trivia: compare-and-swap was chosen because "CAS" were
Charlie's initials

360 had 2301&2303 "drums" ... 2305-1 & 2305-2 were fixed-head
disks. 2301 was similar to 2303 ... same capacity but read/write with
four heads in parallel ... 1/4 the number of tracks, each track four
times larger, four times the transfer rate.

2305-1: 5.4mbytes, avg rotational delay 2.5msecs, 3mbyte/sec
transfer. Most installed were 2305-2: 11.2mbytes, avg rotational delay
5msecs, 1.5mbyte/sec transfer.

2305-1 had the same number of heads as 2305-2, but the heads were
paired, offset 180 degrees, reading/writing simultaneously and
transferring on a 2-byte channel. The start of a record had to rotate
on average only 1/4 revolution before coming under one of the (offset)
pair of heads.
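
back-of-envelope C sketch (the 10msec/revolution figure is an
assumption, derived from the 5msec avg delay above) of why the paired,
180-degree offset heads halve the average rotational delay:

#include <stdio.h>

int main(void)
{
    double rev_msec = 10.0;   /* assumed: one revolution = 10 msec */

    /* single head per track: record start uniformly distributed,
       average wait is half a revolution */
    printf("2305-2 avg rotational delay: %.1f msec\n", rev_msec / 2);

    /* heads paired, offset 180 degrees: a record start is never more
       than half a revolution from some head, so average wait drops
       to a quarter revolution */
    printf("2305-1 avg rotational delay: %.1f msec\n", rev_msec / 4);
    return 0;
}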

URL was still there in 2023 ... but is now gone ("404") ... easiest to
just go to the wayback machine
https://web.archive.org/web/20230821125023/https://www.ibm.com/ibm/history/exhibits/storage/storage_2305.html

By 1980, there was no follow-on product. For internal datacenters, IBM
then contracted with a vendor for what they called "1655", electronic
disks that would emulate a 2305 ... but had no rotational delay. One
of the issues was that while IBM had fixed-block disks, the company
favorite son batch operating system never supported anything other
than CKD DASD ... so for their use it had to simulate an existing CKD
2305 running over 1.5mbyte I/O channels. However for other IBM systems
that supported FBA ... 1655s could be configured as fixed-block disk
running on 3mbyte/sec I/O channels ... similar to SSD ... but had
standard electronic memory that wasn't persistent w/o power.

posts mentioning DASD, CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

past posts mentioning 2301, 2305, and 1655
https://www.garlic.com/~lynn/2022e.html#41 Wall Street's Plot to Seize the White House
https://www.garlic.com/~lynn/2012c.html#1 Spontaneous conduction: The music man with no written plan
https://www.garlic.com/~lynn/2011c.html#48 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010q.html#67 ibm 2321 (data cell)
https://www.garlic.com/~lynn/2008s.html#39 The Internet's 100 Oldest Dot-Com Domains
https://www.garlic.com/~lynn/2008n.html#93 How did http get a port number as low as 80?
https://www.garlic.com/~lynn/2004c.html#5 PSW Sampling
https://www.garlic.com/~lynn/2003p.html#46 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2003n.html#52 Call-gate-like mechanism
https://www.garlic.com/~lynn/2003n.html#50 Call-gate-like mechanism
https://www.garlic.com/~lynn/2003m.html#35 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003j.html#58 atomic memory-operation question
https://www.garlic.com/~lynn/2003j.html#6 A Dark Day
https://www.garlic.com/~lynn/2003j.html#5 A Dark Day
https://www.garlic.com/~lynn/2003h.html#14 IBM system 370
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing and the Dark Age of American Manufacturing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing and the Dark Age of American Manufacturing
Date: 22 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing

Boeing's problems were as bad as you thought. Experts and
whistleblowers testified before Congress today. The upshot? "It was
all about money."
https://www.vox.com/money/2024/4/17/24133324/boeing-senate-hearings-whistleblower-sam-salehpour-congress

Boeing went under the magnifying glass at not one, but two Senate
hearings today examining allegations of deep-seated safety issues
plaguing the once-revered plane manufacturer. Witnesses, including two
whistleblowers, painted a disturbing picture of a company that cut
corners, ignored problems, and threatened employees who spoke up.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Bemer, ASCII, Brooks and Mythical Man Month

From: Lynn Wheeler <lynn@garlic.com>
Subject: Bemer, ASCII, Brooks and Mythical Man Month
Date: 24 Apr, 2024
Blog: Facebook

360s were supposed to be ASCII machines but the ASCII unit record gear
wasn't ready ... so they were (supposedly) going to temporarily use
the (old) BCD unit record gear with EBCDIC ... "the biggest computer goof
ever"
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

Unfortunately, the software for the 360 was constructed by thousands
of programmers, with great and unexpected difficulties, and with
considerable lack of controls. As a result, the nearly $300 million
worth of software (at first delivery!) was filled with coding that
depended upon the EBCDIC representation to work, and would not work
with any other! Dr. Frederick Brooks, one of the chief designers of
the IBM 360, informed me that IBM indeed made an estimate of how much
it would cost to provide a reworked set of software to run under
ASCII. The figure was $5 million, actually negligible compared to the
base cost. However, IBM (present-day note: Read "Learson") made the
decision not to take that action, and from this time the worldwide
position of IBM hardened to "any code as long as it is ours".

... snip ...

https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

above attributes it to Learson ... however, it was also Learson who
was trying to block the bureaucrats, careerists (and MBAs) from
destroying the Watson Legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
So by the early 90s, it was looking like it was nearly over; in 1992
IBM has one of the largest losses in the history of US corporations
and was being re-orged into the 13 "baby blues" in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of Amex that
(mostly) reverses the breakup (although it wasn't long before the disk
division is gone).

posts mentioning ASCII & Mythical Man Month
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2014g.html#99 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT

--
virtualization experience starting Jan1968, online at home since Mar1970

360&370 Unix (and other history)

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360&370 Unix (and other history)
Date: 24 Apr, 2024
Blog: Facebook

Trivia: the story was that both Amdahl & IBM field support claimed
they wouldn't support customer machines w/o industrial-strength EREP
... adding it to UNIX would have been several times the effort of just
doing the direct UNIX port to 370. SSUP was a stripped-down TSS/360
with just hardware and device support ... and EREP. Amdahl UTS and the
other 370 UNIX efforts ran in VM/370 (leveraging its EREP).

possibly more than you asked for

Took two credit hr intro to fortran/computers and end of semester was
hired to rewrite 1401 MPIO in assembler for 360/30. Univ replacing
709/1401 with a 360/67 for tss/360 ... temporarily the 1401 was
replaced with a 360/30 (pending availability of the 360/67; the 360/30
was for starting to get familiar with 360, and also had microcode 1401
emulation). The univ shutdown the datacenter on weekends and I would have
it dedicated, although 48hrs w/o sleep made Monday classes hard. They
gave me a bunch of hardware and software manuals and I got to design
and implement my own monitor, device drivers, interrupt handlers,
storage management, error recovery, etc. and within a few weeks had a
2000 card assembler program.

Then within a year of intro class, the 360/67 comes in and I'm hired
fulltime responsible for OS/360 (tss/360 never really came to
production, so ran as 360/65, I continue to have my 48hr dedicated
datacenter on weekends). Student fortran had run under a second on
709, initially on os/360 ran over a minute. I install HASP and it cuts
the time in half. I then start redoing OS/360 STAGE2 SYSGEN, carefully
placing datasets and PDS members to optimize arm seek and multi-track
search, cutting another 2/3rds to 12.9secs. Never got better than 709
until I install Univ. of Waterloo WATFOR.

CSC had come out to install CP67/CMS (precursor to vm370, 3rd
installation after CSC itself and MIT Lincoln Labs) and I mostly
played with it in my weekend dedicated time. Early on the IBM TSS/360
SE was around for a time and we created synthetic benchmark of fortran
edit, compile, & execute. Unmodified CP67/CMS ran 35 simulated users
with better response and throughput than TSS/360 did with four
simulated users.

Initially for CP67, I mostly worked on rewriting pathlengths for
running os/360 in virtual machine. OS/360 test ran 322 secs on "bare
machine", initially 856secs in virtual machine (CP67 CPU 534secs),
after a few months, got CP67 CPU down to 113secs (from 534secs). I
then redid I/O for paging (chained requests for optimized transfer per
revolution) and for all disk optimized ordered arm seek; new optimized
page replacement algorithm, and dynamic adaptive resource management
and scheduling.
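
the overhead arithmetic as a small C sketch (figures from above):
virtual-machine elapsed time minus bare-machine time is the CP67 CPU
overhead:

#include <stdio.h>

int main(void)
{
    double bare = 322.0;      /* OS/360 test on bare machine, secs */
    double virt = 856.0;      /* same test in CP67 virtual machine */
    double cp67 = virt - bare;            /* = 534 secs CP67 CPU   */

    printf("initial CP67 overhead: %.0f secs\n", cp67);
    printf("after pathlength rewrite: 113 secs (%.0f%% less)\n",
           100.0 * (cp67 - 113.0) / cp67);
    return 0;
}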

CP67 came with 2741&1052 terminal support with automagic terminal type
recognition (SAD CCW to switch the port terminal type scanner). The
univ. had some number of TTY/ASCII terminals and I integrated ASCII
terminal support with the automagic terminal type recognition (trivia:
the ASCII terminal support had come in a "HEATHKIT" box for install in
the IBM telecommunication controller). I then wanted a single dialup
telephone number ("hunt group") for all terminals. Didn't quite work:
while the terminal type scanner could be changed dynamically, IBM had
taken a short cut and hardwired the port line speed.

This kicks off a univ project to do a clone controller: build a
channel interface board for an Interdata/3 programmed to simulate the
IBM telecommunication controller (with the addition that it could do
dynamic line speed). Later it was upgraded to an Interdata/4 for the
channel interface and a cluster of Interdata/3s for the port
interfaces. Interdata (and later Perkin-Elmer) sold it as a clone
controller, and four of us are written up for (some part of) the clone
controller business. Around the turn of the century I run into a
descendant at a large datacenter that was handling the majority of
point-of-sale dialup credit card machines east of the Mississippi.

some more CSC & CP67/CMS history
http://www.leeandmelindavarian.com/Melinda#VMHist
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

plug compatible 360 controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Then before I graduate I'm hired fulltime into small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services ... I think Renton datacenter possibly largest in the world
with 360/65s arriving faster than they could be installed, boxes
constantly staged in hallways around the machine room. Lots of
politics between Renton director and CFO, who only had a 360/30 up at
Boeing field for payroll, although they enlarge the room for a 360/67
for me to play with when I'm not doing other stuff. 747#3 was flying
skies of Seattle getting FAA flt certification. There was also
disaster plan to replicate Renton up at the new 747 plant in Everett
(Mt. Rainier heats up and the resulting mud slide takes out
Renton). When I graduate, I join IBM science center instead of staying
with Boeing CFO.

Charlie had invented compare&swap (mnemonic chosen because "CAS" were
his initials) instruction when he was doing CP67 fine-grain,
multiprocessor locking at the science center. When we tried to get the
370 architecture owners to include compare&swap for 370, they said
that the POK favorite son operating system owners (MVT, then SVS&MVS)
said the (360) "test&swap" instruction was more than sufficient; if
compare&swap was to be justified, we had to come up with justifications
that weren't multiprocessor specific; thus were born the examples for
application multithreading/multiprogramming use (like DBMS).

A decade ago, I was asked to track down the decision to add virtual
memory to all 370s; basically MVT storage management was so bad that
regions had to be specified four times larger than used, so a 1mbyte
370/165 typically ran only four concurrent regions ... insufficient to
keep the system busy (and justify its cost). Going to 16mbyte virtual address space
("SVS", similar to running MVT in a CP67 16mbyte virtual machine)
could increase concurrently running regions by a factor of four times,
with little or no paging. The 370 virtual memory decision also
resulted in doing VM370, and in the morph of CP67->VM370, they
simplified and/or dropped lots of features (including multiprocessing
support).

One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (the online sales&marketing support
US HONE systems were a long-time customer from CP67 days, evolving
into the world-wide VM370-based HONE). As internal datacenters were
migrating to VM370, in 1974 I started moving a lot of the CP67 missing
features to a release2-based VM370 production "CSC/VM" ... which
included kernel re-organization for multiprocessing ... but not the
actual multiprocessor support. The US HONE datacenters were
consolidated in silicon valley with the largest loosely-coupled shared
DASD configuration including load-balancing and fall-over
support.

Then I added multiprocessor support to Release3-based VM370 "CSC/VM",
initially for US HONE so they could add a second processor for eight
tightly-coupled systems in a loosely-coupled, shared-DASD
configuration. I did some tricks with highly optimized multiprocessor
pathlengths coupled with some processor cache affinity tricks
(improving cache-hit and processor throughput offsetting
multiprocessor pathlengths) showing twice the throughput of a single
processor (this was at the time when MVS documentation was giving MVS
multiprocessor throughput as 1.2-1.5 times the throughput of a single
processor).

trivia: when facebook 1st moves into silicon valley, it is into a new
bldg built next door to the former US HONE datacenter.

other trivia: around 2010, I made some joke about "from the annals of
releasing no software before its time" when z/VM finally released
similar loosely-coupled support.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

I had joined IBM Science Center not long before "Future System"
started (early 70s; completely different from 370 and going to
completely replace it; the lack of new 370s during the period is
credited with giving the 370 clone makers their market foothold). I
continued to work on 360&370 all during the Future System period
... even periodically ridiculing them (like speculating they didn't
really know what they were doing, not exactly a career-enhancing
activity). More background:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

when FS finally implodes, there is a mad rush to get stuff back into
the 370 product pipelines, including kicking off the quick&dirty
3033&3081 efforts in parallel.
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive"

... snip ...

In the wake of the FS implosion, I was also roped into an effort to do
a 16-processor, tightly-coupled, multiprocessor 370 and we con the
3033 processor engineers into working on it in their spare time (a lot
more interesting than remapping 168 logic to 20% faster chips);
everybody thought it was great until somebody tells the head of POK
that it could be decades before the POK favorite son operating system
(MVS) has effective 16-processor support (goes along with
documentation that 2-processor MVS only had 1.2-1.5 times the
throughput of a single processor). Then some of us were invited to never visit POK
again (and the 3033 processor engineers directed to concentrate on
3033 and no more distractions). trivia: POK doesn't ship a
16-processor machine until after the turn of the century.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

CTSS, Multics, CP67/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CTSS, Multics, CP67/CMS
Date: 24 Apr, 2024
Blog: Facebook

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

trivia: I was undergraduate and univ hired me fulltime responsible for
OS/360 (360/67 originally for tss/360, but was being run as
360/65). Then CSC came out to install CP67/CMS (3rd installation after
CSC itself, and MIT Lincoln Labs). I mostly got to play with it during
my 48hr weekend dedicated time (univ. shutdown datacenter on
weekends). CSC had 1052&2741 support, but univ. had some number of
TTY/ASCII terminals, so I added TTY/ASCII support ... and CSC picked
up and distributed with standard CP67 (as well as lots of my other
stuff). I had done a hack with one-byte values for TTY line
input/output lengths. Tale of MIT Urban Lab running CP/67 (in the tech
sq bldg across the quad from 545): somebody down at Harvard got an
ascii device with 1200(?) char line length ... they modified the field
for the max. length ... but didn't adjust my one-byte hack
... crashing the system 27 times in a single day.
https://www.multicians.org/thvv/360-67.html

But on that day, a user at Harvard School of Public Health had
connected a plotter to a TTY line and was sending graphics to it, and
every time he did, the whole system crashed. (It is a tribute to the
CP/CMS recovery system that we could get 27 crashes in in a single
day; recovery was fast and automatic, on the order of 4-5
minutes. Multics was also crashing quite often at that time, but each
crash took an hour to recover because we salvaged the entire file
system. This unfavorable comparison was one reason that the Multics
team began development of the New Storage System.)

... snip ...

I had done automated benchmarking system where I could specify
different configurations, types of workloads, number of users, etc
... and then reboot between benchmarks. When I 1st started migration
from CP67 to VM370, the 1st thing I did was automated benchmarking
... but found that VM370 would crash several times before completing
the standard set of benchmarks. As a result, the next thing I had to
migrate to VM370 was the CP67 kernel serialization mechanism, so VM370
could finish a standard set of benchmarks.
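
a toy sketch (assumed names and numbers, not the original CP67/VM370
tooling) of that automated benchmarking loop, rebooting to a clean
state between each specified workload:

#include <stdio.h>

typedef struct { const char *workload; int users; } bench_t;

static void reboot_system(void) { puts("...reboot..."); }

static double run_benchmark(bench_t b)
{
    return 100.0 / b.users;   /* stand-in for a measured result */
}

int main(void)
{
    bench_t set[] = { { "fortran edit/compile/execute", 35 },
                      { "interactive cms",              80 } };
    for (int i = 0; i < (int)(sizeof set / sizeof set[0]); i++) {
        reboot_system();      /* clean state between benchmarks */
        printf("%s, %d users: %.2f\n",
               set[i].workload, set[i].users, run_benchmark(set[i]));
    }
    return 0;
}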

There was some friendly rivalry between the 4th and 5th flrs ... one
area was the federal gov. ... Multics had an installation at USAFDS in
the Pentagon
https://www.multicians.org/site-afdsc.html

In 2nd half of 70s, had transferred out to IBM Research in San Jose
and in spring 1979 got a call that a couple people from USAFDS wanted
to come out to talk about getting 20 VM/4341s ... however by the time
they got around to coming out the following fall, it had increased to
210 VM/4341s.

a reference to "borrowing" mainframe EREP rather than upgrading UNIX with it
https://www.garlic.com/~lynn/2024c.html#4 Bemer, ASCII, Brooks and Mythical Man Month
https://www.garlic.com/~lynn/2024c.html#5 360&370 Unix (and other history)

above also refs adding CP67 multiprocessing to VM370 ... but just
before I did it, somehow AT&T Longlines was able to get a copy of my
CSC/VM with full source ... and over the following years, migrated it
to the newest processors and propagated it to multiple AT&T
datacenters. Roll-forward to the new IBM 3081, which was originally
intended to be multiprocessor *only*; the IBM AT&T corporate
marketing rep tracks me down to help AT&T with this archaic CSC/VM
system (afraid that AT&T would migrate everything to the latest Amdahl
machines ... whose faster single processor had almost the throughput
of the two-processor 3081).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 24 Apr, 2024
Blog: Facebook

IBM Millicode
https://www.researchgate.net/publication/224103049_Millicode_in_an_IBM_zSeries_processor
https://public.dhe.ibm.com/eserver/zseries/zos/racf/pdf/ny_metro_naspa_2012_10_what_and_why_of_system_z_millicode.pdf

IBM high-end machines are horizontal microcode, which is really
difficult and time-consuming to program. After the Future System implosion
http://www.jfsowa.com/computer/memo125.htm

Endicott cons me into helping with ECPS microcode assist for 138/148
(low&mid range 370) that were vertical microcode ... basically
microprocessor machine language. Then in early 80s, I got permission
to give ECPS presentations at user group meetings, including monthly
BAYBUNCH hosted by Stanford SLAC. Afterwards the Amdahl people would
grill me for more information. They said that they had developed
"MACROCODE" (370-like instructions running in microcode mode for their
high-end horizontal microcode machine) during IBM's 3033 period, to
quickly respond to the trivial new (horizontal) microcode functions
IBM was shipping that were required for MVS to run. At the time they
were in the process of implementing "HYPERVISOR" (a subset of virtual
machine functions running w/o VM370). IBM wasn't able to respond with
LPAR&PR/SM until nearly the end of the decade for 3090.

Similar, but different: late last century, the i86 vendors went to a
hardware layer that translated i86 instructions into RISC micro-ops
for actual execution ... largely negating the throughput advantage of
RISC processors (MIPS here from an industry standard benchmark program
counting iterations compared to a 1MIP reference platform).


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     IBM z900 mainframe processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC 440)

2003 max. configured IBM mainframe z990, 32 processor aggregate 9BIPS
    (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configured IBM mainframe z196, 80 processor aggregate 50BIPS
     (625MIPS/proc)
2010 E5-2600 XEON server blade, 16 processor aggregate 500BIPS
     (31BIPS/proc)

360/370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

CP40/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP40/CMS
Date: 25 Apr, 2024
Blog: Facebook

IBM CP-40
https://en.m.wikipedia.org/wiki/IBM_CP-40

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

paper about CP40/CMS ... some amount taken from CTSS
https://www.garlic.com/~lynn/cp40seas1982.txt
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

science center wanted a 360/50 to modify with virtual memory, but all
the spare 360/50s were going to the FAA ATC project ... and so they
had to settle for a 360/40. When the 360/67 became available, standard
with virtual memory, CP40 morphs into CP67

some more details (univ. I was at, becomes 3rd installation, after CSC itself, and MIT Lincoln Labs)
https://www.garlic.com/~lynn/2024c.html#16 CTSS, Multics, CP67/CMS
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#39 Tonight's tradeoff
https://www.garlic.com/~lynn/2024.html#49 Card Sequence Numbers
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally

last product we did at IBM was HA/CMP ... it originally was HA/6000
for the NYTimes to move their newspaper system (ATEX) off VAXCluster
to RS/6000; I rename it HA/CMP when I start doing technical/scientific
cluster scaleup with national labs and commercial cluster scaleup with
RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had both
VAXCluster and Unix in the same source base. I did an enhanced
distributed lock manager with VAXCluster API semantics to simplify
their HA/CMP support. Disclaimer: When transferred to IBM Research, I
got roped into doing some work with Jim Gray and Vera Watson on the
original SQL/relational implementation ("System/R") and then helping
with tech transfer to Endicott for SQL/DS ... "under the radar", while
the corporation was preoccupied with the next great DBMS,
"EAGLE". Then when "EAGLE" implodes, there was request for how fast
could System/R be ported to MVS ... which eventually ships as DB2,
originally for decision-support only.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

Part of HA/CMP was studying how things fail ... and at one point I was
brought in to the latest ATC modernization effort. Turns out it
involved fault-tolerant triple-redundant hardware with guidelines that
since all failures would be masked ... the software didn't have to
worry about such things. However, it turns out that there were some
"business/operational rules" that could have failures ... and the
software effort had to be reset to handle non-hardware related
failures. We then got into the habit of dropping in on staff person in
the office of IBM FSD President.

First part of Jan1992, had Oracle meeting and IBM AWD/Hester told
Oracle CEO that we would have 16-processor clusters by mid92 and
128-processor clusters by ye92 ... and during Jan1992 was keeping FSD
apprised of HA/CMP status and work with national labs. Apparently
during Jan, FSD told Kingston supercomputer project that FSD was going
with HA/CMP for gov. accounts. Then end of Jan, cluster scaleup was
transferred to Kingston for announce as IBM supercomputer (for
technical/scientific *only*) and we were told that we couldn't work on
anything with more than four processors ... we leave IBM a few months
later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

more trivia: never dealt with Fox while in IBM; FAA ATC, The Brawl in
IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514

Two mid air collisions 1956 and 1960 make this FAA procurement
special. The computer selected will be in the critical loop of making
sure that there are no more mid-air collisions. Many in IBM want to
not bid. A marketing manager with but 7 years in IBM and less than one
year as a manager is the proposal manager. IBM is in midstep in coming
up with the new line of computers - the 360. Chaos sucks into the fray
many executives- especially the next chairman, and also the IBM
president. A fire house in Poughkeepsie N Y is home to the technical
and marketing team for 60 very cold and long days. Finance and legal
get into the fray after that.

... snip ...

Executive Qualities
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794

After 20 years in IBM, 7 as a divisional Vice President, Joe Fox had
his standard management presentation -to IBM and CIA groups -
published in 1976 -entitled EXECUTIVE QUALITIES. It had 9 printings
and was translated into Spanish -and has been offered continuously for
sale as a used book on Amazon.com. It is now reprinted -verbatim- and
available from Createspace, Inc - for $15 per copy. The book presents
a total of 22 traits and qualities and their role in real life
situations- and their resolution- encountered during Mr. Fox's 20
years with IBM and with major computer customers, both government and
commercial. The presentation and the book followed a focus and use of
quotations to Identify and characterize the role of the traits and
qualities. Over 400 quotations enliven the text - and synthesize many
complex ideas.

... snip ...

... but after leaving IBM, had a project with Fox and his company that
also had some other former FSD FAA people.

other trivia: doing HA/CMP we started out reporting to an executive
who later went over to head up Somerset ... the single RISC chip
design effort for AIM (apple, ibm, motorola); some amount of motorola
88k RISC features were incorporated into power/pc.

trivia: CPS (run under OS/360 ... similar to APL\360, CPS included
microcode assist on the 360/50) was handled by Boston Programming
Center which was on 3rd flr, below Cambridge Scientific Center on 4th
flr (and Multics on 5th flr). With the decision to do CP67->VM/370
some of the science center people went to the 3rd flr taking over the
Boston Programming Center for the VM/370 development group. When the
development group outgrew their half of the 3rd flr (there was a
gov. agency that the bldg register listed as law firm in the other
half), they moved out to the empty SBC bldg at Burlington mall (off
rte 128; SBC had been spun off to another computer company as part of
a legal settlement).

Note: after the Future System implosion there was a mad rush to get
stuff back into the 370 product pipelines, including kicking off the
quick and dirty 3033&3081 efforts in parallel. More background:
http://www.jfsowa.com/computer/memo125.htm

The head of POK also managed to convince corporate to kill the vm370
product, shutdown the development group and transfer all the people to
POK for MVS/XA (presumably claiming that otherwise MVS/XA wouldn't be
able to ship on time in the 80s). Eventually, Endicott managed to save
the VM/370 product mission (for low-end and mid-range), but had to
recreate a development group from scratch.

They weren't going to tell the people about the shutdown until the
very last minute, to minimize the number that might be able to escape
into the boston area ... however the information managed to leak and
several managed to escape (including to the infant DEC VMS effort; the
joke was that the head of POK was a major contributor to VMS). They
did a hunt for the source of the leak; fortunately for me, nobody gave
the source up.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 25 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode

1980 I was con'ed into doing channel-extender support for STL (since
renamed SVL) that was moving 300 people from IMS DBMS group to offsite
bldg with service back to STL datacenter. They had tried "remote
3270", but found human factors unacceptable. Channel-extender allowed
placing channel-attached 3270 controllers at the offsite bldg with no
perceptible difference in human factors between offsite and inside STL
(in fact, the channel-extender increased system throughput by 10-15%,
prompting the suggestion that all their systems should use it: they
had spread 3270 controllers across all the channels with DASD, and the
"slow" 3270 controller channel busy was interfering with DASD I/O; the
channel-extender boxes were much faster, reducing channel busy for the
same amount of 3270 transfer).

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

Then some POK engineers playing with some serial stuff blocked the
release of the support to customers. Later in 1988, the IBM branch office
asks if I could help LLNL (national lab) get some serial stuff they
were playing with, standardized. It quickly becomes "fibre-channel"
standard ("FCS", including some stuff I had done in 1980), initially
1gbit/sec, full-duplex, aggregate 200mbyte/sec. Then the POK stuff
(after more than a decade) finally gets released with ES/9000 as ESCON
(when it is already obsolete), 17mbytes/sec. Then some POK engineers
get involved in FCS and define a heavy-weight protocol that
significantly cuts the native throughput, which eventually ships as
FICON (running over FCS). The latest public benchmark I can find is
z196 "Peak I/O" getting 2M IOPS with 104 FICON. About the same time,
an FCS was announced for E5-2600 server blades claiming over a million
IOPS (two such FCS having higher throughput than 104 FICON). Also, IBM
pubs recommend limiting SAPs (system assist processors that actually
do the I/O) to 70% CPU ... which would be around 1.5M IOPS. Further
complicating things are CKD DASD, which haven't been made for decades,
needing to be simulated on industry standard fixed-block disks.
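
the benchmark arithmetic as a small C sketch (using only the figures
above): per-FICON throughput vs a single FCS, plus the effect of the
70% SAP recommendation:

#include <stdio.h>

int main(void)
{
    double z196_iops = 2.0e6;   /* z196 "Peak I/O" benchmark        */
    int    ficon     = 104;     /* FICON channels in that benchmark */
    double fcs_iops  = 1.0e6;   /* claimed for one E5-2600 FCS      */

    printf("per FICON: %.0f IOPS\n", z196_iops / ficon);   /* ~19K */
    printf("two FCS: %.0f IOPS vs 104 FICON: %.0f\n",
           2 * fcs_iops, z196_iops);
    /* capping SAPs at the recommended 70% CPU, i.e. the "around
       1.5M IOPS" figure above */
    printf("at 70%% SAP: %.1fM IOPS\n", 0.70 * z196_iops / 1e6);
    return 0;
}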

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

trivia: channel attached 3272/3277 had .086sec hardware response
... this was in days of studies showing improved productivity with
quarter second response, so to get interactive .25sec, system response
had to be no more than .164sec (several of my internal enhanced
systems were getting .11sec interactive system response). For the
3278, they moved lots of electronics back into the controller, so
protocol chatter drove hardware response to .3-.5sec (somewhat
dependent on amount of data), making quarter second impossible. A complaint to the 3278
product administrator got a response that 3278 wasn't for interactive
computing but "data entry" (aka electronic keypunch). Later IBM/PC
3277 emulation cards had 4-5 times the upload/download throughput of
3278 cards. Note MVS/TSO users never noticed since their system
response was rarely even 1sec (so any change from 3272/3277 to
3274/3278 wasn't noticed).
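
the quarter-second arithmetic as a small C sketch (figures from
above): subtracting hardware response from the .25sec target gives the
allowable system response:

#include <stdio.h>

int main(void)
{
    double target = 0.250;   /* quarter-second interactive response   */
    double hw3277 = 0.086;   /* 3272/3277 hardware response           */
    double hw3278 = 0.300;   /* 3274/3278 best case, protocol chatter */

    printf("3277 system budget: %.3f sec\n", target - hw3277);
    printf("3278 system budget: %.3f sec (negative: impossible)\n",
           target - hw3278);
    return 0;
}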

other trivia: When I transfer to San Jose Research, I get to wander
around (IBM and non-IBM) datacenters in silicon valley, including disk
engineering (bldg14) and disk product test (bldg15) across the
street. They were running prescheduled, around the clock, stand-alone
mainframe testing. They mentioned that they had recently tried MVS,
but it had 15min mean-time-between-failure (in that environment). I
offer to rewrite I/O supervisor to make it bullet proof and never
fail, enabling any amount of on-demand, concurrent testing, greatly
improving productivity (downside was they started blaming me for any
problems, and I had to spend increasing amount of time playing disk
engineer shooting hardware issues). The engineers were complaining
that bean-counting/accountants had forced the 3880 to have
inexpensive, slow microprocessor (compared to 3830, 3880 had special
hardware path for 3380 3mbyte/sec transfers, but everything else was
much slower, significantly increasing channel busy).

Roll forward to 3090, which had initially configured the number of
channels to achieve target throughput, assuming the 3880 was the same
as the 3830 but with the addition of 3mbyte/sec transfer. When they
found out how bad it really was, they realized they would have to
significantly increase the number of channels (to achieve target
throughput), which required an additional TCM (the 3090 group
semi-facetiously claimed they would bill the 3880 group for the
increase in 3090 manufacturing cost). Eventually marketing respun the
significant increase in number of channels as the 3090 being a
wonderful I/O machine (rather than a countermeasure to the 3880
channel busy increase).

I wrote (IBM internal) research report about work for disk division
and happened to mention the MVS 15min MTBF ... bringing down the wrath
of the MVS organization on my head.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Millicode

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Millicode
Date: 25 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode

Shortly after joining IBM ... I got roped into helping on a project
for multithreading the 370/195. The 195 had a 64-instruction pipeline
and supported out-of-order execution ... but didn't have speculative
execution or branch prediction, so conditional branches drained the
pipeline ... and most codes ran the 195 at half throughput.
Multi-threading is mentioned in this webpage about the end of ACS/360
https://people.computing.clemson.edu/~mark/acs_end.html

aka Amdahl had won the battle to make ACS 360-compatible ... but then
(folklore is that) executives worried it would advance the
state-of-the-art too fast and IBM would lose control of the market
... and killed the project (Amdahl leaves IBM shortly later).

195 multithreading would simulate two-processor multiprocessing (two
instruction streams, two sets of registers, etc) ... two instruction
streams, each running the processor at half throughput, would
(possibly) result in keeping the 195 fully busy ... modulo that the
MVT 65/MP support was at least as bad as the MVS two-processor support
(only 1.2-1.5 times the throughput of a single processor). Then the
decision was to add virtual memory to all 370s (as countermeasure to
the bad/poor MVT storage management) and it was decided to stop all
new work on 370/195 (considered too much effort to add virtual memory
to 195).

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

archived post with pieces of email exchange about decision to add
virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73

... trivia: the original 3380 had 20 track spacings between each data
track; they then cut the spacings in half for double the tracks (&
capacity) and then cut the spacing again for triple the tracks (&
capacity). The father of 801/RISC wanted me to help him with a "wide
disk head" .... disks formatted with 16 closely spaced data tracks,
with a servo track between. A "wide" disk head would transfer 16 data
tracks in parallel, following servo tracks on each side. The problem
was that that was a 50mbyte/sec transfer rate and IBM (mainframe)
channels were still 3mbytes/sec. It wasn't until a couple years later
that I was involved with "FCS" and could do 100mbyte/sec concurrently
in each direction ... but that was getting FCS for RS/6000 (it wasn't
until much later for IBM mainframe).
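
rough arithmetic behind the 50mbyte/sec figure (C sketch; the
per-track rate is an assumption, roughly the era's ~3mbyte/sec
per-head transfer):

#include <stdio.h>

int main(void)
{
    double per_track = 3.125;  /* mbyte/sec per data track (assumed) */
    int    tracks    = 16;     /* data tracks read in parallel       */
    double channel   = 3.0;    /* mainframe channel, mbyte/sec       */

    double wide = per_track * tracks;   /* = 50 mbyte/sec */
    printf("wide head: %.0f mbyte/sec, %.0fx the %.0f mbyte/sec channel\n",
           wide, wide / channel, channel);
    return 0;
}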

posts mentioning getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

TDM Computer Links

From: Lynn Wheeler <lynn@garlic.com>
Subject: TDM Computer Links
Date: 25 Apr, 2024
Blog: Facebook

I was blamed for online computer conferencing in the late 70s and
early 80s on the internal network (larger than arpanet/internet from
just about the beginning until sometime mid/late 80s) ... folklore is
that when the corporate executive committee was told, 5of6 wanted to fire
me. One of the outcomes was official sanctioned and moderated online
forums. Early 80s, I got HSDT project ... T1 and faster computer links
(both terrestrial and satellite/TDMA&broadcast). Mid-80s, HSDT was
having some custom hardware built on the other side of the Pacific. On
Friday before leaving for a visit, got an email announcement about new
online forum about computer links from the communication group

low-speed: 9.6kbits/sec,
medium speed: 19.2kbits/sec,
high-speed: 56kbits/sec,
very high-speed: 1.5mbits/sec

Monday morning, on the wall of a conference room on the other side of the Pacific, were these definitions:

low-speed: <20mbits/sec,
medium speed: 100mbits/sec,
high-speed: 200mbits-300mbits/sec,
very high-speed: >600mbits/sec

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

FOILS

From: Lynn Wheeler <lynn@garlic.com>
Subject: FOILS
Date: 25 Apr, 2024
Blog: Facebook

Some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
people went to the 5th flr and MULTICS
https://en.wikipedia.org/wiki/Multics
others went to the 4th flr and IBM Cambridge Science Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
was redone for CP67/CMS as "SCRIPT"

GML was invented in 1969 at the science center ("G", "M", "L" are
initials of 3 inventors last name) and GML tag processing added to
SCRIPT ... ref by one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

Edson was responsible for the CP67 wide-area network which grows into
the corporate network (larger than arpanet/internet from just about
the beginning until sometime mid/late 80s) ... also used for the
corporate-sponsored univ. BITNET
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed internet)
references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

... and back to "foils", from IBM Jargon:

foil - n. Viewgraph, transparency, viewfoil - a thin sheet or leaf of
transparent plastic material used for overhead projection of
illustrations (visual aids). Only the term Foil is widely used in
IBM. It is the most popular of the three presentation media (slides,
foils, and flipcharts) except at Corporate HQ, where even in the 1980s
flipcharts are favoured. In Poughkeepsie, social status is gained by
owning one of the new, very compact, and very expensive foil
projectors that make it easier to hold meetings almost anywhere and at
any time. The origins of this word have been obscured by the use of
lower case. The original usage was FOIL which, of course, was an
acronym. Further research has discovered that the acronym originally
stood for Foil Over Incandescent Light. This therefore seems to be
IBM's first attempt at a recursive language.

... snip ...

Overhead projector
https://en.wikipedia.org/wiki/Overhead_projector
Transparency (projection)
https://en.wikipedia.org/wiki/Transparency_(projection)

:frontm.
:titlep.
:title.GML for Foils
:date.August 24, 1984
:author.xxx1
:author.xxx2
:author.xxx3
:author.xxx4
:address.
:aline.T.J. Watson Research Center
:aline.P.O. Box 218
:aline.Yorktown Heights, New York
:aline.&rbl.
:aline.San Jose Research Lab
:aline.5600 Cottle Road
:aline.San Jose, California
:eaddress.
:etitlep.
:logo.
:preface.
:p.This manual describes a method of producing foils automatically
using DCF Release 3 or SCRIPT3I. The foil package will run with the
following GML implementations:
:ul.
:li.ISIL 3.0
:li.GML Starter Set, Release 3
:eul.
:note.This package is an :q.export:eq. version of the foil support
available at Yorktown and San Jose Research as part of our floor
GML. Yorktown users should contact xxx4 for local
documentation. Documentation for San Jose users is available in the
document stockroom.
.*
:p.Any editor can be used to create the foils. Preliminary proofing
can be done at the terminal with final output to one of the printers
supported by the various implementations:
:ul compact.
:li.APS-5
:li.4250
:li.Sherpa
:li.Phoenix
:li.6670
:li.3800
:li.1403
:eul.
:note.:hp2.The FOIL package is distributed and maintained only through
the IBMTEXT conference disk. This project is not part of our real
job. We will enhance it and fix bona fide bugs as time permits. Please
report bugs only via FOIL BUGS on the IBMTEXT disk.:ehp2.

... snip ...

... trivia: 6670 was sort of an IBM Copier3 with computer link. San
Jose Research then modified the 6670 for all-points-addressable
(6670APA, later adding a postscript engine), which becomes Sherpa.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

CP40/CMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP40/CMS
Date: 26 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS

... little drift ... Learson tried (and failed) to stop the
bureaucrats, careerists, and MBAs from destroying the Watson
culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

20yrs later, it appeared to be nearly the end of IBM ... IBM has one
of the largest losses in the history of US corporations and was being
reorganized into the 13 "baby blues" in preparation for breaking up
the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board hires the former president of AMEX as CEO, who (somewhat)
reverses the breakup.

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner

for other drift, a series of "z/VM 50th" postings (50 yrs since VM/370
1972)
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

--
virtualization experience starting Jan1968, online at home since Mar1970

TDM Computer Links

From: Lynn Wheeler <lynn@garlic.com>
Subject: TDM Computer Links
Date: 26 Apr, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024c.html#21 TDM Computer Links

communication group ... i.e. SNA communication products division

the communication group mainframe products were cap'ed at 56kbit links
... although they had support for "fat pipes" that could treat
multiple parallel links as a single logical link. About the same time
as the announcement of the new communication link forum ... they
prepared an analysis for the corporate executive committee claiming
that customers weren't looking for T1 support until sometime in the
90s. They had surveyed "fat pipe" users and found that use of "fat
pipes" for more than six parallel (56kbit) links had dropped to
zero. What they didn't know (or didn't want to tell the corporate
executive committee) was that the telco tariff for a T1 link was about
the same as for six 56kbit links. A trivial HSDT survey found 200
customers that had gone to full T1 with non-IBM controllers and
software.
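
the tariff arithmetic as a small C sketch (T1 taken as 1.544mbit/sec;
otherwise figures from above): past six parallel 56kbit links, a full
T1 carries several times the aggregate for about the same price:

#include <stdio.h>

int main(void)
{
    double link56 = 56.0;     /* kbit/sec per link           */
    double t1     = 1544.0;   /* kbit/sec, full T1           */
    int    links  = 6;        /* tariff break-even vs one T1 */

    printf("six 56kbit links: %.0f kbit/sec aggregate\n", links * link56);
    printf("one T1: %.0f kbit/sec (%.1fx, same tariff)\n",
           t1, t1 / (links * link56));
    return 0;
}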

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some recent posts mentioning "fat pipe"
https://www.garlic.com/~lynn/2024b.html#112 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM
https://www.garlic.com/~lynn/2024.html#70 IBM AIX

posts mentioning when I was undergraduate in 60s, univ hires me
fulltime responsible for os/360
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021h.html#65 CSC, Virtual Machines, Internet

I'm not sure when I became aware of the name Grace Hopper. While I was
at the univ, the library had gotten an ONR (office of naval research)
https://www.nre.navy.mil/
grant to do an online catalog ... and they used some of the money to get
an IBM 2321 (datacell). Other trivia: the library online catalog was
also selected as betatest for the original CICS program product
... and CICS support was added to my tasks. First problem was CICS
wouldn't come up. Eventually figured out that CICS code had some
undocumented hardcoded BDAM options and the library had built the BDAM
files with a different set of options.

cics & bdam posts
https://www.garlic.com/~lynn/submain.html#cics

some recent posts mentioning ONR grant, univ library online catalog,
cics betatest
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024.html#69 NIH National Library Of Medicine
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#24 Video terminals
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023.html#108 IBM CICS

--
virtualization experience starting Jan1968, online at home since Mar1970

Tymshare & Ann Hardy

From: Lynn Wheeler <lynn@garlic.com>
Subject: Tymshare & Ann Hardy
Date: 27 Apr, 2024
Blog: Facebook

Tymshare & Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89

Ann Hardy is a crucial figure in the story of Tymshare and
time-sharing. She began programming in the 1950s, developing software
for the IBM Stretch supercomputer. Frustrated at the lack of
opportunity and pay inequality for women at IBM -- at one point she
discovered she was paid less than half of what the lowest-paid man
reporting to her was paid -- Hardy left to study at the University of
California, Berkeley, and then joined the Lawrence Livermore National
Laboratory in 1962. At the lab, one of her projects involved an early
and surprisingly successful time-sharing operating system.

... snip ...

If Discrimination, Then Branch: Ann Hardy's Contributions to Computing
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/

Much more Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167

Ann rose up to become Vice President of the Integrated Systems
Division at Tymshare, from 1976 to 1984, which did online airline
reservations, home banking, and other applications. When Tymshare was
acquired by McDonnell-Douglas in 1984, Ann's position as a female VP
became untenable, and was eased out of the company by being encouraged
to spin out Gnosis, a secure, capabilities-based operating system
developed at Tymshare. Ann founded Key Logic, with funding from Gene
Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl
mainframes. After closing Key Logic, Ann became a consultant, leading
to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

Gnosis/KeyKOS trivia: After M/D bought Tymshare, I was brought in to
review Gnosis as part of the spinoff to Key Logic (note: the following
mentions Augment and Doug Engelbart while at Tymshare)
http://cap-lore.com/CapTheory/upenn/Gnosis/Gnosis.html

The GNOSIS write-up also mentions the SHARE LSRAD study. I had scanned
my copy for putting up on bitsavers
http://www.bitsavers.org/pdf/ibm/share/The_LSRAD_Report_Dec79.pdf
... trivia: note the year it was published; the gov. had increased the
duration of copyright, so I had to spend some time finding somebody in
SHARE who would approve putting it up on bitsavers

In 1976, Tymshare also started offering their CMS-based online
computer conferencing system to the (IBM mainframe) user group, SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE, archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE for monthly tape dump of all VMSHARE (and
later also PCSHARE) files for putting up on internal network and
systems. On one visit to TYMSHARE they demo'ed a new game (ADVENTURE)
that somebody had found on the Stanford SAIL PDP10 system and ported
to VM370/CMS ... I got a copy and started making it available (also)
on internal networks/systems.

virtual machine based commercial online companies
https://www.garlic.com/~lynn/submain.html#online

Posts mentioning GNOSIS and/or Tymshare:
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#97 Fortran
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022g.html#92 TYMSHARE
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021j.html#71 book review:  Broad Band:  The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021h.html#98 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2019d.html#27 Someone Else's Computer: The Prehistory of Cloud Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

The Last Thing This Supreme Court Could Do to Shock Us

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Last Thing This Supreme Court Could Do to Shock Us
Date: 27 Apr, 2024
Blog: Facebook

The Last Thing This Supreme Court Could Do to Shock Us. There will be
no more self-soothing after this.
https://slate.com/news-and-politics/2024/04/supreme-court-immunity-arguments-which-way-now.html

For three long years, Supreme Court watchers mollified themselves (and
others) with vague promises that when the rubber hit the road, even
the ultraconservative Federalist Society justices of the Roberts court
would put democracy before party whenever they were finally confronted
with the legal effort to hold Donald Trump accountable for Jan. 6.

... snip ...

... "fake news" dates back to at least founding of the country, both
Jefferson and Burr biographies, Hamilton and Federalists are portrayed
as masters of "fake news". Also portrayed that Hamilton believed
himself to be an honorable man, but also that in political and other
conflicts, he apparently believed that the ends justified the
means. Jefferson constantly battling for separation of church & state
and individual freedom, Thomas Jefferson: The Art of Power,
https://www.amazon.com/Thomas-Jefferson-Power-Jon-Meacham-ebook/dp/B0089EHKE8/
loc6457-59:

For Federalists, Jefferson was a dangerous infidel. The Gazette of the
United States told voters to choose GOD AND A RELIGIOUS PRESIDENT or
impiously declare for "JEFFERSON-AND NO GOD."

... snip ...

... Jefferson was targeted as the prime mover behind the separation of
church and state. Also, Hamilton and the Federalists are portrayed as
wanting a supreme monarch (above the law); loc5584-88:

The battles seemed endless, victory elusive. James Monroe fed
Jefferson's worries, saying he was concerned that America was being
"torn to pieces as we are, by a malignant monarchy faction." 34 A
rumor reached Jefferson that Alexander Hamilton and the Federalists
Rufus King and William Smith "had secured an asylum to themselves in
England" should the Jefferson faction prevail in the government.

... snip ...

posts mentioning Federalist Society and/or Heritage Foundation
https://www.garlic.com/~lynn/2023d.html#99 Right-Wing Think Tank's Climate 'Battle Plan' Wages 'War Against Our Children's Future'
https://www.garlic.com/~lynn/2023d.html#41 The Architect of the Radical Right
https://www.garlic.com/~lynn/2023c.html#51 What is the Federalist Society and What Do They Want From Our Courts?
https://www.garlic.com/~lynn/2022g.html#37 GOP unveils 'Commitment to America'
https://www.garlic.com/~lynn/2022g.html#14 It Didn't Start with Trump: The Decades-Long Saga of How the GOP Went Crazy
https://www.garlic.com/~lynn/2022d.html#4 Alito's Plan to Repeal Roe--and Other 20th Century Civil Rights
https://www.garlic.com/~lynn/2022c.html#118 The Death of Neoliberalism Has Been Greatly Exaggerated
https://www.garlic.com/~lynn/2022.html#107 The Cult of Trump is actually comprised of MANY other Christian cults
https://www.garlic.com/~lynn/2021f.html#63 'A perfect storm': Airmen, F-22s struggle at Eglin nearly three years after Hurricane Michael
https://www.garlic.com/~lynn/2021e.html#88 The Bunker: More Rot in the Ranks
https://www.garlic.com/~lynn/2020.html#6 Onward, Christian fascists
https://www.garlic.com/~lynn/2020.html#5 Book:  Kochland : the secret history of Koch Industries and corporate power in America
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2020.html#3 Meet the Economist Behind the One Percent's Stealth Takeover of America
https://www.garlic.com/~lynn/2019e.html#127 The Barr Presidency
https://www.garlic.com/~lynn/2019d.html#97 David Koch Was the Ultimate Climate Change Denier
https://www.garlic.com/~lynn/2019c.html#66 The Forever War Is So Normalized That Opposing It Is "Isolationism"
https://www.garlic.com/~lynn/2019.html#34 The Rise of Leninist Personnel Policies
https://www.garlic.com/~lynn/2012c.html#56 Update on the F35 Debate
https://www.garlic.com/~lynn/2012b.html#75 The Winds of Reform
https://www.garlic.com/~lynn/2012.html#41 The Heritage Foundation, Then and Now

--
virtualization experience starting Jan1968, online at home since Mar1970

PDP1 Spacewar

From: Lynn Wheeler <lynn@garlic.com>
Subject: PDP1 Spacewar
Date: 27 Apr, 2024
Blog: Facebook

In the 60s, the person responsible for the internal network ported
PDP1 Spacewar
https://www.computerhistory.org/pdp-1/08ec3f1cf55d5bffeb31ff6e3741058a/
https://en.wikipedia.org/wiki/Spacewar%21
to CSC's 1130M4 (which included a 2250)
https://en.wikipedia.org/wiki/IBM_2250
i.e. the 2250 had an 1130 as its controller
http://www.ibm1130.net/functional/DisplayUnit.html

I would bring my kids in on weekends and they would play it.

other drift: from one of the inventors of GML at the science center in 1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.


... snip ...

... then the science center "wide area network" morphs into the
corporate network (larger than arpanet/internet from just about the
beginning until sometime mid/late 80s); the technology was also used
for the corporate-sponsored univ BITNET
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

past posts specifically mentioning pdp1 and 1130/2250 spacewar
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#52 IBM Vintage 1130
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2022g.html#23 IBM APL
https://www.garlic.com/~lynn/2022f.html#118 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022.html#63 Calma, 3277GA, 2250-4
https://www.garlic.com/~lynn/2021k.html#47 IBM CSC, CMS\APL, IBM 2250, IBM 3277GA
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2018f.html#72 Jean Sammet — Designer of COBOL – A Computer of One's Own – Medium
https://www.garlic.com/~lynn/2018f.html#59 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2014j.html#103 ? How programs in c language drew graphics directly to screen in old days without X or Framebuffer?
https://www.garlic.com/~lynn/2014g.html#77 Spacewar Oral History Research Project
https://www.garlic.com/~lynn/2013g.html#72 DEC and the Bell System?
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012f.html#6 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011o.html#21 The "IBM Displays" Memory Lane (Was: TSO SCREENSIZE)
https://www.garlic.com/~lynn/2011n.html#9 Colossal Cave Adventure
https://www.garlic.com/~lynn/2011g.html#45 My first mainframe experience
https://www.garlic.com/~lynn/2010d.html#74 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
https://www.garlic.com/~lynn/2004d.html#45 who were the original fortran installations?
https://www.garlic.com/~lynn/2003m.html#14 Seven of Nine
https://www.garlic.com/~lynn/2003f.html#39 1130 Games WAS Re: Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003d.html#38 The PDP-1 - games machine?
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2001f.html#13 5-player Spacewar?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information

--
virtualization experience starting Jan1968, online at home since Mar1970

