List of Archived Posts

2023 Newsgroup Postings (10/06 - 11/21)

DataTree, UniTree, Mesa Archival
How U.S. Hospitals Undercut Public Health
Bounty offered for secret NSA seeds behind NIST elliptic curves algo
CP/67, VM/370, VM/SP, VM/XA
GML/SGML separating content and format
Internet
Internet
Video terminals
Internet
Internet
Internet
Internet
Internet
Internet
Video terminals
Audit failure is not down to any one firm: the whole audit system is designed to fail
Internet
Video terminals
MOSAIC
Typing & Computer Literacy
I've Been Moved
Video terminals
We have entered a second Gilded Age
The evolution of Windows authentication
Video terminals
Ferranti Atlas
Ferranti Atlas
Ferranti Atlas
IBM Reference Cards
Univ. Maryland 7094
The Five Stages of Acquisition Grief
The Five Stages of Acquisition Grief
IBM Mainframe Lore
'This is her opportunity': governor Kathy Hochul could forever unmask New York's financial criminals
Vintage IBM Mainframes & Minicomputers
Vintage IBM Mainframes & Minicomputers
Vintage IBM Mainframes & Minicomputers
Vintage IBM Mainframes & Minicomputers
Vintage IBM Mainframes & Minicomputers
Flying Tank: The A-10 Warthog Is Still Legendary
Rise and Fall of IBM
Vintage IBM Mainframes & Minicomputers
Vintage IBM Mainframes & Minicomputers
IBM Vintage Series/1
IBM Vintage Series/1
Vintage IBM Mainframes & Minicomputers
Vintage IBM Mainframes & Minicomputers
Vintage IBM Mainframes & Minicomputers
IBM 360/65 and 360/67
IBM 3350FH, Vulcan, 1655
The Most Important Computer You've Never Heard Of
IBM Vintage Series/1
IBM Vintage 1130
IBM Vintage ASCII 360
Vintage IBM Mainframes & Minicomputers
Vintage IBM 5100
The Most Important Computer You've Never Heard Of
Vintage IBM 370/125
Vintage IBM 5100
Vintage IBM Power/PC
The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
The Most Important Computer You've Never Heard Of
Why Do Mainframes Still Exist
We can't fight the Republican party's 'big lie' with facts alone
Online Computer Conferencing
Vintage TSS/360
Vintage TSS/360
Vintage IBM 3380s
Vintage IBM 3380s
Vintage TSS/360
Vintage RS/6000 Mainframe
Vintage Mainframe PROFS
Vintage RS/6000 Mainframe
A-10 Vs F-35 Close Air Support Flyoff Report Finally Emerges
Why the GOP plan to cut IRS funds to pay for Israel aid would increase the deficit
Vintage Mainframe PROFS
Vintage Mainframe DCF
Vintage Mainframe PROFS
Vintage Mainframe PROFS
Vintage Mainframe XT/370
Vintage Mainframe 3081D
Vintage Mainframe 3081D
Vintage Mainframe OSI
360 CARD IPL
FAA ATC, The Brawl in IBM 1964
Take aways from the tense testimony of Eric Trump and Donald Trump Jr. in the New York fraud case
FAA ATC, The Brawl in IBM 1964
FAA ATC, The Brawl in IBM 1964
Vintage IBM 709
Vintage IBM 709
Vintage IBM HASP
Vintage 3101
CSC, HONE, 23Jun69 Unbundling, Future System
CSC, HONE, 23Jun69 Unbundling, Future System
CSC, HONE, 23Jun69 Unbundling, Future System
CSC, HONE, 23Jun69 Unbundling, Future System
Conferences
The End of Milton Friedman's Reign
F-22 Raptor Vs F-35 Lightning: Ultimate Dog Fight Of The Fifth-Gen Fighter Jets
Vintage S/38
CSC, HONE, 23Jun69 Unbundling, Future System
CSC, HONE, 23Jun69 Unbundling, Future System
MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
Microcode Development and Writing to Floppies
MVS versus VM370, PROFS and HONE
360/67 Virtual Memory
360/67 Virtual Memory
CSC, HONE, 23Jun69 Unbundling, Future System
CSC, HONE, 23Jun69 Unbundling, Future System
CSC, HONE, 23Jun69 Unbundling, Future System
CSC, HONE, 23Jun69 Unbundling, Future System
Copyright Software
CSC, HONE, 23Jun69 Unbundling, Future System
360/67 Virtual Memory
Copyright Software
IBM RAS
Computer Games

DataTree, UniTree, Mesa Archival

From: Lynn Wheeler <lynn@garlic.com>
Subject: DataTree, UniTree, Mesa Archival
Date: 06 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#106 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023e.html#109 DataTree, UniTree, Mesa Archival

Learson trying to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy (and failed; two decades later IBM had one of the largest losses in the history of US companies and was being re-organized into the 13 "baby blues" in preparation for breaking up the company)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

I had been introduced to John Boyd in the early 80s and would sponsor his briefings ... when he passed, the USAF had pretty much disowned him and it was the Marines at Arlington. Somewhat surprising that the USAF then dedicated a hall to him at Nellis (USAF Weapons School) ... with one of his quotes:

There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question.

... snip ...

Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some past posts:
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#104 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#2 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#32 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#60 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#67 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022g.html#24 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2023e.html#108 John Boyd and IBM Wild Ducks

--
virtualization experience starting Jan1968, online at home since Mar1970

How U.S. Hospitals Undercut Public Health

From: Lynn Wheeler <lynn@garlic.com>
Subject: How U.S. Hospitals Undercut Public Health
Date: 07 Oct, 2023
Blog: Facebook
How U.S. Hospitals Undercut Public Health
https://www.nakedcapitalism.com/2023/10/how-u-s-hospitals-undercut-public-health.html

Health care in the United States -- the largest industry in the world's largest economy -- is notoriously cost inefficient, consuming substantially more money per capita to deliver far inferior outcomes relative to peer nations. What is less widely recognized is that the health care industry is also remarkably energy inefficient. In an era of tightening connections between environmental destruction and disease, this widely neglected reality is a major cause behind many of the sicknesses our hospitals treat and the poor health outcomes they oversee.

... snip ...

... aggravated by private equity buying up hospitals, health care systems, medical practices, retirement facilities, etc ... and skimming off as much as possible

Private equity changes workforce stability in physician-owned medical practices
https://www.eurekalert.org/news-releases/975889
When Private Equity Takes Over a Nursing Home. After an investment firm bought St. Joseph's Home for the Aged, in Richmond, Virginia, the company reduced staff, removed amenities, and set the stage for a deadly outbreak of COVID-19.
https://www.newyorker.com/news/dispatch/when-private-equity-takes-over-a-nursing-home
Parasitic Private Equity is Consuming U.S. Health Care from the Inside Out
https://www.juancole.com/2022/11/parasitic-private-consuming.html
Patients for Profit: How Private Equity Hijacked Health Care. ER Doctors Call Private Equity Staffing Practices Illegal and Seek to Ban Them
https://khn.org/news/article/er-doctors-call-private-equity-staffing-practices-illegal-and-seek-to-ban-them/
Another Private Equity-Style Hospital Raid Kills a Busy Urban Hospital
https://prospect.org/health/another-private-equity%E2%80%93style-hospital-raid-kills-a-busy-urban-hospital/
How Private Equity Looted America. Inside the industry that has ransacked the US economy--and upended the lives of working people everywhere.
https://www.motherjones.com/politics/2022/05/private-equity-apollo-blackstone-kkr-carlyle-carried-interest-loophole/
Elizabeth Warren's Long, Thankless Fight Against Our Private Equity Overlords. She sponsored a bill to fight what she calls "legalized looting." Too bad her colleagues don't seem all that interested.
https://www.motherjones.com/politics/2022/05/elizabeth-warren-private-equity-stop-wall-street-looting/
Your Money and Your Life: Private Equity Blasts Ethical Boundaries of American Medicine
https://www.nakedcapitalism.com/2022/05/your-money-and-your-life-private-equity-blasts-ethical-boundaries-of-american-medicine.html
Ethically Challenged: Private Equity Storms US Health Care
https://www.amazon.com/Ethically-Challenged-Private-Equity-Storms-ebook-dp-B099NXGNB1/dp/B099NXGNB1/
AHIP advocates for transparency for healthcare private equity firms. Raising prices has been a common strategy after a private equity acquisition, and patient outcomes have suffered as well, the group said.
https://www.healthcarefinancenews.com/news/ahip-advocates-transparency-healthcare-private-equity-firms
Private equity's newest target: Your pension fund
https://www.fnlondon.com/articles/private-equitys-newest-target-your-pension-fund-20220517

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Bounty offered for secret NSA seeds behind NIST elliptic curves algo

From: Lynn Wheeler <lynn@garlic.com>
Subject: Bounty offered for secret NSA seeds behind NIST elliptic curves algo
Date: 07 Oct, 2023
Blog: Facebook
Bounty offered for secret NSA seeds behind NIST elliptic curves algo
https://www.bleepingcomputer.com/news/security/bounty-offered-for-secret-nsa-seeds-behind-nist-elliptic-curves-algo/

trivia: I had got a secure chip right after the turn of the century with silicon EC/DSA built in and was hoping to get EAL5+ (or even EAL6+) certification ... but then the NIST ECC certification criteria were pulled and I had to settle for EAL4+.

Some pilot chips with software programming were demoed in booth at December Miami BAI retail banking conference (old archived post)
https://www.garlic.com/~lynn/99.html#224

The TD to the Information Assurance DDI had a panel session in the trusted computing track at Intel IDF and asked me to give a talk on the chip (gone 404, but lives on at the wayback machine).
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

AADS Chip Strawman
https://www.garlic.com/~lynn/x959.html#aadsstraw
X9.59, Identity, Authentication, and Privacy posts
https://www.garlic.com/~lynn/subpubkey.html#privacy
trusted computing posts
https://www.garlic.com/~lynn/submisc.html#trusted.computing

some recent posts mentioning secure chip talk at Intel IDF
https://www.garlic.com/~lynn/2022f.html#112 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#68 Security Chips and Chip Fabs
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#98 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#103 AADS Chip Strawman
https://www.garlic.com/~lynn/2021k.html#133 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#17 Data Breach
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#41 IBM Confidential
https://www.garlic.com/~lynn/2021j.html#21 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021h.html#97 What Is a TPM, and Why Do I Need One for Windows 11?
https://www.garlic.com/~lynn/2021h.html#74 "Safe" Internet Payment Products
https://www.garlic.com/~lynn/2021g.html#75 Electronic Signature
https://www.garlic.com/~lynn/2021g.html#66 The Case Against SQL
https://www.garlic.com/~lynn/2021d.html#87 Bizarre Career Events
https://www.garlic.com/~lynn/2021d.html#20 The Rise of the Internet
https://www.garlic.com/~lynn/2021b.html#21 IBM Recruiting

some archived posts mentioning EC/DSA and/or elliptic curve
https://www.garlic.com/~lynn/2021j.html#13 cryptologic museum
https://www.garlic.com/~lynn/2017d.html#24 elliptic curve pkinit?
https://www.garlic.com/~lynn/2015g.html#23 [Poll] Computing favorities
https://www.garlic.com/~lynn/2013o.html#50 Secret contract tied NSA and security industry pioneer
https://www.garlic.com/~lynn/2013l.html#55 "NSA foils much internet encryption"
https://www.garlic.com/~lynn/2012b.html#71 Password shortcomings
https://www.garlic.com/~lynn/2012b.html#36 RFC6507 Ellipitc Curve-Based Certificate-Less Signatures
https://www.garlic.com/~lynn/2011o.html#65 Hamming Code
https://www.garlic.com/~lynn/2010m.html#57 Has there been a change in US banking regulations recently
https://www.garlic.com/~lynn/2009r.html#36 SSL certificates and keys
https://www.garlic.com/~lynn/2009q.html#40 Crypto dongles to secure online transactions
https://www.garlic.com/~lynn/2008q.html#64 EAL5 Certification for z10 Enterprise Class Server
https://www.garlic.com/~lynn/2008q.html#63 EAL5 Certification for z10 Enterprise Class Server
https://www.garlic.com/~lynn/2008j.html#43 What is "timesharing" (Re: OS X Finder windows vs terminal window weirdness)
https://www.garlic.com/~lynn/2007q.html#72 Value of SSL client certificates?
https://www.garlic.com/~lynn/2007q.html#34 what does xp do when system is copying
https://www.garlic.com/~lynn/2007q.html#32 what does xp do when system is copying
https://www.garlic.com/~lynn/2007b.html#65 newbie need help (ECC and wireless)
https://www.garlic.com/~lynn/2007b.html#30 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2005u.html#27 RSA SecurID product
https://www.garlic.com/~lynn/2005l.html#34 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005e.html#22 PKI: the end
https://www.garlic.com/~lynn/2004b.html#22 Hardware issues [Re: Floating point required exponent range?]
https://www.garlic.com/~lynn/2003n.html#32 NSA chooses ECC
https://www.garlic.com/~lynn/2003n.html#25 Are there any authentication algorithms with runtime changeable
https://www.garlic.com/~lynn/2003n.html#23 Are there any authentication algorithms with runtime changeable key length?
https://www.garlic.com/~lynn/2003l.html#61 Can you use ECC to produce digital signatures? It doesn't see
https://www.garlic.com/~lynn/2002n.html#20 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002j.html#21 basic smart card PKI development questions
https://www.garlic.com/~lynn/2002i.html#78 Does Diffie-Hellman schema belong to Public Key schema family?
https://www.garlic.com/~lynn/2002g.html#38 Why is DSA so complicated?
https://www.garlic.com/~lynn/2002c.html#31 You think? TOM
https://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security requested
https://www.garlic.com/~lynn/aadsm27.htm#37 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm24.htm#51 Crypto to defend chip IP: snake oil or good idea?
https://www.garlic.com/~lynn/aadsm24.htm#29 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#28 DDA cards may address the UK Chip&Pin woes
https://www.garlic.com/~lynn/aadsm24.htm#23 Use of TPM chip for RNG?
https://www.garlic.com/~lynn/aadsm23.htm#42 Elliptic Curve Cryptography (ECC) Cipher Suites for Transport Layer Security (TLS)
https://www.garlic.com/~lynn/aadsm15.htm#30 NSA Buys License for Certicom's Encryption Technolog
https://www.garlic.com/~lynn/aadsm9.htm#3dvulner5 3D Secure Vulnerabilities?
https://www.garlic.com/~lynn/aadsm5.htm#x959 X9.59 Electronic Payment Standard
https://www.garlic.com/~lynn/aepay10.htm#46 x9.73 Cryptographic Message Syntax
https://www.garlic.com/~lynn/aepay6.htm#docstore ANSI X9 Electronic Standards "store"
https://www.garlic.com/~lynn/ansiepay.htm#anxclean Misc 8583 mapping cleanup

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 07 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#90 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#94 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#102 CP/67, VM/370, VM/SP, VM/XA

(IBM) Mainframe Hall of Fame (full list)
https://www.enterprisesystemsmedia.com/mainframehalloffame
old post about 4 new members being added (gone 404 but lives on at wayback machine)
https://web.archive.org/web/20110727105535/http://www.mainframezone.com/blog/mainframe-hall-of-fame-four-new-members-added/

Knights of VM
http://mvmua.org/knights.html

Old mainframe 2005 article (some details slightly garbled, gone 404 but lives on at wayback machine)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/

more IBM (not all mainframe) ... Learson (mainframe hall of fame, "Father of the system/360") tried (& failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

GML/SGML separating content and format

From: Lynn Wheeler <lynn@garlic.com>
Subject: GML/SGML separating content and format
Date: 07 Oct, 2023
Blog: Facebook
note some of the MIT CTSS/7094 people had gone to the 5th flr to do multics, others went to the IBM science center on the 4th and did the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s, technology also used for the corporate sponsored univ BITNET) and virtual machines (initially cp40/cms on a 360/40 with hardware to add virtual memory, morphs into cp67/cms when the 360/67, standard with virtual memory, becomes available). CTSS RUNOFF was redone for CMS as SCRIPT. GML was invented at the science center in 1969 and GML tag processing added to SCRIPT (GML chosen because of the 1st letters of the inventors' last names). I've regularly cited Goldfarb's SGML website ... but checking just now, not responding ... so most recent page from wayback machine
https://web.archive.org/web/20230602063701/http://www.sgmlsource.com/

SGML is the International Standard (ISO 8879) language for structured data and document representation, the basis of HTML and XML and many others. I invented SGML in 1974 and led a 12-year technical effort by several hundred people to develop its present form as an International Standard.

... snip ...

SGML history
https://web.archive.org/web/20230402213042/http://www.sgmlsource.com/history/index.htm

Welcome to the SGML History Niche. It contains some reliable papers on the early history of SGML, and its precursor, IBM's Generalized Markup Language, GML.

... snip ...

papers about early GML
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... see "Conclusions" section in above webpage. ... then decade after the morph of GML into SGML and after another decade, SGML morphs into HTML at CERN. Trivia: 1st webserver in the US is on (Stanford) SLAC's VM370 system (CP67 having morphed into VM370)
https://www.slac.stanford.edu/history/earlyweb/history.shtml
https://www.slac.stanford.edu/history/earlyweb/firstpages.shtml
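
As flavor of the content/format split, a small illustrative GML-style fragment (patterned on the later DCF GML starter set, not taken from any of the referenced documents): the tags say what each piece of text is, while a separate SCRIPT formatting profile decides how headings, paragraphs, and lists actually render.

  :h1.Computing at the Science Center
  :p.The markup identifies structure only; fonts, spacing, and page
  layout come from the formatting profile, so the same source can be
  rendered many different ways.
  :ul.
  :li.content lives in the document
  :li.format lives in the profile
  :eul.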

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, XML, ... etc, posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 10 Oct, 2023
Blog: Facebook
Co-worker at the science center was responsible for the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s), technology also used for the corporate sponsored univ. BITNET. Ed Hendricks
https://en.wikipedia.org/wiki/Edson_Hendricks
Ed tried to get IBM to support internet & failed, SJMN article (behind paywall but mostly free at wayback)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
additional correspondence with IBM executives (Ed passed Aug2020, his website at wayback machine)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
Didn't get 9-net until after Interop88 ... old email (Ed had already left IBM)
https://www.garlic.com/~lynn/2006j.html#email881216

At the great cutover of arpanet (imp/host) to internetworking on 1jan1983, there were approx. 100 IMPs and 255 hosts, while the internal network was rapidly approaching 1000 nodes. Old archived post that lists corporate locations that added one or more nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8

We had 1st corporate CSNET gateway at SJR in 1982 ... old archived CSNET email about the cutover from imp/host to internetworking:
https://www.garlic.com/~lynn/2000e.html#email821230
https://www.garlic.com/~lynn/2000e.html#email830202

Big inhibitor for ARPANET proliferation was the requirement for (tightly controlled) IMPs. Big inhibitors for internal network proliferation were 1) the requirement that links had to be encrypted (and gov. resistance, especially when links crossed national boundaries; in 1985 a major link-encryptor vendor claimed that the internal network had more than half of all link encryptors in the world) and 2) the communication group forcing conversion to SNA and limitation to mainframes.

In the early 80s, I had the HSDT project (T1 and faster computer links, both terrestrial and satellite) and was working with the NSF director; was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and eventually an RFP was released (in part based on what we already had running) ... Preliminary Announcement (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

BITNET converted to TCP/IP for "BITNET2" (about the same time the communication group forced the internal network to convert to SNA) ... then merged with CSNET.
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/CSNET

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

inventing the internet
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
some internal politics
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 10 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#5 Internet

some gov agencies were mandating GOSIP (gov OSI) and elimination of TCP/IP ... there were some number of OSI booths at Interop88

I was on the TAB for (Chesson's) XTP and there were some gov. operations involved that believed they needed an ISO standard ... so took it to (ISO-chartered) ANSI X3S3.3 for standardization as HSP (high-speed protocol) ... eventually they said that ISO required standards to conform to OSI. XTP didn't qualify because it 1) supported internetworking (a non-existent layer between OSI 3/4, network/transport), 2) skipped the interface between layers 3/4, and 3) went directly to LAN/MAC (a non-existent interface somewhere in the middle of the OSI level-3/network layer).

... at the time there was a joke that ISO could standardize stuff that wasn't even possible to implement, while IETF required at least two interoperable implementations before standards progression.

interop88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
xtp/hsp posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

... and old email from somebody I had previously worked with ... who got responsibility for setting up EARN
https://www.garlic.com/~lynn/2001h.html#email840320

bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Video terminals

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Video terminals
Newsgroups: alt.folklore.computers
Date: Tue, 10 Oct 2023 15:35:52 -1000
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:

The university had 2741s and TTYs (mostly model 33, but the occasional 35 and even a few 37s). They did have a few 2260s (wow, 12x80 screen!).

Once out in the real world, it was all cards. No terminals of any sort for several years, then Univac's Uniscope 100 and 200 terminals (block mode, synchronous, polled protocol - programming was a nightmare). Our foray into terminal emulators consisted of an ISA-bus card with a synchronous port, plus software that emulated the Uniscope on an MS-DOS box. I envied those DEC shops and their character-mode asynchronous terminals - they were so simple by comparison. I did, however, manage to port the original 350-point Adventure (and later, Zork, once I got my hands on the source code) to our Univac 90/30. It was a lot of work, but being able to play the game at all was a strong incentive.

As for terminal emulators, I always have at least one open, on both Linux and Windows boxes. Call me strange, but I find that most of the time a command-line interface is so much simpler than that gooey stuff, and I can do things in a dozen keystrokes while J. Random Luser spends five minutes pointing and clicking and dragging and dropp... oh damn, where did I drop that thing...



I took a two-credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO for the 360/30. The univ had been sold a 360/67 for tss/360 to replace a 709 (tape->tape) / 1401 (unit record front end, card reader->tape, tape->printer/punch). Pending arrival of the 360/67, the 1401 was replaced with a 360/30, which had 1401 emulation and could run MPIO; the 360/30 was to gain 360 experience ... so my job was to re-implement MPIO.

I was given a bunch of hardware & software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The univ. shut down the datacenter on weekends and I would have the place dedicated (although monday morning classes were a little hard after 48hrs w/o sleep). Within a few weeks, I had a 2000-card assembler program. Within a year of taking the intro class, the 360/67 had arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65).

The univ. had some number of 2741s (originally for tss/360) ... but then got some tty/ascii terminals (the tty/ascii port scanner for the IBM telecommunication controller arrived in a Heathkit box).

Then some people from the science center came out to install (virtual machine) CP67 ... which was pretty much me playing with it on my weekend dedicated time. It had 1052&2741 terminal support and some tricks to dynamically recognize line terminal type and switch the line port scanner type. I added TTY/ASCII support (including being able to dynamically switch the terminal scanner type). I then wanted to have a single dial-up number for all terminal types (hunt group):
https://en.wikipedia.org/wiki/Line_hunting

I could dynamically switch the port scanner terminal type for each line, but IBM had taken a short-cut and hardwired the line speed (so it didn't quite work). The univ. starts a project to implement a clone controller: build a channel interface board for an Interdata/3 programmed to emulate an IBM terminal control unit, with the addition that it could do dynamic line speed. It was later enhanced to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces (Interdata and later Perkin-Elmer sell it as an IBM clone controller) ... four of us get written up as responsible for (some part of) the clone controller business
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
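
A minimal sketch of the recognition idea (illustrative only, not the actual CP67 or Interdata code; the terminal types, speeds, and probe routine are stand-ins): cycle the dial-up line through candidate scanner types and, once the clone controller made line speed programmable too, through candidate speeds, until a probe gets a clean answer.

  /* hedged sketch: dynamic terminal-type & line-speed recognition on a
   * dial-up line; probe() is a stand-in for re-programming the port
   * scanner and examining what comes back */
  #include <stdio.h>

  enum term { T_1052, T_2741, T_TTY, NTYPES };
  static const char *name[]  = { "1052", "2741", "TTY" };
  static const int  speeds[] = { 110, 134, 300 };

  /* pretend the dialed-in terminal is a TTY at 300 baud; it "answers"
   * only when scanner type and line speed both match (IBM hardwired
   * the speed; the clone controller made it programmable) */
  static int probe(int type, int baud)
  {
      return type == T_TTY && baud == 300;
  }

  int main(void)
  {
      for (int t = 0; t < NTYPES; t++)        /* re-program scanner type */
          for (int s = 0; s < 3; s++)         /* ... and line speed */
              if (probe(t, speeds[s])) {
                  printf("line: %s terminal at %d baud\n", name[t], speeds[s]);
                  return 0;
              }
      puts("unrecognized terminal");
      return 1;
  }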

other trivia: 360 was originally supposed to be an ASCII machine ... but the ASCII unit record gear wasn't ready, so they were going to (temporarily) extend BCD (refs gone 404, but live on at wayback machine)
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
other
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

after graduating and joining IBM at the cambridge science center (I got a 2741 at home) ... and then transferring to san jose research (home 2741 replaced 1st with a cdi miniterm, then an IBM 3101 glass tty) ... I also got to wander around lots of IBM & customer datacenters in silicon valley ... including tymshare
https://en.wikipedia.org/wiki/Tymshare
provided their (vm370) CMS-based online computer conferencing system, "free" to (mainframe user group) SHARE in Aug1976 as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for internal network & systems ... biggest problem was lawyers concerned about internal employees being directly exposed to (unfiltered) customer information (after McDonnell Douglas bought TYMSHARE in 84, vmshare was moved to a different platform).

One TYMSHARE visit, they demo'ed ADVENTURE that somebody had found on the Stanford AI PDP10 and ported to CMS, and I got a copy ... which I made available inside IBM (for people that got all points, I would send a copy of the source).

TYMSHARE also told the story that an executive, learning that customers were playing games, directed that TYMSHARE was for business and all games had to be removed. He changed his mind after being told that game playing had grown to something like 30% of revenue.

trivia: I had ordered an IBM/PC on announce through the employee plan (with employee discount). However, by the time it arrived, the IBM/PC street price had dropped below the employee discount. IBM provided a 2400 baud Hayes-compatible modem that supported hardware encryption (for the home terminal emulation program). The terminal emulator did software compression and both sides kept a cache of a couple thousand recently transmitted character strings ... and could send an index into the string cache (rather than the compressed string).
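
A hedged sketch of the string-cache idea (my reconstruction of the mechanism as described, not the actual emulator code): both ends keep an identical table of recently sent strings, updated in lockstep, so a repeat can cross the 2400-baud line as a short index instead of the full (compressed) text.

  /* hedged sketch: sender side of a lockstep string cache; the
   * receiver maintains the same table with the same replacement
   * order, so an index alone identifies the string */
  #include <stdio.h>
  #include <string.h>

  #define SLOTS 2048               /* "couple thousand" entries */
  static char cache[SLOTS][64];
  static int  next;                /* round-robin replacement */

  static int lookup(const char *s)
  {
      for (int i = 0; i < SLOTS; i++)
          if (strcmp(cache[i], s) == 0)
              return i;
      return -1;
  }

  static void send_string(const char *s)
  {
      int i = lookup(s);
      if (i >= 0)                  /* hit: 2-byte index replaces text */
          printf("send index %d (%zu chars -> 2 bytes)\n", i, strlen(s));
      else {                       /* miss: send (compressed) text, cache it */
          printf("send text \"%s\"\n", s);
          strncpy(cache[next], s, 63);
          next = (next + 1) % SLOTS;
      }
  }

  int main(void)
  {
      send_string("LOGON LYNN");   /* first time: full text */
      send_string("LOGON LYNN");   /* repeat: index only */
      return 0;
  }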

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
360 plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

recent posts mentioning 709/1401/mpio, 360/67, & boeing cfo
https://www.garlic.com/~lynn/2023e.html#99 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#5 1403 printer

recent posts mentioning TYMSHARE, vmshare, and adventure
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 11 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#5 Internet
https://www.garlic.com/~lynn/2023f.html#6 Internet

note some of the MIT CTSS/7094 people had gone to the 5th flr to do multics, others went to the IBM cambridge science center on the 4th and did the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s, technology also used for the corporate sponsored univ BITNET) and virtual machines (initially cp40/cms on a 360/40 with hardware to add virtual memory, morphs into cp67/cms when the 360/67, standard with virtual memory, becomes available). CTSS RUNOFF was redone for CMS as SCRIPT. GML was invented at the science center in 1969 and GML tag processing added to SCRIPT (GML chosen because of the 1st letters of the inventors' last names). I've regularly cited Goldfarb's SGML website ... but checking just now, not responding ... so most recent page from wayback machine
https://web.archive.org/web/20230602063701/http://www.sgmlsource.com/

SGML is the International Standard (ISO 8879) language for structured data and document representation, the basis of HTML and XML and many others. I invented SGML in 1974 and led a 12-year technical effort by several hundred people to develop its present form as an International Standard.

... snip ...

SGML history
https://web.archive.org/web/20230402213042/http://www.sgmlsource.com/history/index.htm
https://web.archive.org/web/20230703135955/http://www.sgmlsource.com/history/index.htm

Welcome to the SGML History Niche. It contains some reliable papers on the early history of SGML, and its precursor, IBM's Generalized Markup Language, GML.

... snip ...

papers about early GML
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... see "Conclusions" section in above webpage. ... then decade after the morph of GML into SGML and after another decade, SGML morphs into HTML at CERN. Trivia: 1st webserver in the US is on (Stanford) SLAC's VM370 system (CP67 having morphed into VM370) ... and CP67's WAN had morphed into company internal network & BITNET).
https://www.slac.stanford.edu/history/earlyweb/history.shtml
https://www.slac.stanford.edu/history/earlyweb/firstpages.shtml

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, XML, ... etc, posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

trivia: last product we did at IBM before leaving was HA/CMP (now PowerHA)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

it had started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I renamed it HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). In a meeting early Jan1992, AWD VP Hester tells Oracle CEO Ellison that there would be a 16-processor cluster mid-92 and a 128-processor cluster ye-92 (year-end). A couple weeks later cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific only) and we are told we couldn't work on anything with more than four processors (we leave IBM a few months later).

Later we are brought into a small client/server startup as consultants. Two former oracle people (that we had worked with on cluster scale-up) are there, responsible for something called "commerce server", and they wanted to do payment transactions on the server; the startup had also invented this technology they called "SSL" that they wanted to use ... the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and financial payment networks. Later I did a talk on "Why Internet Isn't Business Critical Dataprocessing" ... based on the compensating processes and software I had to do ... Postel sponsored the talk at ISI/USC (sometimes Postel also would let me help with RFCs)
https://en.wikipedia.org/wiki/Jon_Postel

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
financial payment network gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

some recent posts mentioning postel, Internet & business critical dataprocessing
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#10 System Availability
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021e.html#7 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 11 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#5 Internet
https://www.garlic.com/~lynn/2023f.html#6 Internet
https://www.garlic.com/~lynn/2023f.html#8 Internet

X.25 topic drift: my wife did a short stint as chief architect of Amadeus (the EU airline system built off the old Eastern Airlines System/One) ... however, she sided with Europe on X.25 (instead of IBM SNA). The IBM communication group got her replaced, but it didn't do them much good: Europe went with X.25 anyway and her SNA replacement got replaced.

some amadeus/x.25 posts
https://www.garlic.com/~lynn/2023d.html#80 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#35 Eastern Airlines 370/195 System/One
https://www.garlic.com/~lynn/2023c.html#48 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#47 IBM ACIS
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022h.html#97 IBM 360
https://www.garlic.com/~lynn/2022h.html#10 Google Cloud Launches Service to Simplify Mainframe Modernization
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#76 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#75 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 12 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#5 Internet
https://www.garlic.com/~lynn/2023f.html#6 Internet
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023f.html#9 Internet

Mid-80s, the communication group was fiercely fighting off client/server and distributed computing and trying to block release of mainframe TCP/IP support. Apparently some influential customers got that reversed, and the communication group changed tactics, saying that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 CPU. I then did RFC1044 support and in some tuning tests at Cray Research between a 4341 and a Cray, got sustained channel throughput using only a modest amount of the 4341 ... something like 500 times improvement in bytes moved per instruction executed.
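
Back-of-envelope on the "bytes moved per instruction" claim; all machine figures below are my assumptions for illustration (the post itself only gives the 44kbytes/sec figure, "nearly a whole 3090 CPU", and the roughly 500x ratio).

  /* hedged arithmetic sketch: bytes moved per instruction executed;
   * the MIPS and throughput figures are assumed, not from the post */
  #include <stdio.h>

  int main(void)
  {
      double base_bps  = 44e3;    /* shipped stack: 44 kbytes/sec ...    */
      double base_ips  = 30e6;    /* ... on an assumed ~30 MIPS 3090 CPU */
      double r1044_bps = 1e6;     /* assumed ~1 mbyte/sec sustained ...  */
      double r1044_ips = 1.2e6;   /* ... on an assumed ~1.2 MIPS 4341    */

      double base_bpi  = base_bps  / base_ips;   /* bytes per instruction */
      double r1044_bpi = r1044_bps / r1044_ips;
      printf("base %.4f B/instr, RFC1044 %.2f B/instr, ratio ~%.0fx\n",
             base_bpi, r1044_bpi, r1044_bpi / base_bpi);
      return 0;
  }

With these assumed numbers the ratio comes out around 570x, in the neighborhood of the cited ~500 times.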

In any case, it contributed to the BITNET transition to BITNET2 supporting TCP/IP.

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
communication group fiercely fighting off client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 12 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#5 Internet
https://www.garlic.com/~lynn/2023f.html#6 Internet
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023f.html#9 Internet
https://www.garlic.com/~lynn/2023f.html#10 Internet

I've got a whole collection of (AUP) policy files. I've conjectured that some of the commercial use restrictions stem from (tax-free?) contributions to various networks. The scenario is that telcos had fixed infrastructure costs (staff, equipment, etc) that were significantly paid for by bandwidth use charges ... they also had enormous unused capacity in dark fiber. There was a chicken&egg problem: to promote the unused capacity they needed bandwidth-hungry applications, and to promote the bandwidth-hungry applications they needed to drastically reduce bandwidth charges (which could mean operating at a loss for years). Contributing resources for non-commercial use could promote bandwidth-hungry applications w/o impacting existing revenue. Folklore is that NSFnet (supercomputer access network, evolving into the NSFNET backbone as regional networks connect in) got resource/bandwidth contributions that were 4-5 times the winning bid.

Preliminary Announcement (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

NEARnet:

29 October 1990

NEARnet - ACCEPTABLE USE POLICY

This statement represents a guide to the acceptable use of NEARnet for data communications. It is only intended to address the issue of NEARnet use. In those cases where data communications are carried across other regional networks or the Internet, NEARnet users are advised that acceptable use policies of those other networks apply and may limit use.

NEARnet member organizations are expected to inform their users of both the NEARnet and the NSFnet acceptable use policies.

1. NEARnet Primary Goals

1.1 NEARnet, the New England Academic and Research Network, has been established to enhance educational and research activities in New England, and to promote regional and national innovation and competitiveness. NEARnet provides access to regional and national resources to its members, and access to regional resources from organizations throughout the United States and the world.

2. NEARnet Acceptable Use Policy

2.1 All use of NEARnet must be consistent with NEARnet's primary goals.

2.2 It is not acceptable to use NEARnet for illegal purposes.

2.3 It is not acceptable to use NEARnet to transmit threatening, obscene, or harassing materials.

2.4 It is not acceptable to use NEARnet so as to interfere with or disrupt network users, services or equipment. Disruptions include, but are not limited to, distribution of unsolicited advertizing, propagation of computer worms and viruses, and using the network to make unauthorized entry to any other machine accessible via the network.

2.5 It is assumed that information and resources accessible via NEARnet are private to the individuals and organizations which own or hold rights to those resources and information unless specifically stated otherwise by the owners or holders of rights. It is therefore not acceptable for an individual to use NEARnet to access information or resources unless permission to do so has been granted by the owners or holders of rights to those resources or information.

3. Violation of Policy

3.1 NEARnet will review alleged violations of Acceptable Use Policy on a case-by-case basis. Clear violations of policy which are not promptly remedied by member organization may result in termination of NEARnet membership and network services to member.


... snip ...

some past posts mentioning network AUP
https://www.garlic.com/~lynn/2023d.html#55 How the Net Was Won
https://www.garlic.com/~lynn/2022h.html#91 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2021k.html#130 NSFNET
https://www.garlic.com/~lynn/2013n.html#18 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2010g.html#75 What is the protocal for GMT offset in SMTP (e-mail) header header time-stamp?
https://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?
https://www.garlic.com/~lynn/2010b.html#33 Happy DEC-10 Day
https://www.garlic.com/~lynn/2006j.html#46 Arpa address
https://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
https://www.garlic.com/~lynn/aadsm12.htm#23 10 choices that were critical to the Net's success

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 12 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#5 Internet
https://www.garlic.com/~lynn/2023f.html#6 Internet
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023f.html#9 Internet
https://www.garlic.com/~lynn/2023f.html#10 Internet
https://www.garlic.com/~lynn/2023f.html#11 Internet

archived post with decade of VAX numbers, sliced and diced by model, year, us/non-us
https://www.garlic.com/~lynn/2002f.html#0

IBM 4300s and VAXes sold into the same mid-range market and in similar numbers for small-unit orders. The big difference was large organizations with orders of hundreds of 4300s at a time. As seen in the (VAX) numbers, the mid-range market started to change (to workstations and large PC servers) in the 2nd half of the 80s.

Jan1979, I got con'ed into doing benchmarks on an engineering 4341 (before first customer ship) for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).

old email spring 1979 about USAFDS coming by to talk about 20 4341s
https://www.garlic.com/~lynn/2001m.html#email790404b

... but by the time they got around to stopping by in the fall, it had morphed into 210 4341s (part of the leading edge of the coming departmental computing tsunami). USAFDS was the MULTICS poster child ... and there was some rivalry between MULTICS on the 5th flr and the IBM Science Center on the 4th flr.
https://www.multicians.org/site-afdsc.html

4341s & IBM 3370 (FBA, fixed-block disks) didn't require datacenter provisioning, able to place out in departmental areas. Inside IBM, departmental conference rooms became scarce with so many being converted into distributed 4341 rooms ... and part of big explosion in the internal network starting in the early 80s, archived post passing 1000 nodes in 1983
https://www.garlic.com/~lynn/2006k.html#8

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

posts mentioning 4300 & leading edge of (cluster supercomputing and departmental computing) tsunamis
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022f.html#92 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#18 IBM Left Behind
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022.html#124 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#24 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#53 Amdahl Computers

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 12 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#6 Internet

Before doing XTP, Greg Chesson was involved in doing UUCP. In 1993, I got a full (PAGESAT satellite) usenet feed at home, in return for doing satellite modem drivers and writing articles for industry trade magazines.

xtp/hsp posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

posts mentioning pagesat, uucp usenet
https://www.garlic.com/~lynn/2023e.html#58 USENET, the OG social network, rises again like a text-only phoenix
https://www.garlic.com/~lynn/2022e.html#40 Best dumb terminal for serial connections
https://www.garlic.com/~lynn/2022b.html#7 USENET still around
https://www.garlic.com/~lynn/2022.html#11 Home Computers
https://www.garlic.com/~lynn/2021i.html#99 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#95 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2018e.html#51 usenet history, was 1958 Crisis in education
https://www.garlic.com/~lynn/2017h.html#110 private thread drift--Re: Demolishing the Tile Turtle
https://www.garlic.com/~lynn/2017g.html#51 Stopping the Internet of noise
https://www.garlic.com/~lynn/2017b.html#21 Pre-internet email and usenet (was Re: How to choose the best news server for this newsgroup in 40tude Dialog?)
https://www.garlic.com/~lynn/2016g.html#59 The Forgotten World of BBS Door Games - Slideshow from PCMag.com
https://www.garlic.com/~lynn/2015h.html#109 25 Years: How the Web began
https://www.garlic.com/~lynn/2015d.html#57 email security re: hotmail.com
https://www.garlic.com/~lynn/2013l.html#26 Anyone here run UUCP?
https://www.garlic.com/~lynn/2012b.html#92 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2009j.html#19 Another one bites the dust
https://www.garlic.com/~lynn/2006m.html#11 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2001h.html#66 UUCP email
https://www.garlic.com/~lynn/2000e.html#39 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS

--
virtualization experience starting Jan1968, online at home since Mar1970

Video terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Video terminals
Newsgroups: alt.folklore.computers
Date: Thu, 12 Oct 2023 16:05:39 -1000
Peter Flass <peter_flass@yahoo.com> writes:

Why didn't you build it on top of BOS?


re:
https://www.garlic.com/~lynn/2023f.html#7 Video terminals

There were assembler options that generated two versions of MPIO: 1) BPS stand-alone (low-level device drivers, interrupt handlers, etc) and 2) OS/360 with system services and I/O macros.

Under OS/360 on the 360/30, the BPS stand-alone version took 30mins to assemble, while the OS/360 version took an hour ... nearly all the difference was that each OS/360 DCB macro took 5mins to assemble.

recent post mentioning MPIO
https://www.garlic.com/~lynn/2023e.html#99 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#60 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#28 Punch Cards
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#58 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?

--
virtualization experience starting Jan1968, online at home since Mar1970

Audit failure is not down to any one firm: the whole audit system is designed to fail

From: Lynn Wheeler <lynn@garlic.com>
Subject: Audit failure is not down to any one firm: the whole audit system is designed to fail
Date: 13 Oct, 2023
Blog: Facebook
Audit failure is not down to any one firm: the whole audit system is designed to fail to suit the interests of big business and their auditors
https://www.taxresearch.org.uk/Blog/2023/10/12/audit-failure-is-not-down-to-any-one-firm-the-whole-audit-system-is-designed-to-fail-to-suit-the-interests-of-big-business-and-their-auditors/
KPMG boss says Carillion auditing was 'very bad' as firm is fined record GBP21m
https://www.theguardian.com/business/2023/oct/12/kpmg-fined-record-21m-over-carillion-audit-failures

... rhetoric on the floor of congress was that Sarbanes-Oxley would prevent future ENRONs and guarantee that auditors and executives did jailtime (however, there were jokes that congress felt badly that one of the big firms went out of business, and wanted to increase audit business to help the rest).

Possibly even the GAO didn't believe SOX would make any difference; it started doing reports of public company fraudulent financial filings, even showing they increased after SOX went into effect (and nobody did jailtime).

sarbanes-oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
public company fraudulent financial reports
https://www.garlic.com/~lynn/submisc.html#fraudulent.financial.filings

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 13 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#5 Internet
https://www.garlic.com/~lynn/2023f.html#6 Internet
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023f.html#9 Internet
https://www.garlic.com/~lynn/2023f.html#10 Internet
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023f.html#13 Internet

In 1980, IBM STL (on the west coast) and Hursley (in England) were looking at off-shift sharing of operations ... via a double-hop satellite 56kbit link (west coast/east coast up/down; east coast/England up/down). It was first connected via VNET/RSCS and worked fine; then a SNA/JES2 bigot executive insisted that it be a JES2 connection, and it didn't work. They moved it back to VNET/RSCS and everything was flowing just fine again. The executive was then quoted as saying that VNET/RSCS was so stupid that it didn't know that it wasn't working (the problem was that SNA/JES2 had a window pacing algorithm with ACK time-outs that couldn't handle the double-hop round-trip latency). I wrote up our dynamic adaptive rate-based pacing algorithm for inclusion in (Chesson's) XTP.
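
As a rough sketch of the underlying problem (numbers illustrative only, not the actual JES2 or XTP parameters): window pacing caps throughput at window-size divided by round-trip time, so the roughly one-second round trip of a double hop starves a small fixed window, while rate-based pacing meters the send rate to the link/receiver drain rate and is largely indifferent to latency.

  # illustrative only: fixed-window vs rate-based pacing on a long-latency link
  link_rate = 56_000 / 8      # 56kbit/s link, in bytes/sec
  rtt = 1.0                   # ~double-hop satellite round trip, seconds
  window = 2 * 256            # hypothetical: two outstanding 256-byte RUs

  # window pacing: sender stalls until ACKs return, so at most one
  # window of data is in flight per round trip
  window_throughput = window / rtt

  # rate-based pacing: packets metered at the computed drain rate;
  # latency delays only the first byte, not the steady-state rate
  rate_throughput = link_rate

  print(f"window pacing: {window_throughput:6.0f} bytes/sec")
  print(f"rate pacing  : {rate_throughput:6.0f} bytes/sec")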

One of my HSDT 1st T1 satellite links was between Los Gatos (on the west coast) and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (on the east coast) that had a boat load of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems

It was a T1 tail-circuit on T3 microwave between Los Gatos and the IBM San Jose plant site, to the T3 satellite 10m dish, to the Kingston T3 satellite 10m dish. HSDT then got its own Ku-band TDMA satellite system: 4.5m dishes in Los Gatos and IBM Yorktown Research (on the east coast) and a 7m dish in Austin with the workstation division. There were LSM and EVE (hardware VLSI logic verification, running something like 50,000 times faster than logic verification software on the largest mainframe) and claims that Austin being able to use the boxes remotely helped bring the RIOS (RS/6000) chipset in a year early. HSDT had two separate custom spec'ed TDMA systems built, one by a subsidiary of a Canadian firm and another by a Japanese firm.

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
xtp/hsp posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

post mentioning Ku-band TDMA system
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#62 Mainframe IPL
https://www.garlic.com/~lynn/2014b.html#67 Royal Pardon For Turing
https://www.garlic.com/~lynn/2010k.html#12 taking down the machine - z9 series
https://www.garlic.com/~lynn/2008m.html#44 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008m.html#20 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008m.html#19 IBM-MAIN longevity
https://www.garlic.com/~lynn/2006k.html#55 5963 (computer grade dual triode) production dates?
https://www.garlic.com/~lynn/2006.html#26 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003j.html#76 1950s AT&T/IBM lack of collaboration?
https://www.garlic.com/~lynn/94.html#25 CP spooling & programming technology
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice

other posts mentioning Clementi & satellite link
https://www.garlic.com/~lynn/2023d.html#120 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022h.html#26 Inventing the Internet
https://www.garlic.com/~lynn/2022f.html#5 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#88 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022c.html#57 ASCI White
https://www.garlic.com/~lynn/2022c.html#52 IBM Personal Computing
https://www.garlic.com/~lynn/2022c.html#22 Telum & z16
https://www.garlic.com/~lynn/2022b.html#79 Channel I/O
https://www.garlic.com/~lynn/2022b.html#69 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2022.html#121 HSDT & Clementi's Kinston E&S lab
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021e.html#28 IBM Cottle Plant Site
https://www.garlic.com/~lynn/2021e.html#14 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#62 Mainframe IPL
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018f.html#110 IBM Token-RIng
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2017h.html#50 System/360--detailed engineering description (AFIPS 1964)

--
virtualization experience starting Jan1968, online at home since Mar1970

Video terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Video terminals
Newsgroups: alt.folklore.computers
Date: Fri, 13 Oct 2023 09:39:26 -1000
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:

Yeah, those DCB macros were a bear. Univac adopted a version of them in OS/3 on the 90/30. The assembler looked a lot like the 360 assembler. There was a rumour going around that someone found the source code for the 360 assembler in the trunk of a car. Another rumour is that IBM wanted it found. :-)


re:
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023f.html#14 Video terminals

folklore is that the person writing the assembler op-code lookup was told it had to be done in 256 bytes (so the assembler could run on a minimum-memory 360) ... as a result it had to sequentially read a dataset of op-codes for each statement. Some time later, the assembler got a huge speed-up by making the op-code table part of the assembler.
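
A toy illustration of the difference (hypothetical code with a tiny made-up subset of the op-code table, not the actual assembler internals): rescanning an op-code dataset for every statement costs a full pass of I/O per statement, while a resident table makes each lookup a cheap in-memory search.

  # illustrative: per-statement sequential scan vs resident op-code table
  from bisect import bisect_left

  OPCODES = sorted([("A", 0x5A), ("AR", 0x1A), ("L", 0x58),
                    ("LR", 0x18), ("ST", 0x50)])     # tiny subset only
  MNEMONICS = [m for m, _ in OPCODES]

  def lookup_scan(mnemonic):
      # the 256-byte version: re-read the op-code "dataset" start to
      # finish for every statement (a list stands in for the file here)
      for m, code in OPCODES:
          if m == mnemonic:
              return code
      raise KeyError(mnemonic)

  def lookup_resident(mnemonic):
      # the later version: table resident in the assembler, binary search
      i = bisect_left(MNEMONICS, mnemonic)
      if i < len(MNEMONICS) and MNEMONICS[i] == mnemonic:
          return OPCODES[i][1]
      raise KeyError(mnemonic)

  assert lookup_scan("LR") == lookup_resident("LR") == 0x18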

Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65 with os/360). Non-resident SVCs had to fit in 2k ... as a result things like file open/close had to stream through an enormous number of 2k pieces.

student fortran jobs had run well under a second on the 709 tape->tape. Initially on os/360 they ran well over a minute. I installed HASP and it cut the time in half. I then reworked STAGE2 SYSGENs so they could run in the production job stream, with ordering of datasets and PDS members optimized for arm seek and multi-track search, cutting elapsed time another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I installed Univ. of Waterloo's WATFOR.

archived (a.f.c.) post with part of SHARE presentation I gave on performance work (both os/360 and some work playing with CP/67).
https://www.garlic.com/~lynn/94.html#18

For CP/67, I started out cutting pathlengths for running OS/360 in a virtual machine. Stand-alone, the OS/360 workload ran 322secs; under CP67 it initially ran 856secs (534secs of CP67 CPU). After a couple months I had cut it to 435secs (113secs of CP67 CPU, a 421sec CPU reduction).

I then redid dasd i/o. Originally it was FIFO queuing and a single 4k page transfer at a time. I redid all 2314 I/O to ordered seek queueing and multiple 4k page transfers in a single I/O, optimized for transfers per revolution (the paging "DRUM" was originally about 70 4k transfers/sec; got it up to a peak around 270 ... nearly channel transfer capacity).
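
A minimal sketch of the two changes, assuming made-up cylinder numbers (the real work was inside the CP67 kernel): ordered seek queueing services queued requests in cylinder order instead of arrival order, and requests for the same cylinder get chained into a single channel program instead of one I/O per 4k page.

  # illustrative: FIFO vs ordered seek queueing, plus chained transfers
  from itertools import groupby

  requests = [83, 14, 57, 14, 96, 14, 57, 2]   # cylinder of each queued 4k page

  def arm_travel(order, start=0):
      travel, pos = 0, start
      for cyl in order:
          travel += abs(cyl - pos)
          pos = cyl
      return travel

  print("FIFO travel :", arm_travel(requests))   # arrival order: long seeks
  sweep = sorted(requests)                       # one sweep across the pack
  print("sweep travel:", arm_travel(sweep))

  # chaining: all queued requests for the same cylinder become a
  # single I/O with multiple 4k transfers, instead of one I/O each
  for cyl, grp in groupby(sweep):
      print(f"cyl {cyl:3}: 1 I/O, {len(list(grp))} page transfer(s) chained")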

a few posts mentioning undergraduate work
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#29 Copyright Software
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023e.html#10 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging

--
virtualization experience starting Jan1968, online at home since Mar1970

MOSAIC

From: Lynn Wheeler <lynn@garlic.com>
Subject: MOSAIC
Date: 13 Oct, 2023
Blog: Facebook
I got brought into a small client/server startup as a consultant. Two former Oracle people (whom we had worked with on RDBMS cluster scale-up) were there, responsible for something they called "commerce server", and they wanted to do payment transactions. The startup had also invented something they called "SSL" they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial industry payment networks.

Somewhat because of having been involved in "electronic commerce", I was asked to participate in financial industry standards. A gov. agency up at Fort Meade also had participants (possibly because of crypto activities). My perception was that agency assurance and agency collections were on different sides ... most of the interaction was with assurance ... and they seemed to have battles over confusing authentication (financial wanted authenticated transactions) with identification. I won some number of battles requiring high-integrity authentication w/o needing identification (claiming that requiring both for a transaction would violate some security principles).

note 28Mar1986 preliminary announce:

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

with the National Center for Supercomputing Applications' MOSAIC at
http://www.ncsa.illinois.edu/enabling/mosaic

then some spin-off to silicon valley, and NCSA complained about the spin-off using the "MOSAIC" term; trivia: from what silicon valley company did they get the term "NETSCAPE"?

During the first few months of increasing netscape webserver load ... there was a big TCP/IP performance problem with the server platforms (using BSD TCP/IP). HTTP/HTTPS were supposedly atomic web transactions ... but implemented using TCP sessions. The FINWAIT list of closing sessions was linearly scanned, per inbound packet, to see if there was a dangling packet (on the assumption that there would never be more than a few sessions on the FINWAIT list). However, with increasing HTTP/HTTPS load, there could be (tens of?) thousands, and webservers started spending 95% of the CPU running the FINWAIT list. NETSCAPE was increasingly adding SUN webservers as a countermeasure to the enormous CPU bottleneck. Finally they were replaced with a large SEQUENT multiprocessor, which had fixed the FINWAIT problem in DYNIX some time before. Eventually the other BSD-based server vendors shipped a FINWAIT "fix".
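
A toy model of the bottleneck (not the actual BSD or DYNIX TCP code): scanning the FINWAIT list start to finish for every inbound packet is fine with a handful of entries, but goes quadratic under webserver load; a keyed lookup removes the scaling problem.

  # illustrative: per-packet linear scan of FINWAIT sessions vs keyed lookup
  finwait = [("10.0.0.1", 1024 + i) for i in range(20_000)]  # closing sessions
  finwait_set = set(finwait)

  def match_scan(pkt):
      # BSD assumption: "never more than a few" FINWAIT sessions, so a
      # start-to-finish scan per inbound packet looked harmless
      for conn in finwait:
          if conn == pkt:
              return True
      return False

  def match_keyed(pkt):
      # the fix: lookup cost no longer grows with the number of
      # closing sessions
      return pkt in finwait_set

  pkt = ("10.0.0.1", 1024 + 19_999)
  assert match_scan(pkt) and match_keyed(pkt)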

internet payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HA/CMP technical/scientific and commercial (RDBMS) cluster scale-up posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance
authentication, identification, privacy posts
https://www.garlic.com/~lynn/subpubkey.html#privacy

posts mentioning mosaic, netscape & finwait
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2017c.html#54 The ICL 2900
https://www.garlic.com/~lynn/2015h.html#113 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014g.html#13 Is it time for a revolution to replace TLS?
https://www.garlic.com/~lynn/2013i.html#46 OT: "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2012d.html#20 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2005c.html#70 [Lit.] Buffer overruns

--
virtualization experience starting Jan1968, online at home since Mar1970

Typing & Computer Literacy

From: Lynn Wheeler <lynn@garlic.com>
Subject: Typing & Computer Literacy
Date: 13 Oct, 2023
Blog: Facebook
In middle school I taught myself to type on an old typewriter I found in the dump. In highschool they were replacing the typewriters used for typing classes and I managed to get one of the replaced machines (w/o letters on the keys). My father had died when I was in middle school and I was the oldest; I had jobs after school and weekends, and during the winter cut wood after supper and got up early to restart the fire. In highschool I worked for the local hardware store and would get loaned out to local contractors; concrete (driveways/foundations), framing, flooring, roofing, siding, electrical, plumbing, etc. I saved enough money to attend univ. The summer after freshman year I was foreman on a construction job; it was way behind schedule because of a really wet spring, and the job was quickly 80+ hr weeks.

I then took a two credit hr fortran/computer intro class ... at the end of the semester I was hired to rewrite 1401 MPIO for the 360/30. The univ. had been sold a 360/67 for tss/360 to replace its 709/1401. The 1401 was temporarily replaced with a 360/30 (pending arrival of the 360/67). The univ. shutdown the datacenter over the weekend and I would have it dedicated all to myself (although 48hrs w/o sleep made monday classes hard). I was given lots of hardware&software manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage manager, etc ... and within a few weeks, I had a 2000 card assembler program. Then within a year of taking the intro course, the 360/67 arrives and I was hired fulltime responsible for OS/360 (tss/360 never came to production fruition, so it ran as a 360/65 w/os360) ... and I continued to have my weekend dedicated datacenter time.

Then, before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment). I thought the Renton datacenter was possibly the largest in the world, a couple hundred million in IBM gear, 360/65s arriving faster than they could be installed, boxes constantly being staged in the hallways around the machine room. The disaster plan was to replicate Renton at the new 747 plant at Paine field (Mt. Rainier heats up and the resulting mud slide could take out the Renton datacenter). Lots of politics between the Renton datacenter manager and the CFO, who only had a 360/30 up at Boeing field for payroll, although they enlarge the machine room to install a 360/67 that I could play with when I wasn't doing other things. When I graduate, I join IBM Cambridge Science Center (instead of staying at Boeing) ... which included getting a 2741 selectric at home.

In the late 70s and early 80s, I was blamed for online computer conferencing on the IBM internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s). It really took off spring 1981, when I distributed a trip report of a visit to Jim Gray at Tandem; claims that upwards of 25,000 were reading even though only about 300 participated (& folklore is that when the corporate executive committee was told, 5of6 wanted to fire me).

a little topic drift: I was introduced to John Boyd in the early 80s and would sponsor his briefings. Tome about both Boyd and Learson (Learson failed to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy; two decades later IBM has one of the largest losses in the history of US companies).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Some recent posts mentioning Boeing CFO, BCS, & Renton
https://www.garlic.com/~lynn/2023e.html#99 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#68 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#66 Economic Mess and IBM Science Center
https://www.garlic.com/~lynn/2023c.html#15 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security

--
virtualization experience starting Jan1968, online at home since Mar1970

I've Been Moved

From: Lynn Wheeler <lynn@garlic.com>
Subject: I've Been Moved
Date: 14 Oct, 2023
Blog: Facebook
joined ibm cambridge science center (from the west coast; before graduating I was working in a small group in the Boeing CFO office helping with the formation of Boeing Computer Services), 7yrs later transferred back to the west coast at san jose research.

Early 80s, the Los Gatos lab let me have offices and labs there; about the same time I was transferred to Yorktown Research on the east coast (for various transgressions, including being blamed for online computer conferencing on the internal network; folklore is that when the corporate executive committee was told, 5of6 wanted to fire me) but was left living in san jose (keeping my SJR and LSG space), having to commute to YKT a couple times a month. My SJR office was moved up the hill when the new Almaden research was built.

I then spent two yrs with AWD in Austin (gave up my ALM office, but kept my LSG space) and then moved back to San Jose and my LSG offices and labs.

In 1992, IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company. I had already left the company (although LSG let me keep space for another couple yrs), but got a call from the bowels of Armonk about helping with the breakup (before we got started, a new CEO was brought in and sort-of reversed the breakup) ... longer tome:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ibm science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Video terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Video terminals
Newsgroups: alt.folklore.computers
Date: Sat, 14 Oct 2023 15:15:02 -1000
Peter Flass <peter_flass@yahoo.com> writes:

I did similar with DOS, and sped up student COBOL jobs immensely. Split-cylinder for compiler work files made a huge difference.


re:
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023f.html#14 Video terminals
https://www.garlic.com/~lynn/2023f.html#17 Video terminals

student fortran jobs tended to be 30-50 statements ... OS/360 was a 3step fortgclg, almost all job step overhead (sped up by the careful ordering of datasets and PDS members), with a little bit of compile output passed to the linkedit step and on to the execution step;

WATFOR was a single execution step (around 12secs before the speedup, 4secs after) for a card tray of batched jobs fed from HASP (say 40-70 jobs) ... which were handled as a stream ... WATFOR clocked/rated at 20,000 cards/min on 360/65 (w/HASP), 333cards/sec. So 4secs single jobstep overhead plus 2000 cards at 333/sec, around 6-7secs for a tray of cards ... or around 10-11secs total avg for a batched tray of student fortran (40-70 jobs)
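
The same arithmetic, worked out (assuming a 2000-card tray, per the figures above):

  # the arithmetic above, with the same figures
  cards_per_sec = 20_000 / 60        # WATFOR rated 20,000 cards/min ~ 333/sec
  tray_cards    = 2_000              # one card tray
  step_overhead = 4                  # single WATFOR job step, after speedup

  read_time = tray_cards / cards_per_sec       # ~6 secs for the tray
  total     = step_overhead + read_time        # ~10 secs for 40-70 jobs
  print(f"{read_time:.0f} secs cards + {step_overhead} secs step = {total:.0f} secs/tray")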

other recent posts mentioning WATFOR
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#29 Copyright Software
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023e.html#10 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2

--
virtualization experience starting Jan1968, online at home since Mar1970

We have entered a second Gilded Age

From: Lynn Wheeler <lynn@garlic.com>
Subject: We have entered a second Gilded Age
Date: 15 Oct, 2023
Blog: Facebook
Robert Reich: Vast inequality. Billionaires buying off Supreme Court justices. Rampant voter suppression. Worker exploitation. Unions getting busted. Child labor returning. There is no question about it: We have entered a second Gilded Age.
https://robertreich.substack.com/

The Price of Inequality: How Today's Divided Society Endangers Our Future
https://www.amazon.com/Price-Inequality-Divided-Society-Endangers-ebook/dp/B007MKCQ30/
pg35/loc1169-73:

In business school we teach students how to recognize, and create, barriers to competition -- including barriers to entry -- that help ensure that profits won't be eroded. Indeed, as we shall shortly see, some of the most important innovations in business in the last three decades have centered not on making the economy more efficient but on how better to ensure monopoly power or how better to circumvent government regulations intended to align social returns and private rewards

... snip ...

How Economists Turned Corporations into Predators
https://www.nakedcapitalism.com/2017/10/economists-turned-corporations-predators.html

Since the 1980s, business schools have touted "agency theory," a controversial set of ideas meant to explain how corporations best operate. Proponents say that you run a business with the goal of channeling money to shareholders instead of, say, creating great products or making any efforts at socially responsible actions such as taking account of climate change.

... snip ...

stock buybacks used to be illegal (because it was too easy for executives to manipulate the market ... aka banned in the wake of the '29crash)
https://corpgov.law.harvard.edu/2020/10/23/the-dangers-of-buybacks-mitigating-common-pitfalls/

Buybacks are a fairly new phenomenon and have been gaining in popularity relative to dividends recently. All but banned in the US during the 1930s, buybacks were seen as a form of market manipulation. Buybacks were largely illegal until 1982, when the SEC adopted Rule 10B-18 (the safe-harbor provision) under the Reagan administration to combat corporate raiders. This change reintroduced buybacks in the US, leading to wider adoption around the world over the next 20 years. Figure 1 (below) shows that the use of buybacks in non-US companies grew from 14 percent in 1999 to 43 percent in 2018.

... snip ...

Stockman on IBM as financial engineering company:
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:

IBM was not the born-again growth machine trumpeted by the mob of Wall Street momo traders. It was actually a stock buyback contraption on steroids. During the five years ending in fiscal 2011, the company spent a staggering $67 billion repurchasing its own shares, a figure that was equal to 100 percent of its net income.


pg465/loc10014-17:

Total shareholder distributions, including dividends, amounted to $82 billion, or 122 percent, of net income over this five-year period. Likewise, during the last five years IBM spent less on capital investment than its depreciation and amortization charges, and also shrank its constant dollar spending for research and development by nearly 2 percent annually.

... snip ...

... with "wild ducks" representing change and innovation ... and "not team players"

Gen. James Mattis, USMC (ret.):

"Take the mavericks in your service, the ones that wear rumpled uniforms and look like a bag of mud but whose ideas are so offsetting that they actually upset the people in the bureaucracy. One of your primary jobs is to take the risk and protect these people, because if they are not nurtured in your service, the enemy will bring their contrary ideas to you."

... snip ...

T.J. Watson, Jr.:

"We are convinced that any business needs its wild ducks. And in IBM we try not to tame them."

... snip ...

Learson tried (and failed) to block the bureaucrats, careerists and MBAs destroying the Watson legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/boyd-ibm-wild-duck-discussion-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/multi-modal-optimization-old-post-from-6yrs-ago-lynn-wheeler
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
ibm downfall, breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

The evolution of Windows authentication

From: Lynn Wheeler <lynn@garlic.com>
Subject: The evolution of Windows authentication
Date: 15 Oct, 2023
Blog: Facebook
The evolution of Windows authentication | Windows IT Pro Blog
https://techcommunity.microsoft.com/t5/windows-it-pro-blog/the-evolution-of-windows-authentication/ba-p/3926848

Kerberos, better than ever

For Windows 11, we are introducing two major features to Kerberos to expand when it can be used--addressing two of the biggest reasons why Kerberos falls back to NTLM today. The first, IAKerb, allows clients to authenticate with Kerberos in more diverse network topologies. The second, a local KDC for Kerberos, adds Kerberos support to local accounts.


... snip ...

Sometime after leaving IBM, we are brought into a small client/server startup as consultants. Two former Oracle people (that we had worked with on cluster scale-up, before it was transferred for announce as IBM supercomputer and we were told we couldn't work on anything with more than four processors) are there, responsible for something called "commerce server", and they wanted to do payment transactions on the server. The startup had also invented this technology they called "SSL" they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and financial payment networks. Later I did a talk on "Why Internet Isn't Business Critical Dataprocessing" ... based on the compensating processes and software I had to do ... Postel sponsored the talk at ISI/USC (sometimes Postel also would let me help with RFCs)

Later, we do a stint in the Seattle area working with companies involved in electronic commerce ... archived post with press about having booth at 1999 Miami BAI (retail banking show):
https://www.garlic.com/~lynn/99.html#224

One of the companies was a commercial security company specializing in Kerberos that had the contract with Microsoft to implement Kerberos in NT; their CEO had been in IBM, at different times head of POK and Boca.

Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
electronic commerce gateway to financial industry payment network posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
AADS Chip Strawman
https://www.garlic.com/~lynn/x959.html#aadsstraw
X9.59, Identity, Authentication, and Privacy posts
https://www.garlic.com/~lynn/subpubkey.html#privacy
post mentioning kerberos public key
https://www.garlic.com/~lynn/subpubkey.html#kerberos

--
virtualization experience starting Jan1968, online at home since Mar1970

Video terminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Video terminals
Newsgroups: alt.folklore.computers
Date: Sun, 15 Oct 2023 08:51:35 -1000
re:
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023f.html#14 Video terminals
https://www.garlic.com/~lynn/2023f.html#17 Video terminals
https://www.garlic.com/~lynn/2023f.html#21 Video terminals

... topic drift: CICS was similar to WATFOR as a countermeasure to the extreme heavyweight OS/360 overhead; a single-step monitor that started, opened all its files, and then did its best to run with minimal further use of OS/360 system services. At the univ., the library got an ONR grant to do an online catalog and used part of the money for an IBM 2321 datacell. The library was also selected as betatest site for the original CICS product, and CICS debugging got added to my tasks. CICS also did its own task and storage management.
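
A minimal sketch of the monitor idea (hypothetical Python, not actual CICS internals): pay the expensive open/allocate once at startup, then loop dispatching transactions under the monitor's own task management, with no per-transaction job step, open/close, or OS storage request.

  # illustrative: single-job-step transaction monitor
  import io

  class Monitor:
      def __init__(self, dataset_names):
          # one job step: "open" every dataset up front (the costly part);
          # in-memory buffers stand in for the BDAM files here
          self.files = {n: io.StringIO() for n in dataset_names}
          self.handlers = {}

      def register(self, txn_code, handler):
          self.handlers[txn_code] = handler

      def dispatch(self, txn_code, data):
          # monitor's own task management: route to the handler; no new
          # job step and no file open/close per transaction
          return self.handlers[txn_code](self.files, data)

  mon = Monitor(["catalog"])
  mon.register("CKO", lambda files, rec: files["catalog"].write(rec + "\n"))
  mon.dispatch("CKO", "book 42 checked out")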

other CICS lore (gone 404 but lives on at wayback machine)
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm
https://web.archive.org/web/20060325095552/http://www.yelavich.com/history/ev196803.htm
https://web.archive.org/web/20060325095234/http://www.yelavich.com/history/ev196901.htm
https://web.archive.org/web/20090106064214/http://www.yelavich.com/history/ev197001.htm
https://web.archive.org/web/20081201133432/http://www.yelavich.com/history/ev197003.htm
https://web.archive.org/web/20060325095346/http://www.yelavich.com/history/ev197901.htm
https://web.archive.org/web/20071021041229/http://www.yelavich.com/history/ev198001.htm
https://web.archive.org/web/20070322221728/http://www.yelavich.com/history/ev199203.htm
https://web.archive.org/web/20060325095613/http://www.yelavich.com/history/ev200401.htm
https://web.archive.org/web/20090107054344/http://www.yelavich.com/history/ev200402.htm

Note one of the OS/360 issues was how terrible storage management was. A decade ago, I was asked if I could track down the decision to add virtual memory to all 370s. I found a staff member to the executive making the decision. Basically, MVT regions typically had to be specified four times larger than actually used ... as a result a 1mbyte 370/165 only ran with four concurrent regions, insufficient to keep the system busy and justified. Going to a 16mbyte virtual memory for "MVT" would allow increasing concurrent regions by a factor of four with little or no paging (somewhat akin to running MVT in a CP67 16mbyte virtual machine). Old archived (a.f.c.) post
https://www.garlic.com/~lynn/2011d.html#73
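
The region arithmetic, as a sketch (the 250kbyte region size is a made-up illustration; the 4x over-specification factor is from the account above):

  # illustrative: why 16mbyte virtual memory helped a 1mbyte 370/165
  real_mem    = 1_000_000        # approx 370/165 real storage
  region_spec = 250_000          # hypothetical region size as specified
  region_used = region_spec // 4 # typically only a quarter actually touched

  print(real_mem // region_spec, "concurrent regions, real-storage MVT")
  # under VS2, regions are backed only by the pages actually touched,
  # so roughly four times the regions fit with little or no paging
  print(real_mem // region_used, "concurrent regions, VS2")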

The above also references Simpson (of HASP fame) doing "RASP" ... a virtual memory MFT-II that also had a page-mapped filesystem. It wasn't picked up; Simpson then leaves IBM and joins Amdahl ... re-implementing "RASP" from scratch in a coding "clean-room" (IBM legal action only found a couple short code sequences that could be considered similar).

cics/bdam posts
https://www.garlic.com/~lynn/submain.html#cics
hasp/asp, jes2/jes3, nji/nje posts
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

Ferranti Atlas

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ferranti Atlas
Date: 16 Oct, 2023
Blog: Facebook
Melinda Varian's history
http://www.leeandmelindavarian.com/Melinda#VMHist
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
from above, Les Comeau has written:

Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.

What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work.


... snip ...

Atlas reference (gone 403, but lives on at the wayback machine):
https://web.archive.org/web/20121118232455/http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf
from above:

Paging can be credited to the designers of the ATLAS computer, who employed an associative memory for the address mapping [Kilburn, et al., 1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words per page), |p| = 11 (resulting in 2048 pages), and f = 5 (resulting in 32 page frames). Thus a 2^20-word virtual memory was provided for a 2^14-word machine. But the original ATLAS operating system employed paging solely as a means of implementing a large virtual memory; multiprogramming of user processes was not attempted initially, and thus no process id's had to be recorded in the associative memory. The search for a match was performed only on the page number p.

... snip ...

... referencing that ATLAS used paging for a large virtual memory ... but not for multiprogramming (multiple concurrent address spaces). Cambridge had modified a 360/40 with virtual memory and an associative lookup that included both process-id and page number.
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
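
A toy model of the distinction (made-up sizes, not the actual Atlas or CP/40 hardware): with the associative mapping keyed on page number alone, a second concurrent address space's translations collide with the first's; tagging entries with a process-id lets them coexist.

  # illustrative: associative address mapping without and with process-id tags
  atlas_map = {}   # keyed on page number only: a single address space
  cp40_map  = {}   # keyed on (process-id, page number)

  atlas_map[5] = 1            # process A's page 5 -> frame 1
  atlas_map[5] = 2            # process B's page 5 evicts A's mapping

  cp40_map[("A", 5)] = 1      # both translations coexist
  cp40_map[("B", 5)] = 2

  assert atlas_map[5] == 2                      # A's mapping lost
  assert cp40_map[("A", 5)] == 1 and cp40_map[("B", 5)] == 2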

CP40 morphs into CP67 when the 360/67 becomes available (standard with virtual memory). As an undergraduate in the 60s, I redid CP67 page replacement to global LRU (at a time when the academic literature was all about "local LRU"), and deployed it at Cambridge after graduating and joining IBM. The IBM Grenoble Scientific Center modified CP67 to implement a "local" LRU algorithm for their 1mbyte 360/67 (155 page'able pages after fixed memory requirements). Grenoble had a very similar workload to Cambridge, but their throughput for 35 users (local LRU) was about the same as the Cambridge 768kbyte 360/67 (104 page'able pages) with 80 users (and global LRU) ... aka global LRU outperformed "local LRU" with more than twice the number of users and only 2/3rds the available real memory.
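
A toy model of the two policies (hypothetical Python, not the CP67 or Grenoble code): global LRU keeps one recency order across all users and evicts the globally oldest page; local LRU confines each user to a fixed partition and evicts only within it.

  # illustrative: global vs local ("partitioned") LRU replacement
  from collections import OrderedDict

  def make_global_lru(frames):
      order = OrderedDict()                   # one recency order, all users
      def touch(user, page):
          order.pop((user, page), None)
          if len(order) >= frames:
              order.popitem(last=False)       # evict globally oldest page
          order[(user, page)] = True
      return touch

  def make_local_lru(frames, users):
      quota = frames // len(users)            # fixed per-user partition
      parts = {u: OrderedDict() for u in users}
      def touch(user, page):
          part = parts[user]
          part.pop(page, None)
          if len(part) >= quota:
              part.popitem(last=False)        # evict only within own quota
          part[page] = True
      return touch

  touch = make_global_lru(frames=100)
  for page in range(100):
      touch("busy-user", page)   # busy user may grow to all 100 frames

With skewed per-user demand, the global version lets a busy user use frames an idle user isn't touching; the partitioned version leaves them idle, consistent with the Cambridge/Grenoble numbers above.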

Jim Gray had departed IBM SJR for Tandem in the fall of 1980. A year later, at the Dec81 ACM SIGOPS meeting, he asked me to help a Tandem co-worker get his Stanford PHD that heavily involved global LRU (the "local LRU" forces from the 60s academic work were heavily lobbying Stanford not to award a PHD for anything involving global LRU). Jim knew I had detailed stats on the Cambridge/Grenoble global/local LRU comparison (showing global LRU significantly outperformed "local LRU"). IBM executives stepped in and blocked me from sending a response for nearly a year (I hoped it was part of the punishment for being blamed for online computer conferencing in the late 70s through the early 80s on the company internal network ... and not that they were meddling in the academic dispute).

response eventually allowed to send
https://www.garlic.com/~lynn/2006w.html#email821019
paging algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

posts mentioning Ferranti Atlas
https://www.garlic.com/~lynn/2022h.html#44 360/85
https://www.garlic.com/~lynn/2022h.html#21 370 virtual memory
https://www.garlic.com/~lynn/2022b.html#54 IBM History
https://www.garlic.com/~lynn/2022b.html#20 CP-67
https://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2015c.html#47 The Stack Depth
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2007u.html#79 IBM Floating-point myths
https://www.garlic.com/~lynn/2007u.html#77 IBM Floating-point myths
https://www.garlic.com/~lynn/2007t.html#54 new 40+ yr old, disruptive technology
https://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
https://www.garlic.com/~lynn/2007r.html#51 Translation of IBM Basic Assembler to C?
https://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance?
https://www.garlic.com/~lynn/2006i.html#30 virtual memory
https://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP
https://www.garlic.com/~lynn/2003b.html#1 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003.html#72 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2002.html#42 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)

--
virtualization experience starting Jan1968, online at home since Mar1970

Ferranti Atlas

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ferranti Atlas
Date: 17 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas

Comeau's context was IBM using Atlas as the reference for virtual memory online interactive timesharing (tss/360) ... which Atlas wasn't ... implying those IBM players didn't understand it. Atlas would be more similar to OS/360 MVT mapped to a 16mbyte virtual address space for VS2/SVS (except MVT was already multitasking)

periodically reposted: a decade ago I was asked to find the decision to add virtual memory to all 370s ... and found a staff member to the executive making the decision; basically MVT storage management was so bad that regions typically had to be specified four times larger than used, and as a result a 1mbyte 370/165 would only run four concurrent regions, insufficient to keep the system busy and justified. Going to a 16mbyte virtual address space would allow the number of concurrently executing regions to be increased by a factor of four with little or no paging (very similar to running MVT in a CP67 16mbyte virtual machine). Pieces of the email exchange in this archived post:
https://www.garlic.com/~lynn/2011d.html#73

Ludlow was doing the initial VS2 implementation on a 360/67: building the virtual address tables and handling page faults, page replacement, page I/O, etc. However, they kept the same MVT application I/O paradigm, passing prebuilt channel programs via SVC0 to EXCP. EXCP now had the same problem as CP67, having to make a copy of the passed channel program, replacing virtual addresses with real addresses ... and he hacks a copy of CP67 CCWTRANS into EXCP for the purpose.

I get in a battle with the MVT performance group, who claimed they had optimized the "LRU" page replacement algorithm to search for non-changed pages before changed pages (seriously damaging the "LRU" principle). They eventually say it doesn't really matter since VS2 will never be doing more than 4-5 page operations/sec. In some sense this is analogous to Atlas's single virtual memory ... except MVT had been doing multiple applications/regions/tasks since its real memory days.
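
A toy illustration of the damage (made-up page entries): selecting non-changed pages ahead of changed ones overrides recency, so a high-use clean page (e.g. shared kernel/LINKPACK code) gets stolen before a barely-used dirty data page.

  # illustrative: "clean pages first" overriding LRU recency order
  # each entry: (name, secs since last use, changed/dirty)
  pages = [("linkpack-code", 1, False),     # heavily used, never modified
           ("appl-data", 900, True)]        # barely touched, but dirty

  lru_victim  = max(pages, key=lambda p: p[1])            # oldest wins
  clean_first = min(pages, key=lambda p: (p[2], -p[1]))   # clean wins

  print("true LRU evicts   :", lru_victim[0])     # appl-data
  print("clean-first evicts:", clean_first[0])    # linkpack-code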

Then VS2 is upgraded to a separate 16mbyte virtual address space per application (MVS) and paging starts increasing ... towards the end of the 70s, somebody in POK gets an award for "fixing" the damaged LRU replacement algorithm, pointing out that high-use, shared (non-changed) kernel/LINKPACK pages were being replaced before low-use application (changed) data pages. However, MVS has a new problem. Since the MVT heritage is an extensive pointer-passing API, an 8mbyte image of the MVS kernel is mapped into every application 16mbyte virtual address space (leaving 8mbytes for the application). Then, because the MVT sub-systems have also been moved into their own separate ("application") 16mbyte virtual address spaces, and a sub-system is passed an API pointer into the calling application's address space, MVS has to have a shared 1mbyte "Common Segment Area" (in every 16mbyte address space) for passing information back&forth between applications and sub-systems (leaving 7mbytes). Now the CSA requirements are somewhat proportional to the number of subsystems and the number of concurrently executing applications; by 3033 time the CSA had frequently become a 5-6mbyte "Common System Area" (2-3mbytes left for applications), threatening to become 8mbytes (leaving nothing in each application 16mbyte virtual address space).
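
The address-space accounting, worked out with the figures above:

  # MVS 16mbyte address-space accounting (figures from the text above)
  total, kernel = 16, 8          # 8mbyte MVS kernel image in every space
  for csa in (1, 5, 8):          # common segment/system area growth
      print(f"CSA {csa}mb -> {total - kernel - csa}mb left for the application")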

The IBM Burlington VLSI shop had, for their large mainframes, a 7mbyte Fortran VLSI design application and special MVS builds with a single mbyte of CSA; however, every (even trivial) change had them constantly fighting the MVS 7mbyte brickwall. It turns out that 12kbytes of additional OS/360 simulation code was all that was required to enable that major VLSI app to run on VM370/CMS ... removing the 7mbyte restriction and giving them nearly the whole 16mbyte virtual address space. However, it wouldn't have been politically correct to replace MVS on all those 168s&3033s (with vm370/cms); it had only been a couple yrs since the head of POK had convinced corporate to kill VM370, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott did manage to save the VM370 product mission, but was still rebuilding a VM370 group from scratch). Even VS2/SVS (more like Atlas, with a single virtual address space) would have given them some relief (for all those machines running the VLSI Fortran app).

Trivia: POK wasn't going to tell the VM370 group (out in the old SBC bldg in burlington mall off 128) about the shutdown and move until the very last minute, to minimize the numbers that might escape. The information managed to leak and several managed to escape into the Boston area (including to the new DEC VAX/VMS effort; joke that the head of POK was a major contributor). Then there was a witch hunt for the source of the leak; fortunately for me, nobody gave the person up.

paging algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock

some posts about MVS CSA bloat:
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?

--
virtualization experience starting Jan1968, online at home since Mar1970

Ferranti Atlas

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ferranti Atlas
Date: 17 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas

In the wake of the Future System implosion (FS was going to replace 370 with something completely different; during FS, internal politics was killing off 370 efforts, and the lack of new 370s is credited with giving the clone 370 makers their market foothold), there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

... I got sucked into a 370 16-processor SMP effort and we sucked the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was really great until somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) had effective 16-way support (POK doesn't ship a 16-way system until after the turn of the century) ... and the head of POK invites some of us to never visit POK again (and the 3033 processor engineers are instructed: heads down, solely on 3033).

SMP, multiprocessor, tightly-coupled (&/or compare-and-swap) posts
https://www.garlic.com/~lynn/subtopic.html#smp

I transfer out to SJR and get to wander around IBM and customer datacenters in silicon valley, including bldg14&15 across the street (disk engineering and product test). They were running 7x24, pre-scheduled, stand-alone mainframe testing ... and had commented that they had recently tried MVS, but it had a 15min MTBF (in that environment) requiring manual re-ipl. I offer to rewrite the I/O supervisor to make it bullet proof and never fail, enabling any amount of on-demand, concurrent testing (greatly improving productivity). Later I write an internal research report happening to mention the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head. I was told that when they couldn't have me separated from IBM, they tanked in-progress corporate awards for 1) the disk engineering/test RAS work and 2) all the enhancements for the online sales&marketing support HONE operation up in Palo Alto.

HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk

trivia (mentions turn-arounds/day instead of per month): bldg15 does get the first engineering 3033 outside pok from the processor engineers, and since testing only takes a percent or two of CPU, we scrounge up a 3830 controller and a string of 3330s and put up our own private online service. We have early problems with the 303x channel director (requiring manual reset/re-IMPL) and learn that if I quickly hit all six channel addresses with CLRCH, it will re-IMPL itself. Note: a 303x channel director is a 158 engine with just the integrated channel microcode (and no 370 microcode); a 3031 is two 158 engines, one with just the 370 microcode and a 2nd with just the channel microcode (and a 3032 is a 168-3 reworked to use the 303x channel director for external channels). Somebody had been running an "air bearing" simulation app on the SJR 370/195 (for 3370 thin-film disk head design) but only getting a couple turn-arounds a month. We set him up on the bldg15 3033 and they start getting several turn-arounds a day.

Later, in the early 80s, 3380 DASD was about to ship and FE had a regression test of 57 simulated errors that could be expected ... and MVS was (still) failing in all 57 cases (requiring manual re-ipl), in 2/3rds of the cases with no indication of what caused the failure (joke about MVS "recovery": repeatedly covering up failures until it can't find any evidence) ... and I didn't feel badly at all. Old email in archived post
https://www.garlic.com/~lynn/2007.html#email801015

I also start doing HSDT (T1 and faster computer links, both terrestrial and satellite). Early satellite T1 link between Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (on the east coast) that was acquiring boat loads of Floating Point Systems boxes (that had 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems

It was a Collins digital radio microwave tail circuit from Los Gatos to the San Jose plant site 10M satellite dish, connected to Kingston. HSDT then gets its own custom-built TDMA satellite system, with 4.5M dishes in Los Gatos and Yorktown Research and a 7M dish in Austin.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Reference Cards

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Reference Cards
Date: 18 Oct, 2023
Blog: Facebook
Lots of univs. were sold the 360/67 for TSS/360 ... many just used it as a 360/65 for OS/360, and others used it for CP/67 (precursor to VM/370). Stanford and Univ. of Michigan did their own virtual memory operating systems; UofM did MTS
https://en.wikipedia.org/wiki/Michigan_Terminal_System
and Stanford did Orvyl (as well as Wylbur, later ported to MVS).
https://en.wikipedia.org/wiki/ORVYL_and_WYLBUR

Somebody had done a "green card" in CMS IOS3270 ... I've subsequently done a quick&dirty port to HTML:
https://www.garlic.com/~lynn/gcard.html

VMSHARE
http://vm.marist.edu/~vmshare
reference card and 360/67 reference card (from one of the "GML" inventors) ... images: 67 blue card and vmshare card

IOS3270 trivia: FE support had a diagnostic process starting with low level "scope/probes". With the advent of TCMs, it was no longer possible to scope. For 3090, they put a lot of probes into the TCMs, connected to the (3092) "service processor" ... originally built with a 4331 running a modified version of Release 6 VM370/CMS with all the screens done in IOS3270 ... later upgraded to a pair of 4361s (operating off 3370 FBAs, even for MVS installations that never supported FBA).
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

3092 trivia: I had wanted to demo that REX (before it was renamed REXX and released to customers) wasn't just another pretty scripting language, and decided to redo the large problem determination and dump analysis application (implemented in assembler) in REX, taking three months working half time ... with the objective of ten times the function and ten times the performance (coding hacks to get interpreted REXX ten times the performance). I finished early and did a library of automated scripts that search for common failure signatures. I figured it would be released, replacing the existing application ... but for whatever reason it wasn't ... even though it was in use by nearly every internal IBM datacenter and customer support PSRs. Eventually I got permission to give presentations to user groups on how the implementation was done ... and a few months later, customer implementations started appearing. Later I got email from the 3092 group asking if they could include it as part of the 3092 service processor.
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

some posts mentioning 360/67, MTS, Orvyl, & Wylbur
https://www.garlic.com/~lynn/2023e.html#39 IBM 360/67
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2019c.html#28 CICS Turns 50 Monday, July 8
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018b.html#94 Old word processors
https://www.garlic.com/~lynn/2017d.html#75 Mainframe operating systems?
https://www.garlic.com/~lynn/2016c.html#6 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015f.html#62 3705
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
https://www.garlic.com/~lynn/2014i.html#67 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014g.html#106 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2014d.html#23 [OT ] Mainframe memories
https://www.garlic.com/~lynn/2014c.html#71 assembler
https://www.garlic.com/~lynn/2013h.html#76 DataPower XML Appliance and RACF
https://www.garlic.com/~lynn/2013e.html#63 The Atlas 2 and its Slave Store
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012.html#19 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2011b.html#44 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2011.html#6 IBM 360 display and Stanford Big Iron
https://www.garlic.com/~lynn/2010k.html#11 TSO region size
https://www.garlic.com/~lynn/2010j.html#67 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2008h.html#78 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries

--
virtualization experience starting Jan1968, online at home since Mar1970

Univ. Maryland 7094

From: Lynn Wheeler <lynn@garlic.com>
Subject: Univ. Maryland 7094
Date: 19 Oct, 2023
Blog: Facebook
I took a two-credit-hr fortran/computer intro class ... at the end of the semester I was hired to rewrite 1401 MPIO for the 360/30. The univ. had been sold a 360/67 for tss/360 to replace its 709/1401 (709 tape->tape, 1401 MPIO unit-record front-end: reader->tape, tape->printer/punch). The 1401 was temporarily replaced with a 360/30 (pending arrival of the 360/67, for acquiring 360 experience). The univ. shut down the datacenter over the weekend and I would have it dedicated all to myself (although 48hrs w/o sleep made monday classes hard). I was given lots of hardware & software manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage manager, etc ... and within a few weeks, I had a 2000-card assembler program.

Then within a year of taking the intro course, the 360/67 arrives and I was hired fulltime, responsible for OS/360 (tss/360 never came to production fruition, so it ran as a 360/65 w/os360) ... and I continued to have my dedicated datacenter weekend time. Student fortran ran in less than a second on the 709 (tape->tape); initially on 360/65/os360 it ran over a minute. I install HASP and it cuts the time in half. I then start redoing STAGE2 SYSGEN, 1) so it could run in the production jobstream, and 2) placing datasets and PDS members to optimize disk arm seek and multi-track search, which cuts the time by another 2/3rds to 12.9sec. It never gets better than the 709 until I install Univ. of Waterloo WATFOR.

12.9secs was for a three-step FORTGCLG. WATFOR was single-step batch processing: one job-step start, then student fortran processed at 20,000 cards/min on the 360/65 (333 cards/sec). Typically a tray of student jobs would accumulate (30-60 cards/job, 40-80 jobs/tray): around 4secs for the single WATFOR job step plus around 6secs to process the tray of cards ... or around 10secs for a tray of 40-80 jobs.
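
Back-of-envelope check of those WATFOR numbers (Python, using the figures above; the 40-job/50-card tray is just one point in the stated ranges):

    rated = 20000 / 60                 # 20,000 cards/min ~= 333 cards/sec
    tray_cards = 40 * 50               # e.g. 40 jobs averaging 50 cards
    step_overhead = 4                  # secs for the single WATFOR job step
    print(step_overhead + tray_cards / rated)   # ~10 secs for the whole tray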

HASP/ASP, JES2/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

2023 posts mentioning 709, 1401, MPIO, HASP, WATFOR
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards

--
virtualization experience starting Jan1968, online at home since Mar1970

The Five Stages of Acquisition Grief

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Five Stages of Acquisition Grief
Date: 20 Oct, 2023
Blog: Facebook
The Five Stages of Acquisition Grief
https://news.clearancejobs.com/2023/10/10/the-five-stages-of-acquisition-grief/

Pentagon Wars
https://www.amazon.com/Pentagon-Wars-Reformers-Challenge-Guard-ebook/dp/B00HXY969W/
Pentagon Wars
https://en.wikipedia.org/wiki/The_Pentagon_Wars
related NYT article: Corrupt from top to bottom
https://www.nytimes.com/1993/10/03/books/corrupt-from-top-to-bottom.html

Burton was a graduate of the 1st USAF Academy class, on the fast track to general, when (he says) Boyd destroyed his career by challenging him to do what was right. I talked to Burton at the Quantico MCU Boyd conference.

The Pentagon Labyrinth
http://chuckspinney.blogspot.com/p/pentagon-labyrinth.html
Spinney's website
http://chuckspinney.blogspot.com/
The Domestic Roots of Perpetual War
https://drive.google.com/file/d/1gqgOakOiFsdeH-iIONAW3e20fpXQ3ZdR/view?pli=1

Richard's website:
https://slightlyeastofnew.com/
Boyd (and others) articles
https://slightlyeastofnew.com/439-2/

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

A few posts mentioning Burton, Boyd, Spinney, & Richards
https://www.garlic.com/~lynn/2023e.html#97 My Gun Has A Plane
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2019e.html#83 Collins radio and Braniff Airways 1945
https://www.garlic.com/~lynn/2017j.html#2 WW II cryptography
https://www.garlic.com/~lynn/2017i.html#14 How to spot a dodgy company - never trust a high achiever
https://www.garlic.com/~lynn/2017f.html#14 Fast OODA-Loops increase Maneuverability
https://www.garlic.com/~lynn/2016d.html#89 China builds world's most powerful computer
https://www.garlic.com/~lynn/2014h.html#36 The Designer Of The F-15 Explains Just How Stupid The F-35 Is
https://www.garlic.com/~lynn/2014f.html#36 IBM Historic computing
https://www.garlic.com/~lynn/2014c.html#83 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2014c.html#65 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'

--
virtualization experience starting Jan1968, online at home since Mar1970

The Five Stages of Acquisition Grief

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Five Stages of Acquisition Grief
Date: 20 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#30 The Five Stages of Acquisition Grief

a few other URL refs from the article:

Has the Navy Lost Faith in its Littoral Combat Ships?
https://news.clearancejobs.com/2021/08/09/has-the-navy-lost-faith-in-its-littoral-combat-ships/
The Inside Story of How the Navy Spent Billions on the "Little Crappy Ship"
https://www.propublica.org/article/how-navy-spent-billions-littoral-combat-ship
Kendall vows Air Force NGAD program won't repeat 'serious mistake' associated with the F-35
https://defensescoop.com/2023/05/22/kendall-vows-air-force-ngad-program-wont-repeat-serious-mistake-associated-with-the-f-35/
Will It Fly?
https://www.vanityfair.com/news/2013/09/joint-strike-fighter-lockheed-martin
F-35 Joint Strike Fighter Faces Mission Readiness Crisis, GAO Warns
https://news.clearancejobs.com/2023/09/25/f-35-joint-strike-fighter-faces-mission-readiness-crisis-gao-warns/
Can the US Navy save money by accepting the LCS as a sunk cost?
https://www.defensenews.com/naval/2023/10/04/can-the-us-navy-save-money-by-accepting-the-lcs-as-a-sunk-cost/

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

a few LCS/littoral posts
https://www.garlic.com/~lynn/2023.html#75 The Pentagon Saw a Warship Boondoggle
https://www.garlic.com/~lynn/2017c.html#27 Pentagon Blocks Littoral Combat Ship Overrun From a GAO Report
https://www.garlic.com/~lynn/2016b.html#91 Computers anyone?
https://www.garlic.com/~lynn/2015.html#68 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2014d.html#69 Littoral Warfare Ship
https://www.garlic.com/~lynn/2012n.html#22 Preparing for War with China

a few recent F-35 posts
https://www.garlic.com/~lynn/2023d.html#13 Has the Pentagon Learned from the F-35 Debacle?
https://www.garlic.com/~lynn/2022h.html#81 Air Force unveils B-21 stealth plane. It's not a boondoggle, for a change
https://www.garlic.com/~lynn/2022g.html#55 F-35A fighters unreliable, 'unready 234 times over 18-month period'
https://www.garlic.com/~lynn/2022f.html#58 Secret spending by the weapons industry is making us less safe
https://www.garlic.com/~lynn/2022f.html#25 Powerless F-35s
https://www.garlic.com/~lynn/2022d.html#2 The Bunker: Pentagon Hardware Hijinks
https://www.garlic.com/~lynn/2022c.html#105 The Bunker: Pentagon Hardware Hijinks
https://www.garlic.com/~lynn/2022c.html#78 Future F-35 Upgrades Send Program into Tailspin
https://www.garlic.com/~lynn/2021j.html#67 A Mini F-35?: Don't Go Crazy Over the Air Force's Stealth XQ-58A Valkyrie
https://www.garlic.com/~lynn/2021i.html#55 America's 'White Elephant': Why F-35 Stealth Jets Are USAF's 'Achilles Heel' Amid Growing Chinese Threats
https://www.garlic.com/~lynn/2021i.html#48 The Kill Chain: Defending America in the Future of High-Tech Warfare
https://www.garlic.com/~lynn/2021h.html#16 In Pursuit of Clarity: the Intellect and Intellectual Integrity of Pierre Sprey
https://www.garlic.com/~lynn/2021g.html#87 The Bunker: Follow All of the Money. F-35 Math 1.0 Another portent of problems
https://www.garlic.com/~lynn/2021g.html#48 The F-35 Fighter Jet Program Must be Grounded to Protect Pilots and Tax Dollars
https://www.garlic.com/~lynn/2021e.html#46 SitRep: Is the F-35 officially a failure? Cost overruns, other issues prompt Air Force to look for "clean sheet" fighter
https://www.garlic.com/~lynn/2021e.html#35 US Stealth Fighter Jets Like F-35, F-22 Raptors 'No Longer Stealth' In-Front Of New Russian, Chinese Radars?
https://www.garlic.com/~lynn/2021e.html#18 Did They Miss Yet Another F-35 Cost Overrun?
https://www.garlic.com/~lynn/2021d.html#77 Cancel the F-35, Fund Infrastructure Instead
https://www.garlic.com/~lynn/2021d.html#0 THE PENTAGON'S FLYING FIASCO. Don't look now, but the F-35 is afterburnered toast
https://www.garlic.com/~lynn/2021c.html#82 The F-35 and other Legacies of Failure
https://www.garlic.com/~lynn/2021c.html#11 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2021c.html#8 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2021b.html#102 The U.S. Air Force Just Admitted The F-35 Stealth Fighter Has Failed
https://www.garlic.com/~lynn/2021b.html#100 The U.S. Air Force Just Admitted The F-35 Stealth Fighter Has Failed

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe Lore

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Lore
Date: 20 Oct, 2023
Blog: Facebook
I had taken a two-credit-hr intro to computers/fortran class and within a year, the univ hired me fulltime, responsible for os/360 (the univ was sold a 360/67 for tss/360 to replace 709/1401; temporarily they got a 360/30 to replace the 1401 pending arrival of the 360/67; tss/360 never came to production fruition, so it was run as a 360/65). The univ. shut down the datacenter over the weekend, and I had the place dedicated (although 48hrs w/o sleep made monday classes difficult). Then some people from the science center came out to install cp67 (precursor to vm370), and I mostly played with it during my dedicated weekend window.

CP67 had 2741 and 1052 support with dynamic terminal-type identification, using the SAD CCW to switch the terminal-type port scanner for each line. The univ. had some TTY/ASCII terminals ... and I added TTY support (integrated with the dynamic terminal-type identification; trivia: the TTY terminal port scanner arrived in a HEATHKIT box to add to the telecommunication controller). I then wanted to have a single dial-in number for all terminals ... "hunt group"
https://en.wikipedia.org/wiki/Line_hunting

didn't quite work, since IBM had taken a short cut and hard-wired the port line speeds. The univ. then starts a project to implement a clone controller: build a channel interface board for an Interdata/3 programmed to emulate an IBM terminal control unit, with the addition that it could do dynamic line speed.

First bug was a hardware redlight. Turns out that when the controller held the channel interface too long, the channel held the memory bus interface and the location-80 timer value couldn't be updated; if an existing timer-tic location-80 update was still being delayed when the next timer tic happened, the machine "red lights" and hangs.
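
A toy model of that failure mode (Python; the tic interval and hold times are invented -- the real timing depended on the 360/67 memory bus and timer implementation):

    TIC_INTERVAL_MS = 3.3      # illustrative interval between timer tics

    def red_lights(bus_hold_ms):
        # a tic can land just after the controller grabs the bus, so a
        # pending location-80 update only survives if the hold is shorter
        # than one full tic interval
        return bus_hold_ms >= TIC_INTERVAL_MS

    print(red_lights(1.0))     # False: channel released in time
    print(red_lights(5.0))     # True: next tic arrives while update pending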

Later it was enhanced to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces, and Interdata sells the box as a clone IBM controller. Four of us get written up as responsible for (some part of) the clone controller business.
https://en.wikipedia.org/wiki/Interdata

Interdata, Inc., was a computer company, founded in 1966 by a former Electronic Associates engineer, Daniel Sinnott, and was based in Oceanport, New Jersey. The company produced a line of 16- and 32-bit minicomputers that were loosely based on the IBM 360 instruction set architecture but at a cheaper price.[2] In 1974, it produced one of the first 32-bit minicomputers,[3] the Interdata 7/32. The company then used the parallel processing approach, where multiple tasks were performed at the same time, making real-time computing a reality.[4]

... snip ...

plug compatible, clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter was the largest in the world; 360/65s were arriving faster than they could be installed. Lots of politics between the Renton manager and the CFO, who only has a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). There is a disaster plan to replicate Renton up at the new 747 plant in Everett (Mt Rainier heats up and the resulting mud slide takes out the Renton datacenter).

Note IBM started "Future System" in the early 70s; it was completely different and was to completely replace 370s (internal politics were shutting down 370 projects; the claim is that the lack of new 370s during the period gave the clone 370 makers their market foothold). When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x & 3081 projects in parallel
http://www.jfsowa.com/computer/memo125.htm

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

For the 303x (external) channel director they use a 158 engine with just the integrated channel microcode (and w/o the 370 microcode). A 3031 is two 158-3 engines, one with just the 370 microcode and a second with just the integrated channel microcode. A 3032 is a 168-3 reworked to use the 303x channel director for external channels. A 3033 started out as 168-3 logic remapped to 20% faster chips.

In 1980, IBM STL is bursting at the seams and 300 people from the IMS group are being moved to an offsite bldg, with dataprocessing back to the STL datacenter; I get con'ed into doing channel-extender support so they can place channel-attached 3270 controllers at the off-site bldg (with no difference in online/interactive response human factors between inside STL and offsite). The hardware vendor tries to get IBM to release my support, but there is a group in POK playing with some serial stuff (afraid that if it was in the market, it would be harder to get their stuff released) and they get it vetoed.

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

In 1988, LLNL (national lab) is playing with some serial stuff and the branch office cons me into helping them get it standardized, which quickly becomes Fibre Channel Standard (FCS, initially 1gbit full-duplex, aggregate 200mbyte/sec), including some stuff I had done in 1980. Then the POK people get their stuff released in 1990 with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec).

Then some POK people become involved in FCS and define a heavy-weight protocol that drastically cuts the native throughput, which is eventually released as FICON. The most recent public benchmarks I can find are z196 "peak I/O" getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time, a native FCS was announced for E5-2600 blades (at the time common in large cloud megadatacenters, typically having 500,000 or more such blades) claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over 104 FCS).
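
The arithmetic behind that comparison, using the figures above (Python):

    ficon_total_iops = 2_000_000            # z196 "peak I/O" benchmark
    ficon_links = 104
    print(ficon_total_iops / ficon_links)   # ~19,230 IOPS per FICON
    native_fcs_iops = 1_000_000             # claimed for one native FCS
    print(2 * native_fcs_iops > ficon_total_iops)   # True: two FCS beat 104 FICON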

FICON
https://en.wikipedia.org/wiki/FICON
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel

other Fibre Channel:

Fibre Channel Protocol
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
Fibre Channel switch
https://en.wikipedia.org/wiki/Fibre_Channel_switch
Fibre Channel electrical interface
https://en.wikipedia.org/wiki/Fibre_Channel_electrical_interface
Fibre Channel over Ethernet
https://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet

FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

trivia: the industry standard CPU benchmark (number of program iterations compared to a 158-3) had 500BIPS for an e5-2600 blade (before IBM sold off its server business; IBM base list price $1815), ten times a max-configured z196 at 50BIPS (priced at $30M).
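
Worked out as price/performance (Python, same figures):

    blade_bips, blade_price = 500, 1815          # e5-2600 blade
    z196_bips, z196_price = 50, 30_000_000       # max-configured z196
    print(blade_price / blade_bips)              # ~$3.63 per BIPS
    print(z196_price / z196_bips)                # $600,000 per BIPS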

some 2023 posts mentioning 709, 1401, 360/67, os/360, Boeing CFO and Renton
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#99 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security

--
virtualization experience starting Jan1968, online at home since Mar1970

'This is her opportunity': governor Kathy Hochul could forever unmask New York's financial criminals

From: Lynn Wheeler <lynn@garlic.com>
Subject: 'This is her opportunity': governor Kathy Hochul could forever unmask New York's financial criminals
Date: 20 Oct, 2023
Blog: Facebook
'This is her opportunity': governor Kathy Hochul could forever unmask New York's financial criminals. A first-in-the-nation transparency act on Hochul's desk would name those involved in financial crimes and potential money laundering
https://www.theguardian.com/us-news/2023/oct/22/kathy-hochul-new-york-finance-real-estate-shell-corporations-llcs

A corporate lobbying group backed by Koch Industries is quietly pressing the Democratic New York governor, Kathy Hochul, not to sign a landmark transparency bill unmasking the owners of shell corporations involved in financial crimes, wage theft and tenant abuses.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering

posts mentioning Koch Brothers, Koch Industries, etc
https://www.garlic.com/~lynn/2023d.html#41 The Architect of the Radical Right
https://www.garlic.com/~lynn/2022g.html#37 GOP unveils 'Commitment to America'
https://www.garlic.com/~lynn/2022g.html#34 Alarm as Koch bankrolls dozens of election denier candidates
https://www.garlic.com/~lynn/2022g.html#1 A Second Constitutional Convention?
https://www.garlic.com/~lynn/2022e.html#106 Price Wars
https://www.garlic.com/~lynn/2022e.html#74 The Supreme Court Is Limiting the Regulatory State
https://www.garlic.com/~lynn/2022c.html#118 The Death of Neoliberalism Has Been Greatly Exaggerated
https://www.garlic.com/~lynn/2022c.html#59 Rags-to-Riches Stories Are Actually Kind of Disturbing
https://www.garlic.com/~lynn/2022c.html#58 Rags-to-Riches Stories Are Actually Kind of Disturbing
https://www.garlic.com/~lynn/2022c.html#35 40 Years of the Reagan Revolution's Libertarian Experiment Have Brought Us Crisis & Chaos
https://www.garlic.com/~lynn/2021k.html#20 Koch Funding for Campuses Comes With Dangerous Strings Attached
https://www.garlic.com/~lynn/2021j.html#43 Koch Empire
https://www.garlic.com/~lynn/2021i.html#98 The Koch Empire Goes All Out to Sink Joe Biden's Agenda -- and His Presidency, Too
https://www.garlic.com/~lynn/2021g.html#40 Why do people hate universal health care? It turns out -- they don't
https://www.garlic.com/~lynn/2021f.html#13 Elizabeth Warren hammers JPMorgan Chase CEO Jamie Dimon on pandemic overdraft fees
https://www.garlic.com/~lynn/2021c.html#77 Meet the "New Koch Brothers"
https://www.garlic.com/~lynn/2021c.html#51 In Biden's recovery plan, an overdue rebuke of trickle-down economics
https://www.garlic.com/~lynn/2021.html#27 We must stop calling Trump's enablers 'conservative.' They are the radical right
https://www.garlic.com/~lynn/2021.html#20 Trickle Down Economics Started it All
https://www.garlic.com/~lynn/2020.html#5 Book: Kochland : the secret history of Koch Industries and corporate power in America
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2020.html#3 Meet the Economist Behind the One Percent's Stealth Takeover of America
https://www.garlic.com/~lynn/2018f.html#8 The LLC Loophole; In New York, where an LLC is legally a person, companies can use the vehicles to blast through campaign finance limits
https://www.garlic.com/~lynn/2018e.html#107 The LLC Loophole; In New York, where an LLC is legally a person

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 23 Oct, 2023
Blog: Facebook
I had taken a two-credit-hr intro to computers/fortran class and within a year the univ hired me fulltime, responsible for os/360 (the univ was sold a 360/67 for tss/360 to replace 709/1401; temporarily they got a 360/30 to replace the 1401 pending arrival of the 360/67). At the end of the semester, the univ. had hired me to rewrite 1401 MPIO (unit-record frontend for the 709) for the 360/30 (the 360/30 had 1401 emulation so it could have continued to run MPIO, but apparently the rewrite was part of getting 360 experience). I was given a bunch of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The univ. shut down the datacenter over the weekend, and I had the place dedicated (although 48hrs w/o sleep made monday classes difficult). After a few weeks I had a 2000-card assembler program. I also quickly learned that coming in sat. morning, the 1st thing to do was clean all the tape drives; disassemble the 2540 reader/punch, clean it, and reassemble; and clean the 1403. Sometimes production had finished early and the datacenter was dark when I came in. Periodically the 360/30 hung during power-on ... and with some trial&error, I learned to put the controllers in CE-mode, power-on individual controllers, power-on the 360/30, then take all the controllers out of CE-mode.

Within a year of taking the intro class, the 360/67 had arrived and the univ. hires me fulltime, responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65 with os/360). The 709 (tape->tape) did student fortran jobs in less than a second. Initially w/os360 they ran over a minute. I install HASP, cutting the time in half. I then start redoing STAGE2 SYSGEN: 1) enabling it to be run in the production jobstream and 2) placing datasets and PDS members for optimized arm seek and (PDS directory) multi-track search, cutting the time by another 2/3rds to 12.9secs (3-step FORTGCLG). OS/360 never got better than the 709 for student fortran until I install Univ. of Waterloo WATFOR. WATFOR was a single-step batch monitor, rated at 20,000 cards/min (333/sec) on the 360/65. Typically, a tray of student jobs was collected to feed a WATFOR run (30-60 cards/job, 40-80 jobs/tray): around 4secs for the single WATFOR job-step overhead, plus around 6secs to process the tray of cards ... or around 10secs for a tray of 40-80 jobs.

Then some people from the science center came out to install CP/67 (precursor to VM/370) and I mostly got to play with it in my dedicated weekend time. I rewrite a lot of CP67 to improve OS/360 running in a virtual machine. I do a SHARE presentation about both the OS/360 and CP/67 work ... a portion is in this archive post:
https://www.garlic.com/~lynn/94.html#18
The OS/360 test stream ran in 322secs "stand-alone" and originally 856secs w/CP67 ... CP67 CPU 534secs. After a few months, the OS/360 test stream w/CP67 was 435secs ... CP67 CPU 113secs (improvement/reduction: 534-113=421secs).
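
Note the numbers decompose cleanly: elapsed time with CP67 is the stand-alone time plus the CP67 CPU overhead (Python check of the figures above):

    standalone = 322                  # secs, OS/360 test stream alone
    before, before_cpu = 856, 534     # elapsed w/CP67 and CP67 CPU, before
    after, after_cpu = 435, 113       # same, after a few months of rewrites
    print(standalone + before_cpu == before)   # True
    print(standalone + after_cpu == after)     # True
    print(before_cpu - after_cpu)              # 421 secs of overhead removed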

I then start redoing the I/O: CP67 did FIFO single-request disk I/O; I replaced it with ordered seek queuing. Page I/O was FIFO single 4k transfers; I replaced it with rotationally-ordered chained page transfers, chaining all queued requests for the same disk cyl (2311 & 2314) and all queued requests for the 2301 fixed-head drum (the 2301 would peak around 70/sec; chaining could peak closer to 270/sec, nearly channel transfer rate). trivia: the 2301 & 2303 drums were similar, but the 2301 transferred on four heads in parallel, with four times the transfer rate and 1/4th the tracks.
https://en.wikipedia.org/wiki/IBM_drum_storage#IBM_2301
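
A minimal sketch of ordered seek queuing versus FIFO (Python; cylinder numbers invented, simple elevator-style single sweep each direction, not the actual CP67 code):

    def fifo_travel(start, requests):
        total, pos = 0, start
        for cyl in requests:
            total += abs(cyl - pos)
            pos = cyl
        return total

    def ordered_seek_travel(start, requests):
        # sweep up through higher cylinders, then back down through lower
        up = sorted(c for c in requests if c >= start)
        down = sorted((c for c in requests if c < start), reverse=True)
        return fifo_travel(start, up + down)

    reqs = [183, 37, 122, 14, 124, 65, 67]
    print(fifo_travel(53, reqs))           # 640 cylinders of arm motion
    print(ordered_seek_travel(53, reqs))   # 299 cylinders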

Besides redoing CP67 I/O performance at the univ, I redid the page replacement algorithm to use reference bits (for global "LRU") with dynamic adaptive working-set controls to help manage page thrashing. Then I did dynamic adaptive resource management (later referred to as the "wheeler" scheduler).
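
A minimal sketch of the reference-bit "clock" approximation to global LRU (Python; the production version also had the dynamic adaptive working-set controls, which this omits):

    class Clock:
        def __init__(self, nframes):
            self.frames = [None] * nframes    # which page is in each frame
            self.refbit = [False] * nframes
            self.hand = 0

        def touch(self, page):
            if page in self.frames:           # reference sets the bit
                self.refbit[self.frames.index(page)] = True
                return None
            while self.refbit[self.hand]:     # referenced: second chance
                self.refbit[self.hand] = False
                self.hand = (self.hand + 1) % len(self.frames)
            evicted = self.frames[self.hand]  # untouched since last sweep
            self.frames[self.hand] = page
            self.refbit[self.hand] = True
            self.hand = (self.hand + 1) % len(self.frames)
            return evicted

    c = Clock(3)
    for p in [1, 2, 3, 1, 4]:
        c.touch(p)
    print(c.frames)    # [4, 2, 3]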

CP67 had 2741 and 1052 support with dynamic terminal-type identification, using the SAD CCW to switch the terminal-type port scanner for each line. The univ. had some TTY/ASCII terminals ... and I added TTY support (integrated with the dynamic terminal-type identification; trivia: the TTY terminal port scanner arrived in a HEATHKIT box to add to the telecommunication controller). I then wanted to have a single dial-in number for all terminals ... "hunt group"
https://en.wikipedia.org/wiki/Line_hunting

didn't quite work, since IBM had taken a short cut and hard-wired the port line speeds. The univ. then starts a project to implement a clone controller: build a channel interface board for an Interdata/3 programmed to emulate an IBM terminal control unit, with the addition that it could do dynamic line speed. First bug was a hardware redlight. Turns out that when the controller held the channel interface too long, the channel held the memory bus interface and the location-80 timer value couldn't be updated; if an existing timer-tic location-80 update was still being delayed when the next timer tic happened, the machine "red lights" and hangs.

Later it was enhanced to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces, and Interdata sells the box as a clone IBM controller. Four of us get written up as responsible for (some part of) the clone controller business.
https://en.wikipedia.org/wiki/Interdata

Interdata, Inc., was a computer company, founded in 1966 by a former Electronic Associates engineer, Daniel Sinnott, and was based in Oceanport, New Jersey. The company produced a line of 16- and 32-bit minicomputers that were loosely based on the IBM 360 instruction set architecture but at a cheaper price.[2] In 1974, it produced one of the first 32-bit minicomputers,[3] the Interdata 7/32. The company then used the parallel processing approach, where multiple tasks were performed at the same time, making real-time computing a reality.[4]

... snip ...

The Univ library got an ONR grant to do online catalog and some of the money went for a 2321 "datacell"
https://www.ibm.com/ibm/history/exhibits/storage/storage_2321.html

The online catalog was also selected as betatest for the (original, charged-for) CICS product, and CICS support/debugging was added to my tasks. One of the 1st problems was that CICS couldn't open the datasets at startup. It turns out that CICS had some (undocumented) hard-coded BDAM options and the library had built the datasets with a different set of options. Other CICS lore:
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

I was in a datacenter around the turn of the century that had banner above a mainframe that proclaimed some 130 (concurrently running) CICS "instances".

HASP/ASP, JES2/JES3, NJI/NJE posts
https://www.garlic.com/~lynn/submain.html#hasp
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement and dynamic adaptive working set posts
https://www.garlic.com/~lynn/subtopic.html#clock
360 plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

some 709/1401, MPIO, 360/30, OS/360, WATFOR, 360/67, and CP/67 posts
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2019e.html#19 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018d.html#104 OS/360 PCP JCL
https://www.garlic.com/~lynn/2009b.html#71 IBM tried to kill VM?
https://www.garlic.com/~lynn/2005q.html#7 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 23 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter was the largest in the world; 360/65s were arriving faster than they could be installed. Lots of politics between the Renton manager and the CFO, who only has a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). There is a disaster plan to replicate Renton up at the new 747 plant in Everett (Mt Rainier heats up and the resulting mud slide takes out the Renton datacenter). When I graduate, I join the IBM science center (instead of staying at Boeing).

Some of the MIT CTSS/7094 people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
had gone to the 5th flr to do Multics
https://en.wikipedia.org/wiki/Multics
others had gone to the science center on the 4th flr
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
and did virtual machines, the internal network (technology also used for the corporate-sponsored univ. BITNET), online/interactive applications, and performance & capacity planning; CTSS RUNOFF was redone for CMS as "SCRIPT", and GML was invented there in 1969 (GML tag processing was then added to SCRIPT; after a decade GML morphs into the ISO standard SGML, and after another decade morphs into HTML at CERN)

Melinda's history
http://www.leeandmelindavarian.com/Melinda#VMHist
CSC, CP67, VM370 history
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
CSC initially modified 360/40 with virtual memory and did CP40/CMS
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
... and when the 360/67 becomes available, standard with virtual memory, it morphs into CP67/CMS (precursor to VM370/CMS).

One of my hobbies after joining IBM, was enhanced production operating systems for internal datacenters ... and the online sales&marketing support HONE systems were long time customers.

Note IBM started "Future System" in the early 70s; it was completely different and was to completely replace 370s (internal politics were shutting down 370 projects; the claim is that the lack of new 370s during the period gave the clone 370 makers their market foothold). When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x & 3081 projects in parallel
http://www.jfsowa.com/computer/memo125.htm

For the 303x (external) channel director they use a 158 engine with just the integrated channel microcode (and w/o the 370 microcode). A 3031 is two 158-3 engines, one with just the 370 microcode and a second with just the integrated channel microcode. A 3032 is a 168-3 reworked to use the 303x channel director for external channels. A 3033 started out as 168-3 logic remapped to 20% faster chips.

In 1980, IBM STL is bursting at the seams and 300 people from the IMS group are being moved to an offsite bldg, with dataprocessing back to the STL datacenter; I get con'ed into doing channel-extender support so they can place channel-attached 3270 controllers at the off-site bldg (with no difference in online/interactive response human factors between inside STL and offsite). The hardware vendor tries to get IBM to release my support, but there is a group in POK playing with some serial stuff (afraid that if it was in the market, it would be harder to get their stuff released) and they get it vetoed.

In 1988, LLNL (national lab) is playing with some serial stuff and the branch office cons me into helping them get it standardized, which quickly becomes the Fibre Channel Standard (FCS, initially 1gbit full-duplex, aggregate 200mbyte/sec), including some stuff I had done in 1980. The POK people then get their stuff released in 1990 with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec). Then some POK people become involved in FCS and define a heavy-weight protocol that drastically cuts the native throughput, which is eventually released as FICON. The most recent public benchmarks I can find are z196 "peak I/O" getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time, a native FCS was announced for E5-2600 blades (at the time common in large cloud megadatacenters, typically having 500,000 or more such blades) claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over 104 FCS).

FICON
https://en.wikipedia.org/wiki/FICON
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel

other Fibre Channel:

Fibre Channel Protocol
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
Fibre Channel switch
https://en.wikipedia.org/wiki/Fibre_Channel_switch
Fibre Channel electrical interface
https://en.wikipedia.org/wiki/Fibre_Channel_electrical_interface
Fibre Channel over Ethernet
https://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet

trivia: the industry standard CPU benchmark (number of program iterations compared to a 158-3) had 500BIPS for an e5-2600 blade (before IBM sold off its server business; IBM base list price $1815), ten times a max-configured z196 at 50BIPS (priced at $30M).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
script, GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some Boeing CFO & renton posts
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2021k.html#55 System Availability
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#54 Learning PDP-11 in 2021
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 23 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers

After the failure of FS, I got involved in working on a 16-processor 370 and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (MVS) had effective 16-way support; then the head of POK invites some of us to never visit POK again and directs the 3033 processor engineers, "heads down, only on 3033" (note: POK doesn't ship a 16-processor machine until after the turn of the century).

I transfer out to San Jose Research on the west coast and get to wander around lots of IBM and customer datacenters on the west coast, including bldg14&15 (disk engineering and product test) across the street. They are doing 7x24, around-the-clock, stand-alone mainframe testing. They said they had recently tried MVS, but it had a 15min MTBF (in that environment), requiring (manual) re-ipl. I offer to rewrite the I/O supervisor to make it bullet-proof and never fail, enabling any amount of on-demand concurrent testing, greatly improving productivity. I then write an (internal-only) research report on the work, mentioning the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head (I was told offline they tried to separate me from the company; when that didn't work, they tried to make other things unpleasant, including blocking corporate awards for the RAS work and for enhancements to the world-wide online sales&marketing support HONE systems). A couple years later, when 3380 disk drives were about to ship, FE had a series of 57 test errors that they expected would occur; MVS was failing in all 57 cases (requiring manual re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure (I didn't feel badly; I had an MVS "recovery" joke: repeatedly covering up until no evidence of the original problem could be found).

Note trout/3090 had originally designed its channel configuration assuming the 3880/3380 was similar to the 3830 except with 3mbyte/sec transfer. However, the 3830 had a much faster microprocessor for handling channel-program processing overhead. When they found out how slow the 3880 actually was, they realized they would have to significantly increase the number of channels (to offset the significant 3880 channel-busy overhead). The increase in the number of channels required an extra TCM. The 3090 office semi-facetiously claimed they would bill the 3880 office for the increase in 3090 manufacturing cost. Marketing eventually respun all the extra 3090 channels as making it a wonderful I/O machine (rather than compensating for the high 3880 channel-busy overhead).
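
Illustrative arithmetic (Python; target throughput and overhead fractions invented, not actual 3880 figures) for why slow controller protocol handling translates into more channels for the same aggregate throughput:

    import math

    def channels_needed(target_mb_s, channel_mb_s, busy_overhead):
        # each channel delivers its rated speed minus the fraction of time
        # it sits busy on controller protocol handling
        effective = channel_mb_s * (1 - busy_overhead)
        return math.ceil(target_mb_s / effective)

    print(channels_needed(100, 3.0, 0.10))   # 38 channels, fast controller
    print(channels_needed(100, 3.0, 0.40))   # 56 channels, slow controller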

I got in a dust-up with the ironwood/3880-11 guys in Tucson ... it only had an 8mbyte cache for paging ... connected to mainframes that frequently had larger main memories. I showed that if a page wasn't in mainframe memory they would have to do a page-in operation, which meant reading it off disk, leaving a copy in the 3880-11 cache. Since the mainframe paging memory was frequently larger than the 3880-11 cache, anything in the cache would almost always also be in mainframe memory (I claimed "duplicate"), therefore the 3880-11 cache would never be used. I showed that to make any use of the 3880-11 cache, they had to go to a "no-dup" strategy: on read, always do a "no-cache" read ... and only do "cache+disk" writes when a page was being replaced in main memory (the 3880-11 becomes an auxiliary extension to main memory, not a duplicate ... treating it more akin to 3090 expanded storage).
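
A small simulation of the dup/no-dup argument (Python; sizes and reference pattern invented). With "dup" (read-through), the cache only ever holds pages that are also in main memory, but page-in reads are only issued for pages NOT in memory, so the cache never hits; "no-dup" inserts only on page-out, so the cache holds exactly the recently-replaced pages that are candidates to be read next:

    from collections import OrderedDict

    def simulate(trace, mem_size, cache_size, no_dup):
        memory, cache, hits = OrderedDict(), OrderedDict(), 0
        for page in trace:
            if page in memory:
                memory.move_to_end(page)          # LRU touch
                continue
            if page in cache:                     # page-in served from cache
                hits += 1
                if no_dup:
                    del cache[page]               # move, don't duplicate
            elif not no_dup:
                cache[page] = True                # "dup": insert on read
                if len(cache) > cache_size:
                    cache.popitem(last=False)
            memory[page] = True
            if len(memory) > mem_size:
                evicted, _ = memory.popitem(last=False)
                if no_dup:
                    cache[evicted] = True         # "no-dup": insert on page-out
                    if len(cache) > cache_size:
                        cache.popitem(last=False)
        return hits

    trace = list(range(12)) * 50                  # working set > main memory
    print(simulate(trace, 8, 4, no_dup=False))    # 0 cache hits
    print(simulate(trace, 8, 4, no_dup=True))     # 588 of 600 page-ins hit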

In the early 80s at SJR, we had done super-efficient disk record-level I/O traces of lots of different production operations and used the information to feed I/O simulation modeling ... being able to simulate disk-level caches, controller caches, channel-level caches, and system-level caches (all of various sizes and speeds). One thing identified was that lots of commercial dataprocessing was serially processed, involving groups of datasets on daily, weekly, monthly, etc. intervals. Full-track I/O speeded up that processing (also production groups could be archived and brought back as needed). For paging operations, for any fixed amount of electronic cache memory, it was more efficient to use it as a single system-level cache (global LRU) as opposed to dividing it into smaller pieces spread out ("local LRU").
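
A small LRU simulation of that global-vs-local finding (Python; trace and sizes invented): one shared cache lets the busy workload use more than its fixed "fair" share, while equal partitions waste space on the lighter workload:

    from collections import OrderedDict

    def lru_hits(trace, size):
        cache, hits = OrderedDict(), 0
        for key in trace:
            if key in cache:
                hits += 1
                cache.move_to_end(key)            # LRU touch
            else:
                cache[key] = True
                if len(cache) > size:
                    cache.popitem(last=False)     # evict least recently used
        return hits

    hot = [("A", i % 6) for i in range(600)]      # busy user, 6-page loop
    cold = [("B", i) for i in range(60)]          # light user, no re-reference
    print(lru_hits(hot + cold, 8))                # 594 hits, one shared cache
    print(lru_hits(hot, 4) + lru_hits(cold, 4))   # 0 hits, two fixed halves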

smp, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
paging posts (also apply to LRU managed caches)
https://www.garlic.com/~lynn/subtopic.html#clock

some posts mentioning 3090 extra channels/TCM offset 3880 busy
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#107 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#66 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#20 Service Processor
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O

some ironwood, 3880-11, dup/no-dup posts
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2015e.html#18 June 1985 email
https://www.garlic.com/~lynn/2010i.html#20 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2010.html#47 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller chache
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 23 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers

some "vintage" posts from last year
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

archived post refs to z/VM-50th:
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023d.html#55 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023d.html#25 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#6 Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#88 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#74 Al Gore Inventing The Internet
https://www.garlic.com/~lynn/2023c.html#70 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#69 NSFNET (Old Farts)
https://www.garlic.com/~lynn/2023c.html#58 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#48 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#40 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#36 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#32 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#29 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#28 Punch Cards
https://www.garlic.com/~lynn/2023c.html#27 What Does School Teach Children?
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023c.html#22 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#100 5G Hype Cycle
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023b.html#72 Schooling Was for the Industrial Era, Unschooling Is for the Future
https://www.garlic.com/~lynn/2023b.html#69 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#65 HURD
https://www.garlic.com/~lynn/2023b.html#51 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023b.html#12 Open Software Foundation
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#117 IBM 5100
https://www.garlic.com/~lynn/2023.html#110 If Nothing Changes, Nothing Changes
https://www.garlic.com/~lynn/2023.html#109 Early Webservers
https://www.garlic.com/~lynn/2023.html#91 IBM 4341
https://www.garlic.com/~lynn/2023.html#88 Northern Va. is the heart of the internet. Not everyone is happy about that
https://www.garlic.com/~lynn/2023.html#83 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#70 GML, SGML, & HTML
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#52 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2023.html#43 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2023.html#28 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#0 AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
https://www.garlic.com/~lynn/2022h.html#124 Corporate Computer Conferencing
https://www.garlic.com/~lynn/2022h.html#120 IBM Controlling the Market
https://www.garlic.com/~lynn/2022h.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022h.html#94 IBM 360
https://www.garlic.com/~lynn/2022h.html#86 Mainframe TCP/IP
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022h.html#77 The Internet Is Having Its Midlife Crisis
https://www.garlic.com/~lynn/2022h.html#72 The CHRISTMA EXEC network worm - 35 years and counting!
https://www.garlic.com/~lynn/2022h.html#59 360/85
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2022h.html#43 1973 ARPANET Map
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022h.html#24 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#21 370 virtual memory
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#16 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#12 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#8 Elizabeth Warren to Jerome Powell: Just how many jobs do you plan to kill?
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#74 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022g.html#72 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022g.html#65 IBM DPD
https://www.garlic.com/~lynn/2022g.html#60 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#58 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#54 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#49 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#47 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#44 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#43 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#40 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022g.html#23 IBM APL
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#120 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#118 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#110 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#108 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#95 VM I/O
https://www.garlic.com/~lynn/2022f.html#82 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022f.html#79 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#71 COMTEN - IBM Clone Telecommunication Controller
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022f.html#67 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#61 200TB SSDs could come soon thanks to Micron's new chip
https://www.garlic.com/~lynn/2022f.html#57 The Man That Helped Change IBM
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#47 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 23 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#37 Vintage IBM Mainframes & Minicomputers

IBM had a contract with Stratus to sell their box as the S/88
https://en.wikipedia.org/wiki/Stratus_Technologies

The last product we did at IBM before leaving was HA/CMP (now PowerHA)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

it had started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when we start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). In a meeting early Jan1992, AWD VP Hester tells Oracle CEO Ellison that there would be 16-processor clusters by mid-92 and 128-processor clusters by year-end 92. A couple weeks later, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific only) and we are told we couldn't work on anything with more than four processors (we leave IBM a few months later).

The S/88 Product Administrator had started taking us around to some of their customers. They also got me to write a section for the IBM corporate continuous availability strategy document ... however, it got pulled when both Rochester (AS/400) and POK (mainframe) complained that they couldn't meet the requirements.

One candidate was the 1-800 phone system ... a Stratus planned shutdown for software maintenance took the equivalent of a century's worth of allowed outages. We/HACMP could do rolling maintenance across systems, so there was no service outage.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

some past posts mentioning Stratus & S/88
https://www.garlic.com/~lynn/2022b.html#55 IBM History
https://www.garlic.com/~lynn/2009q.html#26 Check out Computer glitch to cause flight delays across U.S. - MarketWatch
https://www.garlic.com/~lynn/2008j.html#16 We're losing the battle
https://www.garlic.com/~lynn/2007q.html#67 does memory still have parity?
https://www.garlic.com/~lynn/2003d.html#10 Low-end processors (again)
https://www.garlic.com/~lynn/2001k.html#11 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#10 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#9 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'

--
virtualization experience starting Jan1968, online at home since Mar1970

Flying Tank: The A-10 Warthog Is Still Legendary

From: Lynn Wheeler <lynn@garlic.com>
Subject: Flying Tank: The A-10 Warthog Is Still Legendary
Date: 23 Oct, 2023
Blog: Facebook
Flying Tank: The A-10 Warthog Is Still Legendary
https://worldofaircraft.blog/flying-tank-the-a-10-warthog-is-still-legendary/

I was introduced to Boyd in the early 80s and used to sponsor his briefings
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/
PDF->kindle, loc1783-88:

Boyd's collaboration with associate Pierre Sprey on the development of the A-10 close air support (CAS) aircraft sparked his exploration of history. The project was Sprey's, with Sprey consulting Boyd on performance analysis, E-M Theory, and views on warfare in general. When designing the A-10, Sprey had to determine what aircraft features provided the firepower and loiter time required by ground forces, while also granting survivability against the enemy ground fire that would inevitably be directed against it. The German Wehrmacht had pioneered both the design and employment of dedicated CAS aircraft in World War II.

... snip ...

Pierre Sprey
https://en.wikipedia.org/wiki/Pierre_Sprey
John Boyd
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
Thomas P. Christie
https://en.wikipedia.org/wiki/Thomas_P._Christie

Pentagon Wars
https://www.amazon.com/Pentagon-Wars-Reformers-Challenge-Guard-ebook/dp/B00HXY969W/
Pentagon Wars
https://en.wikipedia.org/wiki/The_Pentagon_Wars
related NYT article: Corrupt from top to bottom
https://www.nytimes.com/1993/10/03/books/corrupt-from-top-to-bottom.html

Burton was a graduate of the 1st USAF Academy class, on the fast track to general, when (he says) Boyd destroyed his career by challenging him to do what was right.

The Pentagon Labyrinth
http://chuckspinney.blogspot.com/p/pentagon-labyrinth.html
Spinney's website
http://chuckspinney.blogspot.com/
The Domestic Roots of Perpetual War
https://drive.google.com/file/d/1gqgOakOiFsdeH-iIONAW3e20fpXQ3ZdR/view?pli=1

Burton would say that he got the 30mm shell (used by the A10) down to $13 (from nearly $100). Note that desert storm was 43 days and only the last 100 hrs was land war. The GAO desert storm air effectiveness study had A10s firing over a million 30mm DU shells (@$13/shell, $13M total) and 5000 Maverick missiles (@$144,000, $720M total). The 30mm shells were so effective that Iraqi crews were walking away from their tanks (as sitting ducks; later descriptions of fierce tank battles with coalition forces taking no damage don't mention whether the Iraqi tanks had anybody home). There was also a problem with Mavericks that accounted for some number of friendly fire deaths (friendly fire deaths from precision bombing also occur in the current wars).
http://www.gao.gov/products/NSIAD-97-134

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

Rise and Fall of IBM

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Rise and Fall of IBM
Date: 23 Oct, 2023
Blog: Facebook
Rise and Fall of IBM references that a major motivation of Future System was as a countermeasure to clone controllers ... making the interface so complex that the clone makers couldn't keep up:
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems.

... snip ...

There is a claim that the VTAM/NCP interface was an attempt to still meet that FS objective. trivia: as an undergraduate, I was involved in building a box that was sold as a clone controller, mentioned in this recent post:
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
more FS detail
http://www.jfsowa.com/computer/memo125.htm
other discussion of FS in this post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

note: a decade ago, I was asked to track down the decision to add virtual memory to all 370s; I found a staff member who had worked for the executive making the decision. Basically, MVT storage management was so bad that regions had to be specified four times larger than typically used. As a result, a 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Going to 16mbyte virtual memory (similar to running MVT in a CP/67 16mbyte virtual machine) would allow increasing the number of concurrently running regions by a factor of four with little or no paging. Archived post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73

I mentioned Ludlow was doing the initial prototype on a 360/67 ... able to have MVT build the 16mbyte virtual memory tables and handle page faults and page I/O. The biggest issue was SVC0/EXCP: similar to CP/67, it had to make a copy of passed channel programs, replacing virtual addresses with the real addresses required by channels. He crafts a copy of CP/67's CCWTRANS into EXCP.
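
As an illustration (a minimal C sketch, not IBM's actual code), this is the kind of thing a CCWTRANS-style routine does: channels work with real addresses, so a channel program built with virtual addresses is copied and each data address is translated before the I/O is started. The virt_to_real() lookup is a hypothetical stand-in; a real implementation also has to pin the pages and split transfers that cross page boundaries into data-chained CCWs.

/* hypothetical sketch of CCW (channel command word) translation */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t  cmd;       /* channel command code */
    uint32_t data_addr; /* data address (24-bit in the original) */
    uint8_t  flags;     /* chaining flags, etc. */
    uint16_t count;     /* byte count */
} ccw_t;

/* assumed helper: pins the page and returns the real address */
extern uint32_t virt_to_real(uint32_t vaddr);

/* copy a virtual channel program, translating each data address */
void ccw_translate(const ccw_t *virt_prog, ccw_t *real_prog, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        real_prog[i] = virt_prog[i];
        real_prog[i].data_addr = virt_to_real(virt_prog[i].data_addr);
    }
}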

trivia: My wife was co-author of AWP39, "Peer-to-peer Network", in the same time frame as SNA. They had to qualify it with "peer-to-peer" since SNA had co-opted "Network" (even tho SNA wasn't a network; the joke was that SNA wasn't a "System", wasn't a "Network", and wasn't an "Architecture").

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downfall posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
360 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

a few recent posts mentioning AWP39
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#43 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2021h.html#90 IBM Internal network

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 23 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#37 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#38 Vintage IBM Mainframes & Minicomputers

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and the (online sales&marketing support) HONE systems were a long-time customer.

After the 23Jun1969 unbundling announcement (starting to charge for software, SE services, maint, etc), they were having a hard time figuring out how not to charge for SE training ... part of which had been like a journeyman program as part of a large SE group at the customer datacenter. The result was a number of virtual machine CP67 systems with online access from branch offices ... for SEs to practice with guest operating systems. The science center had also ported APL\360 to CP67/CMS for CMS\APL ... redoing storage management from 16kbyte swapped workspaces to large demand-paged virtual memory ... and added an API for system services (like file I/O) ... enabling a lot of real-world applications. HONE started deploying CMS\APL-based sales&marketing support applications ... which came to dominate all HONE processing (the original use for guest operating systems evaporated). I then get tasked with some of the 1st non-US HONE deployments ... which eventually spread world-wide. Mid-70s, the US HONE datacenters were consolidated in silicon valley (trivia: when facebook moves into silicon valley, it is into a new bldg built next door to the former consolidated US HONE datacenter) ... very quickly eight 168s, single-system image, loosely-coupled with a large disk farm and load balancing and fall-over across the complex (I consider it the largest such complex in the world).

In the initial morph of CP67 to VM370, lots of feature/function was dropped or greatly simplified. I then spend part of 1974 moving CP67 stuff into VM370 release 2, as "CSC/VM" for internal datacenters, especially the HONE complex. Then in 1975, initially for US HONE, I add tightly-coupled multiprocessor support to the release 3 base, so US HONE can add a 2nd CPU to each system (16 processors, single-system image, load-balancing and fall-over; now really the largest such complex in the world).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
23june1969 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle
SMP, tightly-coupled, multiprocessor support
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 23 Oct, 2023
Blog: Facebook
long-winded post comment about HONE & the science center redoing APL\360 for CP67/CMS as CMS\APL, which became the primary delivery vehicle for HONE branch office online apps ... making HONE the largest APL deployment. Eventually, customer orders were required to be run through HONE applications before being submitted
https://www.garlic.com/~lynn/2023f.html#41 Vintage IBM Mainframes & Minicomputers

PASC then did APL\CMS for VM370/CMS (and the APL "microcode" assist for the 370/145). Note the APL purists complained that the CMS\APL API semantics violated APL philosophy ... eventually the API was replaced with "shared variable" semantics (for accessing system services) ... becoming APLSV ... evolving into VSAPL.

When the US HONE datacenters were consolidated in silicon valley, it was across the back parking lot from PASC. The APL microcode assist would execute APL on a 145 as fast as a 168; however, the HONE APL-based applications required both the processing power and the memory size of the 168 (which the 145 didn't have).

However, PASC did other things for HONE. One was "SEQUOIA", an APL application of around 500kbytes that provided a tailored interactive online environment for branch office sales&marketing ... and which would appear in every workspace. PASC reworked SEQUOIA, placing it in the shared memory image of the APL interpreter (so there only had to be one copy per system ... rather than a copy for each online user).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

posts mentioning HONE, APL, & SEQUOIA
https://www.garlic.com/~lynn/2022.html#103 Online Computer Conferencing
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2022.html#4 GML/SGML/HTML/Mosaic
https://www.garlic.com/~lynn/2021k.html#34 APL
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#33 HONE story/history
https://www.garlic.com/~lynn/2019b.html#26 This Paper Map Shows The Extent Of The Entire Internet In 1973
https://www.garlic.com/~lynn/2019b.html#14 Tandem Memo
https://www.garlic.com/~lynn/2012.html#14 HONE
https://www.garlic.com/~lynn/2011e.html#72 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2010i.html#13 IBM 5100 First Portable Computer commercial 1977
https://www.garlic.com/~lynn/2009j.html#77 More named/shared systems
https://www.garlic.com/~lynn/2007h.html#62 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006o.html#53 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006o.html#52 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006m.html#53 DCSS
https://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#27 Moving assembler programs above the line
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2002j.html#5 HONE, xxx#, misc
https://www.garlic.com/~lynn/2002j.html#3 HONE, Aid, misc
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Vintage Series/1

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Vintage Series/1
Date: 24 Oct, 2023
Blog: Facebook
Folklore is that the EDX port was done by a summer intern at San Jose Research, and that RPS was done by some people who transferred from Kingston trying to re-invent OS/360 MFT.

Austin had the 801/RISC ROMP for the displaywriter follow-on. When that got canceled, they decided to retarget it for the unix workstation market ... hiring the company that had done the at&t unix port to the ibm/pc (as pc/ix) ... it ships as aix for the pc/rt.

IBM Palo Alto was working with ucb porting their bsd unix ... and ucla porting their locus unix ... initially to series/1.

Later, IBM ships the ucla locus port as aix/370 and aix/386 ... and the ucb bsd port ships as AOS on the pc/rt.

801/risc, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, Power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

some posts mentioning ucb bsd and ucla locus ports, series/1, aix/370, aix/386, aos
https://www.garlic.com/~lynn/2022d.html#79 ROMP
https://www.garlic.com/~lynn/2021b.html#51 CISC to FS to RISC, Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014c.html#21 The PDP-8/e and thread drifT?
https://www.garlic.com/~lynn/2012.html#66 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011f.html#35 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2010i.html#28 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2009f.html#62 How did the monitor work under TOPS?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Vintage Series/1

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Vintage Series/1
Date: 24 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#43 IBM Vintage Series/1

Early 80s, I have the HSDT project, T1 and faster computer links (both terrestrial and satellite) ... one of the first is a T1 satellite link between the Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (on the east coast), which was acquiring boatloads of Floating Point Systems boxes (that had 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems

It used a Collins digital radio microwave tail circuit from Los Gatos to the San Jose plant site, where a 10M satellite dish connected to Kingston. HSDT then gets its own custom-built TDMA satellite system, with 4.5M dishes in Los Gatos and Yorktown Research and a 7M dish in Austin.

Along the way, HSDT gets some corporate funding with strings attached: I have to show at least some IBM content. In the 60s, IBM had the 2701 that supported T1 links, but nothing since. In the 80s, some of the gov. 2701 boxes were failing and IBM FSD finally does a Series/1 T1 Zirpel card for the gov. market. So I order a few Series/1 ... but find that there is a one-year backlog on S/1 deliveries. Turns out that IBM had bought ROLM ... and to have some IBM content, ROLM made a large Series/1 order (apparently the only IBM box that they could use). I had earlier worked with the manager of the ROLM datacenter (when they were at IBM) and do some horse trading ... I will help ROLM with development testing if I can get a couple of their S/1 delivery positions.

Then some Series/1 people con me into turning a baby bell VTAM/NCP implementation done on Series/1 into an IBM type-1 product (with a follow-on ported to RS/6000). It has all resources "owned" by the distributed S/1 environment and simulates cross-domain to the host VTAMs. It has an enormous feature and price/performance advantage over the standard IBM products. Part of a presentation I gave in Raleigh at the fall86 SNA ARB meeting:
https://www.garlic.com/~lynn/99.html#67

I took a production baby bell operation and fed it into the HONE 3725 configurator (for comparison). The VTAM group kept saying the comparison was invalid ... but were unable to say why ... since it was the communication group's own 3725 configurator. Also part of the "baby bell" presentation at the spring '86 COMMON/S1 user group conference:
https://www.garlic.com/~lynn/99.html#70

The Series/1 people were apparently well familiar with the communication group's reputation for internal politics and put in place countermeasures for what they might possibly try. What the communication group did next to tank the project can only be described as truth being stranger than fiction.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
801/risc, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, Power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 24 Oct, 2023
Blog: Facebook
APL reply in thread about APL & REXX
https://www.garlic.com/~lynn/2023f.html#42 Vintage IBM Mainframes & Minicomputers

Early in REX days (before it was renamed REXX and released to customers), I wanted to show that REX wasn't just another pretty scripting language, by redoing a large problem/dump analysis assembler application; working half time over three months, I implemented ten times the function with ten times the performance. I finished early, so I also did a library of automated scripts that search for common failure signatures. (Getting interpreted REX to ten times the performance of the assembler version took some coding hacks.)

I thought it would be shipped to customers, since nearly all internal datacenters and PSRs were using it ... but for whatever reason, it wasn't. I eventually get approval to give presentations on how it was implemented at mainframe user group meetings ... and within a few months, similar implementations started appearing.

Later, the 3090 service processor (3092) group contacts me about releasing it with the 3092. The 3092 started out as a heavily modified VM370 release 6 on a 4331, with all the service screens done in CMS IOS3270 ... later upgraded to a pair of redundant 4361s.

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx

some recent 3092 & dumprx posts
https://www.garlic.com/~lynn/2023f.html#28 IBM Reference Cards
https://www.garlic.com/~lynn/2023e.html#32 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#74 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#29 IBM 3278
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#59 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#41 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#26 IBM Punch Cards

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 24 Oct, 2023
Blog: Facebook
The author of VMSG also did Parasite/Story (in the 70s, before the IBM/PC). VMSG was an email client, and a very early version was picked up by the PROFS group for use as the PROFS email client. The author tried to offer PROFS a much-enhanced version and they tried to get him fired (apparently they had taken credit for everything in PROFS). He then demonstrated that his initials appeared in every PROFS email (in a non-displayed field) ... and everything quieted down. After that, he only shared his source with me and one other person. Parasite was a CMS app that created virtual 3270 screens, and "Story" was a HLLAPI-like app. Old archived posts with example Story scripts (like automagically connecting over the internal network to CCDN, using CCDN to connect to RETAIN, logging into RETAIN, and downloading information):
https://www.garlic.com/~lynn/2001k.html#35
https://www.garlic.com/~lynn/2001k.html#36

Predating that was SPM, originally done by the IBM Pisa science center for CP/67 ... it started being used for automated operator and service virtual machines ... and was later ported to VM/370 in POK ... sort of a superset combination of the later VMCF/IUCV/SMSG. The author of REXX used it for a multi-user spacewar game. CMS 3270 spacewar clients communicated with a game server via SPM, and since VNET/RSCS supported SPM ... game clients could play over the internal network. Trivia: very early "robot" clients started appearing and, with their faster responses, were beating human players. The server was then modified to increase power usage non-linearly as response times dropped below normal human response time.
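
A minimal sketch (purely illustrative C, nothing from the actual game server; the 200ms human response time and the quadratic scaling are assumptions) of that kind of non-linear robot countermeasure:

/* hypothetical robot countermeasure: each command costs energy,
 * scaling quadratically as the interval between a client's commands
 * drops below an assumed human response time */
double energy_cost(double interval_ms)
{
    const double human_ms = 200.0;   /* assumed typical human response */
    if (interval_ms >= human_ms)
        return 1.0;                  /* normal cost at human speeds */
    double ratio = human_ms / interval_ms;
    return ratio * ratio;            /* a robot replying in 20ms pays 100x */
}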

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some other posts mentioning vmsg, profs, and parasite/story
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2017k.html#27 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013d.html#66 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012d.html#17 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That

posts mentioning spm pisa spacewar robot
https://www.garlic.com/~lynn/2021c.html#51 In Biden's recovery plan, an overdue rebuke of trickle-down economics
https://www.garlic.com/~lynn/2021.html#18 Trickle Down Economics Started it All
https://www.garlic.com/~lynn/2019e.html#152 US lost more tax revenue than any other developed country in 2018 due to Trump tax cuts
https://www.garlic.com/~lynn/2019e.html#150 How Trump Lost an Evangelical Stalwart
https://www.garlic.com/~lynn/2019d.html#23 A Deadly Heat Wave After the Hottest June On Record: How the Climate Crisis Is Creating 'a New Normal'
https://www.garlic.com/~lynn/2018c.html#32 Old word processors

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 24 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#37 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#38 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#41 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#42 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#45 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers

note: a decade ago, I was asked to track down the decision to add virtual memory to all 370s; I found a staff member who had worked for the executive making the decision. Basically, MVT storage management was so bad that regions had to be specified four times larger than typically used. As a result, a 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Going to 16mbyte virtual memory (similar to running MVT in a CP/67 16mbyte virtual machine) would allow increasing the number of concurrently running regions by a factor of four with little or no paging. Archived post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73

I mentioned Ludlow was doing the initial prototype on a 360/67 ... able to have MVT build the 16mbyte virtual memory tables and handle page faults and page I/O. The biggest issue was SVC0/EXCP: similar to CP/67, it had to make a copy of passed channel programs, replacing virtual addresses with the real addresses required by channels. He crafts a copy of CP/67's CCWTRANS into EXCP.

An early use of the internal network was a joint project between Endicott and the Science Center ... to add 370 w/DAT virtual machine support to CP/67 ... and part of the effort developed the CMS multi-level source update. CP67-L was the production system running on the real 360/67; CP67-H was a modified CP67 running in a 360/67 virtual machine that provided 370 virtual machines (since the science center also had staff, profs, and students from Boston univ. on CP67-L, they wanted an extra level of security to prevent unannounced 370 DAT details leaking); and CP67-I was a modified CP67 running in a 370 virtual machine. This was in regular use a year before the 1st engineering 370 with DAT (a 370/145) was running; in fact, CP67-I was used as a regression test for the engineering machine.

Then the 165 engineers started complaining that the virtual memory announce would have to slip six months if they had to implement the full 370 virtual memory architecture ... eventually it was decided to regress to the 165 subset, with all the other models having to revert to the 165 subset as well (and any software already developed for the full architecture had to be redone).

Then three engineers came out from San Jose and added 2305 & 3330 device support to CP67-I, creating CP67-SJ. With DAT for all 370s, the decision had been made to do VM370, and some of the science center staff split off, taking over the Boston Programming Center on the 3rd flr to do VM370 (the morph from CP67 to VM370 simplified and/or dropped a lot of CP67 function). However, CP67-SJ was the production system for lots of internal 370 w/DAT systems ... even long after VM370 became available.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some posts mentioning CP67-L, H, I, SJ and multi-level source:
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2009s.html#17 old email

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/65 and 360/67

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/65 and 360/67
Date: 24 Oct, 2023
Blog: Facebook
from bitsavers pdf/ibm/360
http://bitsavers.org/pdf/ibm/360/

Originally announced were the 360/60, 360/62 & 360/70 ... later replaced with the 360/65 & 360/75 with faster 750ns memory (1964 System Summary)
http://bitsavers.org/pdf/ibm/360/systemSummary/A22-6810-0_360sysSummary64.pdf

The single-processor 360/65 & 360/67 differed pretty much by the addition of the DAT box. The multiprocessors were a lot different. The 360/65 MP was pretty much two processors hung off a common memory bus, with processor-specific external channels. The 360/67 MP had multi-ported memory and a channel director ... capable of concurrent memory transfers involving both processors as well as channels, with all processors able to access all channels (the 360/65 simulated a multiprocessor channel environment with twin-tailed controllers at the same channel address on the dedicated processor-specific channels).

The original 360/67 was planned for up to four processors; the 360/67 control registers reflected the configuration switches on the channel controller
http://bitsavers.org/pdf/ibm/360/functional_characteristics/A27-2719-0_360-67_funcChar.pdf
http://bitsavers.org/pdf/ibm/360/functional_characteristics/GA27-2719-2_360-67_funcChar.pdf

There was one three-processor system built, with the addition that it was possible to change the configuration switches under software control, by changing the control register values. The MP specs had extra memory latency because of the multi-ported memory implementation. A "half-duplex" configuration (aka a single processor, but with multi-ported memory) running a purely CPU-intensive workload would have lower throughput. However, concurrent heavy I/O and heavy CPU processing could have higher throughput than a similar "single processor" configuration (because the multi-ported memory could do concurrent operations).

360/65 functional characteristics, 4th edition, Sept1968 .... has Appendix A, multiprocessor system, pg30
http://bitsavers.org/pdf/ibm/360/functional_characteristics/A22-6884-3_360-65_funcChar.pdf

Note: when Charlie was doing fine-grain multiprocessor locking work for CP/67 at the science center, he invented the compare-and-swap instruction (the "CAS" mnemonic chosen because they are Charlie's initials). Attempts to get it added to the 370 architecture were initially rebuffed; the POK favorite-son operating system (MVT) people claimed that the "test&set" instruction was sufficient for multiprocessor operation. We were told that in order to justify CAS for 370s, uses had to be found other than kernel multiprocessor locking. Thus was born its use for various kinds of multi-threading/multi-programming serialization (useful in both single and multiple processor environments), with use examples added to the Principles of Operation.

In Sep1975, GA22-7000-4
http://bitsavers.org/pdf/ibm/370/princOps/GA22-7000-4_370_Principles_Of_Operation_Sep75.pdf

The COMPARE AND SWAP and COMPARE DOUBLE AND SWAP instructions can be used in multiprogramming or multiprocessing environments to serialize access to counters, control words, and other common storage areas.

... snip ...
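
A minimal modern rendering of that Principles of Operation counter example (C11 atomics standing in for 370 assembler; entirely illustrative):

#include <stdatomic.h>

static _Atomic unsigned long counter;

/* serialize updates to a shared counter with compare-and-swap:
 * retry until no other thread/processor changed the value between
 * the load and the swap */
void increment(void)
{
    unsigned long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
        ;   /* on failure, 'old' is refreshed with the current value */
}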

cambridge scientific center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
smp, multiple processing, tightly-coupled, and/or compare-and-swap instruction posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3350FH, Vulcan, 1655

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3350FH, Vulcan, 1655
Date: 24 Oct, 2023
Blog: Facebook
I tried to get multiple-exposure support (multiple subchannel addresses) for 3350s with the fixed-head feature (3350FH) ... similar to the 2305 fixed-head disk ... so it could queue up multiple channel programs ... effectively allowing data transfer from the FH area while a moveable-arm seek was in operation (being able to use it for higher-performance paging operations). However, the VULCAN electronic disk group in POK got it vetoed (concerned that it would impact VULCAN sales as a paging device). Then, before VULCAN is actually announced and ships, VULCAN is killed (told that IBM is already selling every memory chip it is making at higher markup/profit) ... but it is too late to resurrect 3350FH multiple exposure.

In place of VULCAN, IBM contracts with vendors for electronic disks simulating the 2305 for internal datacenters (the "1655"). However, you could (also) get a 3mbyte/sec FBA mode from the vendors with much higher throughput (than simulated 2305 CKD ... but only VM370 had the FBA support).

posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

a few posts specifically mentioning 3350FH, VULCAN and 1655
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021f.html#75 Mainframe disks
https://www.garlic.com/~lynn/2017k.html#44 Can anyone remember "drum" storage?
https://www.garlic.com/~lynn/2017e.html#36 National Telephone Day
https://www.garlic.com/~lynn/2017d.html#65 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2004e.html#3 Expanded Storage
https://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future

--
virtualization experience starting Jan1968, online at home since Mar1970

The Most Important Computer You've Never Heard Of

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Most Important Computer You've Never Heard Of
Date: 24 Oct, 2023
Blog: Facebook
The Most Important Computer You've Never Heard Of
https://getpocket.com/explore/item/the-most-important-computer-you-ve-never-heard-of

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM ... more detailed ref:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

One of his (many) stories was about being very vocal that the electronics across the trail wouldn't work; then, possibly as punishment, he is put in command of "spook base" ... about the same time I'm at Boeing ... recent Boeing ref:
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers

He claimed that it had the largest air conditioned bldg in that part of the world ... detailed ref (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
The above claims a picture of 2250M4 (i.e. 2250+1130) displays inside the datacenter ... but the picture looks just like the SAGE picture (not 2250s). It mentions all its 360/65 systems were transferred back to CONUS in Oct1975.

I had thought that the Boeing Renton datacenter was the largest in the world ... something like a couple hundred million in IBM gear ... but a Boyd biography claims "spook base" was a $2.5B "windfall" for IBM (ten times Renton).

Both Boeing and IBM branch office people told the story about the day the 360 was announced: Boeing walked into the marketing rep's office and made a large 360 order. It was in the days when IBM sales were still on straight commission, and (supposedly?) the marketing rep was the highest-paid IBM employee that year. The following year, IBM changed to "quotas" (rather than straight commission) ... and at the end of Jan, Boeing walked in with another large order, making the marketing rep's quota for the year. His quota was then "adjusted", and the marketing rep left IBM shortly after.

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

recent posts mentioning IBM sales changing from straight commission to quota
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022e.html#73 Technology Flashback
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2022d.html#100 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#48 IBM Quota

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Vintage Series/1

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Vintage Series/1
Date: 24 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#43 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1

IMS hot-standby trivia: My wife was in the gburg JES group, one of the catchers for ASP/JES3, and co-author of JESUS (JES Unified System: all the features of JES2 & JES3 that the respective customers couldn't live without; for whatever reason, it never came to fruition). Then she was con'ed into going to POK, responsible for loosely-coupled architecture, where she did the peer-coupled shared data architecture. She didn't remain long because of 1) periodic battles with the communication group trying to force her into using VTAM for loosely-coupled operation and 2) little uptake (until much later, with SYSPLEX and Parallel SYSPLEX), except for IMS hot-standby. She has a story about asking Vern Watts who he was going to ask for permission to do IMS hot-standby; he replies "nobody" ... he would just do it and tell them when it was all done.

hasp/asp, jes2/jes3, nji/nje posts
https://www.garlic.com/~lynn/submain.html#hasp
loosely-coupled, peer-coupled shared dasd posts
https://www.garlic.com/~lynn/submain.html#shareddata

The IMS hot-standby people were interested in getting the baby bell Series/1 implementation. In the 3090 time-frame, IMS could fall over in minutes, but VTAM had a problem re-establishing all the sessions in a large configuration, taking well over an hour (even on a large 3090) ... session creation overhead increased significantly as the number of sessions increased. The Series/1 implementation could do VTAM "shadow sessions" for the IMS hot-standby system, essentially achieving VTAM "hot-standby" as well.

a few archived posts mentioning IMS hot-standby and shadow sessions
https://www.garlic.com/~lynn/2022c.html#79 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2019d.html#114 IBM HONE
https://www.garlic.com/~lynn/2018e.html#2 Frank Heart Dies at 89
https://www.garlic.com/~lynn/2016e.html#85 Honeywell 200
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2015d.html#2 Knowledge Center Outage May 3rd
https://www.garlic.com/~lynn/2013l.html#46 Teletypewriter Model 33
https://www.garlic.com/~lynn/2009r.html#21 Small Server Mob Advantage

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Vintage 1130

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Vintage 1130
Date: 25 Oct, 2023
Blog: Facebook
When I 1st joined IBM at the Cambridge Science Center, they had a 2250m4 (i.e. a 2250 with an 1130 as controller).

https://en.wikipedia.org/wiki/IBM_2250
somebody had ported spacewar from pdp1 to the 2250M4
https://en.wikipedia.org/wiki/Spacewar%21
I would bring my kids in on weekends to play. Control was the 2250 keyboard, with the keys split in half for the two players. The 2250m4 is also mentioned in this recent post:
https://www.garlic.com/~lynn/2023f.html#50 The Most Important Computer You've Never Heard Of

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

recent posts mentioning pdp1 space war and science center spacewar
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2022g.html#23 IBM APL
https://www.garlic.com/~lynn/2022f.html#118 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022.html#63 Calma, 3277GA, 2250-4
https://www.garlic.com/~lynn/2021k.html#47 IBM CSC, CMS\APL, IBM 2250, IBM 3277GA
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use

Boeing Huntsville had gotten a duplex (two-processor) 360/67 for TSS/360, with several 2250m1s (for CAD/CAM). TSS/360 never came to production fruition, so they configured it as two 360/65 systems running MVT. As mentioned several times, MVT had a bad storage management problem, exacerbated by long-running apps like CAD/CAM (CICS addressed the problem by getting a large block of storage at startup and doing its own storage management). Boeing addressed the problem by modifying MVT13 to run in virtual memory ... it didn't do any paging ... but could re-arrange page tables to give the appearance of contiguous memory (one of MVT's storage management issues was storage fragmentation). This could be considered a precursor to the decision to add virtual memory to all 370s and MVT becoming VS2.
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers

posts mentioning Boeing Huntsville 360/67 2250s mvt13
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2010c.html#4 Processes' memory
https://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
https://www.garlic.com/~lynn/2009r.html#43 Boeings New Dreamliner Ready For Maiden Voyage
https://www.garlic.com/~lynn/2007m.html#60 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2001m.html#55 TSS/360

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Vintage ASCII 360

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Vintage ASCII 360
Date: 25 Oct, 2023
Blog: Facebook
The 360 was originally supposed to be an ASCII machine ... but the ASCII unit record gear wasn't ready ... so temporarily they were going to use the (old) BCD unit record gear (ref gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

recent post that talks about having to do ASCII/TTY terminal support in 60s
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers

I then wanted to have the IBM 360 telecommunication controller do something it couldn't do ... kicking off a clone controller project at the univ ... adding a channel interface board to an Interdata mini-computer programmed to emulate the IBM controller (later sold by the vendor as a clone telecommunication controller). trivia: early bug ... it turns out the IBM controller convention was that the leading bit coming in off the line went into the low-order position ... so terminal ASCII arrived in 360 memory as bit-reversed bytes; the initial implementation didn't do the bit-reversed bytes ... so the IBM-supplied ASCII translate tables produced garbage ... we quickly identified the problem and updated the Interdata code to follow the IBM bit-reversed byte convention.
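
A small C sketch of the problem (illustrative only): with the leading line bit shifted into the low-order position, ASCII 'A' (0x41) arrives in memory as 0x82, so a normal translate table indexes the wrong entry; reversing each byte first (or building a bit-reversed translate table) restores sanity:

#include <stdint.h>

/* reverse the bit order within one byte, one bit at a time */
uint8_t bit_reverse(uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        r = (uint8_t)((r << 1) | (b & 1));  /* shift next low bit in */
        b >>= 1;
    }
    return r;
}
/* e.g. bit_reverse(0x41) == 0x82: ASCII 'A' as the controller stored it */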

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

some recent ascii 360 machine posts
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#94 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023e.html#24 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023.html#80 ASCII/TTY Terminal Support
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#100 IBM 360
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2022d.html#24 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#116 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#51 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#91 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Mainframes & Minicomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Mainframes & Minicomputers
Date: 25 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#37 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#38 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#41 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#42 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#45 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers

23jun1969 unbundling trivia: the announcement started charging for (application) software, but IBM managed to make the case that kernel software should still be free. In the distraction of "Future System", the rise of the clone 370 makers, and then the FS implosion, the decision was made to start the transition to charging for kernel software ... and some of the code I was doing for internal datacenters was chosen as the initial guinea pig (I got to spend a lot of time with lawyers and business people about kernel software charging politics).

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

some posts mentioning charged for resource manager
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2012l.html#27 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2011e.html#79 I'd forgotten what a 2305 looked like
https://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance
https://www.garlic.com/~lynn/2011c.html#87 A History of VM Performance
https://www.garlic.com/~lynn/2008c.html#78 CPU time differences for the same job
https://www.garlic.com/~lynn/2006y.html#17 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2006.html#25 DCSS as SWAP disk for z/Linux

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 5100

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 5100
Date: 26 Oct, 2023
Blog: Facebook
Early/mid 70s, the 5100 was done at the palo alto science center (Los Gatos was an ASDD location and then morphed into a VLSI center)
https://en.wikipedia.org/wiki/IBM_5100
recent post mentioning other APL work at PASC
https://www.garlic.com/~lynn/2023f.html#42 Vintage IBM Mainframes & Minicomputers

HONE posts, also mentioning PASC and APL work
https://www.garlic.com/~lynn/subtopic.html#hone

trivia: I was at SJR (ALM after the move up the hill) and YKT (continued to live in San Jose, but had to commute to YKT a couple times a month), but LSG (Los Gatos) let me have part of a wing with offices and labs.

... as to distributed computing ... it was possible to deploy vm/4341+3370FBA in non-datacenter environments; in the early 80s, large corporations were making orders for hundreds of vm/4341s for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Inside IBM, departmental conference rooms were becoming scarce, since so many were done over as vm/4341 rooms.

Late 80s, a senior disk engineer got a talk scheduled at the world-wide, internal, annual communication group conference ... supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales, with data fleeing customer datacenters to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group. The communication group had a stranglehold on datacenters with their corporate strategic responsibility for everything that crossed datacenter walls, and were fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm. As a partial countermeasure, the GPD/ADstar VP of software was investing in distributed computing startups that would use IBM disks ... and would periodically ask us to drop by his investments to see if we could lend a hand.

communication group dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

It wasn't just disk; a couple years later, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company. Dec92 ref:
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM, but get a call from the bowels of Armonk asking if we can help with the breakup. Business units were using supplier contracts in other units via MOUs. After the breakup, many of these business units would be in different companies ... those MOUs needed to be cataloged for turning into their own contracts. Before we get started, the board brings in a new CEO who (partially) reverses the breakup (only slightly delaying the demise of the IBM disk division).

other ref "Rise and Fall of IBM"
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

ibm downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

The Most Important Computer You've Never Heard Of

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Most Important Computer You've Never Heard Of
Date: 26 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#50 The Most Important Computer You've Never Heard Of

Note OS/360 had fixed-address adcons in the program ... with RLD pointers so the loader could "relocate" them to the loaded addresses. TSS/360 came up with a different mechanism that didn't require the program contents to be modified when loaded. When I implemented a paged-mapped filesystem for CMS (1st for cp67/cms, then for vm370/cms), which used the OS/360 compilers/assemblers and so had to support the OS/360 RLD paradigm ... I was constantly fighting to modify programs to make them more like the TSS/360 convention. Part of the objective was to allow shared segment images to appear at different locations in different address spaces (rather than requiring the same system-wide location for each shared program image).
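
A hypothetical C sketch of the difference (names invented for illustration, not either system's actual loader): RLD-style fixups patch address constants in the loaded image, tying a shared copy to one address; base+displacement addressing computes references at run time and leaves the image unmodified:

#include <stdint.h>
#include <stddef.h>

/* OS/360-style: the loader adds the load address to each adcon slot
 * recorded in the RLD, modifying the image; one shared copy therefore
 * can't appear at different addresses in different address spaces */
void apply_rld(uint8_t *image, const uint32_t *rld_offsets, size_t n,
               uint32_t load_addr)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t *adcon = (uint32_t *)(image + rld_offsets[i]);
        *adcon += load_addr;
    }
}

/* TSS/360-style: reference = base register + displacement, resolved at
 * run time, so the program image itself never changes */
static inline uint32_t resolve(uint32_t base_reg, uint32_t displacement)
{
    return base_reg + displacement;
}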

Note the TSS/360 downside was single-level store with synchronized page operations (not overlapping execution and i/o). The (failed) Future System effort had a similar design. One of the last nails in the FS coffin was analysis by the Houston Science Center that 370/195 applications ported to an FS machine made out of the fastest available technology would have the throughput of a 370/145 (approx. 30 times slowdown).

S/38 has been described as a greatly simplified FS: 1) there was sufficient hardware performance headroom in the low-end S/38 market, 2) the S/38 address space was so large that files/programs were assigned unique addresses when they were brought into the system and kept that same unique address (and didn't have to be "relocated").

CMS paged mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
(relocatable) "adcon" posts
https://www.garlic.com/~lynn/submain.html#adcon
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 370/125

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 370/125
Date: 26 Oct, 2023
Blog: Facebook
After "Future System" imploded, the 125 group cons me into doing multiprocessor implementation. The Boeblingen got their hands slapped for the 115/125 design ... which had nine memory bus positions for microprocessors. The 115 had all identical microprocessors with different microprogramming; the microprocessor with 370 microprogramming got about 80KIPS 370. The 125 was the same, except the microprocessor running 370 microcode was 50% faster (about 120KIPS 370). I would support up to five microprocessor running 370 microcode (leaving four positions for microprocessors running controller microcode).

About the same time, Endicott cons me into helping with ECPS (VM microcode assist) for "virgil/tully" (aka 138/148). I was told they had 6kbytes of microcode space for moving 370 instructions into microcode and that the move was approx. byte-for-byte (6k bytes of 370 instructions would translate into approx. 6k bytes of microcode). Original analysis used to select the 370 instructions for moving into microcode:
https://www.garlic.com/~lynn/94.html#21
i.e. the top 6k bytes of executed 370 instructions accounted for 79.55% of kernel CPU use ... moved to microcode they would execute ten times faster.
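
Taking those numbers at face value, the payoff is classic Amdahl arithmetic: if the moved 6k bytes account for 79.55% of kernel CPU and run ten times faster in microcode, kernel time drops to .2045 + .7955/10 ≈ .28 of the original ... roughly a 3.5 times speedup of kernel CPU.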

Then Endicott complained about the five CPU 125 project ... and in the escalation meeting, I had to argue both sides ... but Endicott managed to get the 5-CPU 125 project killed.

trivia: when 115/125 originally shipped, there was a 370 microcode bug for the new 370 "long" instructions. 360 instructions would check the starting and ending storage addresses and not execute if there was a problem. The new 370 "long" instructions were supposed to execute incrementally. IBM VM370 had an IPL gimmick that used MVCL to clear storage with a length of 16mbyte ... taking the program check when it reached the end of memory. 115/125 used the 360 rules, found that the ending address exceeded memory size, and immediately program checked without executing anything.
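
A minimal sketch of the two rules (illustrative C, not the actual microcode; the memory size is hypothetical): under the 370 incremental rule the clear proceeds byte by byte and the program check comes only when a bad address is actually touched, so the 16mbyte-length MVCL first clears everything below the end of memory; under a 360-style pre-check the ending address is validated up front and nothing is cleared.

    #include <stdio.h>

    #define MEMSIZE (512*1024L)          /* hypothetical real-storage size */
    static unsigned char mem[MEMSIZE];

    /* 370 "long" instruction rule: execute incrementally, program check
       only when a bad address is actually touched */
    long clear_incremental(long addr, long len)
    {
        long i;
        for (i = 0; i < len; i++) {
            if (addr + i >= MEMSIZE)
                break;                   /* program check here, after the work */
            mem[addr + i] = 0;
        }
        return i;                        /* bytes actually cleared */
    }

    /* 360-style rule: validate starting and ending addresses up front,
       execute nothing if there is a problem */
    long clear_precheck(long addr, long len)
    {
        if (addr < 0 || addr + len > MEMSIZE)
            return 0;                    /* immediate program check, no work done */
        for (long i = 0; i < len; i++)
            mem[addr + i] = 0;
        return len;
    }

    int main(void)
    {
        long want = 16L*1024*1024;       /* the VM370 IPL gimmick: length 16mbyte */
        printf("incremental rule cleared %ld bytes\n", clear_incremental(0, want));
        printf("pre-check rule cleared %ld bytes\n", clear_precheck(0, want));
        return 0;
    }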

Endicott found there was a lot of clone-maker competition in the 138/148 mid-range market and convinced me to accompany them to the business forecast meetings for 138/148 (lots of travel around the world). US Region business people had been instructed to forecast based on the previous model plus some percent (independent of competition and features). Outside the US, world trade countries were forecasting zero 138/148s, because the clone competition had better 370 price/performance and IBM had to show exclusive features that were better than the clone competition.

Learned that US Regional forecasts tended to be driven by hdqtrs strategic direction ... and if they were wrong, the plant sites had to eat the mistakes. Outside the US, country forecasts turned into firm orders to the manufacturing plants (mistakes were the country's problem, not the manufacturing plant's) ... a lot more accountability than in the US Regions. As a result, manufacturing plants redid the US Regional forecasts.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
370/125 five processor multiprocessor/smp posts
https://www.garlic.com/~lynn/submain.html#bounce
360 & 370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
smp, multiprocessor, tightly-coupled, compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

some posts mentioning us regional and non-us forecasting (for virgil/tully, i.e. 138/148)
https://www.garlic.com/~lynn/2023c.html#24 IBM Downfall
https://www.garlic.com/~lynn/2022.html#33 138/148
https://www.garlic.com/~lynn/2021c.html#62 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021.html#54 IBM Quota
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2018d.html#81 IBM 138/148 & Forecasting
https://www.garlic.com/~lynn/2017i.html#80 WW II cryptography
https://www.garlic.com/~lynn/2017b.html#2 IBM 1970s
https://www.garlic.com/~lynn/2016e.html#92 How the internet was invented
https://www.garlic.com/~lynn/2015b.html#39 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2014l.html#88 IBM sees boosting profit margins as more important than sales growth
https://www.garlic.com/~lynn/2012k.html#8 International Business Marionette
https://www.garlic.com/~lynn/2011m.html#37 What is IBM culture?
https://www.garlic.com/~lynn/2011f.html#42 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2007g.html#44 1960s: IBM mgmt mistrust of SLT for ICs?
https://www.garlic.com/~lynn/2005g.html#16 DOS/360: Forty years

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 5100

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 5100
Date: 27 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#55 Vintage IBM 5100

Note the whole disk technology market was moving to fixed-block ... but the MVS group was blocking it ... so while the underlying technology was continuously moving to fixed-block, there was an ongoing requirement for CKD. In 1980, I offered FBA support to the MVS group and was told that even if it was fully integrated and tested, I needed an extra $26M for training and documentation (requiring $200m-$300m in incremental sales to cover), but since IBM was selling every disk it could make, FBA support would just translate into the same amount of FBA sold instead of CKD (and I wasn't allowed to use lifetime savings in the business case). You can see it in the 3380 formulas for records/track, where record size had to be rounded up to a fixed cell size. Real CKD disks haven't been made for decades, all being simulated on industry-standard fixed-block disks (San Jose had a faction that was heavily entangled in MVS).
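
Illustrative arithmetic only (a sketch, not the published 3380 capacity formula; the 47,476 bytes/track is the 3380's published figure, but the cell and overhead constants here are hypothetical): rounding each record's space up to a whole number of fixed-size cells is the giveaway of fixed-block geometry underneath.

    #include <stdio.h>

    int main(void)
    {
        long track = 47476;   /* published 3380 data bytes/track */
        long cell = 32;       /* hypothetical fixed cell size */
        long ovhd = 480;      /* hypothetical per-record gap/overhead bytes */
        long dlen = 4096;     /* record data length */

        /* record space rounded UP to a whole number of cells ... the
           giveaway that the underlying geometry is fixed-block */
        long cells_per_rec = (dlen + ovhd + cell - 1) / cell;
        long recs = track / (cells_per_rec * cell);
        printf("%ld-byte records/track: %ld\n", dlen, recs);
        return 0;
    }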

DASD, CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd

3370FBA trivia: when I first transfer to SJR, I get to wander around IBM and non-IBM datacenters in silicon valley, including bldg14&15 (disk engineering and product test) across the street. They were doing 7x24, prescheduled, stand-alone mainframe testing and mentioned they had recently tried MVS, but MVS had a 15min MTBF (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bullet-proof and never fail, so they could do any amount of on-demand, concurrent testing (greatly improving productivity). Bldg15 gets the 1st engineering 3033 (outside the POK engineering floor) and since testing only took a percent or two of CPU, we scrounge a 3830 and a string of 3330s for a private online service. Somebody was doing air-bearing simulation as part of the (3370FBA) thin-film head design on SJR's 370/195, but only getting a couple turn-arounds a month. We set him up on the 3033 and he could get multiple turn-arounds a day.
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

Other disk trivia: the original 3380 had 20 track spacings between data tracks. This was cut in half for 3380E, doubling the tracks, and cut again for 3380K, tripling the tracks. In mid-80s, the "father" of 801/risc drags me into helping him with "wide" disk heads that handled 18 tracks: 16 closely spaced data tracks transferring data in parallel, with servo tracks on each side (disk formatted with 16 data tracks and a servo track). Problem was it resulted in a data transfer rate of around 50mbytes/sec at 3600rpm (double that for smaller disks at 7200rpm) ... and IBM was still 5yrs away from mainframe ESCON at 17mbytes/sec.
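
Rough arithmetic behind the 50mbytes/sec (assuming 3380-class track capacity): ~47,476 bytes/track at 3600rpm (60 revs/sec) is ~2.8mbytes/sec per head; 16 data tracks in parallel gives ~45mbytes/sec, i.e. the ~50mbytes/sec figure ... doubling to ~100mbytes/sec at 7200rpm, against channels still at 3mbytes/sec.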

In 1988, the IBM branch office asks me to help LLNL (national lab) standardize some serial stuff they were playing with, which quickly becomes the Fibre Channel standard (initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate). By 1990 we were having FCS cards and FCS non-blocking switches built for RS/6000. Late 80s, I had also gotten sucked into doing HA/6000, originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Informix, Ingres, SYBASE, that had VAXcluster support in the same source base with UNIX). I do a cluster API that simulates VAXcluster semantics to simplify the ports. Early Jan1992, we have a meeting with the Oracle CEO where IBM AWD VP Hester tells Ellison we would have 16-way clusters by mid-92 and 128-way clusters by ye-92. A couple weeks later, cluster scale-up is transferred for announce as the IBM supercomputer and we are told we couldn't work on anything with more than four processors (we leave IBM a few months later). Possibly contributing, mainframe DB2 was complaining that if we were allowed to proceed, it would be at least 5yrs ahead of them. We had also been trying to get 9333
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#9330_family_of_disk_drives
to evolve into fractional/interoperable FCS for the HA/CMP low-end, instead it becomes (incompatible) SSA
https://en.wikipedia.org/wiki/Serial_Storage_Architecture
and FCS at HA/CMP high end
https://en.wikipedia.org/wiki/Fibre_Channel

FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

RDBMS trivia: when I 1st transferred to SJR in the 70s, I'm con'ed into working with Jim Gray and Vera Watson on the original SQL/relational, System/R ... then involved in the tech transfer to Endicott for SQL/DS ("under the RADAR" while the company was preoccupied with the next great DBMS, EAGLE). When EAGLE implodes, there is a request about how fast System/R could be ported to MVS; it eventually is announced as DB2 (originally for decision/support only). When Jim Gray leaves SJR for Tandem in fall 1980, he cons me into IMS consulting and helping with BofA (an early System/R installation with 60 distributed VM/4341s). The Oracle VP in the HA/CMP meeting had previously been at IBM STL and involved in doing the port of System/R to MVS for DB2.

original SQL/relational RDBMS posts
https://www.garlic.com/~lynn/submain.html#systemr

GPD/Adstar trivia: the VP of software also funded the project that added POSIX (aka UNIX compatibility) to MVS (the communication group couldn't actually veto it; while distributed-computing related, it didn't directly involve something that crossed the datacenter walls).

communication group fighting off client/server and distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

EBCDIC/ASCII happened much earlier; 360s were supposed to be ASCII machines, but the ASCII unit record gear wasn't ready ... so they were (supposedly) going to temporarily use the (old) BCD unit record gear with EBCDIC
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

The above attributes it to Learson ... however, it was also Learson who was trying to block the bureaucrats, careerists (and MBAs) from destroying the Watson Legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
So by the early 90s, it was looking like it was nearly over.

ibm downfall, breakup, etc posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM Power/PC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM Power/PC
Date: 27 Oct, 2023
Blog: Facebook
"Somerset" ... the IBM executive that we reported to when doing HA/CMP product, had previously worked at Motorola ... went over to head up Somerset group (designing AIM chips) before SGI hired him as president of MIPS.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

I had HSDT starting in the early 80s, T1 & faster computer links, both terrestrial and satellite. Recent post about the satellite T1 between Los Gatos and Clementi's E&S lab in Kingston with a boatload of Floating Point Systems boxes
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1

We then got our own TDMA system with dishes in Los Gatos, Yorktown and Austin. It allowed AWD to use the VLSI logic simulators for validation ... the LSM in Los Gatos and the EVE in (San Jose) bldg 86 ... for the RIOS chip designs; the claim is it was part of what allowed bringing in the RIOS chips a year early.

recent TDMA, LSM, EVE posts
https://www.garlic.com/~lynn/2023f.html#16 Internet

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

other posts mentioning HSDT, Clementi, Kingston, Floating Point Systems
https://www.garlic.com/~lynn/2022h.html#26 Inventing the Internet
https://www.garlic.com/~lynn/2022f.html#5 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#88 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022c.html#57 ASCI White
https://www.garlic.com/~lynn/2022c.html#52 IBM Personal Computing
https://www.garlic.com/~lynn/2022c.html#22 Telum & z16
https://www.garlic.com/~lynn/2022b.html#79 Channel I/O
https://www.garlic.com/~lynn/2022b.html#69 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2022.html#121 HSDT & Clementi's Kinston E&S lab
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput

other posts mentioning Somerset, TDMA, EVE, & LSM
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2021i.html#67 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#62 Mainframe IPL
https://www.garlic.com/~lynn/2021.html#5 LSM - Los Gatos State Machine
https://www.garlic.com/~lynn/2018b.html#84 HSDT, LSM, and EVE
https://www.garlic.com/~lynn/2014b.html#67 Royal Pardon For Turing
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2005q.html#17 Ethernet, Aloha and CSMA/CD -
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler

--
virtualization experience starting Jan1968, online at home since Mar1970

The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
Date: 28 Oct, 2023
Blog: Facebook
The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://hackaday.com/2023/10/27/the-many-ways-to-play-colossal-cave-adventure-after-nearly-half-a-century/

... after transferring to SJR in the 70s, I got to wander around IBM and non-IBM datacenters in silicon valley. One was TYMSHARE, which I would also periodically see at the monthly BAYBUNCH user group meetings hosted at Stanford SLAC
https://en.wikipedia.org/wiki/Tymshare
... note in Aug1976, TYMSHARE started providing their CMS-based online computer conferencing system (precursor to social media), "free" to the mainframe user group SHARE
https://www.share.org/
as VMSHARE, archives here
http://vm.marist.edu/~vmshare

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on internal network & systems (including the online, world-wide sales&marketing system HONE; trivia: one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters and HONE was a long-time customer). One of the problems was the lawyers, who were concerned that internal employees would be contaminated by exposure to what customers were saying.

On one TYMSHARE visit they demo'ed the ADVENTURE game
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure
https://en.wikipedia.org/wiki/Adventure_game

that one of their people had found on the Stanford SAIL PDP10 and ported to VM370/CMS. I got a copy and made it available to internal users ... and would make source available to those who could demonstrate they had gotten all the points.

After I had distributed a few source copies ... PLI versions, and versions with a larger number of points, started appearing.

One year, SJR was having a corporate audit and the auditors directed that all games had to be removed from the systems; we resisted. Most internal 3270 logon screens had "For Business Purposes Only" ... however, SJR's had "For Management Approved Uses Only".

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

a few old posts
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2016g.html#66 Is the IBM Official Alumni Group becoming a ghost town? Why?

--
virtualization experience starting Jan1968, online at home since Mar1970

The Most Important Computer You've Never Heard Of

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Most Important Computer You've Never Heard Of
Date: 28 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#56 The Most Important Computer You've Never Heard Of

It is possible to journal changes for all sorts of filesystems ... even single-level-store (RS/6000 did journaling for the unix filesystem; large unix systems could require a multi-hr filesystem check after failure). Folklore is that when FS imploded ... Rochester did a much simplified implementation for S/38 ... one feature was all disks in a single filesystem, with possible scatter allocation of a file spread across multiple disks. No difference for small single-disk S/38 systems, but the problems grew enormously as the number of disks increased. To backup, you had to shutdown and backup the whole filesystem (across all disks, not just a single disk). A single disk failure required replacing the failed disk and then doing a whole filesystem restore (had been known to take 24hrs). One of the San Jose disk engineers had gotten a patent on what came to be called "RAID" ... and S/38 was an early adopter ... because a single disk failure was so disastrous for larger complexes.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
801/risc, iliad, romp, rios, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Story by one of the people who worked on the CP67 system for MIT Urban lab in tech sq (and also Multics) ... a datacenter across the quad from 545tech sq (which had the IBM science center on the 2nd&4th flrs and Multics on the 5th flr). They had made a CP67 modification that resulted in 27 system crashes in a single day (but CP67 would automatically dump, come back up, and be immediately live with no human intervention) ... by contrast, Multics could spend an hour or so doing a file system check on their single-level-store (folklore is that UNIX inherited the file system check from MULTICS). After the example of CP67 failing fast and coming automatically back up, MULTICS enhanced their single-level-store
http://www.multicians.org/thvv/360-67.html
(It is a tribute to the CP/CMS recovery system that we could get 27 crashes in in a single day; recovery was fast and automatic, on the order of 4-5 minutes. Multics was also crashing quite often at that time, but each crash took an hour to recover because we salvaged the entire file system. This unfavorable comparison was one reason that the Multics team began development of the New Storage System.)
https://www.multicians.org/nss.html

science center & 545tech sq posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Why Do Mainframes Still Exist

From: Lynn Wheeler <lynn@garlic.com>
Subject: Why Do Mainframes Still Exist
Date: 29 Oct, 2023
Blog: Facebook
... well ... 1980, IBM STL is bursting at the seams and 300 people from the IMS group are being moved to an offsite bldg with dataprocessing service back to the STL datacenter; I get con'ed into doing channel extender support so they can place channel-attached 3270 controllers at the offsite bldg (with no difference in online/interactive response human factors between inside STL and offsite). The hardware vendor tries to get IBM to release my support, but there is a group in POK playing with some serial stuff (afraid that if it was in the market, it would be harder to get their stuff released) and gets it vetoed.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

In 1988, LLNL (national lab) is playing with some serial stuff and the IBM branch office cons me into helping them get it standardized, which quickly becomes Fibre Channel Standard (FCS, initially 1gbit full-duplex, aggregate 200mbyte/sec), including some stuff I had done in 1980.

Then the POK people get their stuff released in 1990 with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec). Then some POK people become involved in FCS and define a heavy-weight protocol that drastically cuts the native throughput, eventually released as FICON. The most recent public benchmark I can find is z196 "peak I/O" getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time, a native FCS was announced for E5-2600 blades (at the time common in large cloud megadatacenters, typically having 500,000 or more such blades) claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over 104 FCS).
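
Doing the division: 2M IOPS across 104 FICON works out to roughly 19K IOPS per link (2,000,000/104 ≈ 19,200), versus the million-plus IOPS claimed for a single native FCS on the blades ... around a factor of 50 per link.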

FICON
https://en.wikipedia.org/wiki/FICON
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel
other Fibre Channel: Fibre Channel Protocol
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
Fibre Channel switch
https://en.wikipedia.org/wiki/Fibre_Channel_switch

FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

... also remember that no CKD DASD have been made for decades, all being simulated on industry fixed-block disks (blades do native I/O directly to fixed-block disks ... while "mainframe" may be doing CKD emulation).

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

... other detail not seen since the z196 time-frame was that for a max-configured z196 with the max number of SAPs (the processors actually doing the I/O), the recommendation was that SAPs operate at no more than 70% CPU ... which would have capped it around 1.5M IOPS

... note some of the mainframe I/O folklore came from marketing respinning the increased number of 3090 channels. Trout/3090 had configured the number of channels for target throughput, assuming the 3880 controller was the same as the 3830 controller but with 3mbyte/sec transfer. However, the 3880 had a very slow processor handling channel commands, which significantly increased channel busy. As a result, in order to achieve the target throughput, they had to significantly increase the number of channels (to offset the enormous increase in channel busy overhead). The big increase in the number of channels required an additional TCM, and the 3090 group semi-facetiously said they would bill the 3880 group for the increase in 3090 manufacturing cost. Marketing then respun the big increase in channels as a wonderful I/O machine ... rather than as offsetting the big increase in disk controller channel busy overhead.
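
Rough arithmetic for why (illustrative numbers, not 3880 measurements): at a fixed channel utilization target, the number of channels needed scales with per-I/O channel busy; if slow controller command handling makes each I/O hold the channel, say, 1.5 times as long, sustaining the same I/O rate takes 1.5 times as many channels.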

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

We can't fight the Republican party's 'big lie' with facts alone

From: Lynn Wheeler <lynn@garlic.com>
Subject: We can't fight the Republican party's 'big lie' with facts alone
Date: 30 Oct, 2023
Blog: Facebook
We can't fight the Republican party's 'big lie' with facts alone
https://www.theguardian.com/us-news/commentisfree/2023/oct/29/you-cant-fight-the-republican-partys-big-lie-with-facts-alone

I was reminded of Wedeen's research when the US Congress finally selected a speaker after weeks of chaos. Their choice, Congressman Mike Johnson of Louisiana, is best known for ardently supporting ex-president Donald Trump's baseless claims that the 2020 election, which Trump lost to Joe Biden, was rigged. Johnson rallied more than 100 House Republicans to question the integrity of the election. He constructed spurious legal arguments that tried to discredit the vote, though his proposals were thrown out by the US supreme court. He raised the unfounded theory that the voting machines used in the election were tampered with.

... snip ...

Dark Towers
https://www.amazon.com/Dark-Towers-Deutsche-Donald-Destruction-ebook/dp/B07NLFHHJ3/
pg315/loc3541-46:

This sort of provocative bombast would come to define Trump's candidacy and then his presidency. But even before his dig at supposed Mexican rapists, he had made racism a crucial part of his public shtick. More than any major American politician in decades, Trump had recognized that there was nothing stopping him from mining the potent seams of race and ethnicity for his political advantage. That is why he had spent years spreading the lie that Barack Obama wasn't born in the United States and therefore was an illegitimate president. It didn't matter that the assertion was false. The point was to grab attention and to inflame passions, and Trump--the star of his own popular reality-TV show--had an undeniable knack for doing exactly that.

... snip ...

Big lie
https://en.wikipedia.org/wiki/Big_lie

A big lie (German: große Lüge) is a gross distortion or misrepresentation of the truth primarily used as a political propaganda technique.[1][2] The German expression was first used by Adolf Hitler in his book Mein Kampf (1925) to describe how people could be induced to believe so colossal a lie because they would not believe that someone "could have the impudence to distort the truth so infamously".

... snip ...

Michael Cohen: Trump moves are 'right out of "Mein Kampf"'
https://thehill.com/blogs/blog-briefing-room/4129073-michael-cohen-trump-moves-are-right-out-of-mein-kampf/
Donald Trump and The Big Lie
https://medium.com/stories-ive-been-meaning-to-tell-you/the-big-lie-a490c3b441f8
Trump's false or misleading claims total 30,573 over 4 years
https://www.washingtonpost.com/politics/2021/01/24/trumps-false-or-misleading-claims-total-30573-over-four-years/
Trump's Habit of Lying About Everything All the Time May Cost Him Trump Tower
https://www.vanityfair.com/news/2023/09/trumps-lying-about-everything-may-cost-trump-tower
Fact check: Trump lies that Senate Democrats stole the 2020 election, baselessly accuses NBC's owner of treason
https://www.cnn.com/2023/09/25/politics/fact-check-trump-treason-nbc-senate-democrats/index.html
The 15 most notable lies of Donald Trump's presidency
https://www.cnn.com/2021/01/16/politics/fact-check-dale-top-15-donald-trump-lies/index.html

racism posts
https://www.garlic.com/~lynn/submisc.html#rascism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

specific posts
https://www.garlic.com/~lynn/2022d.html#4 Alito's Plan to Repeal Roe--and Other 20th Century Civil Rights
https://www.garlic.com/~lynn/2022d.html#3 DHS watchdog says Trump's agency appears to have altered report on Russian interference in 2020 election
https://www.garlic.com/~lynn/2022.html#9 Capitol rioters' tears, remorse don't spare them from jail
https://www.garlic.com/~lynn/2021h.html#21 A Trump bombshell quietly dropped last week. And it should shock us all
https://www.garlic.com/~lynn/2021h.html#19 Wealthiest Netted Billions From Trump Tax Cut They Helped Write: Report
https://www.garlic.com/~lynn/2021h.html#10 Analysis: Why Refusing the COVID-19 Vaccine isn't Just Immoral - it's 'un-American'
https://www.garlic.com/~lynn/2021g.html#78 Fox Hosts Hit Peak Bizarro World: Tucker Lies, Says Fauci 'Created' Covid
https://www.garlic.com/~lynn/2021g.html#58 The Storm Is Upon Us
https://www.garlic.com/~lynn/2021f.html#82 Giuliani's Law License Is Suspended Over Trump Election Lies
https://www.garlic.com/~lynn/2021f.html#1 'Madman ... racist, sexist pig': new book details Obama's real thoughts on Trump
https://www.garlic.com/~lynn/2021e.html#77 How climate change skepticism held a government captive
https://www.garlic.com/~lynn/2021e.html#42 The Blind Strategist: John Boyd and the American Art of War
https://www.garlic.com/~lynn/2021e.html#7 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#79 Racism's Loud Echoes in America
https://www.garlic.com/~lynn/2021c.html#43 Just 15% of Americans say they like the way that Donald Trump conducts himself as president
https://www.garlic.com/~lynn/2021c.html#24 US intelligence report finds Saudi Crown Prince responsible for approving operation that killed Khashoggi
https://www.garlic.com/~lynn/2021.html#44 American Fascism
https://www.garlic.com/~lynn/2021.html#40 National Guard deployment in DC
https://www.garlic.com/~lynn/2021.html#30 Trump and Republican Party Racism
https://www.garlic.com/~lynn/2021.html#29 How the Republican Party Went Feral. Democracy is now threatened by malevolent tribalism
https://www.garlic.com/~lynn/2021.html#24 Trump Tells Georgia Official to Steal Election in Recorded Call
https://www.garlic.com/~lynn/2020.html#21 Saudi ruler aimed to 'silence' Washington Post

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Computer Conferencing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Computer Conferencing
Date: 30 Oct, 2023
Blog: Facebook
I had been blamed for online computer conferencing (precursor to social media) on the internal network in the late 70s and early 80s. It really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (he had left San Jose Research fall 1980, pawning some things off on me). Claims were that upwards of 25,000 were reading but only about 300 directly participated ... folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me. One of the outcomes was officially sanctioned forum software and moderated discussion groups ... also used for internal software/tools distribution.

Note when I transferred to San Jose Research, I got to wander around IBM and non-IBM datacenters in silicon valley ... including TYMSHARE (which I would also see at the monthly BAYBUNCH meetings hosted by Stanford SLAC).
https://en.wikipedia.org/wiki/Tymshare
... in Aug1976, TYMSHARE started providing their CMS-based online computer conferencing system (precursor to social media), "free" to the mainframe user group SHARE
https://www.share.org/
as VMSHARE, archives here
http://vm.marist.edu/~vmshare

I cut a deal to get a monthly tape dump of all files for placing on the internal network and systems (including the world-wide, online sales&marketing HONE system). Biggest problem was the lawyers, who were afraid that internal employees could be contaminated by exposure to (unfiltered) customer comments.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

from IBM Jargon:

Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage TSS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage TSS/360
Date: 30 Oct, 2023
Blog: Facebook
lots of cambridge science center, virtual memory, cp40, cp67, tss/360, 360/67 from melinda in vm370 history
http://www.leeandmelindavarian.com/Melinda#VMHist

lots of univs were sold 360/67s for tss/360 ... but tss/360 was late & had lots of problems. Lots of places just used them as 360/65s for os/360 ... Stanford and Univ. of Michigan implemented their own virtual memory systems for the 360/67 (I was at one such univ, having been hired fulltime responsible for os/360 after taking a two-credit-hr intro to fortran/computers). The univ shut down the datacenter for the weekend and I had the place (mostly) dedicated (although 48hrs w/o sleep made Monday classes hard).

Cambridge came out to install CP/67 (3rd install after cambridge itself and MIT Lincoln Labs) and I mostly played with it in my weekend window. I did some benchmarks, mostly running os/360 in a virtual machine ... but also a few simulated CMS interactive users. The TSS/360 SE was still around and we implemented a fortran edit/compile/execute script for the simulated users. CP67/CMS with 35 simulated users had better interactive response and higher throughput than TSS/360 with four users.

I then rewrote a lot of CP67 to cut CP67 processing when running OS/360. The 360/67 replaced a 709/1401 that was running student fortran jobs in less than a second. They initially ran over a minute on OS/360 with 3-step FORTGCLG; I installed HASP, which cut that in half. I then started redoing SYSGEN STAGE2 to place datasets and PDS members to optimize seek and multi-track search, cutting another 2/3rds to 12.9secs. Student Fortran never got better than the 709 until I installed Univ. of Waterloo WATFOR.

I was then doing an OS/360 benchmark that took 322secs on the bare machine; originally run under CP67 it took 856secs, with 534secs of CP67 CPU. After a few months I had rewritten enough CP67 code to get it to 435secs, with 113secs of CP67 CPU (a 534-113=421sec reduction). CP67 did FIFO queuing for all disk I/O (virtual machine and CP) and single 4k page transfers. I redid it to do ordered seek queuing and multiple chained 4k page transfers (ordered to maximize transfers/revolution). A 2301 fixed-head drum had peaked at about 70 page transfers/sec ... I could get it up to about 270/sec. I then redid the page replacement algorithm (to global LRU) and the scheduling algorithm ... which looked a little like CTSS (& MULTICS) ... for dynamic adaptive resource management (other univs/SHARE referred to it as the "wheeler scheduler").
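
A minimal sketch of the ordered-seek idea (illustrative C, not the CP67 code; the request numbers are made up): instead of servicing in FIFO arrival order, sort pending requests by cylinder and sweep the arm across them, which cuts total arm movement ... the same idea as ordering chained page requests to maximize transfers/revolution.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    int main(void)
    {
        int req[] = { 183, 37, 122, 14, 124, 65, 67 };   /* arrival order */
        int n = sizeof req / sizeof req[0], arm = 53, moved = 0, pos;

        /* FIFO cost: arm movement in arrival order */
        pos = arm;
        for (int i = 0; i < n; i++) { moved += abs(req[i] - pos); pos = req[i]; }
        printf("FIFO arm movement: %d cylinders\n", moved);

        /* ordered sweep: sort by cylinder, service everything at/above the
           arm on the way out, then the rest on the way back */
        qsort(req, n, sizeof req[0], cmp);
        pos = arm; moved = 0;
        for (int i = 0; i < n; i++)
            if (req[i] >= arm) { moved += abs(req[i] - pos); pos = req[i]; }
        for (int i = n - 1; i >= 0; i--)
            if (req[i] < arm)  { moved += abs(req[i] - pos); pos = req[i]; }
        printf("ordered arm movement: %d cylinders\n", moved);
        return 0;
    }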

After graduating, I join the cambridge science center ... and one of the things I do is a page-mapped filesystem for CMS ... note TSS/360, MULTICS, and "Future System" were all doing "single level store" ... and I would claim I learned what not to do in a "single level store" implementation from TSS/360.

... note cp40 was done on a 360/40 that was modified to have virtual memory. cp40 morphs into cp67 when 360/67s (standard with virtual memory) become available.

observation: when tss/360 was "terminated", there were about 1200 people in the group (vs 12 people in the cp67/cms group). the tss group was cut back to 20 people and quality greatly improved ... including porting to 370.

note some of the ctss/7094 people had gone to the 5th flr and multics ... others had gone to the ibm science center on the 4th flr and did cp40, cp67, the internal network, and lots of performance work and interactive apps. in the 60s there were two cp67 spinoffs of the science center that provided commercial online services.

before I graduate (and join the science center), I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment ... including offering services to non-Boeing entities). I think the Renton datacenter was possibly the largest in the world ... a couple hundred million in 360s ... 360/65s arriving faster than they could be installed ... boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarged the room and installed a 360/67 for me to play with when I wasn't doing other stuff).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
paged mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
posts about shared segment with paged mapped filesystem
https://www.garlic.com/~lynn/submain.html#adcon

some posts mentioning 709/1401, fortgclg, watfor
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#29 Univ. Maryland 7094
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2015b.html#15 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2015.html#51 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2013h.html#4 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013g.html#39 Old data storage or data base
https://www.garlic.com/~lynn/2013.html#24 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane
https://www.garlic.com/~lynn/2011p.html#5 Why are organizations sticking with mainframes?

posts mentioning tss/360, tss/370, ssup
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2017d.html#76 Mainframe operating systems?
https://www.garlic.com/~lynn/2017.html#20 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2013n.html#24 Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2012o.html#34 Regarding Time Sharing
https://www.garlic.com/~lynn/2012f.html#28 which one came first
https://www.garlic.com/~lynn/2012.html#67 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011o.html#14 John R. Opel, RIP
https://www.garlic.com/~lynn/2011f.html#85 SV: USS vs USS
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons
https://www.garlic.com/~lynn/2010o.html#0 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010l.html#2 TSS (Transaction Security System)
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2008r.html#21 What if the computers went back to the '70s too?
https://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted
https://www.garlic.com/~lynn/2007k.html#43 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2006p.html#22 Admired designs / designs to study
https://www.garlic.com/~lynn/2006m.html#30 Old Hashing Routine
https://www.garlic.com/~lynn/2005b.html#13 Relocating application architecture and compiler support

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage TSS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage TSS/360
Date: 31 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360

After the 1st two CP67 commercial online service spin-offs from the science center in the 60s ... there were not only an increasing number of IBM and customer CP67 installations, but also other CP67 commercial online service providers. The science center had some friendly rivalry with Multics on the 5th flr ... one issue was we never saw any MULTICS commercial online services ... just CP67. Both of the commercial spinoffs quickly moved up the value stream, specializing in wallstreet/financial customers (one of the 60s MIT people at one of the spinoffs joins with Bricklin a decade later to implement the 1st spreadsheet).

recent post about (CP67) Urban lab (online service for urban planners) and CP67 / MULTICS friendly rivalry
https://www.garlic.com/~lynn/2023f.html#61 The Most Important Computer You've Never Heard Of

note folklore is that some of the Multics "bell labs" people went back home and did UNIX as a simplified MULTICS (and the unix filesystem check, FSCK, and other things were inherited from MULTICS). trivia: the Urban lab's 27 crashes (above) were from their modifications to the TTY terminal support code ... which can be called my fault ... since I had done the original ASCII/TTY terminal code at the univ ... including some tricky code (for the fun of it) using 1-byte line lengths (since tty terminals were 72-char lines). Some urban planners down at Harvard had gotten an ascii/tty device with 1200-char line lengths.
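
The failure mode is easy to show in miniature (illustrative C; the actual code was 360 assembler): a one-byte length field wraps modulo 256, so a 1200-char line doesn't fit.

    #include <stdio.h>

    int main(void)
    {
        unsigned char len;    /* 1-byte line length, fine for 72-char TTYs */
        int harvard = 1200;   /* the Harvard device's line length */

        len = (unsigned char)harvard;   /* wraps modulo 256 */
        printf("1200 stored in one byte = %d\n", len);   /* prints 176 */
        return 0;
    }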

post about decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2023f.html#2011.html#73 Vintage IBM Mainframes & Minicomputers

and the VM370 group is spun off from the science center. In the morph from CP67->VM370, a lot of stuff was simplified or dropped. As internal datacenters transitioned internal 370s from CP67SJ to VM370 ... I spend some amount of 1974 adding stuff back into VM370 for my internal CSC/VM distribution
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers

My internal distribution peaks at around 120+ CSC/VM systems ... before lots of the stuff is picked up for inclusion in the standard product, but I could needle the MULTICS people that the maximum total number of MULTICS installations was only 84 (compared to my internal 120+)
https://www.multicians.org/sites.html

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 3380s

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 3380s
Date: 31 Oct, 2023
Blog: Facebook
note, the original 3380 had 20 track spacings between each data track. that was cut in half for 3380E for double the tracks (and capacity) ... and cut again for 3380K for triple the number of tracks (and capacity) ... but no change in transfer rate.

mid-80s, the father of 801/risc cons me into helping him with the "wide-head" design ... 16 closely spaced data tracks with a servo track on each side ... transferring data on all 16 tracks in parallel ... for approximately 16 times the capacity and 16 times the transfer rate (50mbyte/sec) of the original 3380. Going to a smaller disk spinning twice as fast (7200rpm) would give 100mbytes/sec. Problem was mainframe channels were still stuck at 3mbytes/sec and we couldn't make any headway.

1980, STL (since renamed SVL) was bursting at the seams and 300 people were being moved to an offsite bldg with service back to the STL datacenter; I got con'ed into doing channel-extender support so channel-attached 3270 controllers could be placed at the offsite bldg, resulting in no perceived difference in 3270 human factors (back in the days of quarter-second or better response) between offsite and inside STL. The hardware vendor then tries to get IBM to release my support, but there was a group in POK playing with some serial stuff that gets it vetoed (afraid that if it was in the market, it would be harder to get their stuff released).

In 1988, LLNL (national lab) is playing with some serial stuff and the IBM branch office cons me into helping them get it standardized, which quickly becomes the Fibre Channel Standard (FCS, initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec), including some stuff I had done in 1980. Then in 1990, the POK group gets their stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec)

Then some POK engineers become involved in FCS and define a heavy-weight protocol that drastically reduces the native throughput, eventually released as FICON. The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON (over 104 FCS). About the same time, an FCS was announced for E5-2600 blades claiming over a million IOPS (two such native FCS having higher throughput than 104 FICON). Also there were IBM pubs recommending that SAPs (system assist processors that do the actual I/O) be held to no more than 70% CPU, which would cap the IOPS around 1.5M.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
posts mentioning getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some past posts mentioning system assist processors
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2021c.html#71 What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2018.html#0 Intrigued by IBM
https://www.garlic.com/~lynn/2015.html#39 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2014k.html#19 High CPU Utilized
https://www.garlic.com/~lynn/2014h.html#57 [CM] Mainframe tech is here to stay: just add innovation
https://www.garlic.com/~lynn/2014c.html#22 US Federal Reserve pushes ahead with Faster Payments planning
https://www.garlic.com/~lynn/2013m.html#33 Why is the mainframe so expensive?
https://www.garlic.com/~lynn/2013j.html#86 IBM unveils new "mainframe for the rest of us"
https://www.garlic.com/~lynn/2013i.html#47 Making mainframe technology hip again
https://www.garlic.com/~lynn/2013h.html#79 Why does IBM keep saying things like this:
https://www.garlic.com/~lynn/2013h.html#40 The Mainframe is "Alive and Kicking"
https://www.garlic.com/~lynn/2013h.html#3 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013g.html#4 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2013c.html#62 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013b.html#6 mainframe "selling" points
https://www.garlic.com/~lynn/2013.html#10 From build to buy: American Airlines changes modernization course midflight
https://www.garlic.com/~lynn/2012o.html#46 Random thoughts: Low power, High performance
https://www.garlic.com/~lynn/2012o.html#25 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012o.html#21 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012o.html#6 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#72 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#46 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012n.html#44 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012m.html#67 How do you feel about the fact that today India has more IBM employees than any of the other countries in the world including the USA.?

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 3380s

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 3380s
Date: 31 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s

3370 thin-film, recent post that talks about air bearing simulation for design 3370 thin film heads (later also used for 3380); also disk technology all moves to fixed-block disks
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
recent post mentioning s/38 filesystem and was early RAID adopter
https://www.garlic.com/~lynn/2023f.html#61 The Most Important Computer You've Never Heard Of

note in the 80s, 3370 FBA was the only new mid-range disk and could be deployed in non-datacenter areas ... large corporations were ordering hundreds of vm/4341s (w/3370s) at a time for deployment out in departmental areas ... sort of the leading edge of the coming distributed computing tsunami. Inside IBM, departmental conference rooms were becoming scarce because so many were converted to vm/4341 rooms.

MVS was looking at the large distributed computing uptake and complained about the lack of a new CKD mid-range disk ... so there was the 3375 (CKD emulated on 3370). It didn't do them much good: distributed computing wanted tens of vm/4341 systems per support person, while MVS still ran tens of support people per mvs/4341 system. Note new CKD disks have not been made for decades, all CKD being simulated on industry-standard fixed-block disks.

note the 3380 was already on its way to fixed-block; see the calculations for records/track, where record size had to be rounded up to cell size. In 1980 I offered the mvs group FBA support. I was told that even if it was fully tested and integrated ... I still needed $26M ($200M-$300M in incremental sales) to cover publications/documentation and education/training ... and I wasn't allowed to use lifetime savings in the business case. Also, since IBM was selling every disk it was making, any MVS FBA support would just translate into the same amount of disks (FBA instead of CKD).

posts mentioning dasd, ckd, fba, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd

a few posts mentioning San Jose engineer filing patent on what becomes RAID
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2017g.html#66 Is AMD Dooomed? A Silly Suggestion!
https://www.garlic.com/~lynn/2014m.html#115 Mill Computing talk in Estonia on 12/10/2104
https://www.garlic.com/~lynn/2014b.html#68 Salesmen--IBM and Coca Cola

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage TSS/360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage TSS/360
Date: 01 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#66 Vintage TSS/360

Note CERN gave a presentation at SHARE on an analysis comparing MVS/TSO with VM370/CMS ... inside IBM, the copies were labeled "IBM Confidential - Restricted" (2nd highest security classification), "on a need to know basis only" (minimizing internal employees knowing what customers actually thought)

MVS song sung at SHARE HASP sing-along (I was at 1st SHARE performance)
http://www.mxg.com/thebuttonman/boney.asp

with respect to comment about Q/FS in boney finger ...

a decade ago, I was asked to track down the decision to add virtual memory to all 370s ... basically MVT storage management was so bad that regions typically had to be specified four times larger than used; as a result, a 1mbyte 370/165 would only be running four concurrent regions, insufficient to keep the processor busy and justified. Going to MVT mapped into a 16mbyte virtual memory (similar to running MVT in a CP67 16mbyte virtual machine) would enable running four times as many regions with little or no paging. Pieces of the email exchange in this archived post:
https://www.garlic.com/~lynn/2011d.html#73
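
The arithmetic: with regions specified ~4 times larger than actually used, a 1mbyte 370/165 holds only four concurrent regions; laying MVT into a 16mbyte virtual address space allows roughly four times as many concurrent regions while the aggregate of what the regions actually touch still fits in the 1mbyte of real storage ... hence little or no paging.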

Ludlow was doing the initial implementation on a 360/67 for VS2/SVS ... a little bit of code to build the virtual memory tables and handle page faults and page I/O. His biggest piece of code was similar to CP67's ... channel programs with virtual addresses are passed to SVC0/EXCP to do the I/O; copies of the channel programs have to be made, substituting real addresses for virtual ... and he borrowed CP67's CCWTRANS to craft into EXCP.
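
A minimal sketch of what CCWTRANS-style translation does (illustrative C; the CCW layout, page table, and names are all hypothetical): the channel only understands real addresses, so the supervisor copies the virtual-address channel program and substitutes real addresses before handing the shadow copy to the channel.

    #include <stdio.h>

    #define PAGE 4096

    /* hypothetical CCW: one channel command word */
    struct ccw { unsigned char op; unsigned addr; unsigned short len; };

    /* hypothetical page table: virtual page number -> real page address;
       real systems page-fault/pin here ... sketch assumes pages resident */
    unsigned v2r(unsigned vaddr)
    {
        static const unsigned page_table[] = { 0x7000, 0x3000, 0x9000 };
        return page_table[vaddr / PAGE] + vaddr % PAGE;
    }

    /* build the shadow channel program: copy each CCW, substituting the
       real address for the virtual one before the channel sees it; real
       translation also splits CCWs whose data crosses page boundaries
       and pins the pages for the duration of the I/O */
    void ccwtrans(const struct ccw *virt, struct ccw *shadow, int n)
    {
        for (int i = 0; i < n; i++) {
            shadow[i] = virt[i];
            shadow[i].addr = v2r(virt[i].addr);
        }
    }

    int main(void)
    {
        struct ccw prog[] = { { 0x02, 0x1000, 512 } };  /* read into vaddr 0x1000 */
        struct ccw shadow[1];
        ccwtrans(prog, shadow, 1);
        printf("virtual 0x%x -> real 0x%x\n", prog[0].addr, shadow[0].addr);
        return 0;
    }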

Then VS2/SVS is upgraded to VS2/MVS with multiple 16mbyte virtual memories ... and was supposed to be the basis for Future System. FS was completely different and was to completely replace 370; during FS, internal politics were shutting down 370 efforts, and the lack of new 370s is credited with giving the clone 370 makers their market foothold. When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts. Some more FS background:
http://www.jfsowa.com/computer/memo125.htm

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

CERN MVS/TSO : VM370/CMS analysis at SHARE
https://www.garlic.com/~lynn/2023e.html#66 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#39 IBM Teddy Bear
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2020.html#28 50 years online at home
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives
https://www.garlic.com/~lynn/2007t.html#40 Why isn't OMVS command integrated with ISPF?
https://www.garlic.com/~lynn/2003o.html#16 When nerds were nerds
https://www.garlic.com/~lynn/2003k.html#13 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003g.html#14 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003c.html#69 OT: One for the historians - 360/91
https://www.garlic.com/~lynn/2001l.html#20 mainframe question

other posts mentioning SHARE boney finger
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022f.html#34 Vintage Computing
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2021.html#25 IBM Acronyms
https://www.garlic.com/~lynn/2019b.html#92 MVS Boney Fingers
https://www.garlic.com/~lynn/2014f.html#56 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2009q.html#14 Electric Light Orchestra IBM song, in 1981?
https://www.garlic.com/~lynn/2009q.html#11 Electric Light Orchestra IBM song, in 1981?
https://www.garlic.com/~lynn/99.html#117 OS390 bundling and version numbers -Reply

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage RS/6000 Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage RS/6000 Mainframe
Date: 01 Nov, 2023
Blog: Facebook
re: rs/6000, well one day Nick Donofrio
https://www.amazon.com/If-Nothing-Changes-Donofrio-Story-ebook/dp/B0B178D91G/

stopped in Austin and all the local executives were out of town. My wife put together hand-drawn charts and estimates for doing the NYTimes project for Nick ... and he approved it. It possibly contributed to offending so many people in Austin that it was suggested we do the project in San Jose. It started out as HA/6000, for the NYTimes to port their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, and Ingres, who had VAXcluster support in the same source base with UNIX; I did a cluster API that implemented VAXcluster semantics to simplify the ports).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
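
As an aside, a minimal sketch of what "VAXCluster semantics" means for a lock API: the six VMS distributed lock manager modes and their compatibility matrix. Everything below (class, names) is a hypothetical illustration, not the actual HA/CMP cluster API:

  # Minimal sketch of VAXCluster-style distributed lock manager (DLM)
  # semantics: six lock modes and the standard compatibility matrix.
  MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]   # null .. exclusive

  # COMPAT[held][requested]: can a new lock in mode `requested` be
  # granted while a lock in mode `held` exists on the same resource?
  COMPAT = {
      "NL": {"NL": 1, "CR": 1, "CW": 1, "PR": 1, "PW": 1, "EX": 1},
      "CR": {"NL": 1, "CR": 1, "CW": 1, "PR": 1, "PW": 1, "EX": 0},
      "CW": {"NL": 1, "CR": 1, "CW": 1, "PR": 0, "PW": 0, "EX": 0},
      "PR": {"NL": 1, "CR": 1, "CW": 0, "PR": 1, "PW": 0, "EX": 0},
      "PW": {"NL": 1, "CR": 1, "CW": 0, "PR": 0, "PW": 0, "EX": 0},
      "EX": {"NL": 1, "CR": 0, "CW": 0, "PR": 0, "PW": 0, "EX": 0},
  }

  class LockManager:
      def __init__(self):
          self.held = {}            # resource -> list of (owner, mode)

      def request(self, owner, resource, mode):
          """Grant the lock if compatible with all current holders."""
          for _, held_mode in self.held.get(resource, []):
              if not COMPAT[held_mode][mode]:
                  return False      # a real DLM would queue the request
          self.held.setdefault(resource, []).append((owner, mode))
          return True

  lm = LockManager()
  assert lm.request("nodeA", "db.page.17", "PR")      # protected read
  assert lm.request("nodeB", "db.page.17", "PR")      # readers share
  assert not lm.request("nodeC", "db.page.17", "EX")  # writer must wait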

Early Jan1992, we had a meeting with the Oracle CEO where AWD VP Hester told Ellison that we would have 16-way clusters mid-92 and 128-way clusters year-end 92. Within a couple weeks of the Ellison/Hester meeting, cluster scale-up was transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). Contributing was mainframe DB2 complaining that if we were allowed to continue, we would be at least five years ahead of them.

At the time, the S/88 product administrator was also taking us around to some of their customers, as well as getting us to write a section for the corporate continuous availability strategy document (but it gets pulled when both Rochester/AS400 and POK/mainframe complained they couldn't meet the objectives).

We were planning on FCS and FCS non-blocking switches for high-end HA/CMP and also working with Hursley on 9333 ... hoping we could migrate it to fractional, interoperable FCS for low/mid-range HA/CMP ... instead 9333 morphs into (incompatible) SSA
https://en.wikipedia.org/wiki/Serial_Storage_Architecture

1980, STL (since renamed SVL) was bursting at the seams and 300 people were being moved to an offsite bldg with service back to the STL datacenter. I got con'ed into doing channel-extender support, so channel-attached 3270 controllers could be placed at the offsite bldg, resulting in no perceived difference in 3270 human factors (back in days of quarter-second or better response) between the offsite bldg and STL. The hardware vendor then tries to get IBM to release my support, but there was a group in POK playing with some serial stuff that gets it vetoed (afraid if it was in the market, it would be harder to get their stuff released).

In 1988, LLNL (national lab) is playing with some serial stuff and an IBM branch office cons me into helping them get it standardized, which quickly becomes Fibre Channel Standard (FCS), including some stuff I had done in 1980; initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec. Then in 1990, the POK group gets their stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec).

Then some POK engineers become involved in FCS and define a heavy-weight protocol that drastically reduces the native throughput, eventually released as FICON. The latest public benchmark I've found is the z196 "Peak I/O" getting 2M IOPS using 104 FICON (over 104 FCS). About the same time, there was an FCS announced for E5-2600 blades claiming over a million IOPS (two such native FCS having higher throughput than 104 FICON). Also there were IBM pubs about SAPs (system assist processors that do the actual I/O) recommending SAPs run at no more than 70% CPU, which would cap the IOPS around 1.5M.
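
Back-of-envelope arithmetic on those figures (just division over the numbers cited above):

  ficon_iops  = 2_000_000   # z196 "Peak I/O" benchmark, over 104 FICON
  ficon_links = 104
  fcs_iops    = 1_000_000   # claimed for one native FCS on E5-2600 blade

  print(ficon_iops / ficon_links)   # ~19,230 IOPS per FICON link
  print(ficon_iops / fcs_iops)      # 2.0 -- two native FCS match 104 FICON
  print(0.70 * ficon_iops)          # 1.4M -- SAPs capped at 70% CPU,
                                    # consistent with the ~1.5M figure above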

FICON
https://en.wikipedia.org/wiki/FICON
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel
other Fibre Channel: Fibre Channel Protocol
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
Fibre Channel switch
https://en.wikipedia.org/wiki/Fibre_Channel_switch

re: DB2; When I first transfer to SJR in the 70s, I did some work with Jim Gray and Vera Watson on the original SQL/relational implementation, System/R. Then I helped with technology transfer to Endicott for SQL/DS ("under the radar" while the corporation was preoccupied with the next new DBMS, "EAGLE"). When "EAGLE" imploded, there was a request for how fast System/R could be ported to MVS ... eventually released as DB2, for decision/support only. At the time, the (future) Oracle VP from the Ellison meeting was at STL, working on the port to MVS.

re: DASD; The original 3380 had 20 track spacings between every data track; that was cut in half to double the number of tracks for 3380E, and cut again to triple the tracks for 3380K (all still 3mbyte/sec transfer). Mid-80s, the "father" of 801/risc (precursor to RS/6000) gets me to help him with a "wide-track" disk head, handling 16 closely-packed data tracks with a servo track on each side and transferring all 16 tracks in parallel: almost 20 times the capacity and 50mbytes/sec; smaller disks at 7200RPM could be 100mbytes/sec. Didn't make much headway since IBM POK/mainframe was still stuck at 3mbytes/sec.
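
Rough bandwidth arithmetic for the wide-track head, using the 3380-class 3mbyte/sec per-track rate from above (the doubling for 7200RPM assumes roughly twice 3380-class rotation speed):

  per_track = 3.0    # mbytes/sec per track, 3380-class transfer rate
  tracks    = 16     # data tracks transferred in parallel

  print(per_track * tracks)      # 48 -- "almost ... 50mbytes/sec"
  print(per_track * tracks * 2)  # ~96 -- smaller 7200RPM disks at roughly
                                 # double the rotation rate -> ~100mbytes/sec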

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
original SQL/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
posts mentioning getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

some disk "wide-head" posts
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2021f.html#44 IBM Mainframe
https://www.garlic.com/~lynn/2019b.html#75 IBM downturn
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2019.html#58 Bureaucracy and Agile
https://www.garlic.com/~lynn/2018f.html#33 IBM Disks
https://www.garlic.com/~lynn/2018d.html#17 3390 teardown
https://www.garlic.com/~lynn/2018d.html#12 3390 teardown
https://www.garlic.com/~lynn/2018b.html#111 Didn't we have this some time ago on some SLED disks? Multi-actuator
https://www.garlic.com/~lynn/2017g.html#95 Hard Drives Started Out as Massive Machines That Were Rented by the Month
https://www.garlic.com/~lynn/2017d.html#60 Optimizing the Hard Disk Directly
https://www.garlic.com/~lynn/2017d.html#54 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2012e.html#103 Hard Disk Drive Construction

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe PROFS

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe PROFS
Date: 01 Nov, 2023
Blog: Facebook
In the 70s, we had lots of discussions after work about what we could do to convince the large numbers of computer illiterate employees to use computers. Then there was a rapidly spreading rumor that the corporate executive committee was using email to communicate ... and we saw an epidemic of middle management redirecting project 3270 deliveries to their desks ... which would be powered on and possibly logged on (trying to give the appearance of computer literacy) ... but not used, with the image being burned into the 3270 screen (any email handled by staff). Note this was in the period when 3270 orders were part of the yearly budget and required justification with VP sign-off.

Note a co-worker at the science center was responsible for the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s), technology also used for the corporate sponsored university BITNET.

The PROFS group was picking up various internal apps and developing their own, wrapping them in 3270 menus for more ease of use, especially for the computer illiterate. They had picked up a very early version of VMSG for the email client. Then when the VMSG author tried to offer them a much enhanced version, they tried to get him fired (apparently they had taken credit for everything in PROFS). The whole thing quieted down when the VMSG author showed that his initials were in every PROFS email (in a non-displayed field). After that the VMSG author only shared his source with me and one other person.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
ibm downfall, breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

a few past posts mentioning PROFS, VMSG, 3270s
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2021c.html#65 IBM Computer Literacy
https://www.garlic.com/~lynn/2019d.html#96 PROFS and Internal Network
https://www.garlic.com/~lynn/2018c.html#15 Old word processors

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage RS/6000 Mainframe

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage RS/6000 Mainframe
Date: 02 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe

1990-era RS/6000 had the POK mainframe beat in processing and I/O (with FCS) and was comparable in hardware reliability; even a two-way HA/6000 beat the POK mainframe in reliability (implied by the IBM S/88 (IBM logo'ed Stratus box) product administrator taking us around and by POK getting the HA/6000 section pulled from the corporate continuous availability strategy document). The big issue is all the customer 360/370 commercial applications.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Note in the early 70s, I attended an Amdahl talk in a large MIT auditorium (shortly after he had left IBM and formed his clone 370 company). He was asked what analysis he used with investors. He replied that even if IBM was to completely walk away from 370, there was ("already") enough IBM customer 370 software to keep him in business through the end of the century. It sort of implied he knew about the IBM Future System project, which was completely different from 370 and was going to completely replace 370 (also internal politics during FS was killing off 370 projects, and the lack of new 370 during the FS period is credited with giving the clone 370 makers their market foothold; when FS finally imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081). Amdahl was asked about FS in later years and claimed he knew nothing about it.

More information about FS:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Amdahl had left when ACS/360 was killed, before FS started. Supposedly IBM executives killed ACS/360 because they felt it would advance the state of the art too fast and IBM would lose control of the market ... the following also lists some ACS/360 features that show up more than two decades later with ES/9000:
https://people.cs.clemson.edu/~mark/acs_end.html

trivia: Jim Gray had left IBM in fall 1980 for Tandem. At Tandem, he did an analysis of system/service outages and found that hardware reliability had advanced to the point that nearly all outages came from environmental causes (floods, earthquakes, power outages, hurricane/tornado, etc) and human mistakes. Copy of his foils:
https://www.garlic.com/~lynn/grayft84.pdf
also
https://jimgray.azurewebsites.net/papers/TandemTR86.2_FaultToleranceInTandemComputerSystems.pdf

one of the reasons we had done the HA/6000 work; I coined the terms disaster survivability and geographic survivability when out marketing.

availability posts
https://www.garlic.com/~lynn/submain.html#available

other trivia: after leaving IBM, the guy running FEDWIRE liked us to stop by NYFED and talk technology. FEDWIRE ran on IMS hot-standby with two systems locally and a 3rd remote, and he would claim that IMS hot-standby and automated operator were responsible for FEDWIRE 100% availability over the previous decade.

Note my wife had been in the gburg JES group, one of the catchers for ASP/JES3 and co-author of the JESUS (JES Unified System) specification: all the features of JES2 and JES3 that the respective customers couldn't live without (for various reasons it never came to fruition). She was then con'ed into going to POK, responsible for (mainframe) "loosely-coupled" architecture ... where she did peer-coupled shared data architecture. She didn't remain long because of 1) constant battles with the communication group trying to force her into using VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX & Parallel SYSPLEX), except for IMS hot-standby. She has a story about asking Vern Watts whose permission he was going to ask to do IMS hot-standby. He replied "nobody"; he would just tell them when it was all done.

peer-coupled shared data posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

A-10 Vs F-35 Close Air Support Flyoff Report Finally Emerges

From: Lynn Wheeler <lynn@garlic.com>
Subject: A-10 Vs F-35 Close Air Support Flyoff Report Finally Emerges
Date: 02 Nov, 2023
Blog: Facebook
A-10 Vs F-35 Close Air Support Flyoff Report Finally Emerges. The report was buried until now, more than four years after secretive comparative testing between the A-10 and F-35 concluded.
https://www.thedrive.com/the-war-zone/a-10-vs-f-35-close-air-support-flyoff-report-finally-emerges

military-industrial(-congressional) complex
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

posts mentioning a-10 and f-35:
https://www.garlic.com/~lynn/2022f.html#9 China VSLI Foundry
https://www.garlic.com/~lynn/2021i.html#88 IBM Downturn
https://www.garlic.com/~lynn/2021d.html#77 Cancel the F-35, Fund Infrastructure Instead
https://www.garlic.com/~lynn/2018f.html#83 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#74 The F-35 has a basic flaw that means an F-22 hybrid could outclass it -- and that's a big problem
https://www.garlic.com/~lynn/2018c.html#2 FY18 budget deal yields life-sustaining new wings for the A-10 Warthog
https://www.garlic.com/~lynn/2017i.html#38 Bullying trivia
https://www.garlic.com/~lynn/2016d.html#89 China builds world's most powerful computer
https://www.garlic.com/~lynn/2016b.html#105 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#10 What Will the Next A-10 Warthog Look Like?
https://www.garlic.com/~lynn/2016.html#57 Shout out to Grace Hopper (State of the Union)
https://www.garlic.com/~lynn/2015f.html#43 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015f.html#42 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015c.html#3 How Russia's S-400 makes the F-35 obsolete
https://www.garlic.com/~lynn/2015b.html#59 A-10
https://www.garlic.com/~lynn/2015.html#16 NYT on Sony hacking
https://www.garlic.com/~lynn/2015.html#10 NYT on Sony hacking
https://www.garlic.com/~lynn/2014i.html#102 A-10 Warthog No Longer Suitable for Middle East Combat, Air Force Leader Says
https://www.garlic.com/~lynn/2014h.html#61 Are you tired of the negative comments about IBM in this community?
https://www.garlic.com/~lynn/2014h.html#52 EBFAS
https://www.garlic.com/~lynn/2014h.html#31 The Designer Of The F-15 Explains Just How Stupid The F-35 Is
https://www.garlic.com/~lynn/2014f.html#90 A Drone Could Be the Ultimate Dogfighter
https://www.garlic.com/~lynn/2014f.html#73 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014c.html#40 F-35 JOINT STRIKE FIGHTER IS A LEMON

--
virtualization experience starting Jan1968, online at home since Mar1970

Why the GOP plan to cut IRS funds to pay for Israel aid would increase the deficit

From: Lynn Wheeler <lynn@garlic.com>
Subject: Why the GOP plan to cut IRS funds to pay for Israel aid would increase the deficit
Date: 02 Nov, 2023
Blog: Facebook
Why the GOP plan to cut IRS funds to pay for Israel aid would increase the deficit
https://www.cnn.com/2023/11/02/politics/israel-aid-irs-funding-deficit/index.html
Why are Changes to IRS Funding Always Scored as Increasing the Deficit?
https://budgetmodel.wharton.upenn.edu/issues/2023/11/1/why-irs-funding-scored-as-increasing-deficit

2002, the republican house lets the fiscal responsibility act lapse (spending couldn't exceed tax revenue, on its way to eliminating all federal debt). A CBO 2010 report found that 2003-2009, tax revenue was cut $6T and spending increased $6T, for a $12T gap (compared to a fiscally responsible budget), the first time taxes were cut to not pay for two wars. Sort of a confluence of the Federal Reserve and Too-Big-To-Fail wanting huge federal debt, special interests wanting huge tax cuts, and the military-industrial complex wanting huge spending increases. 2005, the US Comptroller General started including in speeches that nobody in congress was capable of middle school arithmetic (for how badly they were savaging the budget). The following administration managed to lower some annual deficits (mostly some reduction in spending), but tax revenue has yet to be restored.

2009, IRS press said that it was going after $400B owed by 52,000 wealthy Americans on trillions illegally stashed overseas. Then spring 2011, the new speaker of the house said the budget was being cut for the IRS department responsible for recovering the $400B (and fines) from the 52,000 wealthy Americans. After that there was some press about a couple of overseas banks (that facilitated the tax evasion) being fined a few billion ... but nothing about recovering the $400B (and fines).

2018, the administration had more huge tax cuts for large corporations, claiming the money would go for employee bonuses and hiring. The website for the "poster child" corporation for worker bonuses said that workers would receive up to a $1000 bonus. NOTE: if every worker actually received the full $1000 bonus, it would be less than 2% of the tens of billions from its corporate tax cut (the rest going for stock buybacks and executive compensation).

there are jokes about US congress being the most corrupt institution on earth, in large part from the way members of certain house committees are able to collect "donations" from special interests.

fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
tax fraud, tax evasion, tax loopholes, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

lots of posts mentioning congress is most corrupt institution on earth:
https://www.garlic.com/~lynn/2022h.html#79 The GOP wants to cut funding to the IRS. We can't let that happen
https://www.garlic.com/~lynn/2022h.html#5 Elizabeth Warren to Jerome Powell: Just how many jobs do you plan to kill?
https://www.garlic.com/~lynn/2022g.html#78 Legal fights and loopholes could blunt Medicare's new power to control drug prices
https://www.garlic.com/~lynn/2022c.html#44 IRS, Computers, and Tax Code
https://www.garlic.com/~lynn/2021j.html#61 Tax Evasion and the Republican Party
https://www.garlic.com/~lynn/2021i.html#22 The top 1 percent are evading $163 billion a year in taxes, the Treasury finds
https://www.garlic.com/~lynn/2021i.html#13 Companies Lobbying Against Infrastructure Tax Increases Have Avoided Paying Billions in Taxes
https://www.garlic.com/~lynn/2021g.html#54 Republicans Have Taken a Brave Stand in Defense of Tax Cheats
https://www.garlic.com/~lynn/2021f.html#61 Private Inequity: How a Powerful Industry Conquered the Tax System
https://www.garlic.com/~lynn/2021f.html#49 The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax
https://www.garlic.com/~lynn/2021f.html#38 Microsoft's Irish subsidiary paid zero corporation tax on $315bn profit
https://www.garlic.com/~lynn/2021e.html#93 Treasury calls for doubling IRS staff to target tax evasion, crypto transfers
https://www.garlic.com/~lynn/2021e.html#29 US tax plan proposes massive overhaul to audit high earners and corporations for tax evasion
https://www.garlic.com/~lynn/2021e.html#1 Rich Americans Who Were Warned on Taxes Hunt for Ways Around Them
https://www.garlic.com/~lynn/2019e.html#134 12 EU states reject move to expose companies' tax avoidance
https://www.garlic.com/~lynn/2019b.html#65 The Government is Hard at Work Keeping Tax Preparation Complicated and Expensive
https://www.garlic.com/~lynn/2019b.html#55 Most Corrupt Institution on Earth
https://www.garlic.com/~lynn/2019b.html#32 The American Empire Is the Sick Man of the 21st Century
https://www.garlic.com/~lynn/2019.html#86 Trump's tax law threatens charities. The poor will pay
https://www.garlic.com/~lynn/2018e.html#12 Companies buying back their own shares is the only thing keeping the stock market afloat right now
https://www.garlic.com/~lynn/2018c.html#88 The G.O.P. Tax Cut Is Draining the Treasury Even Faster Than Expected
https://www.garlic.com/~lynn/2018c.html#26 DoD watchdog: Air Force failed to effectively manage F-22 modernization
https://www.garlic.com/~lynn/2018b.html#17 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018b.html#7 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017j.html#77 U.S. Corporate Tax Reform
https://www.garlic.com/~lynn/2017j.html#4 Who Is The Smallest Government Spender Since Eisenhower? Would You Believe It's Barack Obama?
https://www.garlic.com/~lynn/2017i.html#71 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017f.html#4 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#2 Single Payer
https://www.garlic.com/~lynn/2017c.html#37 New phone scams
https://www.garlic.com/~lynn/2017b.html#77 Corporate Tax Rate
https://www.garlic.com/~lynn/2017b.html#41 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#97 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#7 Malicious Cyber Activity
https://www.garlic.com/~lynn/2016h.html#103 Minimum Wage
https://www.garlic.com/~lynn/2016f.html#76 GLBA & Glass-Steagall
https://www.garlic.com/~lynn/2016f.html#55 Congress, most corrupt institution on earth
https://www.garlic.com/~lynn/2016d.html#7 Study: Cost of U.S. Regulations Larger Than Germany's Economy
https://www.garlic.com/~lynn/2016c.html#41 Qbasic
https://www.garlic.com/~lynn/2016.html#32 I Feel Old
https://www.garlic.com/~lynn/2016.html#24 1976 vs. 2016?
https://www.garlic.com/~lynn/2016.html#22 I Feel Old
https://www.garlic.com/~lynn/2015h.html#80 Corruption Is as Bad in the US as in Developing Countries
https://www.garlic.com/~lynn/2015h.html#48 Protecting Social Security from the Thieves in the Night
https://www.garlic.com/~lynn/2015f.html#13 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#10 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#96 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#80 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#48 These are the companies abandoning the U.S. to dodge taxes
https://www.garlic.com/~lynn/2015.html#53 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2015.html#52 Report: Tax Evasion, Avoidance Costs United States $100 Billion A Year
https://www.garlic.com/~lynn/2014m.html#1 weird apple trivia
https://www.garlic.com/~lynn/2014l.html#3 HP splits, again
https://www.garlic.com/~lynn/2014j.html#81 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014c.html#50 Broadband pricing
https://www.garlic.com/~lynn/2013k.html#32 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013j.html#78 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013i.html#94 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013i.html#79 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013h.html#55 OT: "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2013h.html#25 'Big four' accountants 'use knowledge of Treasury to help rich avoid tax'
https://www.garlic.com/~lynn/2013g.html#86 How Wall Street Defanged Dodd-Frank
https://www.garlic.com/~lynn/2013g.html#81 Ireland feels the heat from Apple tax row
https://www.garlic.com/~lynn/2013g.html#80 'Big four' accountants 'use knowledge of Treasury to help rich avoid tax'
https://www.garlic.com/~lynn/2013f.html#69 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013e.html#87 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013e.html#70 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012p.html#35 Search Google, 1960:s-style
https://www.garlic.com/~lynn/2012m.html#38 General Mills computer
https://www.garlic.com/~lynn/2012m.html#36 General Mills computer
https://www.garlic.com/~lynn/2012m.html#35 General Mills computer
https://www.garlic.com/~lynn/2012m.html#32 General Mills computer
https://www.garlic.com/~lynn/2012l.html#55 CALCULATORS
https://www.garlic.com/~lynn/2012i.html#86 Should the IBM approach be given a chance to fix the health care system?
https://www.garlic.com/~lynn/2012i.html#41 Lawmakers reworked financial portfolios after talks with Fed, Treasury officials
https://www.garlic.com/~lynn/2012h.html#61 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#33 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#27 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012h.html#14 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#47 How Selecting Voters Randomly Can Lead to Better Elections
https://www.garlic.com/~lynn/2012f.html#61 Zakaria: by itself, Buffett rule is good
https://www.garlic.com/~lynn/2012f.html#17 Let the IRS Do Your Taxes, Really
https://www.garlic.com/~lynn/2012e.html#58 Word Length
https://www.garlic.com/~lynn/2012b.html#0 Happy Challenger Day
https://www.garlic.com/~lynn/2012.html#5 We are on the brink of a historic decision [referring to defence cuts]
https://www.garlic.com/~lynn/2011p.html#137 The High Cost of Failing Artificial Hips
https://www.garlic.com/~lynn/2011o.html#66 Civilization, doomed?
https://www.garlic.com/~lynn/2011o.html#4 The men who crashed the world
https://www.garlic.com/~lynn/2011n.html#80 A Close Look at the Perry Tax Plan
https://www.garlic.com/~lynn/2011m.html#20 Million Corporation march on Washington
https://www.garlic.com/~lynn/2011l.html#68 computer bootlaces
https://www.garlic.com/~lynn/2011k.html#18 What Uncle Warren doesn't mention
https://www.garlic.com/~lynn/2011j.html#18 Congressional Bickering
https://www.garlic.com/~lynn/2011i.html#20 Happy 100th Birthday, IBM!
https://www.garlic.com/~lynn/2011d.html#64 The first personal computer (PC)
https://www.garlic.com/~lynn/2011.html#55 America's Defense Meltdown
https://www.garlic.com/~lynn/2010p.html#53 TCM's Moguls documentary series
https://www.garlic.com/~lynn/2010p.html#16 Rare Apple I computer sells for $216,000 in London
https://www.garlic.com/~lynn/2010p.html#14 Rare Apple I computer sells for $216,000 in London
https://www.garlic.com/~lynn/2010m.html#73 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010l.html#69 Who is Really to Blame for the Financial Crisis?
https://www.garlic.com/~lynn/2010k.html#58 History--automated payroll processing by other than a computer?
https://www.garlic.com/~lynn/2010k.html#36 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010j.html#88 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010f.html#40 F.B.I. Faces New Setback in Computer Overhaul

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe PROFS

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe PROFS
Date: 03 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS

IBM has one of the largest losses in history of US corporations:
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
and was being reorganized into the 13 "baby blues" in preparation for breaking up the company. We had already left IBM but get a call from the bowels of Armonk asking us to help with the breakup. Before we get started, the board brings in the former president of AMEX as CEO, who reverses the breakup. IBM picks up Lotus in 1995 ... while it was still trying to recover

I.B.M. WINS LOTUS AS OFFER IS RAISED ABOVE $3.5 BILLION
https://www.nytimes.com/1995/06/12/us/ibm-wins-lotus-as-offer-is-raised-above-3.5-billion.html

It wasn't just customer execs prejudiced by MVS misinformation (internal execs were drinking the same koolaid).

After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and HONE was a long time customer (back to their CP67 days, through the transition to online sales&marketing support applications, including mainframe orders needing first to be run through HONE applications, US HONE datacenters consolidated in Palo Alto, and HONE clones installed all over the world). The consolidated US HONE was enhanced to eight systems in a "single-system image", loosely-coupled complex with a large disk farm and load-balancing and fall-over support. Then I added multiprocessor support to VM370 release 3 so a 2nd CPU could be added to each system, enabling sixteen 370/168-3 processors.

In the later 70s, a branch manager was promoted to a DPD executive position that had HONE reporting to him, and he was horrified to learn that HONE was VM370 based. He apparently thought he could make his IBM career by directing all of HONE's resources to work on moving HONE to MVS. After about a year, it was apparent it wasn't going to work and the individual was promoted (in parts of IBM, heads rolled uphill). Unfortunately the process was repeated a couple more times. In the early 80s, somebody decided that the reason HONE couldn't be ported to MVS was that it ran my enhanced systems, which could be solved in a 2-step process, where HONE was first directed to convert to an unmodified system (justified by what would happen to IBM sales&marketing if I was hit by a bus), before then moving to MVS.

After the FS implosion, the head of POK also managed to convince corporate to kill the VM370 product, shutdown the VM370 development group, and transfer all the people to POK for MVS/XA (supposedly otherwise MVS/XA wouldn't ship on time; note Endicott eventually manages to save the VM370 product mission for the mid-range, but had to reconstitute a development group from scratch). They weren't going to notify the people until the very last minute, to minimize the number that might escape into the Boston area. The information managed to leak and several managed to escape (including to the new DEC VAX/VMS effort; the joke was that the head of POK was a major contributor to VAX/VMS). Then there was a hunt for the source of the leak; fortunately for me, nobody gave the person up.

Early 80s, there was also a presentation by a POK executive to HONE where he told them that VM370 would no longer run on the newer high-end 370s (supporting the justification that HONE needed to migrate to MVS). HONE raised such an uproar that the executive had to come back and explain that they had misheard what he had said.

Akin to the internal classification of the CERN analysis: when I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files to put up on internal network systems (including HONE systems), the lawyers were concerned about internal employees being exposed to (unfiltered) customer information ... vmshare archives
http://vm.marist.edu/~vmshare

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downfall, breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe DCF

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe DCF
Date: 04 Nov, 2023
Blog: Facebook
CTSS RUNOFF (aka some of the CTSS people went to 5th flr for MULTICS, others went to the science center on 4th flr and did CP40, CP67, CMS, internal network, lots of other stuff)
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
redone in the mid-60s for (CP67/)CMS as "SCRIPT". Then at the science center in 1969, GML was invented (1st letters of the inventors' last names) and GML tag processing added to SCRIPT. SGML history seems to have recently gone 404, but lives on at the wayback machine:
https://web.archive.org/web/20230402213042/http://www.sgmlsource.com/history/index.htm
The roots of SGML
https://web.archive.org/web/20230402212623/http://www.sgmlsource.com/history/roots.htm

after a decade, GML morphs into the ISO standard SGML, and after another decade morphs into HTML at CERN.
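
A toy illustration of that descent, assuming GML starter-set style colon tags (e.g. :h1., :p.) and mapping a few to their HTML descendants; a hypothetical sketch, not any actual SCRIPT/DCF processing code:

  import re

  # hypothetical mapping of a few GML starter-set tags to HTML tags
  def gml_to_html(text):
      # ":h1.Some title" -> "<h1>Some title</h1>"; ":p." opens a paragraph
      def repl(m):
          tag, body = m.group(1), m.group(2)
          return f"<{tag}>{body}</{tag}>" if body else f"<{tag}>"
      return re.sub(r":(h1|h2|p)\.(.*)", repl, text)

  print(gml_to_html(":h1.The roots of SGML"))  # <h1>The roots of SGML</h1>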

Early SCRIPT trivia ... the first IBM mainstream SCRIPT document was the 370 architecture "RED BOOK" (distributed in a dark red 3ring binder) ... a CMS SCRIPT command line option formatted either the full "RED BOOK" or the "Principles of Operation" subset (w/o things like engineering notes, justification, alternatives, etc).

other trivia: an IBM SE in LA in the late 70s had done SCRIPT for the TRS80 (NewScript and Allwrite)

6670 trivia: SJR got a lot of 6670s (IBM Copier3 with computer interface) for placing out in departmental areas. Colored paper was placed in the alternate paper drawer, used for printing the separator page; since that page was mostly blank, the driver was modified to print random entries from the IBM Jargon file (and a couple other sources). Then SJR modified a 6670 for SHERPA/APA6670 (all points addressable)

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
script, gml, sgml, html posts
https://www.garlic.com/~lynn/submain.html#sgml

posts mentioning runoff, script, gml, sgml, "red book"
https://www.garlic.com/~lynn/2023d.html#29 IBM 3278
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2018c.html#15 Old word processors
https://www.garlic.com/~lynn/2013o.html#84 The Mother of All Demos: The 1968 presentation that sparked atech revolutio
https://www.garlic.com/~lynn/2010p.html#60 Daisywheel Question: 192-character Printwheel Types
https://www.garlic.com/~lynn/2009s.html#1 PDP-10s and Unix
https://www.garlic.com/~lynn/2008m.html#90 z/OS Documentation - again
https://www.garlic.com/~lynn/2006h.html#55 History of first use of all-computerized typesetting?
https://www.garlic.com/~lynn/2003k.html#52 dissassembled code
https://www.garlic.com/~lynn/2002h.html#69 history of CMS

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe PROFS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe PROFS
Date: 04 Nov, 2023
Blog: Facebook

https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#75 Vintage Mainframe PROFS

Summer of 1982, I spent much of the summer in Europe teaching classes and making customer calls ... including two weeks in Orleans, mostly for BOIS (Branch Office Information System ... sort of an adjunct to HONE ... the previous decade, I was asked to go along for the first HONE install outside the US, in La Defense, Paris; one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters and HONE was a long time customer). Random email

Date: 09/15/82 09:34:22
From: RESPOND xxxxxx
To: SJRLVM1 wheeler

Thanks for the copy of your trip report, which I'm reading with great interest - especially the section on Orleans - I was involved in the early days of the project, in trying to get them a sensible interface to the European CON/370 network. The odd thing is that BOIS obviously fills a need not catered for by thorough-designed systems like AAS, but the developers of AAS (ie Respond) don't have any intention to add "Information Center" products to their output: they still believe in the cycle of specification, design, coding release and improvement request, in spite of the fact that the cycle can last two years.


... snip ... top of post, old email index

In the latter half of the 80s, lots of VM/370s were appearing on the internal network in the US that were branch office "vmic" machines (branch office "information center")

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe PROFS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe PROFS
Date: 04 Nov, 2023
Blog: Facebook

https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#75 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#77 Vintage Mainframe PROFS

note that the 3272/3277 had .086sec hardware response ... but for the 3274/3278 they moved a lot of the terminal electronics back to the controller (making the terminal cheaper to manufacture), significantly driving up the coax protocol chatter ... driving hardware response to .3-.5secs (depending on the amount of data). This was in the days of human factors/productivity studies showing the need for .25sec response; the 3277 required system response of .164sec (or less) to meet .25sec (and of course, it was impossible to meet the requirement with a 3278). A letter was written to the 3278 product administrator about the 3278 being worse than the 3277 for interactive computing. The response was that the 3278 wasn't for interactive computing, but data entry (i.e. electronic keypunch). MVS/TSO users wouldn't notice the difference since it was a very rare TSO system that had even 1sec system response.
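
The arithmetic behind the .164sec figure (system response plus terminal hardware response must fit inside the .25sec human factors target):

  target   = 0.250   # human factors target: quarter-second response
  t3277    = 0.086   # 3272/3277 hardware response
  t3278min = 0.300   # 3274/3278 hardware response, best case

  print(round(target - t3277, 3))     # 0.164 -> required system response
  print(round(target - t3278min, 3))  # -0.05 -> impossible with a 3278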

In the days of IBM/PC 3270 emulation cards, 3277 cards had 3-4 times the transfer throughput of 3278 cards.

Note the communication group was fiercely fighting off client/server and distributed computing and had severely performance-kneecapped the PS2 microchannel cards (part of trying to preserve their dumb terminal paradigm and install base). The AWD workstation division had done their own 4mbit token-ring card (16bit AT bus) for the PC/RT. However, for the RS/6000 with microchannel, AWD was told they couldn't do their own cards, but had to use PS2 microchannel cards. An example: the PC/RT 4mbit T/R card had higher throughput than the PS2 16mbit T/R microchannel card (the joke was an RS/6000 forced to use PS2 microchannel cards wouldn't have any better throughput than a PS2/486 for many things; it appeared to show that 4mbit T/R was faster than 16mbit T/R and the 16bit AT bus was faster than the 32bit microchannel).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

a few posts mentioning 3272/3277 & 3274/3278 hardware response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2018d.html#32 Walt Doherty - RIP
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput

Late 80s, a senior disk engineer gets a talk scheduled at an internal, world-wide, communication group conference, supposedly on 3174 performance, but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was seeing a drop in disk sales with data fleeing datacenters to more distributed computing platforms. The communication group had a stranglehold on datacenters with their corporate strategic ownership of everything that crossed datacenter walls and was vetoing disk division solutions (in the communication group battle with client/server and distributed computing). It wasn't just disks, and in a couple years, IBM had one of the largest losses in the history of US corporations and was being re-orged into the 13 "baby blues" in preparation for breaking up the company:
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

communication group trying to protect their status quo and datacenter stranglehold
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

We had already left the company, but get a call from the bowels of Armonk asking us to help with the breakup. However, before we could get started, the board brings in the former AMEX president as CEO, who reverses (some of) the breakup (still, it wasn't long before the disk division was gone).

ibm downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe XT/370

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe XT/370
Date: 05 Nov, 2023
Blog: Facebook
I had got the CP67 kernel stripped down for a 256kbyte 360/67 ... and in the transition from CP67->VM370, VM370 was increasingly bloated. They sent me a pre-production XT/370 with 384kbytes of (370) memory and I did a lot of paging tests. The VM370 kernel was bloated, and several of the CMS applications had gotten increasingly memory bloated and increasingly filesystem intensive. The paging and filesystem I/O was aggravated by VM370 I/O being done as a processor-to-processor request to the 8088, which then did each physical I/O to the XT harddisk (taking 100ms) before returning to VM370. I did some work to improve paging and filesystem I/O and somewhat tuned the page replacement algorithm for the constrained environment ... but also showed that 512k of 370 memory made a difference over the 384k. Endicott then blamed me for a 6month product slip while they upgraded XT/370 memory from 384k to 512k. But there were several XT/PC applications (tuned for minimizing filesystem I/O) that still easily outperformed their corresponding CMS applications (which were much more filesystem intensive).
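
A sketch of why the 100ms physical I/O dominated (the I/O count below is a made-up illustration, not a measured figure):

  per_io = 0.100        # seconds per XT harddisk physical I/O (from text)
  print(1 / per_io)     # at most 10 I/Os per second

  ios = 300             # hypothetical page/filesystem I/O count for one run
  print(ios * per_io)   # 30 seconds of pure disk wait for that run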

At the time, I also had the HSDT project, T1 and faster computer links (both terrestrial and satellite). Corporate required that links used for company data had to be encrypted ... 56kbit link encryptors were easily available, but I hated what I had to pay for T1 link encryptors, and higher speed encryptors were really hard to find. I wanted link encryptors capable of 3mbytes/sec (not 1.5mbits/sec) costing no more than $100 to build. Then the corporate security/crypto group blocked it, claiming it severely weakened DES. It took me 3 months to figure out how to explain to them what it was doing: rather than significantly weaker than DES, it was significantly stronger (and could do 3mbytes/sec). It was a hollow victory; they said that only one entity in the world was allowed to use such crypto, and I could make as many as I wanted, but they all had to be sent to them. It was when I realized that there were 3 kinds of crypto: 1) the kind they don't care about, 2) the kind you can't do, 3) the kind that you can do only for them

Along the way, I demoed that one 3081K processor (around 50 times the xt/370 processor) ran (370 software) DES at around 150kbytes/sec ... i.e. it would require both 3081K processors dedicated to perform full-duplex T1 link encryption.
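
The arithmetic behind that claim (T1 line rate taken at the full 1.544mbits/sec; payload framing ignored):

  des_rate = 150e3        # bytes/sec of software DES per 3081K processor
  t1_bytes = 1.544e6 / 8  # ~193K bytes/sec each direction

  full_duplex = 2 * t1_bytes       # encrypt one direction, decrypt the other
  print(full_duplex / des_rate)    # ~2.6 -- i.e. both processors of a
                                   # 3081K (and then some)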

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Things got better with the AT/370 because the AT had a faster hard disk. It was much better when the A74 (7437) workstation shipped: a PS2 with a much faster 370 processor, larger memory, and faster disk.

PC-based 370
https://en.wikipedia.org/wiki/PC-based_IBM-compatible_mainframes

some posts mention xt/370, 384k, at/370, a74 workstation
https://www.garlic.com/~lynn/2018d.html#19 68k, where it went wrong
https://www.garlic.com/~lynn/2017c.html#7 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2013l.html#30 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2013h.html#18 "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2012p.html#8 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2011m.html#64 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
https://www.garlic.com/~lynn/2007j.html#41 z/VM usability
https://www.garlic.com/~lynn/2004m.html#10 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2003h.html#40 IBM system 370

some a74 specific
https://www.garlic.com/~lynn/2015d.html#71 30 yr old email
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2015d.html#8 30 yr old email

three kinds of crypto posts
https://www.garlic.com/~lynn/2023b.html#5 IBM 370
https://www.garlic.com/~lynn/2022g.html#17 Early Internet
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022.html#125 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021e.html#75 WEB Security
https://www.garlic.com/~lynn/2021e.html#58 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#17 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2021b.html#57 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2019e.html#86 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019b.html#100 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019b.html#23 Online Computer Conferencing
https://www.garlic.com/~lynn/2018d.html#33 Online History
https://www.garlic.com/~lynn/2018.html#10 Landline telephone service Disappearing in 20 States
https://www.garlic.com/~lynn/2017g.html#91 IBM Mainframe Ushers in New Era of Data Protection
https://www.garlic.com/~lynn/2017g.html#35 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#58 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017d.html#10 Encryp-xit: Europe will go all in for crypto backdoors in June
https://www.garlic.com/~lynn/2017c.html#69 ComputerWorld Says: Cobol plays major role in U.S. government breaches
https://www.garlic.com/~lynn/2017b.html#44 More on Mannix and the computer
https://www.garlic.com/~lynn/2016h.html#0 Snowden
https://www.garlic.com/~lynn/2016e.html#31 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#40 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2016.html#101 Internal Network, NSFNET, Internet
https://www.garlic.com/~lynn/2015h.html#3 PROFS & GML
https://www.garlic.com/~lynn/2015e.html#2 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015c.html#85 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2014j.html#77 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014e.html#27 TCP/IP Might Have Been Secure From the Start If Not For the NSA
https://www.garlic.com/~lynn/2014e.html#25 Is there any MF shop using AWS service?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#9 NSA seeks to build quantum computer that could crack most types of encryption
https://www.garlic.com/~lynn/2013m.html#10 "NSA foils much internet encryption"
https://www.garlic.com/~lynn/2013i.html#69 The failure of cyber defence - the mindset is against it
https://www.garlic.com/~lynn/2013d.html#1 IBM Mainframe (1980's) on You tube
https://www.garlic.com/~lynn/2011n.html#63 ARPANET's coming out party: when the Internet first took center stage
https://www.garlic.com/~lynn/2011k.html#67 Somewhat off-topic: comp-arch.net cloned, possibly hacked
https://www.garlic.com/~lynn/2010o.html#43 Internet Evolution - Part I: Encryption basics
https://www.garlic.com/~lynn/2009l.html#14 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
https://www.garlic.com/~lynn/2008h.html#87 New test attempt
https://www.garlic.com/~lynn/aadsm23.htm#1 RSA Adaptive Authentication

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe 3081D

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe 3081D
Date: 05 Nov, 2023
Blog: Facebook
When Future System imploded (it was completely different and was to completely replace 370; internal politics during FS was killing off 370 products and the lack of new 370s is credited with giving clone makers their market foothold), there was a mad rush to get new stuff back into the 370 product pipeline, including kicking off the quick&dirty 3033&3081 efforts in parallel. Note the 370/165 had an avg of 2.1 machine cycles per 370 instruction; for the 168 the microcode was optimized to 1.6 machine cycles per 370 instruction. The 3033 started out as 168 logic remapped to 20% faster chips, with microcode further optimized to one machine cycle per 370 instruction ... about 4.5MIPS.
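
The relative speedup implied by those figures (a sketch, not official ratings):

  chip_speedup = 1.20            # 3033 chips 20% faster than 168 chips
  cpi_168, cpi_3033 = 1.6, 1.0   # machine cycles per 370 instruction

  speedup = chip_speedup * cpi_168 / cpi_3033
  print(speedup)                  # 1.92x a 370/168
  print(round(4.5 / speedup, 1))  # ~2.3 MIPS implied for the 168 baseline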

Lots more detail about FS ... including the 3081 with its warmed-over FS technology:
http://www.jfsowa.com/computer/memo125.htm

The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectacularly superior in terms of performance compared to the amount of circuitry.

... snip ...

... supposedly the 3081D was two 5MIP processors ... but several benchmarks showed a single 3081D processor slower than a 3033UP. Then the 3081K came out with twice the cache sizes of the 3081D, claiming 7MIP processors (and benchmarks showed a single 3081K processor slightly better than a 3033UP). However, the single processor Amdahl machine was about the same MIPS as the aggregate of the two processor 3081K, with much higher MVS throughput (since MVS of the era was claiming two processor throughput was around 1.2-1.5 times that of a single processor, aka MVS multiprocessor overhead). trivia: some claim that the massive increase in circuits also motivated TCMs
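
A sketch of that throughput comparison, using the per-processor MIPS claims and the 1.2-1.5 MVS multiprocessor factor cited above:

  k_cpu = 7.0                   # 3081K MIPS per processor (as claimed)
  print(2 * k_cpu)              # 14 MIPS of aggregate hardware

  for mp_factor in (1.2, 1.5):
      print(k_cpu * mp_factor)  # 8.4 .. 10.5 MIPS effective MVS throughput
                                # from the two 3081K processors
  # vs a single-processor Amdahl at roughly the same ~14 MIPS aggregate,
  # with no multiprocessor software overhead at all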

... for other drift; the end of ACS/360 ... supposedly IBM executives killed ACS/360 because they were afraid it would advance technology too fast and IBM would lose control of the market; the following also lists ACS/360 features that show up more than 20yrs later with ES/9000.
https://people.cs.clemson.edu/~mark/acs_end.html
more
https://people.computing.clemson.edu/~mark/acs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

some service processor and thermal modules:
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022c.html#107 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017c.html#50 Mainframes after Future System
https://www.garlic.com/~lynn/2014.html#31 Hardware failures (was Re: Scary Sysprogs ...)
https://www.garlic.com/~lynn/2011m.html#21 Supervisory Processors
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2009b.html#77 Z11 - Water cooling?
https://www.garlic.com/~lynn/2004p.html#41 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#37 IBM 3614 and 3624 ATM's

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe 3081D

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe 3081D
Date: 05 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#80 Vintage Mainframe 3081D

... for the 3084 with four processors (an Amdahl two processor machine had better MIPS than the 3084 four processor), there was lots of work on cache line sensitivity ... storage used by a thread aligned on cache lines, and storage used by different threads kept out of the same cache line (i.e. avoiding the end of one thread's storage sharing a cache line with the start of another's). Cache line thrashing can increase by a factor of three when going from two processors to four.
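
A minimal sketch of that alignment discipline: pad each thread's storage to whole cache lines so no two threads share a line. The line size below is a placeholder, not the 3084's actual line size:

  LINE = 128   # bytes; hypothetical cache line size

  def padded(nbytes, line=LINE):
      """Round a storage size up to a whole number of cache lines."""
      return (nbytes + line - 1) // line * line

  # per-thread work areas laid out back to back: padding guarantees the
  # end of one area never shares a cache line with the start of the next
  sizes = [200, 72, 130]
  offsets, cursor = [], 0
  for s in sizes:
      offsets.append(cursor)
      cursor += padded(s)
  print(offsets)   # [0, 256, 384] -- every area starts on a line boundary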

I've repeated a number of times that when FS imploded, I got sucked into helping with a 16 processor tightly-coupled implementation and had con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite son operating system (MVS) had effective 16-way support (POK doesn't ship a 16-way system until after the turn of the century). Then the head of POK directed some of us to never visit POK again ... and told the 3033 processor engineers to keep their heads down and not be distracted.

smp posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe OSI

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe OSI
Date: 05 Nov, 2023
Blog: Facebook
Starting in the early 80s, I had the HSDT project, T1 and faster computer links (both satellite and terrestrial) ... and lots of battles with the communication group, whose products were capped at 56kbit links. The communication group was fiercely fighting off client/server and distributed computing and blocking release of mainframe TCP/IP ... but apparently some influential customers got that reversed. Then the communication group changed tactics and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. I then did RFC1044 support and, in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of the 4341 processor (something like 500 times improvement in bytes moved per instruction executed). Note in the early 90s, the communication group hired a silicon valley contractor to implement TCP/IP directly in VTAM. When he demonstrated TCP/IP running much faster than LU6.2, he was told that everybody "knows" a "proper" TCP/IP implementation runs much slower than LU6.2, and they would only be paying for a "proper" implementation.
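
The "bytes moved per instruction executed" metric, sketched with the throughput figures above; the MIPS ratings and CPU fractions below are loudly hypothetical, so only the shape of the calculation (not the exact ratio) should be taken from it:

  def bytes_per_instruction(tput, cpu_fraction, mips):
      """bytes moved per second / instructions burned per second."""
      return tput / (cpu_fraction * mips * 1e6)

  # shipped product: 44kbytes/sec using nearly a whole 3090 processor
  base = bytes_per_instruction(44e3, 1.0, 15.0)  # 15 MIPS is an assumption
  # RFC1044 path: ~1mbyte/sec sustained channel, modest 4341 CPU
  rfc  = bytes_per_instruction(1e6, 0.4, 1.2)    # both values assumed

  print(round(rfc / base))   # ~700x with these assumed numbers -- same
                             # ballpark as the ~500x cited above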

In the mid-80s, the communication group also produced an analysis that customers weren't looking for T1 support until sometime in the 90s. They showed the number of "fat pipes" (multiple parallel 56kbit links treated as a single logical link) dropped to zero by the time there were six or seven parallel links. What they didn't know (or didn't want to use) was that the telco tariff for a T1 link was about the same as for 5 or 6 56kbit links. We did a trivial survey that found 200 customers that had just skipped to full T1s and switched to non-IBM controllers.
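
The tariff arithmetic (T1 at 1.544mbits/sec; the tariff ratio is the 5-6 links figure from above):

  t1_vs_56k   = 1.544e6 / 56e3    # ~27.6 -- T1 capacity in 56kbit links
  tariff_cost = 5.5               # T1 priced like 5-6 56kbit links

  print(round(t1_vs_56k, 1))        # ~27.6x the capacity
  print(round(t1_vs_56k / tariff_cost, 1))  # ~5x capacity per tariff dollar
  # so once a customer needed 5-6 parallel 56kbit links, a full T1 cost
  # about the same and carried over 20x more -- hence "fat pipes" dropping
  # to zero at six or seven links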

I was asked to be on Greg Chesson's XTP Technical Advisory Board, which the communication group fought (& lost). There were a number of gov. operations involved that believed they needed standardization, so we brought XTP (as HSP, high-speed protocol) to the ISO chartered US ANSI X3S3.3 for standardization. After a while, we were told that ISO rules were that they could only standardize protocols that conformed to the OSI model. XTP/HSP violated the OSI model because it 1) supported an internetworking protocol, a non-existent layer between OSI transport and network, 2) skipped the transport/network interface, and 3) went directly to the LAN MAC interface, which doesn't exist in the OSI model (sitting somewhere in the middle of the network layer). There was a joke that ISO could pass a standard that was not possible to implement, while IETF required at least interoperable implementations before progressing internet standards.

trivia: I had a PC/RT with megapel display in a non-IBM booth at Interop88, center court at right angles to the SUN booth that had Case with SNMP ... con'ed Case into coming over and installing SNMP on the PC/RT. I was somewhat dismayed at the number of OSI booths at Interop88.

internet trivia: we had been working with the NSF director and were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happened, and finally NSF releases an RFP (in part based on what we already had running) ... Preliminary Announcement (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

--
virtualization experience starting Jan1968, online at home since Mar1970

360 CARD IPL

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 CARD IPL
Date: 06 Nov, 2023
Blog: Facebook
I take a two credit hr intro to fortran/computers; the univ has a 709/1401 (709 tape->tape with 1401 unit record front end), student jobs run in less than a second. At the end of the semester I was hired to rewrite 1401 MPIO for the 360/30 (temporarily replacing the 1401, pending arrival of a 360/67). The univ. shutdown the datacenter for weekends and I had the whole datacenter dedicated (although Monday classes could be difficult after 48hrs w/o sleep). I was given a lot of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc., and within a few weeks I had a 2000 card assembler program: assemble, slap the BPS loader on the front of the TXT deck and IPL it (card deck). I then did an assembler option that either assembled the stand-alone version or one with OS/360 macros; the stand-alone took 30mins to assemble (os/360 assembler) ... while the OS/360 version took an hour to assemble (OS/360 DCB macros ... 5-6 mins per).

Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65 w/os360). Student fortran jobs ran under a second on the 709, but initially w/os360 ran over a minute. I install HASP, which cuts the time in half. I then redo STAGE2 SYSGEN so it 1) can run in the production job stream (rather than the starter system) and 2) re-orders statements to place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs ... never got better than the 709 until I install WATFOR.
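
The stated reductions are self-consistent; back-computing from the 12.9sec endpoint (the only assumption is working the two cuts backwards):

final = 12.9                      # secs, after the SYSGEN dataset/PDS placement re-org
before_sysgen = final * 3         # the re-org cut 2/3rds, so it started at 3x
before_hasp = before_sysgen * 2   # HASP had cut the time in half
print(f"{before_hasp:.1f}s -> {before_sysgen:.1f}s -> {final}s")
# 77.4s -> 38.7s -> 12.9s, consistent with "ran over a minute" before HASP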

Along the way, people came out from the science center to install CP67 (3rd installation after Cambridge itself and MIT Lincoln Labs) and I mostly played with it during my weekend time. At that time, CP67 was assembled under OS/360, the TXT decks punched, arranged in a card tray behind the BPS loader, and the whole tray of cards IPL'ed; it would invoke CPINIT, which would write the storage image to disk for IPL. Note: as individual modules were assembled, I would take each TXT deck, do a diagonal stripe across the top of the deck with a colored marker and write the module name, before placing it in the tray in order. Then later, when individual modules were changed and re-assembled, it was easy to find the module cards to be replaced.

After a year or so, Cambridge moved all the source to CMS for assembly ... the TXT files (behind the BPS loader) were either punched to a virtual reader for IPL or written to a tape for IPL. I rewrite lots of CP67 ... initially to improve OS/360 running in a virtual machine; an OS360 test stream of FORTGCLG student jobs runs 322secs stand-alone, initially 856secs under CP67 (534secs of CP67 CPU) ... I get CP67 CPU down to 113secs. I then redo disk I/O, adding ordered seek queuing and chained page requests (instead of a separate I/O for each 4k page) for the same disk cyl and for the 2301 fixed head drum (which would peak at about 70 4k transfers/sec unchained; chaining could get it close to the channel transfer rate at 270 4k transfers/sec).
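
A minimal, assumption-laden model of why chaining helped the 2301: a separate I/O per 4k page pays rotational latency every time, while one chained channel program services queued pages in rotational order, back to back. Revolution time and overhead below are assumptions; only the 70/sec and 270/sec endpoints come from the text.

PAGE_XFER_MS = 3.4      # ~4k bytes at the 2301's ~1.2 mbyte/sec rate (assumed)
REV_MS = 17.5           # assumed ms per drum revolution
OVERHEAD_MS = 2.0       # assumed per-I/O software/channel overhead

# one I/O per page: pay average half-revolution latency every time
unchained = 1000 / (REV_MS / 2 + PAGE_XFER_MS + OVERHEAD_MS)
# chained: queued pages serviced in rotational order, back to back
chained = 0.9 * (1000 / PAGE_XFER_MS)     # ~10% assumed gap/rotational loss
print(f"separate I/Os: ~{unchained:.0f}/sec; chained: ~{chained:.0f}/sec")
# ~71/sec vs ~265/sec, bracketing the 70 and 270 figures above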

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter is the largest in the world, a couple hundred million in IBM gear, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll, although they enlarge it for a 360/67 for me to play with when I'm not doing other stuff.

One of the things I do at Boeing is a CP67 pageable kernel ... breaking various (low usage) things up into 4k chunks that can be paged in/out ... which hits the BPS loader limit of 255 ESD entries ... and I spend a bit of time doing hacks to keep the number of ESD entries at 255. After graduating and joining the science center ... I find a source copy of the BPS loader in a card cabinet in the 545 tech square attic ... which I modify to handle more than 255 ESD entries. While lots of my CP67 changes were shipped to customers ... the pageable kernel doesn't ship until the morph of CP67->VM370.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts that mention 709, 1401, MPIO, 360/67, WATFOR, and Boeing CFO
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

posts that mention BPS loader, IPL and pageable kernel
https://www.garlic.com/~lynn/2017e.html#32 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2012d.html#18 Memory versus processor speed
https://www.garlic.com/~lynn/2011g.html#63 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2010h.html#42 IBM 029 service manual
https://www.garlic.com/~lynn/2008s.html#56 Computer History Museum
https://www.garlic.com/~lynn/2007n.html#57 IBM System/360 DOS still going strong as Z/VSE
https://www.garlic.com/~lynn/2006v.html#5 Why these original FORTRAN quirks?
https://www.garlic.com/~lynn/2006.html#40 All Good Things
https://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 7 Nov, 2023
Blog: Facebook
FAA ATC, The Brawl in IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514

Two mid air collisions 1956 and 1960 make this FAA procurement special. The computer selected will be in the critical loop of making sure that there are no more mid-air collisions. Many in IBM want to not bid. A marketing manager with but 7 years in IBM and less than one year as a manager is the proposal manager. IBM is in midstep in coming up with the new line of computers - the 360. Chaos sucks into the fray many executives- especially the next chairman, and also the IBM president. A fire house in Poughkeepsie N Y is home to the technical and marketing team for 60 very cold and long days. Finance and legal get into the fray after that.

... snip ...

Executive Qualities
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794

After 20 years in IBM, 7 as a divisional Vice President, Joe Fox had his standard management presentation -to IBM and CIA groups - published in 1976 -entitled EXECUTIVE QUALITIES. It had 9 printings and was translated into Spanish -and has been offered continuously for sale as a used book on Amazon.com. It is now reprinted -verbatim- and available from Createspace, Inc - for $15 per copy. The book presents a total of 22 traits and qualities and their role in real life situations- and their resolution- encountered during Mr. Fox's 20 years with IBM and with major computer customers, both government and commercial. The presentation and the book followed a focus and use of quotations to Identify and characterize the role of the traits and qualities. Over 400 quotations enliven the text - and synthesize many complex ideas.

... snip ...

When we were doing HA/CMP,
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
we were asked to review the latest IBM FAA modernization project ... and spent some time with the technical assistant to the FSD president, who was spending 2nd shift coding for the project. After HA/CMP scale-up was transferred (for announce as IBM supercomputer) and we were told we couldn't work on anything with more than four processors, we leave IBM. We didn't know Fox at IBM, but he had left IBM with some other FSD people and started a company that we would do a project with.

the late 80s IBM ATC modernization effort had been captured by hardware engineers, who spec'ed that the software didn't have to account for errors since triple-redundant hardware would mask any errors/failures. New software was well in progress before external reviews pointed out the need to account for human mistakes/errors ... and the effort needed a reboot. The TA to the FSD Pres was one of the people thrown into the reboot breach.

have email from end of Jan1992 that, after our HA/CMP scale-up presentations, FSD had notified the Kingston supercomputer group that FSD was making HA/CMP their strategic offering. A day or two later, we are told that scale-up is transferred (to Kingston) and we aren't allowed to work on anything with more than 4 processors. Earlier in Jan1992, we had a meeting with Oracle in the CEO's conference room; AWD VP Hester told Ellison that we would have 16-processor scale-up by mid-92 and 128-processor scale-up by ye-92.

Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific only
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2

the next(?) chairman trying to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
ibm downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some past posts mention IBM's FAA ATC
https://www.garlic.com/~lynn/2023d.html#82 Taligent and Pink
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022b.html#97 IBM 9020
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019b.html#88 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2019b.html#73 The Brawl in IBM 1964

--
virtualization experience starting Jan1968, online at home since Mar1970

Take aways from the tense testimony of Eric Trump and Donald Trump Jr. in the New York fraud case

From: Lynn Wheeler <lynn@garlic.com>
Subject: Take aways from the tense testimony of Eric Trump and Donald Trump Jr. in the New York fraud case
Date: 7 Nov, 2023
Blog: Facebook
Take aways from the tense testimony of Eric Trump and Donald Trump Jr. in the New York fraud case
https://www.cnn.com/2023/11/02/politics/takeaways-eric-donald-trump-fraud-trial/index.html
Trump's Sons Cast the Blame for Fraud on Their Company's Accountants
https://www.nytimes.com/2023/11/02/nyregion/eric-trump-donald-jr-fraud-trial.html

After the turn of the century, rhetoric on the floor of Congress was that Sarbanes-Oxley would prevent future ENRONs and guarantee that auditors and executives that sign fraudulent financial reports do jail time (it only required that they signed the reports, didn't require that they knew what they were doing).
https://en.wikipedia.org/wiki/Sarbanes%E2%80%93Oxley_Act

However, it required SEC to do something. Possibly because GAO didn't believe SEC was doing anything, it started doing reports of fraudulent financial reports, even showing that they increased after SOX went into effect (and nobody was doing jail time).
http://www.gao.gov/products/GAO-03-138
http://www.gao.gov/products/GAO-06-678
http://www.gao.gov/products/GAO-06-1053R

There was some joke that congress just felt badly that one of the audit houses went out of business, and SOX was a gift designed to increase the auditing business (and wasn't intended to really change things).

EU has an annual financial conference of corporate CEOs and exchange executives; the 2004 subject was SOX requirements leaking into Europe and I was invited to attend.

ENRON posts
https://www.garlic.com/~lynn/submisc.html#enron
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
Fraudulent Financial Report posts
https://www.garlic.com/~lynn/submisc.html#financial.reporting.fraud

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 8 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#84 FAA ATC, The Brawl in IBM 1964

trivia (IBM 3083 faa/atc proposal): originally 308x was to be multiprocessor only (no plan for a single processor 3083). However, ACP/TPF didn't have multiprocessor support and IBM was concerned that the whole market would go Amdahl (Amdahl had a new single processor that was about the same MIPS as the aggregate of the two processor 3081). Eventually they removed one of the 3081 processors for the 3083 (note: the 2nd processor was in the middle of the box, and there was concern that simply removing it would leave the box top heavy and prone to tipping over ... so they had to rewire the box so the remaining 3083 processor was in the middle).

recent 3081D post
https://www.garlic.com/~lynn/2023f.html#80 Vintage Mainframe 3081D

smp, multiprocessor, loosely-coupled processor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some 3083 and ACP/TPF posts
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023c.html#77 IBM Big Blue, True Blue, Bleed Blue
https://www.garlic.com/~lynn/2023b.html#98 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022d.html#31 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2021j.html#66 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021b.html#23 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#74 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

FAA ATC, The Brawl in IBM 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: FAA ATC, The Brawl in IBM 1964
Date: 9 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#84 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023f.html#86 FAA ATC, The Brawl in IBM 1964

other trivia: pending availability of the (single processor) 3083, there were some unnatural things done to VM/SP to improve throughput of ACP/TPF running in a single processor virtual machine (mostly for ACP/TPF on 3081), which degraded throughput for nearly every other VM/370 multiprocessor customer. I got called into a long-time gov. customer dating back to CP67 days (one I didn't even know about until after graduating and joining IBM, when I was asked to teach computer & security classes for them; it was also the period when IBM had got a new CSO from gov. service, previously head of the presidential detail, and I was to run around with him and talk about computer security).

IBM previously had done some 3270 response hacks to try and mask the multiprocessor degradation ... however, this particular customer mostly had large numbers of high speed ASCII "glass teletypes". I did a CMS terminal hack that improved response for all interactive users (regardless of terminal type). Also, one of the problems in the morph of CP67->VM370 was that they changed scheduling decisions from real device type to virtual device type. This was fine as long as the virtual and real device types were similar ... but over time there were increasing mismatches. I provided Endicott the CP67 implementation for VM370. (Also note: after the implosion of FS, the head of POK convinced corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA, or supposedly MVS/XA wouldn't ship on time; Endicott eventually managed to save the VM370 product mission, but had to recreate the development group from scratch.)

the gov. agency was also very active in SHARE and on VMSHARE (TYMSHARE started offering their CMS-based online computer conferencing system free to SHARE in AUG1976 as VMSHARE) ... VMSHARE archives here (SHARE installation codes were normally an acronym for the organization, but they chose "CAD").
http://vm.marist.edu/~vmshare

smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

some archived posts (with old email) about fixes for the gov customer
https://www.garlic.com/~lynn/2001f.html#57
https://www.garlic.com/~lynn/2003c.html#35
older ref
https://www.garlic.com/~lynn/2006y.html#0

old posts mentioning new IBM CSO & presidential detail
https://www.garlic.com/~lynn/2022h.html#75 Researchers found security pitfalls in IBM's cloud infrastructure
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021j.html#37 IBM Confidential
https://www.garlic.com/~lynn/2021e.html#57 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#84 Bizarre Career Events
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2017g.html#75 Running unsupported is dangerous was Re: AW: Re: LE strikes again

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 709

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 709
Date: 9 Nov, 2023
Blog: Facebook
I took a 2 credit hr intro fortran/computers class ... the univ had a 709 (tape->tape) with a 1401 unit record front end (tapes moved between 709/1401). The univ. had been sold a 360/67 for TSS/360 ... and by the end of the semester, the 1401 was temporarily replaced with a 360/30 (pending arrival of the 360/67) and I was hired to rewrite 1401 MPIO for the 360/30 (the 360/30 had 1401 emulation and could have continued to run MPIO; I assume I was just part of getting 360 experience). The univ. shutdown the datacenter on the weekends and I had the whole place dedicated (although 48hrs w/o sleep made Monday classes hard). I was given lots of hardware & software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, storage management, error recovery/retry, etc ... and within a few weeks had a 2000 card assembler program. Lots of practice loading test job streams to tapes, moving tapes to the 709, running tape->tape, moving tapes back to the 360/30 and printing/punching ... and verifying the results.

Within a year of taking the intro class, the 360/67 arrived (replacing the 709&360/30) and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65 with os/360). Later some people came out from the science center to install CP/67 (3rd installation after Cambridge itself and MIT Lincoln Labs) and I mostly played with it during my dedicated weekend time. I initially rewrite a lot of CP67 to cut the overhead of running OS/360 in a virtual machine. The initial OS/360 test stream ran in 322sec "stand-alone" and originally 856sec w/CP67 ... CP67 CPU 534sec. After a few months, the OS/360 test stream w/CP67 was 435secs ... CP67 CPU 113secs (improvement/reduction 534-113=421secs).

A decade ago, I was asked if I could track down the decision to add virtual memory to 370s. I found a staff member to the executive making the decision. Basically MVT storage management was so bad that regions had to be specified four times larger than used ... so a typical 1mbyte 370/165 would only run four regions concurrently, insufficient to keep the system busy and justified. Going to virtual memory allowed the number of concurrently running regions to be increased by a factor of four with little or no paging.
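
The arithmetic, sketched (the 256kbyte region specification is an illustrative assumption; the 4x padding factor and 1mbyte real storage come from the text):

real_kb = 1024            # 1mbyte 370/165 real storage
region_spec_kb = 256      # illustrative padded region specification
touched_kb = region_spec_kb // 4    # only ~1/4 actually referenced

print(real_kb // region_spec_kb)    # 4 regions under real-storage MVT
print(real_kb // touched_kb)        # ~16 regions' working sets still fit

Since each region only touches about a quarter of its padded specification, sixteen regions' actually-used pages still fit in the same 1mbyte of real storage, which is the "factor of four with little or no paging".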

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning 709, 1401, mpio, 360/30, 360/67, fortran, and watfor
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#29 Univ. Maryland 7094
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2015b.html#15 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2015.html#51 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2013h.html#4 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 709

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 709
Date: 9 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#88 Vintage IBM 709

various trivia:

when I 1st joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters (online sales&marketing support HONE was a long time customer; I was even asked to do a couple of the first non-US, overseas HONE installations). Then the 370/195 group cons me into helping with multi-threading the machine (the 195 was pipelined but had no branch prediction or speculative execution, so conditional branches drained the pipeline; most codes ran at half 195 throughput). With the virtual memory decision, it was decided it was too hard to add to the 195, and further 195 work was dropped. ACS/360 (canceled when executives were afraid it would advance the state-of-the-art too fast and IBM would lose control of the market) ... talks about multithreading (also ACS/360 features that show up more than 20yrs later in ES/9000)
https://people.cs.clemson.edu/~mark/acs_end.html

most of the models had virtual memory nearing completion or already completed when the 370/165 started whining that the virtual memory announcement would have to slip 6 months if they had to implement the full architecture ... eventually it was decided to retreat to the 165 subset (and the other models, and some software, had to drop back to the subset).

archived posts with pieces of email about the virtual memory decision ... also referencing that VS2/MVS was to be the "glide path" to Future System
https://www.garlic.com/~lynn/2011d.html#73

FS was going to completely replace 370 and was totally different (internal politics was killing 370 efforts, and the lack of new 370s is credited with giving clone 370 makers their market foothold). I continued to work on 370 all during FS and would periodically ridicule what they were doing (which wasn't exactly a career enhancing activity). When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081 in parallel
http://www.jfsowa.com/computer/memo125.htm

one of the final nails in the FS coffin was a Houston Science Center analysis that 195 applications ported to an FS machine made out of the fastest available technology would have the throughput of a 370/145 (about a 30 times slowdown)

Les Comeau had transferred from the science center to G'burg by the time I joined IBM ... and during FS "owned" one of the 13 (or 14?) sections. My (future) wife reported to Les and has commented that in FS meetings, people in other sections had lots of blue sky ideas ... but many had no idea how the features might actually be implemented. Les' '82 SEAS CP/40 presentation
https://www.garlic.com/~lynn/cp40seas1982.txt
other virtual memory (& virtual machine) history
http://www.leeandmelindavarian.com/Melinda#VMHist

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning acs/360 and multi-threading
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#12 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2019d.html#62 IBM 370/195
https://www.garlic.com/~lynn/2019.html#62 instruction clock speed
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2017h.html#96 computer component reliability, 1951
https://www.garlic.com/~lynn/2017g.html#39 360/95
https://www.garlic.com/~lynn/2017c.html#26 Multitasking, together with OS operations
https://www.garlic.com/~lynn/2017.html#85 The ICL 2900
https://www.garlic.com/~lynn/2017.html#3 Is multiprocessing better then multithreading?
https://www.garlic.com/~lynn/2015c.html#26 OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainframe - Forbes
https://www.garlic.com/~lynn/2014m.html#164 Slushware
https://www.garlic.com/~lynn/2014g.html#11 DEC Technical Journal on Bitsavers

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM HASP

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM HASP
Date: 10 Nov, 2023
Blog: Facebook
I had taken a two credit hr intro to computers/fortran class (the univ had been sold a 360/67 for tss/360 to replace the 709/1401; temporarily they got a 360/30 to replace the 1401, pending arrival of the 360/67). At the end of the semester, the univ. hired me to rewrite 1401 MPIO (unit record frontend for the 709) for the 360/30 (the m30 had 1401 emulation so could have continued to run MPIO, but apparently it was part of getting 360 experience). I was given a bunch of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The univ. shutdown the datacenter over the weekend, and I had the place dedicated (although 48hrs w/o sleep made Monday classes difficult). After a few weeks I had a 2000 card assembler program. I also quickly learned that coming in Sat. morning, the 1st thing I do is clean all the tape drives; disassemble the 2540 reader/punch, clean, reassemble; clean the 1403. Sometimes production had finished early and the datacenter is dark when I come in Sat. morning. Periodically the 360/30 hangs during power-on ... and with some trial&error, I learn to put all the controllers in CE-mode, power-on individual controllers, power-on the 360/30, then take all the controllers out of CE-mode.

Within a year of taking the intro class, the 360/67 had arrived and the univ. hires me fulltime responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65 with os/360). The 709 tape->tape did student fortran jobs in less than a second. Initially w/os360, student FORTGCLG ran over a minute. I install HASP, cutting the time in half. I then start redoing STAGE2 SYSGEN: 1) enabling it to be run in the production jobstream and 2) placing datasets and PDS members for optimized arm seek and (PDS directory) multi-track search, cutting the time by another 2/3rds to 12.9secs (3 step FORTGCLG). OS/360 never got better than the 709 for student fortran until I install Univ. of Waterloo WATFOR.

Come MVT18, I added terminal (2741&TTY/ASCII) support and an editor (that implemented the CP67/CMS EDITOR syntax/function) to HASP for our own CRJE.

A decade ago, I was asked if I could track down the decision to add virtual memory to all 370s. I found a staff member to the executive making the decision. Basically, MVT storage management was so bad that regions frequently needed to be specified four times larger than used; as a result, a 1mbyte 370/165 typically would only run four concurrent regions at a time, insufficient to keep the 165 busy and justified. Going to 16mbyte virtual memory could increase the number of regions by a factor of four with little or no paging. Old archived post with pieces of the email exchange (including some HASP/spooling)
https://www.garlic.com/~lynn/2011d.html#73

other trivia: my wife joined the gburg JES group and was one of the ASP/JES3 catchers and co-author of the JESUS (JES Unified System) specification, all the features of JES2&JES3 that the respective customers couldn't live without ... for whatever reason, it never came to fruition.

Song at SHARE HASP sing-along (I was there when it was 1st performed)
http://www.mxg.com/thebuttonman/boney.asp

HASP/JES2, ASP/JES3, NJI/NJE, etc posts
https://www.garlic.com/~lynn/submain.html#hasp

posts mentioning adding terminal support and editor to HASP
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023b.html#24 IBM HASP (& 2780 terminal)
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2018b.html#94 Old word processors
https://www.garlic.com/~lynn/2013o.html#87 The Mother of All Demos: The 1968 presentation that sparked a tech revolution
https://www.garlic.com/~lynn/2013l.html#20 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013l.html#18 A Brief History of Cloud Computing
https://www.garlic.com/~lynn/2012f.html#11 Word Length
https://www.garlic.com/~lynn/2011f.html#73 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2009n.html#42 DARPA, at least, has a clue (maybe, sometimes)
https://www.garlic.com/~lynn/2007j.html#78 IBM 360 Model 20 Questions
https://www.garlic.com/~lynn/2007g.html#43 Wylbur and CRBE
https://www.garlic.com/~lynn/2007g.html#14 ISPF not productive
https://www.garlic.com/~lynn/2006o.html#3 MTS, Emacs, and... WYLBUR?
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005p.html#37 CRJE and CRBE
https://www.garlic.com/~lynn/2005n.html#45 Anyone know whether VM/370 EDGAR is still available anywhere?
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2004c.html#27 Moribund TSO/E
https://www.garlic.com/~lynn/2004c.html#26 Moribund TSO/E
https://www.garlic.com/~lynn/2003g.html#64 UT200 (CDC RJE) Software for TOPS-10?
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2001n.html#60 CMS FILEDEF DISK and CONCAT
https://www.garlic.com/~lynn/93.html#2 360/67, was Re: IBM's Project F/S ?

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 3101

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 3101
Date: 10 Nov, 2023
Blog: Facebook
we got some early "topaz" machines (i.e. straight glass teletype, the development 3101) ... then we 1st got ROMs to burn new EEPROMs and then some new boards with block-mode support for the topaz machines. They could run as a straight glass teletype or in "block-mode" with host support that did extra optimization (reducing line traffic/delay). Archived post
https://www.garlic.com/~lynn/2006y.html#4
with email
https://www.garlic.com/~lynn/2006y.html#email800311
https://www.garlic.com/~lynn/2006y.html#email800312
https://www.garlic.com/~lynn/2006y.html#email800314

the archived post also mentions that as undergraduates in the 60s, we built a clone 360 terminal controller. Had mainframe code that could recognize terminal type and switch the port scanner type for each line ... but IBM had taken a short-cut and hardwired the line-speed for each line. I wanted to have a single dialup number for all terminal types ("hunt group") and that wouldn't work for 1052, 2741 *and* TTY/ascii. We built a channel interface board for an Interdata/3 programmed to emulate the IBM 360 terminal controller, with the addition that it could do dynamic line speed. Later it was upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces ... four of us get written up for (some part of) the IBM clone controller business (initially sold by Interdata and then by Perkin-Elmer).
https://en.wikipedia.org/wiki/Interdata

Interdata, Inc., was a computer company, founded in 1966 by a former Electronic Associates engineer, Daniel Sinnott, and was based in Oceanport, New Jersey. The company produced a line of 16- and 32-bit minicomputers that were loosely based on the IBM 360 instruction set architecture but at a cheaper price.[2] In 1974, it produced one of the first 32-bit minicomputers,[3] the Interdata 7/32. The company then used the parallel processing approach, where multiple tasks were performed at the same time, making real-time computing a reality.[4]

... snip ...

Perkin-Elmer
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
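
A conceptual sketch of the dynamic line speed idea described above (entirely hypothetical Python standing in for the Interdata channel-interface firmware; terminal speeds illustrative): sample the first character arriving on a hunt-group line at each candidate (terminal type, speed) and keep the configuration that decodes cleanly.

CANDIDATES = [
    ("2741", 134.5),   # Selectric-based terminals (speeds illustrative)
    ("1052", 134.5),
    ("TTY",  110.0),   # ASCII teletypes
]

def decodes_cleanly(line, term_type, speed):
    # Hypothetical probe: real hardware would clock the line at 'speed',
    # decode one character in that terminal's code, and check
    # framing/parity. Here 'line' is a toy dict for illustration.
    return line.get("speed") == speed and line.get("code") == term_type

def autodetect(line):
    # Try each (terminal type, line speed) until one decodes cleanly,
    # then switch the port scanner for this line to that type.
    for term_type, speed in CANDIDATES:
        if decodes_cleanly(line, term_type, speed):
            return term_type, speed
    return None   # keep sampling until something fits

print(autodetect({"speed": 110.0, "code": "TTY"}))   # ('TTY', 110.0)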

posts mentioning clone controller
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 11 Nov, 2023
Blog: Facebook
Cambridge Science Center did virtual machines, lots of performance work, the internal network, ported APL\360 to CP67 as CMS\APL (redoing workspaces from 16kbyte swap to large demand page virtual memory), invented GML in 1969 and added GML tag processing to CMS SCRIPT (a decade later it morphs into ISO standard SGML, and after another decade morphs into HTML at CERN).
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

One of my hobbies after joining the science center was enhanced production operating systems for internal datacenters (and the online sales&marketing support HONE system was a long time customer). In the morph of CP67->VM370, a lot of features were dropped and/or simplified, and I spent some of 1974 adding them back in ... transitioning from internal CP67 to VM370 and installing R2-based CSC/VM at HONE in 1974.

One of the science center co-workers developed an analytical system model implemented in APL. It was made available on the HONE system as the Performance Predictor: branch people could enter customer workload & configuration information and ask "what if" questions about proposed changes. Later, when the US HONE systems were consolidated in Palo Alto and grew into eight large systems in a single-system-image, loosely-coupled complex with fall-over and load-balancing ... a version of the Performance Predictor was used for making the load-balancing decisions. I then add CP67 multiprocessor support into R3-based CSC/VM, initially for HONE so they could add a 2nd processor to each of their systems.

The 23Jun1969 unbundling announcement started to charge for (application) software (they managed to make the case that operating system/kernel software was still free). After Future System (early 70s, completely different and going to completely replace 370s; the lack of new 370s is credited with giving clone 370 makers their market foothold) imploded ... there was a mad rush to get stuff back into the 370 product pipelines (including kicking off quick&dirty 3033&3081 in parallel).
http://www.jfsowa.com/computer/memo125.htm

The rise of clone systems likely motivated the decision to transition to charging for kernel software, starting with charging for incremental kernel add-ons (direct hardware support initially still free) ... and some of my stuff was selected as the guinea pig for the initial charged-for kernel software (it initially had the kernel re-org for multiprocessor, but not the multiprocessor support itself, which caused a problem later when they wanted to release multiprocessor support for free, but it was dependent on my charged-for software).

I had previously done automated benchmarking with simulated workload & configuration (which included the autolog command). Preparing for my (charged-for software) initial release, 1000 benchmarks were run, based on combinations from a large collection of different system configurations and workloads. The Performance Predictor would predict the result of each benchmark, and afterwards the prediction was compared with the actual measurement (helping validate both my system and the Performance Predictor). Then another 1000 benchmarks were done using a modified version of the Performance Predictor to select new workload/configuration combinations based on the results of all previous benchmarks ... looking for possibly anomalous combinations. It took three months elapsed time to do all 2000 benchmarks (before release for customer ship).
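
A sketch of that two-phase methodology (hypothetical helper names: run_benchmark, predict, and neighbors stand in for the real automated benchmarker, the APL Performance Predictor, and its modified selection logic):

import itertools

def validation_run(configs, workloads, run_benchmark, predict, tol=0.10):
    # Phase 1: benchmark a fixed grid of configuration/workload
    # combinations; compare the predictor's estimate with the actual
    # measurement, flagging disagreements over 'tol' (validating both
    # the system under test and the predictor itself).
    anomalies = []
    for cfg, wl in itertools.product(configs, workloads):
        predicted = predict(cfg, wl)        # analytic model estimate
        measured = run_benchmark(cfg, wl)   # automated, autolog-driven run
        if abs(measured - predicted) / predicted > tol:
            anomalies.append((cfg, wl, predicted, measured))
    return anomalies

def next_combinations(anomalies, neighbors):
    # Phase 2 stand-in: concentrate the second thousand benchmarks
    # around combinations that looked anomalous (the real selection
    # used a modified version of the Performance Predictor fed with
    # all previous results).
    return [c for a in anomalies for c in neighbors(a)]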

In the 80s, the transition to charging for all kernel software was complete and we have the OCO-wars (customers complaining that IBM was switching to "object code only", no more IBM source).

Turn of the century (after leaving IBM), I was brought in to look at a large datacenter operation (>40 max-configured IBM mainframes @$30M, all running the same 450K Cobol statement application) that had a large group managing performance for decades (which possibly had become somewhat myopic). I used some different analysis technology from the 70s science center and found a 14% improvement. Another person was brought in who used a descendant of the Performance Predictor (which he had acquired during IBM's troubles in the early 90s and run through an APL->C converter, using it for large datacenter consulting) ... and found another 7% improvement.

Note in 1992, IBM had one of the largest losses in the history of US corporations and was being reorganized into the 13 "baby blues" in preparation for breakup of the company ...
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
and
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but got a call from the bowels of Armonk asking if we could help with the company break-up. Before we get started, the board brought in the former president of AMEX as the new CEO, who reverses (much of) the breakup (although lots of technology, real estate, etc., was still being unloaded)

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
gml, sgml, html, script etc posts
https://www.garlic.com/~lynn/submain.html#sgml
hone &/or apl posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
dynamic adaptive resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
posts mentioning IBM downturn/downfall/breakup
https://www.garlic.com/~lynn/submisc.html#downturn

some specific posts mentioning science center, hone, performance predictor, 23jun1969 unbundling, and future system
https://www.garlic.com/~lynn/2010l.html#15 Age
https://www.garlic.com/~lynn/2016b.html#36 Ransomware
https://www.garlic.com/~lynn/2016b.html#54 CMS\APL
https://www.garlic.com/~lynn/2017j.html#103 why VM, was thrashing
https://www.garlic.com/~lynn/2019c.html#85 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019d.html#106 IBM HONE
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 11 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System

Note: during the addition of virtual memory to 370, the 165 group started complaining that if they had to implement the full virtual memory architecture, the virtual memory announcement would have to be slipped six months. Eventually the decision was made to drop back to the 165 subset, which included dropping segment protect ... and the other models had to retrench to the 165 subset and any software using the dropped features had to be redone. VM370 had initially used segment protect for "shared segments" ... but then had to implement a kludge (loading shared segment storage protect keys with zero and not allowing virtual LPSW with key0). Recent refs about the decision to add virtual memory to all 370s ... and dropping back to the 165 subset
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709

Then VM microcode assist for the 158 & 168 was offered, where some privileged instructions for virtual machines could be handled directly by the microcode ... except for (CMS) virtual machines with shared segments (which required SSK & LPSW special rules for the shared segment storage protect hack). Then somebody came up with a hack that dropped the storage protect for shared segments ... let a virtual machine make any modifications ... but when switching users, if the previous user had changed any shared page, dissolve the changed shared page (so the next user would page fault in an unmodified copy). CMS intensive 168 customers were then told that the VM370R3 change would allow using VM microcode assist, and the improved CMS throughput would justify buying the VM microcode assist (at some $200K). The problem was that the analysis was based on VM370R2, which was limited to a single 16 page shared segment (requiring a max of 16 storage keys to be checked on every task/user switch).

However, a subset of some of my work for lots of shared segments was also picked up for VM370R3 ... greatly increasing the number of storage keys that would have to be scanned on every task/user switch, flipping the throughput trade-off analysis originally done on the one shared segment VM370R2 base (that had justified VM assist for CMS-intensive installations). The scanning got much worse when multiprocessor support was released in VM370R4. Now there had to be different sets of unique duplicate shared segments for each processor, and besides scanning all the previous user's shared pages for change ... if the switched-to user/task was being redispatched on a different processor, all their shared segment page table entries had to be changed to the correct processor-specific values.
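
A toy cost model of that task-switch overhead (all counts illustrative; the 16-page VM370R2 case and the cross-processor fixups come from the text):

def switch_cost(shared_pages, migrated, scan_cost=1, pte_fix_cost=1):
    # scan every shared page the previous user could have modified
    cost = shared_pages * scan_cost
    if migrated:  # redispatched on a different processor (VM370R4)
        cost += shared_pages * pte_fix_cost  # repoint PTEs at that CPU's copies
    return cost

print(switch_cost(16, migrated=False))    # 16: the VM370R2 one-segment case
print(switch_cost(128, migrated=False))   # 128: lots of shared segments (R3)
print(switch_cost(128, migrated=True))    # 256: plus cross-processor redispatch

The scan grows linearly with the number of shared pages, and multiprocessor redispatch roughly doubles it, which is what flipped the original one-segment trade-off analysis.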

When I upgraded my CSC/VM to a VM370R3 base, I kept the previous storage key protect implementation for both the single processor and multiprocessor implementations (not using the scanning hack that enabled CMS to run with VM assist).

Note: when the US HONE datacenters were consolidated in Palo Alto, it was across the back parking lot from the Palo Alto Science Center. While Cambridge had done the port of APL\360 for CP67 CMS\APL, PASC had done the port for VM370 APL\CMS ... and the majority of the sales&marketing HONE applications were implemented in APL. The large APL interpreter was in shared segments ... which would have required scanning a large number of pages on every task switch (and would have been much worse when upgraded to multiprocessors). There was also a large (few hundred kbytes) "SEQUOIA" application that was preloaded into every sales&marketing online account (imagine something like a super powerful PROFS capability for the less computer literate). PASC did a hack that allowed SEQUOIA to be built into the HONE APL interpreter (and a few hundred kbytes more included in the shared segments). PASC also did the APL microcode assist for the 370/145 ... claiming it ran APL at 168 speed ... however, HONE APL-based applications weren't just CPU intensive but also fairly memory intensive (larger than available on a 145). What was done instead was to recode some of the highest-used, CPU intensive APL-based HONE applications in Fortran, with a facility added for APL to invoke the Fortran apps and get results back. PASC had also done forthq ... which was eventually released to customers (forthx opt3); 1979 forthq 168 test from old email:


fortgi         9.79/10.24   44.93/45.61
forthx opt(0)  5.94/7.07    50.27/50.92
forthx opt(1)  7.98/8.95    34.49/34.95
forthx opt(2) 12.84/13.94   27.32/27.77
forthq         8.69/9.22    22.63/23.08

... snip ...

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
hone &/or apl posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
CMS page mapped filesystem & shared segment work
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 12 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System

ibm stl lab was turning the pli compiler over to a software company and wanted to give them the pasc fortq compiler optimization work also (the same person at pasc that did fortq had also done the 370/145 apl microcode assist) ... some number of people objected

a decade later ... ibm was pulling back from breakup ... but unloaded a lot of stuff ... besides the descendant of the Performance Predictor, lots of chip design tools were being turned over to a silicon valley industry standard tool vendor. Problem was that SUN was the industry standard computer for running chip design tools ... so IBM had to port all the tools to SUN.

I had left IBM, but got a contract to port an IBM 50k Pascal statement tool to SUN. In retrospect it would have been easier to rewrite it in C (I suspect that SUN Pascal had been used for little other than educational classes; aggravating was that SUN had outsourced their Pascal to an organization on the opposite side of the world ... space city)

hone &/or apl posts
https://www.garlic.com/~lynn/subtopic.html#hone
IBM downturn/breakup/downfall posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning porting 50k Pascal statement chip app to SUN:
https://www.garlic.com/~lynn/2015g.html#51 [Poll] Computing favorities
https://www.garlic.com/~lynn/2010n.html#54 PL/I vs. Pascal
https://www.garlic.com/~lynn/2008j.html#77 CLIs and GUIs
https://www.garlic.com/~lynn/2005b.html#14 something like a CTC on a PC

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 13 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System

turn of the century had mainframe hardware at a few percent of revenue and dropping. By the z12 time frame, mainframe hardware was a couple percent of revenue and still dropping ... but the mainframe division was 25% of revenue (nearly all software and services) and 40% of profit. It was becoming increasingly hard to decode mainframe hardware revenue, mostly being expressed as percent change from prior quarters and years.

a big change from the era when the majority of revenue was mainframe hardware ... and difficult to justify large investment in proprietary mainframe hardware development infrastructures

IBM downturn/breakup/downfall posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

lots of posts mentioning drop in mainframe hardware revenue after turn of century
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2022g.html#74 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022g.html#70 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#68 Security Chips and Chip Fabs
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#35 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022d.html#70 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#5 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#98 IBM Systems Revenue Put Into a Historical Context
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#54 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021g.html#24 Big Blue's big email blues signal terminal decline - unless it learns to migrate itself
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021e.html#68 Amdahl
https://www.garlic.com/~lynn/2021b.html#3 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2019c.html#35 Transition to cloud computing
https://www.garlic.com/~lynn/2019.html#19 IBM assembler over the ages
https://www.garlic.com/~lynn/2018c.html#33 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2018b.html#63 Major firms learning to adapt in fight against start-ups: IBM
https://www.garlic.com/~lynn/2018.html#98 Mainframe Use/History
https://www.garlic.com/~lynn/2018.html#4 upgrade
https://www.garlic.com/~lynn/2017i.html#73 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017h.html#95 PDP-11 question
https://www.garlic.com/~lynn/2017h.html#61 computer component reliability, 1951
https://www.garlic.com/~lynn/2017g.html#103 SEX
https://www.garlic.com/~lynn/2017g.html#86 IBM Train Wreck Continues Ahead of Earnings
https://www.garlic.com/~lynn/2017f.html#11 The Mainframe vs. the Server Farm: A Comparison
https://www.garlic.com/~lynn/2017d.html#17 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017c.html#63 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#23 IBM "Breakup"
https://www.garlic.com/~lynn/2017.html#62 Big Shrink to "Hire" 25,000 in the US, as Layoffs Pile Up
https://www.garlic.com/~lynn/2016h.html#56 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016g.html#69 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016b.html#52 MVS Posix
https://www.garlic.com/~lynn/2015h.html#20 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2015g.html#19 Linux Foundation Launches Open Mainframe Project
https://www.garlic.com/~lynn/2015.html#85 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2015.html#30 Why on Earth Is IBM Still Making Mainframes?
https://www.garlic.com/~lynn/2014m.html#170 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#155 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#145 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#71 Decimation of the valuation of IBM
https://www.garlic.com/~lynn/2014f.html#84 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014f.html#80 IBM Sales Fall Again, Pressuring Rometty's Profit Goal
https://www.garlic.com/~lynn/2013g.html#7 SAS Deserting the MF?
https://www.garlic.com/~lynn/2013f.html#70 How internet can evolve
https://www.garlic.com/~lynn/2013f.html#64 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013f.html#57 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#37 Where Does the Cloud Cover the Mainframe?
https://www.garlic.com/~lynn/2013f.html#35 Reports: IBM may sell x86 server business to Lenovo
https://www.garlic.com/~lynn/2013e.html#4 Oracle To IBM: Your 'Customers Are Being Wildly Overcharged'
https://www.garlic.com/~lynn/2013b.html#24 New HD
https://www.garlic.com/~lynn/2013b.html#15 A Private life?
https://www.garlic.com/~lynn/2012n.html#25 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012n.html#13 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012m.html#67 How do you feel about the fact that today India has more IBM employees than any of the other countries in the world including the USA.?

--
virtualization experience starting Jan1968, online at home since Mar1970

Conferences

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Conferences
Date: 13 Nov, 2023
Blog: Facebook
... didn't make the 1st ... but made most since then
https://en.m.wikipedia.org/wiki/The_Hackers_Conference

trivia: late 80s, CBS 60mins insisted on doing a segment on the conference; after a couple months of negotiation, they promised *NOT* to sensationalize ... and then the segment opened with the statement that a secret group in the santa cruz mountains was plotting to take over the world.

... oh and reference to Learson trying (and failing) to block the bureaucrats, careerists and MBAs from destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

89/90, the commandant of the Marine Corps leverages Boyd for a make-over of the Corps (at a time when IBM was desperately in need of a make-over). After Boyd passes in 1997, the Air Force had pretty much disowned him; it was the Marines at Arlington, and we've continued to have Boyd conferences at MCU in Quantico.

... was at the SHARE HASP sing-a-long when this was 1st performed
http://www.mxg.com/thebuttonman/boney.asp

trivia: a decade ago I was asked to track down the decision to add virtual memory to all 370s, and found a staffer to the executive that made the decision. Basically, MVT storage management was so bad that regions frequently needed to be specified four times larger than actually used; as a result, a 1mbyte 370/165 typically would only run four concurrent regions at a time, insufficient to keep the 165 busy and justified. Going to 16mbyte virtual memory could increase the number of regions by a factor of four with little or no paging. Old archived post with pieces of the email exchange (including some HASP/spooling)
https://www.garlic.com/~lynn/2011d.html#73
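
As a back-of-envelope check of that arithmetic, a minimal sketch (Python; the 4x overspecification and the four-region 1mbyte 165 are the figures cited above, the 256kbyte specified-region size is a hypothetical round number consistent with them):

# MVT regions were specified ~4x larger than actually used, so a 1mbyte
# 370/165 only fit ~4 concurrent regions (ignoring fixed kernel storage)
real_kbytes   = 1024          # 370/165 real storage
specified_kb  = 256           # hypothetical region specification (4x actual use)
actual_use_kb = specified_kb // 4

regions_real = real_kbytes // specified_kb       # -> 4 concurrent regions

# with a 16mbyte virtual address space, a region only ties up real storage
# for pages actually touched (~actual_use_kb), so roughly 4x the regions
# fit in the same 1mbyte of real storage with little or no paging
regions_virtual = real_kbytes // actual_use_kb   # -> 16 regions
print(regions_real, regions_virtual)             # 4 16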

IBM downturn/breakup/downfall posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

The End of Milton Friedman's Reign

From: Lynn Wheeler <lynn@garlic.com>
Subject: The End of Milton Friedman's Reign
Date: 14 Nov, 2023
Blog: Facebook
The End of Milton Friedman's Reign
https://newrepublic.com/article/175932/milton-friedman-chicagonomics-end-reign

In April this year, President Joe Biden's national security adviser, Jake Sullivan, gave a speech outlining America's major economic problems, and the administration's plan to address them. The industrial base had been hollowed out, he said, and the country had become so unequal that it threatened the foundations of democracy. He identified "one assumption" that had fueled these transformations: the idea "that markets always allocated capital productively and efficiently--no matter what our competitors did, no matter how big our shared challenges grew, and no matter how many guardrails we took down." Sullivan hastened to add that he was no foe of markets as such. But excessive faith in free markets had left the United States vulnerable to foreign rivals and internal threats.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

specific posts mentioning milton friedman
https://www.garlic.com/~lynn/2023c.html#50 Public Education as a Domestic Machinery of Indoctrination and Disposability
https://www.garlic.com/~lynn/2022h.html#15 It's still Ben Bernanke and Milton Friedman's Fed
https://www.garlic.com/~lynn/2022g.html#0 We need to rebuild a legal system where corporations owe duties to the general public
https://www.garlic.com/~lynn/2022f.html#103 The Origin of Student Debt
https://www.garlic.com/~lynn/2022e.html#81 There is No Nobel Prize in Economics
https://www.garlic.com/~lynn/2022d.html#84 Destruction Of The Middle Class
https://www.garlic.com/~lynn/2022c.html#97 Why Companies Are Becoming B Corporations
https://www.garlic.com/~lynn/2022c.html#96 Why Companies Are Becoming B Corporations
https://www.garlic.com/~lynn/2021k.html#48 The System
https://www.garlic.com/~lynn/2021k.html#30 Why Mislead Readers about Milton Friedman and Segregation?
https://www.garlic.com/~lynn/2021j.html#34 Chicago Boys' 100% Private Pension System in Chile Is in Big Trouble
https://www.garlic.com/~lynn/2021i.html#36 We've Structured Our Economy to Redistribute a Massive Amount of Income Upward
https://www.garlic.com/~lynn/2021h.html#22 Neoliberalism: America Has Arrived at One of History's Great Crossroads
https://www.garlic.com/~lynn/2021f.html#17 Jamie Dimon: Some Americans 'don't feel like going back to work'
https://www.garlic.com/~lynn/2021.html#21 ESG Drives a Stake Through Friedman's Legacy
https://www.garlic.com/~lynn/2020.html#25 Huawei 5G networks
https://www.garlic.com/~lynn/2020.html#15 The Other 1 Percent": Morgan Stanley Spots A Market Ratio That Is "Unprecedented Even During The Tech Bubble"
https://www.garlic.com/~lynn/2019e.html#158 Goliath
https://www.garlic.com/~lynn/2019e.html#149 Why big business can count on courts to keep its deadly secrets
https://www.garlic.com/~lynn/2019e.html#64 Capitalism as we know it is dead
https://www.garlic.com/~lynn/2019e.html#51 Big Pharma CEO: 'We're in Business of Shareholder Profit, Not Helping The Sick
https://www.garlic.com/~lynn/2019e.html#50 Economic Mess and Regulations
https://www.garlic.com/~lynn/2019e.html#32 Milton Friedman's "Shareholder" Theory Was Wrong
https://www.garlic.com/~lynn/2019e.html#31 Milton Friedman's "Shareholder" Theory Was Wrong
https://www.garlic.com/~lynn/2019e.html#14 Chicago Theory
https://www.garlic.com/~lynn/2019d.html#48 Here's what Nobel Prize-winning research says will make you more influential
https://www.garlic.com/~lynn/2019c.html#73 Wage Stagnation
https://www.garlic.com/~lynn/2019c.html#68 Wage Stagnation
https://www.garlic.com/~lynn/2018f.html#117 What Minimum-Wage Foes Got Wrong About Seattle
https://www.garlic.com/~lynn/2018f.html#107 Politicians have caused a pay 'collapse' for the bottom 90 percent of workers, researchers say
https://www.garlic.com/~lynn/2018e.html#115 Economists Should Stop Defending Milton Friedman's Pseudo-science
https://www.garlic.com/~lynn/2018c.html#83 Economists and the Powerful: Convenient Theories, Distorted Facts, Ample Rewards
https://www.garlic.com/~lynn/2018c.html#81 What Lies Beyond Capitalism And Socialism?
https://www.garlic.com/~lynn/2018b.html#87 Where Is Everyone???
https://www.garlic.com/~lynn/2018b.html#82 The Real Reason the Investor Class Hates Pensions
https://www.garlic.com/~lynn/2018.html#25 Trump's Infrastructure Plan Is Actually Pence's--And It's All About Privatization
https://www.garlic.com/~lynn/2017i.html#60 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017i.html#47 Retirement Heist: How Firms Plunder Workers' Nest Eggs
https://www.garlic.com/~lynn/2017h.html#116 The Real Reason Wages Have Stagnated: Our Economy Is Optimized For Financialization
https://www.garlic.com/~lynn/2017h.html#92 'X' Marks the Spot Where Inequality Took Root: Dig Here
https://www.garlic.com/~lynn/2017h.html#9 Corporate Profit and Taxes
https://www.garlic.com/~lynn/2017g.html#107 Why IBM Should -- and Shouldn't -- Break Itself Up
https://www.garlic.com/~lynn/2017g.html#83 How can we stop algorithms telling lies?
https://www.garlic.com/~lynn/2017g.html#79 Bad Ideas
https://www.garlic.com/~lynn/2017g.html#49 Shareholders Ahead Of Employees
https://www.garlic.com/~lynn/2017g.html#19 Financial, Healthcare, Construction, Education complexity
https://www.garlic.com/~lynn/2017g.html#6 Mapping the decentralized world of tomorrow
https://www.garlic.com/~lynn/2017f.html#53 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#45 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#44 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#16 Conservatives and Spending
https://www.garlic.com/~lynn/2017e.html#96 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017e.html#44 [CM] cheap money, was What was your first home computer?
https://www.garlic.com/~lynn/2017e.html#7 Arthur Laffer's Theory on Tax Cuts Comes to Life Once More
https://www.garlic.com/~lynn/2017d.html#93 United Air Lines - an OODA-loop perspective
https://www.garlic.com/~lynn/2017d.html#77 Trump delay of the 'fiduciary rule' will cost retirement savers $3.7 billion
https://www.garlic.com/~lynn/2017d.html#67 Economists are arguing over how their profession messed up during the Great Recession. This is what happened
https://www.garlic.com/~lynn/2017b.html#43 when to get out???
https://www.garlic.com/~lynn/2017b.html#17 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017b.html#11 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#102 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#97 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#92 Trump's Rollback of the Neoliberal Market State
https://www.garlic.com/~lynn/2017.html#34 If economists want to be trusted again, they should learn to tell jokes
https://www.garlic.com/~lynn/2017.html#29 Milton Friedman's Cherished Theory Is Laid to Rest
https://www.garlic.com/~lynn/2017.html#26 Milton Friedman's Cherished Theory Is Laid to Rest
https://www.garlic.com/~lynn/2017.html#24 Destruction of the Middle Class
https://www.garlic.com/~lynn/2017.html#17 Destruction of the Middle Class
https://www.garlic.com/~lynn/2016d.html#72 Five Outdated Leadership Ideas That Need To Die
https://www.garlic.com/~lynn/2013f.html#34 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012p.html#64 IBM Is Changing The Terms Of Its Retirement Plan, Which Is Frustrating Some Employees
https://www.garlic.com/~lynn/2008c.html#16 Toyota Sales for 2007 May Surpass GM

--
virtualization experience starting Jan1968, online at home since Mar1970

F-22 Raptor Vs F-35 Lightning: Ultimate Dog Fight Of The Fifth-Gen Fighter Jets

From: Lynn Wheeler <lynn@garlic.com>
Subject: F-22 Raptor Vs F-35 Lightning: Ultimate Dog Fight Of The Fifth-Gen Fighter Jets
Date: 14 Nov, 2023
Blog: Facebook
F-22 Raptor Vs F-35 Lightning: Ultimate Dog Fight Of The Fifth-Gen Fighter Jets
https://avgeekblog.com/f-22-raptor-vs-f-35-lightning-ultimate-dog-fight-of-the-fifth-gen-fighter-jets/

trivia ... last decade the F35 people were making claims about it replacing F15s, F16s, F18s (including Growlers), and A10s ... and Sprey was pushing back. I got caught up in some of the social media, having to respond to F35 claims. First I got them to switch from "stealth" to "low observable" ... and then showed how updated radar could target F35s (after which they claimed I shouldn't be allowed to post ... even though everything was from open source). A couple years later there was news that not only weren't Growlers being retired, but they were getting new radar jamming pods that could address adversaries being able to target F35s.

from last decade: the F35 was originally designed as a bomb truck, assuming (air superiority) F22s were flying cover to handle the real threats. Compared to the original prototype, the F35's stealth characteristics were significantly compromised.
http://www.ausairpower.net/APA-JSF-Analysis.html
http://www.ausairpower.net/jsf.html
http://www.ausairpower.net/APA-2009-01.html

... there were also stories that the F22 stealth coating was subject to moisture ... and jokes about not being able to take the F22 out in the rain. Before the move of Tyndall F22s to Hawaii (and before all the Tyndall storm damage), there were articles about the heroic efforts of the Tyndall F22 stealth maintenance bays dealing with the backlog of F22 coating maintenance.
http://www.tyndall.af.mil/News/Features/Display/Article/669883/lo-how-the-f-22-gets-its-stealth/
Old F22 news: F22 hangar empress (2009) "Can't Fly, Won't Die"
http://nypost.com/2009/07/17/cant-fly-wont-die/

Pilots call high-maintenance aircraft "hangar queens." Well, the F-22's a hangar empress. After three expensive decades in development, the plane meets fewer than one-third of its specified requirements. Anyway, an enemy wouldn't have to down a single F-22 to defeat it. Just strike the hi-tech maintenance sites, and it's game over. (In WWII, we didn't shoot down every Japanese Zero; we just sank their carriers.) The F-22 isn't going to operate off a dirt strip with a repair tent.

But this is all about lobbying, not about lobbing bombs. Cynically, Lockheed Martin distributed the F-22 workload to nearly every state, employing under-qualified sub-contractors to create local financial stakes in the program. Great politics -- but the result has been a quality collapse.


... snip ...

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

posts referencing ausairpower F-35 analysis
https://www.garlic.com/~lynn/2022h.html#81 Air Force unveils B-21 stealth plane. It's not a boondoggle, for a change
https://www.garlic.com/~lynn/2022f.html#9 China VSLI Foundry
https://www.garlic.com/~lynn/2022e.html#101 The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them
https://www.garlic.com/~lynn/2019e.html#53 Stealthy no more? A German radar vendor says it tracked the F-35 jet in 2018 -- from a pony farm
https://www.garlic.com/~lynn/2019d.html#104 F-35
https://www.garlic.com/~lynn/2018f.html#83 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018c.html#109 JSF/F-35
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#63 The F-35 has a basic flaw that means an F-22 hybrid could outclass it -- and that's a big problem
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018c.html#19 How China's New Stealth Fighter Could Soon Surpass the US F-22 Raptor
https://www.garlic.com/~lynn/2018c.html#14 Air Force Risks Losing Third of F-35s If Upkeep Costs Aren't Cut
https://www.garlic.com/~lynn/2018b.html#86 Lawmakers to Military: Don't Buy Another 'Money Pit' Like F-35
https://www.garlic.com/~lynn/2017i.html#78 F-35 Multi-Role
https://www.garlic.com/~lynn/2017g.html#44 F-35
https://www.garlic.com/~lynn/2017c.html#15 China's claim it has 'quantum' radar may leave $17 billion F-35 naked
https://www.garlic.com/~lynn/2016h.html#93 F35 Program
https://www.garlic.com/~lynn/2016h.html#77 Test Pilot Admits the F-35 Can't Dogfight
https://www.garlic.com/~lynn/2016e.html#104 E.R. Burroughs
https://www.garlic.com/~lynn/2016e.html#61 5th generation stealth, thermal, radar signature
https://www.garlic.com/~lynn/2016e.html#22 Iran Can Now Detect U.S. Stealth Jets at Long Range
https://www.garlic.com/~lynn/2016b.html#96 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#91 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#89 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#55 How to Kill the F-35 Stealth Fighter; It all comes down to radar ... and a big enough missile
https://www.garlic.com/~lynn/2016b.html#20 DEC and The Americans
https://www.garlic.com/~lynn/2016.html#75 American Gripen: The Solution To The F-35 Nightmare
https://www.garlic.com/~lynn/2015f.html#46 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015f.html#44 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015c.html#14 With the U.S. F-35 Grounded, Putin's New Jet Beats Us Hands-Down
https://www.garlic.com/~lynn/2015b.html#75 How Russia's S-400 makes the F-35 obsolete
https://www.garlic.com/~lynn/2015b.html#59 A-10
https://www.garlic.com/~lynn/2014j.html#43 Let's Face It--It's the Cyber Era and We're Cyber Dumb
https://www.garlic.com/~lynn/2014j.html#41 50th/60th anniversary of SABRE--real-time airline reservations computer system
https://www.garlic.com/~lynn/2014j.html#40 China's Fifth-Generation Fighter Could Be A Game Changer In An Increasingly Tense East Asia
https://www.garlic.com/~lynn/2014i.html#102 A-10 Warthog No Longer Suitable for Middle East Combat, Air Force Leader Says
https://www.garlic.com/~lynn/2014h.html#49 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2014h.html#36 The Designer Of The F-15 Explains Just How Stupid The F-35 Is
https://www.garlic.com/~lynn/2014g.html#22 Has the last fighter pilot been born?
https://www.garlic.com/~lynn/2014f.html#73 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014d.html#97 The Planet's Best Stealth Fighter Isn't Made in America
https://www.garlic.com/~lynn/2014c.html#40 F-35 JOINT STRIKE FIGHTER IS A LEMON
https://www.garlic.com/~lynn/2013o.html#40 ELP weighs in on the software issue:
https://www.garlic.com/~lynn/2013o.html#28 ELP weighs in on the software issue:

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage S/38

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage S/38
Date: 14 Nov, 2023
Blog: Facebook
s/38 trivia: my brother was a regional Apple marketing rep ... and when he visited hdqtrs, I could be invited to the business dinners; I even got to argue Mac design with developers before it was announced. He also tells about learning to dial into the S/38 that ran Apple, to track manufacturing and delivery schedules.

S/38 is frequently touted as a greatly simplified Future System, done after FS imploded
http://www.jfsowa.com/computer/memo125.htm

one of the final nails in the FS coffin was analysis by the IBM Houston Science Center showing that if applications from a 370/195 were redone for an FS machine made out of the fastest available technology, it would have the throughput of a 370/145 (about a 30 times slowdown). The saving grace for the S/38 was that there was plenty of performance headroom between its low-end market throughput requirements and the available technology.

To some extent, FS (& S/38) "single level store" came from TSS/360. All during FS, I continued to work on 370 ... even periodically ridiculing FS (which wasn't exactly a career enhancing activity) ... even doing a page-mapped filesystem for CP67/CMS (later ported to VM370/CMS) ... and would claim I learned what not to do for CMS filesystem from TSS/360.
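
The "single level store" idea (file contents addressed as demand-paged virtual memory rather than through explicit read/write I/O) survives today as memory-mapped files. A minimal sketch of the paradigm difference (Python mmap; this only illustrates the concept, it is not the S/38 or CP67/CMS page-mapped filesystem implementation):

import mmap

with open("data.bin", "wb") as f:      # create a test file
    f.write(bytes(8192))

# conventional filesystem paradigm: explicit I/O call copies data into a buffer
with open("data.bin", "rb") as f:
    buf = f.read(4096)

# single-level-store paradigm: map the file into the address space; bytes are
# brought in by the paging hardware on first touch, no read() calls needed
with open("data.bin", "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_page = m[:4096]      # page fault pulls the data in on demand
    m.close()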

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

some posts mentioning S/38 & single-level-store
https://www.garlic.com/~lynn/2023f.html#61 The Most Important Computer You've Never Heard Of
https://www.garlic.com/~lynn/2023d.html#100 IBM 3083
https://www.garlic.com/~lynn/2022.html#89 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#41 370/195
https://www.garlic.com/~lynn/2021k.html#132 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#43 Transaction Memory
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021c.html#16 IBM Wild Ducks
https://www.garlic.com/~lynn/2019c.html#32 IBM Future System
https://www.garlic.com/~lynn/2017j.html#95 why VM, was thrashing
https://www.garlic.com/~lynn/2017j.html#34 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#66 Is AMD Dooomed? A Silly Suggestion!
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017f.html#87 How a few yellow dots burned the Intercept's NSA leaker
https://www.garlic.com/~lynn/2015h.html#59 IMPI (System/38 / AS/400 historical)
https://www.garlic.com/~lynn/2015d.html#3 30 yr old email
https://www.garlic.com/~lynn/2015c.html#105 IBM System/32, System/34 implementation technology?
https://www.garlic.com/~lynn/2015b.html#60 ou sont les VAXen d'antan, was Variable-Length Instructions that aren't
https://www.garlic.com/~lynn/2014m.html#115 Mill Computing talk in Estonia on 12/10/2104
https://www.garlic.com/~lynn/2014l.html#20 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014e.html#50 The mainframe turns 50, or, why the IBM System/360 launch was the dawn of enterprise IT
https://www.garlic.com/~lynn/2014b.html#68 Salesmen--IBM and Coca Cola
https://www.garlic.com/~lynn/2013i.html#31 DRAM is the new Bulk Core
https://www.garlic.com/~lynn/2013h.html#45 Storage paradigm [was: RE: Data volumes]
https://www.garlic.com/~lynn/2013h.html#35 Some Things Never Die
https://www.garlic.com/~lynn/2013e.html#63 The Atlas 2 and its Slave Store
https://www.garlic.com/~lynn/2013.html#63 what makes a computer architect great?
https://www.garlic.com/~lynn/2012n.html#35 390 vector instruction set reuse, was 8-bit bytes
https://www.garlic.com/~lynn/2012k.html#57 1132 printer history
https://www.garlic.com/~lynn/2011l.html#15 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2011h.html#70 IBM Mainframe (1980's) on You tube
https://www.garlic.com/~lynn/2011h.html#35 Happy 100th Birthday, IBM!
https://www.garlic.com/~lynn/2011h.html#34 Happy 100th Birthday, IBM!
https://www.garlic.com/~lynn/2011f.html#85 SV: USS vs USS
https://www.garlic.com/~lynn/2011d.html#71 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011d.html#17 The first personal computer (PC)
https://www.garlic.com/~lynn/2011c.html#91 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 15 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#95 CSC, HONE, 23Jun69 Unbundling, Future System

Well ... communication group was strengthening its stranglehold on mainframe datacenters with its corporate strategic ownership for everything that crossed datacenter walls.

before we met, my wife was in the gburg JES group and then was con'ed into going to POK to be in charge of loosely-coupled architecture, where she did peer-coupled shared data architecture. She didn't remain long because of 1) periodic battles with CPD trying to force her into using VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX), except for IMS hot-standby. She has a story about asking Vern Watts who he would ask for permission to do IMS hot-standby; he said "nobody", he would just do it and tell them when he was all done.

loosely-coupled, peer-coupled shared dasd posts
https://www.garlic.com/~lynn/submain.html#shareddata

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html

According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

early in ACORN (IBM/PC), Boca said that they weren't interested in software ... and an ad-hoc IBM group of some 20-30 people was formed in silicon valley to do software; they would touch base with Boca every month to make sure nothing had changed. Then one month, Boca changed its mind and said that if you are doing ACORN software, you have to move to Boca ... and the whole effort imploded.

Mid-80s, the communication group was fighting the release of mainframe TCP/IP, and apparently some influential customers got that changed. The group then changed their strategy and said that since they had responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate while using nearly a whole 3090 processor. I then did the support for RFC1044, and in some tuning tests at Cray Research between an IBM 4341 and a Cray, got sustained channel throughput using only a modest amount of 4341 CPU (something like 500 times the bytes moved per instruction executed).
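
A rough sketch of that comparison (Python; only the 44kbytes/sec figure and the ~500x ratio come from the text above ... the instruction rates, CPU fractions, and channel data rate are hypothetical round numbers for illustration):

# bytes moved per instruction = throughput / (cpu fraction * instruction rate)
base_thruput = 44 * 1024                  # ~44kbytes/sec aggregate
base_ips     = 15e6 * 1.0                 # hypothetical: ~whole 3090 processor

rfc1044_thruput = 1 * 1024 * 1024         # hypothetical sustained channel rate
rfc1044_ips     = 1.2e6 * 0.5             # hypothetical: modest slice of a 4341

base_bpi = base_thruput / base_ips        # ~0.003 bytes/instruction
new_bpi  = rfc1044_thruput / rfc1044_ips  # ~1.75 bytes/instruction
print(round(new_bpi / base_bpi))          # ~582, same ballpark as the ~500x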

RFC1044 support posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Note: AWD (workstation division) had done their own (16bit PC/AT bus) 4mbit token-ring card for the PC/RT. However, for RS/6000 microchannel, they were told they had to use PS2 cards (and couldn't do their own). The communication group (in their battle fighting off client/server and distributed computing) had severely performance-kneecapped the microchannel cards; for instance, the 16mbit token-ring microchannel card had lower throughput than the PC/RT 4mbit T/R card.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Late 80s, a senior disk engineer got a talk scheduled at the internal, world-wide, annual communication group conference, supposedly on 3174 performance. However, he opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division: the disk division was seeing data fleeing to more distributed-computing-friendly platforms, with a drop in disk sales. They had come up with a number of solutions, but were constantly vetoed by the communication group with their corporate responsibility for everything that crossed datacenter walls. The disk division software VP's partial countermeasure was to invest in distributed computing startups that would use IBM disks ... and he would periodically ask us to drop by his investments to offer any help (by this time the disk division, GPD, had been renamed AdStar as part of the "baby blue" reorg in preparation for breaking up the company).

dumb terminal (emulation) paradigm and install base
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

However, by 1992 the communication group death grip on datacenters had resulted in one of the largest losses in the history of US companies ... and even though a new CEO reversed the "breakup" of the company, it wasn't long before the IBM disk division was no more. As mentioned, other details
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/breakup/downfall posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 15 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#95 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System

re: "Wheeler Scheduler" ... yes, much of it was done originally for CP/67 when I was undergraduate in the 60s (and before joining IBM). z/VM "50th" series:
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

Dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
ibm science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

other posts this year referencing "zvm-50" series
https://www.garlic.com/~lynn/2023f.html#37 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023d.html#55 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023d.html#25 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#6 Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#88 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#74 Al Gore Inventing The Internet
https://www.garlic.com/~lynn/2023c.html#70 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#69 NSFNET (Old Farts)
https://www.garlic.com/~lynn/2023c.html#58 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#48 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#40 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#32 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#29 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#27 What Does School Teach Children?
https://www.garlic.com/~lynn/2023c.html#22 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#100 5G Hype Cycle
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023b.html#72 Schooling Was for the Industrial Era, Unschooling Is for the Future
https://www.garlic.com/~lynn/2023b.html#69 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#65 HURD
https://www.garlic.com/~lynn/2023b.html#51 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023b.html#12 Open Software Foundation
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#117 IBM 5100
https://www.garlic.com/~lynn/2023.html#110 If Nothing Changes, Nothing Changes
https://www.garlic.com/~lynn/2023.html#109 Early Webservers
https://www.garlic.com/~lynn/2023.html#91 IBM 4341
https://www.garlic.com/~lynn/2023.html#88 Northern Va. is the heart of the internet. Not everyone is happy about that
https://www.garlic.com/~lynn/2023.html#83 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#70 GML, SGML, & HTML
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#52 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2023.html#43 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2023.html#28 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#0 AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY

--
virtualization experience starting Jan1968, online at home since Mar1970

MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
Date: 15 Nov, 2023
Blog: Facebook
I took a 2 credit hr intro fortran/computers class ... the univ had a 709 (tape->tape) with a 1401 unit record front end (tapes moved between 709 and 1401). The univ. had been sold a 360/67 for TSS/360 ... and by the end of the semester, the 1401 was temporarily replaced with a 360/30 (pending arrival of the 360/67) and I was hired to rewrite 1401 MPIO for the 360/30 (the 360/30 had 1401 emulation and could have continued to run MPIO as-is, so I assume I was just part of getting 360 experience). Univ. shutdown the datacenter on weekends and I had the whole place dedicated (although 48hrs w/o sleep made monday classes hard). I was given lots of hardware & software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, storage management, error recovery/retry, etc ... and within a few weeks had a 2000 card assembler program. Lots of practice loading test job streams to tapes, moving tapes to the 709, running tape->tape, moving tapes back to the 360/30 and printing/punching ... and verifying the results. I then used an assembler option to generate either the stand-alone version, IPL'ed by the BPS loader (takes 30min to assemble), or the OS/360 version (takes an hour to assemble, DCB macros taking 5-6mins each).

Within a year of taking the intro class, the 360/67 had arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65 ... and I still had my 48hr weekend dedicated time). Student fortran jobs ran under a second on the 709, but initially (3 step FORTGCLG) took over a minute on os/360. I install HASP and it cuts the time in half. I then start redoing stage2 sysgen, 1) to be able to run it in the production job stream and 2) to carefully order statements for dataset and PDS member placement, optimizing arm seek and multi-track search, cutting (student job) time by another 2/3rds to 12.9secs. Never got better than the 709 until I install Univ. of Waterloo WATFOR.

Along the way, the science center comes out and installs CP67/CMS (the 3rd installation, after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my weekend dedicated time. I rewrite a lot of CP67 code ... initially optimizing for running OS/360 in a virtual machine: the test stream runs 322secs on the bare machine and initially 856secs under CP67 (534secs of CP67 CPU); within a few months I get the CP67 CPU down to 113secs. I then redo disk I/O, adding ordered seek queuing, and chained page requests (instead of a separate I/O for each 4k page) for the same disk cyl and for the 2301 fixed head drum (the 2301 had peaked at about 70 4k transfers/sec; chaining could get it close to the channel transfer rate at 270 4k transfers/sec). CP67 came with 1052&2741 support with automagic terminal type identification, but the univ. had some TTY/ASCII terminals, so I add TTY/ASCII support to CP67, extending the automagic terminal type recognition.
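
Ordered seek queuing services pending requests in arm-position order rather than FIFO (an elevator/SCAN-style discipline), cutting total arm travel. A minimal sketch of the idea (Python; illustrative only, not the CP67 code):

# elevator/SCAN-style ordered seek queue: service pending cylinder requests
# in the direction of current travel, then reverse; total arm movement is
# far less than servicing the same requests in FIFO arrival order
def scan_order(pending, arm, direction=1):
    # requests at or ahead of the arm in the current direction, nearest first
    ahead = sorted((c for c in pending if (c - arm) * direction >= 0),
                   key=lambda c: abs(c - arm))
    # then reverse and sweep back through the remainder, nearest first
    behind = sorted((c for c in pending if (c - arm) * direction < 0),
                    key=lambda c: abs(c - arm))
    return ahead + behind

reqs = [183, 37, 122, 14, 124, 65, 67]     # pending cylinder numbers
print(scan_order(reqs, arm=53))            # [65, 67, 122, 124, 183, 37, 14]
# (the chained-request half of the rework batched all queued 4k page
# transfers for the same cylinder or drum into a single channel program)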

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Univ. had also got a 2250 graphics screen, and I modify the CMS 2250 fortran library (from Lincoln Labs) to interface with the CMS editor for fullscreen use. For MVT18, I add 2741 & TTY support to HASP and implement an editor with CMS syntax for CRJE. I then want a single terminal dial-up number for all terminals, but IBM had taken a short-cut and hardwired the line speed for each port. Univ. kicks off a clone telecommunication controller project: build a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition of dynamic line speed identification. This is upgraded to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the ports. Interdata (& later Perkin/Elmer) sells it as a clone controller, and four of us get written up for (some part of) the clone controller business.
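
Dynamic line speed identification is the classic auto-baud technique: time the width of the start bit of a known first character from the terminal and pick the nearest standard rate. A sketch of the idea (Python; the rate table and measured timings are illustrative, not the Interdata firmware):

# classic auto-baud: the width of the start bit implies the line's bit rate
STANDARD_RATES = (110, 134.5, 150, 300, 600, 1200)   # bits/sec

def identify_line_speed(start_bit_seconds):
    implied = 1.0 / start_bit_seconds                # measured bits/sec
    return min(STANDARD_RATES, key=lambda r: abs(r - implied))

print(identify_line_speed(1 / 109.0))    # -> 110   (e.g. a TTY33)
print(identify_line_speed(1 / 136.0))    # -> 134.5 (e.g. a 2741)
print(identify_line_speed(1 / 290.0))    # -> 300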

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Lots of univs had been sold 360/67s for TSS/360 ... most just used them for OS/360. Univ. of Michigan and Stanford implement their own interactive virtual memory systems: UofM does MTS and Stanford does Orvyl/Wylbur (Wylbur is later ported to MVS).

A decade ago, I was asked to track down the decision to add virtual memory to all 370s, and found a staffer to the exec making the decision. Basically, MVT storage management was so bad that regions frequently needed to be specified four times larger than actually used; as a result, a 1mbyte 370/165 typically would only run four concurrent regions at a time, insufficient to keep the 165 busy and justified. Going to 16mbyte virtual memory could increase the number of regions by a factor of four with little or no paging. Old archived post with pieces of the email exchange (including some HASP/spooling)
https://www.garlic.com/~lynn/2011d.html#73

other recent posts mentioning adding virtual memory to all 370s
https://www.garlic.com/~lynn/2023f.html#96 Conferences
https://www.garlic.com/~lynn/2023f.html#90 Vintage IBM HASP
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023f.html#24 Video terminals
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#71 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2023b.html#44 IBM 370
https://www.garlic.com/~lynn/2023b.html#41 Sunset IBM JES3
https://www.garlic.com/~lynn/2023b.html#24 IBM HASP (& 2780 terminal)
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#50 370 Virtual Memory Decision
https://www.garlic.com/~lynn/2023.html#34 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive

--
virtualization experience starting Jan1968, online at home since Mar1970

Microcode Development and Writing to Floppies

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Microcode Development and Writing to Floppies
Date: 15 Nov, 2023
Blog: Facebook
San Jose had a large, complex 3880 microcode development application (it ran to several hundred text decks) that required MVS to run. The plant site's primary MVS datacenter floor space was maxed out ... and they were looking for capacity. This was as internal IBM was starting to deploy lots of VM/4341s out in (non-datacenter) departmental areas (five 4341s had higher aggregate throughput than a 3033, were much less expensive, and required less floor space and environmentals), and there was a suggestion that they could also deploy MVS on 4341s.

Some analysis was done for the executives looking at the feasibility. One of the issues was mapping 168-3 & 3033 CPU use to the 4341. However, they just showed the MVS "capture" CPU ... but those MVS systems were running 40%-50% "capture ratio", much of the rest being VTAM (aka, full CPU was at least twice the captured CPU they were showing). The other issue was that the departmental VM/4341s were running scores of systems per support person while MVS was still running dozens of people per system. The Los Gatos lab then looked at what it would take to get the application running on CMS.
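
The capture-ratio arithmetic, worked out (Python; the 30% reported figure is a hypothetical example, the 40%-50% ratios are from the text above):

# capture ratio = captured CPU / total CPU, so total = captured / ratio;
# at 40%-50% capture, true CPU use is 2x-2.5x what the reports showed
captured_pct = 30.0                      # hypothetical reported MVS CPU
for ratio in (0.5, 0.4):
    print(captured_pct / ratio)          # 60.0, 75.0 ... the true CPU use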

CMS had about 64kbytes of OS/360 emulation ... they eventually got the application running with 12kbytes of additional OS/360 emulation (and then found they could make significant enhancements that weren't possible in the MVS environment, also supporting many more users with VM/370 at better response).

Note in the wake of Future System implosion
http://www.jfsowa.com/computer/memo125.htm

and the mad rush to restart 370 efforts, the head of POK managed to convince corporate to kill the VM370 product, shut down the development group, and transfer all the people to POK for MVS/XA. At the time, they had a large body of new, additional OS/360 emulation, which appeared to just evaporate with the shutdown. Eventually Endicott managed to save the VM370 product mission, but had to reconstitute a development group from scratch (and Endicott was more interested in DOS/VS emulation than OS/360).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

kildall/cp/m trivia:

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP67/CMS (precursor to VM370) at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

early in ACORN (IBM/PC), Boca said that they weren't interested in software ... and an ad-hoc IBM group of some 20-30 people was formed in silicon valley to do software; they would touch base with Boca every month to make sure nothing had changed. Then one month, Boca changed its mind and said that if you are doing ACORN software, you have to move to Boca ... and the whole effort imploded.

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS versus VM370, PROFS and HONE

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS versus VM370, PROFS and HONE
Date: 16 Nov, 2023
Blog: Facebook
when I joined IBM, one of my hobbies was enhanced operating systems for internal systems, and HONE was a long-time customer (it started in the US, then expanded into the world-wide, online sales&marketing support system).

Note with the implosion of Future System project in the mid-70s
http://www.jfsowa.com/computer/memo125.htm
and the mad rush to restart 370 efforts, the head of POK managed to convince corporate to kill the VM370 product, shut down the development group, and transfer all the people to POK for MVS/XA. At the time, they had a large body of new, additional OS/360 emulation, which appeared to just evaporate with the shutdown. Eventually Endicott managed to save the VM370 product mission, but had to reconstitute a development group from scratch.

POK had done a virtual machine tool in support of MVS/XA development that was never intended to ship to customers. However, customers weren't moving to MVS/XA as planned ... somewhat similar to a decade earlier when they weren't moving to MVS ... highlighted in this SHARE/HASP song:
http://www.mxg.com/thebuttonman/boney.asp

Amdahl was having somewhat better success because they had implemented "HYPERVISOR", a virtual machine subset done completely in microcode, allowing MVS and MVS/XA to be run concurrently on the same machine. This was the motivation for releasing the VMTOOL, first as VM/MA (migration aid) and then as VM/SF (system facility) ... which required the SIE instruction for virtual machine operation. Note, SIE was interesting since it was never intended for production operation: with the limited 3081 microcode space, the SIE instruction microcode had to be swapped in and out. Old archived email by a trout/3090 engineer bragging that the 3090 SIE would be implemented (somewhat) for performance, or at least better than the 3081 implementation, not requiring swapping of the microcode
https://www.garlic.com/~lynn/2006j.html#email810630
also
https://www.garlic.com/~lynn/2007c.html#email860121

also note that while Amdahl had done the (hardware/microcode) HYPERVISOR in the early 80s, IBM wasn't able to respond with PRSM/LPAR on the 3090 until the late 80s.

trivia: in the mid-70s, Endicott had con'ed me into helping with VM ECPS for the 138/148 ... old email with the initial analysis:
https://www.garlic.com/~lynn/94.html#21

In the early 80s, I got permission to present how ECPS was implemented at user group meetings. After the monthly BAYBUNCH meetings (hosted by SLAC), Amdahl people would corner me, describing how they were implementing HYPERVISOR in MACROCODE and asking for suggestions.

HONE trivia: between the VM370 "death" and VM/XA, HONE would be periodically bullied to migrate off VM370 to MVS. There were also a couple of sequences where a branch manager would be promoted to hdqtrs as an executive whose area included the HONE operation. He would be horrified to discover that HONE was VM370-based and would figure his career would be made if HONE was migrated to MVS while he was the executive in charge. All HONE resources would be allocated to the MVS migration ... after a year or so it would be decided that it wouldn't work, the effort would be declared a success, and the executive would be promoted (ie, organization "heads roll uphill"). Finally, in the 1st half of the 80s, somebody decided that they couldn't migrate HONE to MVS because it was running my enhanced operating system. They instructed HONE to migrate to a "vanilla" VM370 (because otherwise what would they do if I was hit by a bus), figuring that it would then be easier to migrate to MVS.

recent related post about IBM controller microcode development:
https://www.garlic.com/~lynn/2023f.html#103 Microcode Development and Writing to Floppies

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
360/370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

some posts mentioning: hypervisor, macrocode, ecps, sie, vm/ma, vm/sf, lpar
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit

some recent posts mentioning PROFS
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA

--
virtualization experience starting Jan1968, online at home since Mar1970

360/67 Virtual Memory

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/67 Virtual Memory
Date: 17 Nov, 2023
Blog: Facebook
... lots of univs were sold (virtual memory) 360/67s for tss/360 ... tss/360 never really came to production fruition ... and so many places just used it as a 360/65 for os/360. stanford (ORVYL) and univ of mich. (MTS) did their own virtual memory systems for the 360/67 (stanford later ported the wylbur editor to MVS). IBM Cambridge Science Center had modified a 360/40 with virtual memory and implemented CP/40 ... later, when the 360/67 standard with virtual memory became available, CP/40 morphs into CP/67 (I was an undergraduate at a univ that had one of the 360/67s and a fulltime employee responsible for os/360; then CSC came out and installed CP67, the 3rd installation after CSC itself and MIT Lincoln Labs ... I mostly played with it during my 48hr weekend dedicated time)

ORVYL and WYLBUR
https://en.wikipedia.org/wiki/ORVYL_and_WYLBUR
more documents
https://web.stanford.edu/dept/its/support/wylorv/
ORVYL for 370
https://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML

Univ of Michigan MTS
https://en.wikipedia.org/wiki/Michigan_Terminal_System

CP67
https://en.wikipedia.org/wiki/CP-67
CP/CMS
https://en.wikipedia.org/wiki/CP/CMS
Melinda's virtual machine history page/documents
http://www.leeandmelindavarian.com/Melinda#VMHist

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was the largest in the world, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. The disaster plan was to duplicate Renton up at the new 747 plant at Paine field in Everett (Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter). Somebody recently commented that Boeing was getting 360/65s like other companies bought keypunches.

trivia: MTS was originally scaffolded off of MIT Lincoln Labs LLMPS
https://web.archive.org/web/20110111202017/http://archive.michigan-terminal-system.org/myths

Germ of Truth. Early versions of what became UMMPS were based in part on LLMPS from MIT's Lincoln Laboratories. Early versions of what would become MTS were known as LTS. The initial "L" in LTS and in LLMPS stood for "Lincoln".

... snip ...

End of ACS/360: some statements that it was canceled because executives were afraid that it would advance the state-of-the-art too fast and IBM would lose control of the market. Also mentions 360/65 ranked 1st in total operations/sec with 23% (Univac 1108 was 2nd with 14% & CDC 6600 was 3rd with 10%)
https://people.computing.clemson.edu/~mark/acs_end.html

posts mentioning science center
https://www.garlic.com/~lynn/subtopic.html#545tech

a couple posts mentioning 360/67, orvyl, mts, cp/67, llmps, and wylbur
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2016c.html#6 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv

--
virtualization experience starting Jan1968, online at home since Mar1970

360/67 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/67 Virtual Memory
Date: 17 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory

MIT Urban lab in tech sq (across the quad from 545, which had both Multics and the science center) had a 360/67 running cp67 ... story about making a mod to CP67 that crashed it 27 times in a single day (automatic reboot/start taking a couple minutes)
http://www.multicians.org/thvv/360-67.html
Multics was still crashing too, but salvaging the filesystem was taking an hr or more ... the CP67 example prompting the new storage system
https://www.multicians.org/nss.html

note the folklore that some of the Multics Bell people did a simplified Multics as Unix ... with some features adopted from Multics (including filesystem salvage).

The Urban lab crashing was partly my fault. CP67 as installed at the univ. had 1052 & 2741 support, but the univ. had TTY/ASCII terminals, so I added TTY support (which played some games with one-byte fields for line lengths ... aka TTY lines were less than 80), and it was picked up and distributed by the science center. Somebody down at Harvard using the Urban lab machine got an ASCII terminal with 1200 line length; the Urban lab change for it overlooked the games played with one-byte fields for line lengths, which resulted in invalid line length calculations.
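
The bug is classic one-byte truncation: 1200 doesn't fit in 8 bits, so the stored length wraps mod 256. A tiny illustration (Python; hypothetical field handling, not the actual CP67 code):

# line length kept in a one-byte field: values above 255 silently wrap
def store_line_length(length):
    return length & 0xFF           # what fits in the one-byte field

print(store_line_length(80))       # 80  ... fine for <80 TTYs
print(store_line_length(1200))     # 176 ... 1200 % 256: a garbage length,
                                   # downstream length calculations go invalid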

Other trivia: science center CP67 had profs, staff, and student users from boston/cambridge area univs, and we had to do extra-special security: 1) we had a distributed development project with Endicott doing CP67 mods supporting 370 virtual memory architecture virtual machines ... before 370 virtual memory was announced, and 2) the science center had done the APL\360 port to CMS as CMS\APL (storage management redone from 16kbyte swapped workspaces to large, demand-paged virtual memory workspaces, plus an API supporting system services like file I/O ... enabling lots of real-world apps), and Armonk business planners were using it remotely, having loaded the most valuable corporate data (detailed customer info files).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

... another 360/67 ... before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP67/CMS
https://en.wikipedia.org/wiki/CP-67
at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

posts mentioning MIT Urban lab cp67:
https://www.garlic.com/~lynn/2023f.html#61 The Most Important Computer You've Never Heard Of
https://www.garlic.com/~lynn/2022h.html#122 The History of Electronic Mail
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#94 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022.html#127 On why it's CR+LF and not LF+CR [ASR33]
https://www.garlic.com/~lynn/2021e.html#45 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2020.html#9 IBM timesharing terminal--offline preparation?
https://www.garlic.com/~lynn/2019e.html#133 IBM system/360 ad
https://www.garlic.com/~lynn/2018d.html#68 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018.html#99 Prime
https://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017h.html#44 VM/370 45th Birthday
https://www.garlic.com/~lynn/2017e.html#21 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#77 Honeywell 200
https://www.garlic.com/~lynn/2015c.html#103 auto-reboot
https://www.garlic.com/~lynn/2015c.html#57 The Stack Depth
https://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014g.html#24 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2014d.html#39 [CM] Ten recollections about the early WWW and Internet
https://www.garlic.com/~lynn/2013m.html#38 Quote on Slashdot.org
https://www.garlic.com/~lynn/2013l.html#24 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013h.html#84 Minicomputer Pricing
https://www.garlic.com/~lynn/2013f.html#63 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#30 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013b.html#52 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2012k.html#84 Did Bill Gates Steal the Heart of DOS?
https://www.garlic.com/~lynn/2012k.html#17 a clock in it, was Re: Interesting News Article
https://www.garlic.com/~lynn/2012j.html#22 Interesting News Article
https://www.garlic.com/~lynn/2011k.html#31 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011j.html#12 program coding pads
https://www.garlic.com/~lynn/2011h.html#44 OT The inventor of Email - Tom Van Vleck
https://www.garlic.com/~lynn/2011h.html#26 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011d.html#17 The first personal computer (PC)
https://www.garlic.com/~lynn/2011.html#15 545 Tech Square
https://www.garlic.com/~lynn/2010q.html#22 Who hasn't caused an outage? What is the worst thing you have done?
https://www.garlic.com/~lynn/2010o.html#48 origin of 'fields'?
https://www.garlic.com/~lynn/2010l.html#11 Titles for the Class of 1978
https://www.garlic.com/~lynn/2010k.html#25 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2010j.html#51 Information on obscure text editors wanted
https://www.garlic.com/~lynn/2010d.html#14 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#40 PC history, was search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2009k.html#1 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009.html#81 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2008s.html#48 New machine code
https://www.garlic.com/~lynn/2008l.html#59 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2007s.html#30 Intel Ships Power-Efficient Penryn CPUs
https://www.garlic.com/~lynn/2007o.html#58 ACP/TPF
https://www.garlic.com/~lynn/2007l.html#11 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007h.html#16 conformance
https://www.garlic.com/~lynn/2007g.html#44 1960s: IBM mgmt mistrust of SLT for ICs?
https://www.garlic.com/~lynn/2007g.html#37 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007c.html#41 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2006r.html#41 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006p.html#50 what's the difference between LF(Line Fee) and NL (New line) ?
https://www.garlic.com/~lynn/2006n.html#49 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks
https://www.garlic.com/~lynn/2006k.html#32 PDP-1
https://www.garlic.com/~lynn/2006c.html#28 Mount DASD as read-only
https://www.garlic.com/~lynn/2006c.html#18 Change in computers as a hobbiest
https://www.garlic.com/~lynn/2005s.html#46 Various kinds of System reloads
https://www.garlic.com/~lynn/2005o.html#25 auto reIPL
https://www.garlic.com/~lynn/2005j.html#48 Public disclosure of discovered vulnerabilities
https://www.garlic.com/~lynn/2005c.html#58 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005b.html#30 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004o.html#45 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004m.html#26 Shipwrecks
https://www.garlic.com/~lynn/2004l.html#18 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2004k.html#43 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#47 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2003p.html#23 1960s images of IBM 360 mainframes
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2003g.html#5 Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003.html#73 Card Columns
https://www.garlic.com/~lynn/2002l.html#56 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002i.html#5 DCAS [Was: Re: 'atomic' memops?]
https://www.garlic.com/~lynn/2002f.html#38 Playing Cards was Re: looking for information on the IBM
https://www.garlic.com/~lynn/2001i.html#32 IBM OS Timeline?
https://www.garlic.com/~lynn/2001g.html#52 Compaq kills Alpha
https://www.garlic.com/~lynn/2001f.html#78 HMC . . . does anyone out there like it ?
https://www.garlic.com/~lynn/2001c.html#36 How Commercial-Off-The-Shelf Systems make society vulnerable
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000b.html#77 write rings
https://www.garlic.com/~lynn/2000.html#30 Computer of the century
https://www.garlic.com/~lynn/99.html#207 Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/99.html#53 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#44 Internet and/or ARPANET?

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 18 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#95 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#101 CSC, HONE, 23Jun69 Unbundling, Future System

... in the z196 time-frame, a max configured z196 was $30M and was rated at 50BIPS (industry benchmark that counted number of iterations compared to a 370/158 assumed to be 1MIP). Same time-frame, a cloud blade (a typical megadatacenter having half a million or more such blades) was an E5-2600 benchmarked at 500BIPS (same benchmark, each ten times a max z196), for which IBM had a base list price of $1815. This was before the server chip press said at least half their product was being shipped directly to megadatacenters, and before IBM sold off its server business. Note that for a couple decades, the large cloud operators have claimed that they assemble their own servers at 1/3rd the cost of brand name servers ... potentially $605, or $1.21/BIPS (compared to $600,000/BIPS for the z196). A cloud megadatacenter with half a million blades @$605 would be $300M (the price of ten max configured z196s) but with the aggregate BIPS of five million max configured z196s. The spread today between max configured IBM mainframe and server blades seems to have only increased.
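
The price/performance arithmetic as a quick check (Python, using just the numbers quoted above):

  z196_price, z196_bips = 30_000_000, 50
  blade_price, blade_bips = 605, 500        # 1/3rd of the $1815 base list
  print(z196_price / z196_bips)             # -> $600,000/BIPS
  print(blade_price / blade_bips)           # -> $1.21/BIPS
  blades = 500_000                          # one megadatacenter
  print(blades * blade_price)               # -> ~$300M, price of ten max z196s
  print(blades * blade_bips / z196_bips)    # -> 5,000,000 max-z196 equivalents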

Note cloud megadatacenters had so drastically cut their system costs that power & cooling were becoming more significant ... and they have been putting increasing "green" pressure on chip vendors ... watts/BIPS becoming increasingly important (threatening to shift from i86 chips to ARM chips designed for battery operation and power efficiency).

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 19 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#95 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#101 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#107 CSC, HONE, 23Jun69 Unbundling, Future System

Note that the science center originally wanted a 360/50 to modify, adding hardware virtual memory ... but couldn't get any since all spare 360/50s were going to the FAA ATC project ... so had to settle for a 360/40. They claimed that was fortunate, since it was easier to implement for the 360/40 (than it would have been for the 360/50). They then implemented CP40/CMS ... which morphed into CP67/CMS when the 360/67 (standard with virtual memory) became available. Comeau's history of CP40 at SEAS 82:
https://www.garlic.com/~lynn/cp40seas1982.txt
other history
http://www.leeandmelindavarian.com/Melinda#VMHist

Within a year of my taking intro fortran/computers, the univ's 709/1401 was replaced with a 360/67 intended for tss/360 ... but tss/360 never came to production fruition, so it ran as a 360/65 with os/360 ... and I was hired fulltime responsible for os/360. The univ. shutdown the datacenter on weekends, and I had the place dedicated (although 48hrs w/o sleep made monday classes hard). One weekend, the bell rang and everything stopped ... I tried everything ... but the best I got was the bell ringing ... finally I hit the 1052 with my fist and the paper dropped out. The end of the paper had moved past the finger sensor (causing unit check and intervention required) ... but there was enough friction that it hadn't dropped all the way out.

Later, after graduating and joining the science center, I learned the CE kept a spare 1052-7 ... because people would regularly fist the keyboard ... faster to swap in the spare ... and repair the broken one offline. Also, very early on CP67 had a feature where the operator could log in on 2741 terminals in the machine room ... not requiring the 1052-7 to be operational.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some specific posts mentioning faa/atc, csc and cp40
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022b.html#54 IBM History
https://www.garlic.com/~lynn/2022b.html#22 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#20 CP-67
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2019e.html#121 Virtualization
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2014j.html#33 Univac 90 series info posted on bitsavers
https://www.garlic.com/~lynn/2014g.html#99 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2009k.html#55 Hercules; more information requested
https://www.garlic.com/~lynn/2007i.html#14 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2007d.html#52 CMS (PC Operating Systems)

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 19 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#95 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#101 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#107 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#108 CSC, HONE, 23Jun69 Unbundling, Future System

When CP67 was initially installed at the univ, I mostly played with it on my dedicated weekend time ... the rest of the time the 360/67 ran as a 360/65 with OS/360. Initially I concentrated on optimizing CP67 running OS/360 in a virtual machine. A jobstream that ran 322secs on the bare machine ran 856secs under CP67 (534 CP67 CPU secs of overhead). After a few months I had it down to 435secs under CP67 (113 CP67 CPU secs).
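
Note the elapsed times decompose as bare-machine time plus CP67 CPU overhead (quick check):

  bare = 322                    # jobstream secs on the bare machine
  print(bare + 534)             # -> 856 secs, original CP67 run
  print(bare + 113)             # -> 435 secs, after the optimization work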

Then I did a lot of work on scheduling and page replacement algorithms and the I/O system. Originally I/O queuing was FIFO with a single page transfer per I/O. I modified disk queuing to ordered seek and would chain multiple queued page transfers per I/O (for the same disk cylinder, and for all queued 2301 requests, optimized to maximize transfers per rotation). Originally a 2301 peaked around 70 page transfers/sec ... optimized, it could peak around 270/sec, nearly channel speed. Archived post with part of an old SHARE presentation describing some of the univ. os/360 and cp/67 work:
https://www.garlic.com/~lynn/94.html#18
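
A minimal sketch of the disk-queuing idea in Python (illustrative; the actual code was CP67 assembler building channel programs): order queued requests by cylinder instead of FIFO, and chain all requests for the same cylinder into a single I/O:

  from collections import defaultdict

  def order_and_chain(requests, arm):
      # requests: list of (cylinder, page) pairs; arm: current arm position
      by_cyl = defaultdict(list)
      for cyl, page in requests:
          by_cyl[cyl].append(page)
      ahead = sorted(c for c in by_cyl if c >= arm)
      behind = sorted((c for c in by_cyl if c < arm), reverse=True)
      # one chained I/O per cylinder instead of one I/O per page
      return [(cyl, by_cyl[cyl]) for cyl in ahead + behind]

  print(order_and_chain([(30, 1), (10, 2), (30, 3), (80, 4)], arm=25))
  # -> [(30, [1, 3]), (80, [4]), (10, [2])]

(The 2301 drum case is the same chaining idea without the seek: chain all queued transfers, ordered by rotational position, to maximize transfers per revolution.)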

After graduating and joining the science center, I upgraded the Cambridge system with all my enhancements ... and most were included in the customer distribution.

1977, I transfer to San Jose Research and do some work with Jim Gray and Vera Watson on the original SQL/relational System/R. Jim leaves IBM for Tandem in fall of 1980 and foists some amount of stuff on me. A year later, at the Dec81 ACM SIGOPS meeting, he asks me to help a Tandem co-worker get his Stanford PhD that heavily involved GLOBAL LRU (the "local LRU" forces from the 60s academic work were heavily lobbying Stanford to not award a PhD for anything involving GLOBAL LRU). Jim knew I had detailed stats on the Cambridge/Grenoble global/local LRU comparison (showing global significantly outperformed local).

Early 70s, the IBM Grenoble Science Center had a 1mbyte 360/67 (155 4k pageable pages after fixed memory) running 35 CMS users; they had modified "standard" CP67 with a working set dispatcher and local LRU page replacement ... corresponding to the 60s academic papers. I was at Cambridge, which had a 768kbyte 360/67 (104 4k pageable pages, only 2/3rds the number of Grenoble) running 80 CMS users with similar kinds of workloads and similar response ... better throughput (more than twice as many users) running the "standard" CP67 (global LRU) that I had originally done as an undergraduate in the 60s. In addition to the Grenoble APR73 CACM article, I also had loads of detailed background performance data from Grenoble and Cambridge. IBM initially blocked me from responding for nearly a year; I hoped it was to punish me for being blamed for online computer conferencing on the internal network, and not that they were taking sides in an academic dispute.
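
The global/local distinction in a few lines (a sketch, not the CP67 implementation):

  # local LRU: the faulting user's victim comes from their own partition;
  # global LRU: the victim is the least-recently-used page system-wide,
  # so active users automatically borrow frames from inactive ones
  def local_lru_victim(partitions, user):
      return min(partitions[user], key=lambda p: p[1])

  def global_lru_victim(partitions):
      return min((p for pages in partitions.values() for p in pages),
                 key=lambda p: p[1])

  partitions = {"userA": [("a1", 10), ("a2", 95)],   # (page, last use)
                "userB": [("b1", 50), ("b2", 90)]}
  print(local_lru_victim(partitions, "userB"))   # -> ('b1', 50), userB only
  print(global_lru_victim(partitions))           # -> ('a1', 10), system-wide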

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Some posts mentioning Cambridge&Grenoble science centers, GLOBAL/LOCAL LRU page replacement, Stanford PHD
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2021j.html#19 Windows 11 is now available
https://www.garlic.com/~lynn/2021j.html#18 Windows 11 is now available
https://www.garlic.com/~lynn/2018f.html#63 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018d.html#28 MMIX meltdown
https://www.garlic.com/~lynn/2016c.html#0 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015c.html#66 Messing Up the System/360
https://www.garlic.com/~lynn/2014m.html#138 How hyper threading works? (Intel)
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2013l.html#25 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
https://www.garlic.com/~lynn/2011l.html#6 segments and sharing, was 68000 assembly language programming
https://www.garlic.com/~lynn/2010f.html#85 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2008h.html#79 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2007s.html#5 Poster of computer hardware events?
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006i.html#36 virtual memory
https://www.garlic.com/~lynn/2006i.html#31 virtual memory
https://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/99.html#18 Old Computers

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 20 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#95 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#101 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#107 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#108 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System

IBM Pisa and Grenoble Science Centers ran CP67/CMS on 360/67.

The Pisa Science Center had added SPM to CP67/CMS (a superset of a combination of the later VM370 VMCF, IUCV & SMSG) and RSCS/VNET had support for it ... including things like forwarding "instant messages" over the internal network. POK then ported SPM to VM370. Circa 1980, the author of REXX did a multi-user spacewar game and used SPM for communication between the server and CMS 3270 clients (since RSCS/VNET supported SPM, clients on different systems on the internal network could play). trivia: almost immediately robot clients appeared, beating human players (with their faster response). The server was then upgraded to increase energy use non-linearly as client response times dropped below the human threshold.
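
The leveling mechanism amounts to a non-linear energy tax on superhuman command rates. A sketch with assumed constants (the actual server values aren't documented here):

  HUMAN_THRESHOLD = 0.25    # secs between commands; assumed value

  def energy_cost(base, interval):
      if interval >= HUMAN_THRESHOLD:
          return base
      return base * (HUMAN_THRESHOLD / interval) ** 2   # non-linear penalty

  print(energy_cost(1.0, 0.30))    # human-speed play -> 1.0
  print(energy_cost(1.0, 0.025))   # robot at 10x human speed -> 100.0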

For whatever reason SPM never shipped to customers ... but starting with CP67, it also came to be used for automated operator applications.

I mentioned Grenoble in this upthread post
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System

I mentioned that before I graduated, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of BCS
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory

There was lots of politics between the Renton datacenter director and the Boeing CFO, who only had a 360/30 up at Boeing field for payroll ... although they enlarged the machine room to install a 360/67 for me to play with when I wasn't doing other stuff.

Boeing Huntsville previously had a large 360/67 with several 2250M1 graphic screens for TSS/360 CAD/CAM. TSS/360 never came to production fruition, so it ran with OS/360. Long-running interactive CAD/CAM jobs severely aggravated MVT's poor storage management, and Huntsville modified MVT13 with virtual memory support ... it didn't do any paging, but used virtual memory addressing to reorganize storage to somewhat offset the poor MVT storage management ... somewhat a precursor to the decision to add virtual memory to all 370s to compensate for poor MVT storage management.

A decade ago, I was asked to track down the decision to add virtual memory to all 370s; I found a staff member to the executive making the decision. Basically, MVT storage management was so bad that regions frequently needed to be specified four times larger than used; as a result, a 1mbyte 370/165 typically would only run four concurrent regions at a time, insufficient to keep the 165 busy and justified. Going to a 16mbyte virtual address space could increase the number of regions by a factor of four with little or no paging. Old archived post with pieces of the email exchange (including some HASP/spooling):
https://www.garlic.com/~lynn/2011d.html#73
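
The region arithmetic, with illustrative numbers matching the reasoning above:

  real_mb   = 1.0              # 370/165 real storage
  region_mb = 0.25             # typical region specification (4x actual use)
  actual_mb = region_mb / 4    # storage a region actually touches
  print(real_mb / region_mb)   # -> 4 concurrent regions in real storage
  # with a 16mbyte virtual address space the specification no longer
  # consumes real storage, only touched pages do:
  print(real_mb / actual_mb)   # -> 16 regions (4x) with little or no paging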

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

past posts mentioning Boeing Huntsville 360/67
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles

posts mentioning Pisa, SPM, multi-user spacewar
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#81 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#33 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2018e.html#104 The (broken) economics of OSS
https://www.garlic.com/~lynn/2016c.html#1 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2015d.html#9 PROFS
https://www.garlic.com/~lynn/2014g.html#93 Costs of core
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013j.html#38 1969 networked word processor "Astrotype"
https://www.garlic.com/~lynn/2013i.html#27 RBS Mainframe Meltdown: A year on, the fallout is still coming
https://www.garlic.com/~lynn/2012j.html#7 Operating System, what is it?
https://www.garlic.com/~lynn/2012e.html#64 Typeface (font) and city identity
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2012d.html#24 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011i.html#66 Wasn't instant messaging on IBM's VM/CMS in the early 1980s
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2010k.html#33 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2010h.html#0 What is the protocal for GMT offset in SMTP (e-mail) header time-stamp?

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 20 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2023e.html#21 Copyright Software
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023e.html#23 Copyright Software
https://www.garlic.com/~lynn/2023e.html#28 Copyright Software
https://www.garlic.com/~lynn/2023e.html#29 Copyright Software
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software

IBM charged for the machine ... with no additional charges for software, maintenance, SE technical services, etc (and no copyright notices). Litigation resulted in separate charging (23jun1969 announce) .... as part of the result, IBM also started including copyright notices in all software ... even kernel software, which it was still providing for free. The rise of clone 370 systems and the Future System implosion resulted in the decision (some 6-7yrs after unbundling) to transition to charging for kernel/system software. By the early 80s, the transition to charging for all kernel software had completed and the customer "OCO-wars" started (customers complaining about IBM "object code only", source no longer available).

23june1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

CSC, HONE, 23Jun69 Unbundling, Future System

From: Lynn Wheeler <lynn@garlic.com>
Subject: CSC, HONE, 23Jun69 Unbundling, Future System
Date: 20 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#94 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#95 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#101 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#107 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#108 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System

trivia: the other part of automated operator: SPM could be used by an application to programmatically capture all messages sent to a user (instant messages from other users .... and all kernel/system messages), so the operator userid could run SPM automated operator applications. CP67 had automatic re-ipl and restart after a crash .... the system was up and available for users to logon (part of the extensions for dark room, no human, 7x24, off-shift operation).

However, the operator wasn't logged on; a human was required to log the operator on to bring up any automated operator applications.

After joining IBM, I was doing lots of algorithm & performance work and constantly benchmarking. Eventually I implemented CP67 automatic benchmarking, which included an "autolog" command for the operator at IPL. For automatic benchmarking, the operator's automatic script would have a list of benchmarks to run ... which would "autolog" simulated users running benchmark scripts.

Very soon, standard production systems had adopted the IPL "autolog" operator command with production scripts ... and I ported it to VM370 about the same time as POK ported SPM to VM370. While SPM never shipped to customers, the autolog command eventually shipped in a customer release (including the automatic autolog of the operator at IPL).
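
A minimal sketch of the flow (hypothetical names; the real thing was CP67/VM370 commands and EXEC scripts): the system autologs the operator at IPL, and the operator's script then autologs either production service userids or simulated benchmark users:

  def autolog(userid, script=None):
      # stand-in for the AUTOLOG command: log a userid on unattended
      print("AUTOLOG", userid, script or "")

  autolog("OPERATOR")    # automatic at IPL
  for i, bench in enumerate(["fortran", "edit", "pagestress"]):
      autolog("BENCH%02d" % i, script=bench)   # simulated benchmark users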

automated benchmark posts
https://www.garlic.com/~lynn/submain.html#benchmark
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning SPM, AUTOLOG, and automated operator
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2017k.html#37 CMS style XMITMSG for Unix and other platforms
https://www.garlic.com/~lynn/2016c.html#1 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?

Besides SPM, AUTOLOG, and automated operator, this post describes CP67/VM370 "SPMS", a CMS application that captures messages so they can be handled by EXECs:
https://www.garlic.com/~lynn/2006w.html#16 intersection between autolog command and cmsback (more history)

--
virtualization experience starting Jan1968, online at home since Mar1970

360/67 Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/67 Virtual Memory
Date: 20 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#106 360/67 Virtual Memory

Late 60s, there were 12 people in the CSC CP67/CMS group, as well as two commercial CP67 timesharing service bureau spinoffs of the science center ... and CP67 was running on many more 360/67s than TSS/360 ... while the TSS/360 group had 1200 people.

Early CP/67 running a 35-user benchmark had better throughput and response than TSS/360 running an equivalent benchmark with 4 users.

TSS/360 would claim it had leading edge multiprocessing support because a two-processor system had 3.9 times the throughput of a single processor. The actual reason was that it had a hugely bloated kernel and applications ... and was page thrashing in 1mbyte (single processor) ... while 2mbytes (two processors) was starting to be sufficient memory for the applications to execute (though it still didn't beat CP67).
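
A toy model of that super-linear result (assumed numbers, not TSS/360 measurements): a thrashing uniprocessor delivers only a fraction of its capacity; once doubled memory holds the bloated working set, both CPUs run near full speed:

  def throughput(cpus, mbytes, working_set_mb=1.5, thrash_factor=0.5):
      return cpus * (1.0 if mbytes >= working_set_mb else thrash_factor)

  one = throughput(1, 1.0)   # 0.5: thrashing in 1mbyte
  two = throughput(2, 2.0)   # 2.0: working set now fits
  print(two / one)           # -> 4.0, about the claimed 3.9x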

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

some posts mentioning 1200 in tss/360 group and 12 in cp/67 group
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2022f.html#34 Vintage Computing
https://www.garlic.com/~lynn/2022d.html#17 Computer Server Market
https://www.garlic.com/~lynn/2019d.html#67 Facebook Knows More About You Than the CIA
https://www.garlic.com/~lynn/2019d.html#59 IBM 360/67
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2014l.html#20 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2013m.html#37 Why is the mainframe so expensive?
https://www.garlic.com/~lynn/2013l.html#24 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013h.html#45 Storage paradigm [was: RE: Data volumes]
https://www.garlic.com/~lynn/2013h.html#16 How about the old mainframe error messages that actually give you a clue about what's broken
https://www.garlic.com/~lynn/2011m.html#6 What is IBM culture?
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2007m.html#60 Scholars needed to build a computer history bibliography
https://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2004f.html#55 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2002n.html#62 PLX

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 20 Nov, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2023e.html#21 Copyright Software
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023e.html#23 Copyright Software
https://www.garlic.com/~lynn/2023e.html#28 Copyright Software
https://www.garlic.com/~lynn/2023e.html#29 Copyright Software
https://www.garlic.com/~lynn/2023e.html#33 Copyright Software
https://www.garlic.com/~lynn/2023f.html#111 Copyright Software

Nearly all 360 models implemented instructions in microcode. The 360/67 was a 360/65 with virtual memory ... but many 360/67s also shipped with the MIT Lincoln Labs "Search List" (microcoded) instruction (so many that the CP67 kernel used it, with a program check handler that caught an invalid "search list" and simulated the instruction if it wasn't installed). The IBM 360/67 "blue card" also documented it (when I joined IBM, I got a copy of the blue card from one of the inventors of GML).
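
The trap-and-simulate pattern in outline (hypothetical Python; the opcode value and handler details are placeholders, not the real RPQ):

  SEARCH_LIST_OPCODE = 0xB2    # placeholder, not the real encoding

  def simulate_search_list(psw, storage):
      pass   # walk the list in storage exactly as the microcode would

  def program_check_handler(psw, storage):
      if storage[psw["address"]] == SEARCH_LIST_OPCODE:
          simulate_search_list(psw, storage)   # software fallback
          psw["address"] += 4                  # resume past the instruction
      else:
          raise Exception("operation exception at %x" % psw["address"])

  psw = {"address": 0}
  program_check_handler(psw, [SEARCH_LIST_OPCODE])
  print(psw["address"])   # -> 4: simulated and resumed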

Boeblingen did the 370 115 & 125, which had a memory bus with nine positions for microprocessors. All 115 microprocessors were the same, just with different microcode; the microprocessors were about 800 KIPS native, and the 370 microcode averaged ten native instructions for every 370 instruction (aka about an 80KIPS 370). The 125 was the same except the microprocessor that ran the 370 microcode was about 1.2MIPS native, resulting in a 120KIPS 370.
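
The KIPS arithmetic (using just the numbers above):

  native_kips = 800            # each 115 microprocessor
  ratio = 10                   # avg native instructions per 370 instruction
  print(native_kips / ratio)   # -> 80 KIPS of 370 (the 115)
  print(1200 / ratio)          # -> 120 KIPS of 370 (the 125)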

Spring 1975, after Future System implodes ... Boeblingen cons me into working on the design/spec for a 125-11 (multiprocessor) with five microprocessors running 370 microcode, and Endicott cons me into working on ECPS for the 370 138&148 (follow-on to 135&145): identify the highest-executed 6kbytes of VM370 kernel code segments for moving into microcode (on an approx 1:1 byte basis, getting a 10 times throughput increase). This is an archived post with the initial analysis showing 6kbytes accounting for 80% of kernel execution:
https://www.garlic.com/~lynn/94.html#21
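
The selection amounts to sorting kernel code paths by measured CPU use and taking the hottest until the microcode budget is full. A sketch with made-up numbers (the real data is in the archived post above):

  def pick_for_microcode(paths, budget=6 * 1024):
      # paths: (name, size in bytes, fraction of kernel CPU)
      chosen, used, covered = [], 0, 0.0
      for name, size, frac in sorted(paths, key=lambda p: -p[2]):
          if used + size <= budget:
              chosen.append(name); used += size; covered += frac
      return chosen, used, covered

  demo = [("dispatch", 2048, 0.40), ("page i/o", 2048, 0.28),
          ("free storage", 1024, 0.12), ("spool", 4096, 0.08),
          ("misc", 8192, 0.12)]
  print(pick_for_microcode(demo))
  # -> roughly (['dispatch', 'page i/o', 'free storage'], 5120, 0.80)
  #    ~80% of kernel CPU in under 6kbytes (illustrative numbers)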

I wanted all of the 138/148 ECPS also applied to the five-processor 370 125 machine ... for which I also specified a multiprocessor work queue in microcode (somewhat like the later i432) as well as high-level queued interfaces between the 370s and I/O processors (a little like some of the later 370/XA). Endicott objected that the five-processor 125 would overlap the throughput of their processors and got the 125 effort canceled (in the escalation meetings, I was required to argue both sides).

Early 80s, I got permission to give presentations at user group meetings on how the 138/148 ECPS was done, including the monthly BAYBUNCH meetings hosted by Stanford SLAC. After the SLAC meetings, Amdahl people would corner me for more information. They described how they had created MACROCODE (a 370-like instruction set that ran in microcode mode), initially to respond to the series of minor 3033 microcode changes that were required for MVS to run. It was then being used to implement HYPERVISOR ... a virtual machine subset done entirely in microcode, allowing them to run different operating systems concurrently.

VAMPS 5-processor 370/125 posts
https://www.garlic.com/~lynn/subtopic.html#bounce
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
360&370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

some posts mentioning 125 multiprocessor & 138/148 ECPS
https://www.garlic.com/~lynn/2023f.html#57 Vintage IBM 370/125
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023.html#47 370/125 and MVCL instruction
https://www.garlic.com/~lynn/2022c.html#41 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2021k.html#38 IBM Boeblingen
https://www.garlic.com/~lynn/2021h.html#107 3277 graphics
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021b.html#49 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2020.html#39 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019c.html#33 IBM Future System
https://www.garlic.com/~lynn/2019.html#84 IBM 5100
https://www.garlic.com/~lynn/2018f.html#52 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2018c.html#18 Old word processors
https://www.garlic.com/~lynn/2018b.html#104 AW: mainframe distribution
https://www.garlic.com/~lynn/2017g.html#28 Eliminating the systems programmer was Re: IBM cuts contractor bil ling by 15 percent (our else)
https://www.garlic.com/~lynn/2017.html#74 The ICL 2900
https://www.garlic.com/~lynn/2016d.html#62 PL/I advertising
https://www.garlic.com/~lynn/2016b.html#78 Microcode
https://www.garlic.com/~lynn/2015g.html#91 IBM 4341, introduced in 1979, was 26 times faster than the 360/30
https://www.garlic.com/~lynn/2015c.html#44 John Titor was right? IBM 5100
https://www.garlic.com/~lynn/2015b.html#46 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2015b.html#39 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2010j.html#1 History: Mark-sense cards vs. plain keypunching?
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes

360/67 blue card

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM RAS

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM RAS
Date: 21 Nov, 2023
Blog: Facebook
As an undergraduate, the univ hired me fulltime responsible for the system software, and I did a lot of OS/360 and CP/67 optimization work (that IBM picked up and shipped in products). Before I graduated, I was hired fulltime into the Boeing CFO office to help with the creation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). Just before graduation I accepted a position at the IBM science center (instead of staying at Boeing) ... and for a lark took the IBM programmers aptitude test at a univ. job fair. Somebody from San Jose told me I flunked ... I then told him about my job offer at IBM ... and he was totally bewildered. Much of my time at IBM seemed as if I was battling the bureaucracy ... account of Learson trying to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

When I transferred to SJR, I worked some with Jim Gray on the original SQL/relational System/R. He left for Tandem in fall of 1980 (palming some stuff off on me). At Tandem, he did some studies of availability and service outages and found that hardware reliability had increased to the point that service outages were starting to be dominated by environmental factors (earthquakes, floods, bldg integrity, etc) and people mistakes. Part of his analysis:
https://www.garlic.com/~lynn/grayft84.pdf
also
https://jimgray.azurewebsites.net/papers/TandemTR86.2_FaultToleranceInTandemComputerSystems.pdf

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

... also after transferring to SJR, I got to wander around IBM and customer datacenters in Silicon Valley, including disk engineering & product test (bldgs 14&15) across the street. They were running stand-alone, pre-scheduled, 7x24 testing and mentioned that they had recently tried MVS (for concurrent testing), but MVS had a 15min MTBF (system failure requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor to make it bullet proof and never fail, supporting any amount of on-demand, concurrent testing, greatly improving productivity. A side-effect was I also further cut the pathlength. I did an internal research report on the work, happening to mention the MVS MTBF ... bringing down the wrath of the MVS org on my head (offline I was told they even tried to have me separated from the company). Note later, when 3380s were getting ready to ship, FE had a test suite of 57 (simulated) errors they expected to see; in all 57 cases, MVS was having system failure, and in 2/3rds of the cases there was no evidence of what caused the failure (I didn't feel bad; the joke was that MVS recovery repeatedly covered up the original fault until it was no longer possible to find it).

The downside of supporting disk development was that they got into the habit, anytime there was a problem, of calling me claiming it was my software ... and I would spend increasing amounts of my time playing disk engineer, diagnosing hardware issues.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

late 80s, one day Nick Donofrio
https://www.amazon.com/If-Nothing-Changes-Donofrio-Story-ebook/dp/B0B178D91G/
stopped in Austin and all the local executives were out of town. My wife put together hand-drawn charts and estimates for doing the NYTimes project for Nick ... and he approved it. It possibly contributed to offending so many people in Austin that it was suggested we do the project in San Jose. It started out as HA/6000 for the NYTimes to port their newspaper system (ATEX) from VAXCluster to RS/6000. I renamed it HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, and Ingres, who had VAXcluster support in the same source base with UNIX; I did a cluster API that implemented VAXCluster semantics to simplify the ports). Out marketing, I coined the terms "disaster survivability" and "geographic survivability" (as alternatives to disaster/recovery).

The IBM S/88 (logo'ed fault tolerant vendor) product administrator started taking us around to their customers. He also arranged for me to write a section for the corporate continuous availability strategy document (however, it got pulled when both Rochester/AS400 and POK/mainframe complained that they couldn't meet the requirements).

Early Jan1992, we had a meeting with the Oracle CEO where AWD VP Hester told Ellison that we would have 16-way clusters by mid-92 and 128-way by ye-92. Then during Jan1992 came presentations to FSD about HA/CMP for national labs. FSD then told the Kingston supercomputer lab that FSD was adopting HA/CMP. Within a day or two, cluster scale-up was transferred for announce as the IBM supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific only
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster&geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer Games

From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer Games
Date: 21 Nov, 2023
Blog: Facebook
After joining IBM, I discovered somebody at the science center had ported the PDP1 spacewar
https://www.computerhistory.org/pdp-1/08ec3f1cf55d5bffeb31ff6e3741058a/
https://en.wikipedia.org/wiki/Spacewar%21
to 2250M4 (large graphics screen with 1130 computer)
https://en.wikipedia.org/wiki/IBM_2250
and I would bring in my kids on weekends to play; two players, each getting half the keyboard for control/command sequences (game play not networked)

Later, at SJR, I got to wander around IBM and customer datacenters, including TYMSHARE, whom I would also see at the monthly user group meetings hosted at Stanford SLAC. TYMSHARE started offering their CMS-based online computer conferencing free to the SHARE organization in AUG1976 as VMSHARE, archives here:
http://vm.marist.edu/~vmshare

I cut a deal getting a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on internal datacenters and the internal network (including the world-wide online sales&marketing support HONE systems). The biggest problem was corporate lawyers concerned that internal employees would be contaminated with (unfiltered) customer information. On one visit to TYMSHARE, they demo'ed the ADVENTURE game, which somebody had found on a Stanford PDP10 and ported to CMS. I got a copy of all the files & source and made it available on internal systems and the network (but game play wasn't multi-user or networked).

Internal systems had SPM (originally implemented by the Pisa scientific center for CP/67), an internal system communication facility that also ran over the internal network (supported by RSCS/VNET). Around 1980, the author of REXX created a multi-user client/server spacewar game, and SPM was used to communicate between the CMS 3270 clients and the server (so clients could be on the same machine or anywhere on the internal network). Almost immediately robot clients started appearing, beating human players. The server was then modified to increase power use non-linearly as responses dropped below human response thresholds, to somewhat level the playing field.

a few posts mentioning games, adventure, spacewar
https://www.garlic.com/~lynn/2023b.html#88 Online systems fostering online communication
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2015d.html#9 PROFS
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011n.html#9 Colossal Cave Adventure
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2009j.html#79 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2004c.html#34 Playing games in mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970


