List of Archived Posts

2024 Newsgroup Postings (02/22 - )

Assembler language and code optimization
Vintage REXX
Can a compiler work without an Operating System?
Bypassing VM
"Under God" (1954), "In God We Trust" (1956)
Vintage REXX
"Under God" (1954), "In God We Trust" (1956)
IBM Tapes
IBM User Group SHARE
The Attack of the Killer Micros
Some NSFNET, Internet, and other networking background
3033
3033
rusty iron why ``folklore''?
Machine Room Access
IBM 5100
IBM 5100
IBM 5100
IBM 5100
Machine Room Access
DB2, Spooling, Virtual Memory
HA/CMP
HA/CMP
HA/CMP
HA/CMP
CTSS/7094, Multics, Unix, CP/67
HA/CMP
HA/CMP
DB2
DB2
ACP/TPF
HONE, Performance Predictor, and Configurators
HA/CMP
Internet
Internet
Internet
Internet
Internet
Internet
Tonight's tradeoff
Vintage IBM 3380s
Vintage Mainframe
Vintage Mainframe
Vintage Mainframe
Mainframe Career
Automated Operator
Companies paid top executives more than they paid in US taxes
OS2
Vintage 3033
Vintage 2250
IBM Token-Ring
IBM Token-Ring
IBM Token-Ring
Vintage Mainframe
Vintage Mainframe
IBM Token-Ring
Vintage Mainframe
Vintage RISC
Vintage MVS
Vintage HSDT
Vintage Selectric
Vintage MVS
Vintage Series/1
Computers and Boyd
Private Equity Buying Up Accounting Practices
MVT/SVS/MVS/MVS.XA
New data shows IRS's 10-year struggle to investigate tax crimes
The Communication Group Datacenter Stranglehold
IBM Hardware Stories
3270s For Management
HSDT, HA/CMP, NSFNET, Internet
Vintage Internet and Vintage APL
Vintage Internet and Vintage APL
Vintage IBM, RISC, Internet
Internet DNS Trivia
IBM Financial Engineering
Software vendors dump open source, go for the cash grab
Mexican cartel sending people across border with cash to buy these weapons
ARPANET Directory 1982
Difference between NCP and TCP/IP protocols
IBM DBMS/RDBMS
rusty iron why ``folklore''?
rusty iron why ``folklore''?
rusty iron why ``folklore''?
IBM DBMS/RDBMS
IBM AIX
Vintage BITNET
Dialed in - a history of BBSing
Vintage BITNET
Dialed in - a history of BBSing
IBM User Group Share
7Apr1964 - 360 Announce
IBM User Group Share
PC370
How investment firms shield the ultrawealthy from the IRS
Ferranti Atlas and Virtual Memory
Ferranti Atlas and Virtual Memory
IBM 360 Announce 7Apr1964
IBM 360 Announce 7Apr1964
OSI: The Internet That Wasn't
OSI: The Internet That Wasn't
OSI: The Internet That Wasn't
IBM 360 Announce 7Apr1964
IBM 360 Announce 7Apr1964
OSI: The Internet That Wasn't
IBM 360 Announce 7Apr1964
OSI: The Internet That Wasn't
IBM 360 Announce 7Apr1964
IBM 360 Announce 7Apr1964
IBM->SMTP/822 conversion
IBM 360 Announce 7Apr1964
IBM 360 Announce 7Apr1964
OSI: The Internet That Wasn't
EBCDIC
EBCDIC
Disk & TCP/IP I/O
Disk & TCP/IP I/O

Assembler language and code optimization

From: Lynn Wheeler <lynn@garlic.com>
Subject: Assembler language and code optimization
Date: 22 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#122 Assembler language and code optimization

topic drift, early 80s, co-worker left IBM SJR and was doing contracting work in silicon valley, optimizing (fortran) HSPICE, lots of work for the senior engineering VP at a major VLSI company, etc. He did a lot of work on the AT&T C compiler (bug fixes and code optimization) getting it running on CMS ... and then ported a lot of the BSD chip tools to CMS.

One day the IBM rep came through and asked him what he was doing ... he said ethernet support for using SGI workstations as graphical frontends. The IBM rep told him that instead he should be doing token-ring support, otherwise the company might not find its mainframe support as timely as it had been in the past. I then got an hour-long phone call listening to four letter words. The next morning the senior VP of engineering called a press conference to say the company was completely moving off all IBM mainframes to SUN servers. There were then IBM studies of why silicon valley wasn't using IBM mainframes ... but they weren't allowed to consider branch office marketing issues.

some past posts mentioning HSPICE
https://www.garlic.com/~lynn/2021j.html#36 Programming Languages in IBM
https://www.garlic.com/~lynn/2021h.html#69 IBM Graphical Workstation
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
https://www.garlic.com/~lynn/2011p.html#119 Start Interpretive Execution
https://www.garlic.com/~lynn/2003n.html#26 Good news for SPARC
https://www.garlic.com/~lynn/2002h.html#4 Future architecture
https://www.garlic.com/~lynn/2002h.html#3 Future architecture

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage REXX

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage REXX
Date: 22 Feb, 2024
Blog: Facebook
In the very early 80s, I wanted to demonstrate REX(X) was not just another pretty scripting language (before it was renamed REXX and released to customers). I decided on redoing a large assembler application (dump processor & fault analysis) in REX with ten times the function and ten times the performance (lot of hacks and sleight of hand done to make interpreted REX run faster than the assembler version), working half time over three months elapsed. I finished early so started writing an automated script that searched for most common failure signatures. It also included a pseudo dis-assembler ... converting storage areas into instruction sequences and would format storage according to specified dsects. I got softcopy of messages&codes and could index applicable information. I had thought that it would be released to customers, but for whatever reasons it wasn't (even tho it was in use by most PSRs and internal datacenters) ... however, I finally did get permission to give talks on the implementation at user group meetings ... and within a few months similar implementations started showing up at customer shops.

... oh and it could be run either as a console command/session ... or as an xedit macro ... so everything was captured as an xedit file

.. later the 3090 service processor guys (3092) asked to ship it installed on the service processor

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

Can a compiler work without an Operating System?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Can a compiler work without an Operating System?
Newsgroups: alt.folklore.computers
Date: Thu, 22 Feb 2024 23:48:01 -1000
John Levine <johnl@taugh.com> writes:
For very small 360s or I suppose for low level program development, IBM had Basic Programming Support (BPS) all of which loaded from cards. It had RPG, an assembler, loaders, I/O library, and some I/O utilities to copy stuff and format disks.

https://bitsavers.org/pdf/ibm/360/bos_bps/


I took two credit hr intro to fortran/computers. Univ had 709/1401 and IBM pitched 360/67 for tss/360 as replacement. Pending availability of 360/67, the 1401 was replaced with 64k 360/30 (that had 1401 emulation) to start getting 360 experience. At the end of the intro class, I was hired to rewrite 1401 MPIO (card reader->tape, tape->printer/punch, aka unit record front end for 709 running tape->tape) for 360/30, part of getting 360 experience. The univ shut down the datacenter over the weekend and I got the whole place dedicated (but 48hrs w/o sleep made monday classes hard). They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc and within a few weeks had a 2000 card program that ran stand-alone (IPL'ed with the BPS loader).

I then modified it with an assembler option that generated either the stand-alone version (took 30mins to assemble) or the OS/360 version with system services macros (took 60mins to assemble, each DCB macro taking 5-6mins). I quickly learned the 1st thing coming in sat. morning was to clean the tape drives and printers, disassemble the 2540 reader/punch, clean it and reassemble. Also sometimes sat. morning, production had finished early and everything was powered off. Sometimes the 360/30 wouldn't power on; from reading manuals and trial&error, I would place all the controllers in CE mode, power-on the 360/30, individually power-on each controller, and then place them back in normal mode.

Within a year of the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (TSS/360 never came to production, so it ran as a 360/65). Then some people from the science center came out to install CP67 (precursor to vm370) ... 3rd installation after CSC itself and MIT Lincoln Labs; and I mostly played with it in my 48hr weekend window. Initially the cp67 kernel was a couple dozen assembler source routines originally kept on os/360 ... individually assembled and the txt output placed in a card tray in correct order behind the BPS loader to IPL the kernel ... which writes the memory image to disk; the disk can then be IPLed to run CP67. Each module TXT deck had a diagonal stripe and module name across the top ... so updating and replacing an individual module (in the card tray) could be easily identified/done. Later CP67 source was moved to CMS and it was possible to assemble and then virtually punch the BPS loader and TXT output that is transferred to an input reader, (virtually) IPL it and have it written to disk (potentially even the production system disk) for re-ipl.

One of the things I worked on was a mechanism for paging part of the CP67 kernel (reducing the fixed storage requirement) ... reworking some code into 4kbyte segments ... which increased the number of ESD entry symbols ... and I eventually found I was running into the BPS loader 255 ESD limit.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning BPS loader, MPIO work and CP67
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2022.html#26 Is this group only about older computers?
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021b.html#27 DEBE?
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019e.html#19 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#75 CP67 & EMAIL history
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018d.html#104 OS/360 PCP JCL
https://www.garlic.com/~lynn/2017h.html#49 System/360--detailed engineering description (AFIPS 1964)
https://www.garlic.com/~lynn/2015b.html#15 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2009h.html#12 IBM Mainframe: 50 Years of Big Iron Innovation

--
virtualization experience starting Jan1968, online at home since Mar1970

Bypassing VM

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Bypassing VM
Newsgroups: comp.arch
Date: Fri, 23 Feb 2024 09:47:29 -1000
jgd@cix.co.uk (John Dallman) writes:
Virtual memory systems without swapping or demand paging. You are limited to physical memory, but the ability of the OS to re-map pages to avoid fragmentation makes life reasonably simple. But only reasonably: you really get to exercise your out-of-memory handlers.

In the 60s, Boeing Huntsville modified OS/360 MVT13 (real memory) to do just that. They had gotten a 360/67 multiprocessor for tss/360 with lots of 2250 graphics displays ... but tss/360 never came to production ... so they configured it as two 360/65s and ran os/360. Because of MVT storage management problems (exacerbated by long running 2250 graphics cad/cam programs), they modified MVT13 to run with virtual memory using the 360/67 hardware ... but w/o paging.

a little more than a decade ago, I was asked to track down the decision to add virtual memory to all IBM 370s. Turns out MVT storage management was so bad that program execution "regions" had to be specified four times larger than used ... a typical one mbyte 370/165 would only run four concurrently executing regions, insufficient to keep the machine busy and justified. Running MVT in a 16mbyte virtual memory would allow increasing concurrently executing regions by four times with little or no paging (similar to running MVT in a CP67 16mbyte virtual machine).
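
A rough back-of-envelope sketch of that justification (illustrative only; the per-region sizes below are assumptions, the 4x over-specification and the one-mbyte/four-region figures are from the above, and fixed kernel storage is ignored):

# illustrative arithmetic only -- region_spec_kb is an assumed figure; the text
# above supplies the "specified four times larger than used" ratio and the
# 1mbyte 370/165 running only four concurrent regions (kernel storage ignored)
real_storage_kb = 1024                    # typical 370/165 of the era
region_spec_kb  = 256                     # size each region is *specified* at (assumed)
region_used_kb  = region_spec_kb // 4     # only ~1/4 actually touched (per the text)

mvt_regions = real_storage_kb // region_spec_kb  # real-memory MVT must reserve the full specification
svs_regions = real_storage_kb // region_used_kb  # 16mbyte virtual: real frames only needed for pages touched
print(mvt_regions, svs_regions)                  # 4 -> 16, i.e. ~4x the concurrent regions, little/no paging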

archived post with pieces of email exchange with somebody that reported to IBM executive making the 370 virtual memory decision
https://www.garlic.com/~lynn/2011d.html#73

a few recent posts mentioning Boeing Huntsville MVT13 work and decision adding virtual memory to all 370s:
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#81 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System

360/67
https://en.wikipedia.org/wiki/IBM_System/360-67
CP67/CMS history
https://en.wikipedia.org/wiki/History_of_CP/CMS

science center (responsible for cp67/cms) posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

"Under God" (1954), "In God We Trust" (1956)

From: Lynn Wheeler <lynn@garlic.com>
Subject: "Under God" (1954), "In God We Trust" (1956)
Date: 23 Feb, 2024
Blog: Facebook
Pledge of Allegiance (1923)
https://www.ushistory.org/documents/pledge.htm
Under God (1954)
https://en.wikipedia.org/wiki/Pledge_of_Allegiance
Even though the movement behind inserting "under God" into the pledge might have been initiated by a private religious fraternity and even though references to God appear in previous versions of the pledge, historian Kevin M. Kruse asserts that this movement was an effort by corporate America to instill in the minds of the people that capitalism and free enterprise were heavenly blessed. Kruse acknowledges the insertion of the phrase was influenced by the push-back against Russian atheistic communism during the Cold War, but argues the longer arc of history shows the conflation of Christianity and capitalism as a challenge to the New Deal played the larger role.[28]
... snip ...

In God We Trust (1956) ... replacing "Out of many, one"
https://en.wikipedia.org/wiki/In_God_We_Trust

note that John Foster Dulles played a major role rebuilding Germany's economy, industry, and military from the 20s up through the early 40s
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc865-68:
In mid-1931 a consortium of American banks, eager to safeguard their investments in Germany, persuaded the German government to accept a loan of nearly $500 million to prevent default. Foster was their agent. His ties to the German government tightened after Hitler took power at the beginning of 1933 and appointed Foster's old friend Hjalmar Schacht as minister of economics.

loc905-7:
Foster was stunned by his brother's suggestion that Sullivan & Cromwell quit Germany. Many of his clients with interests there, including not just banks but corporations like Standard Oil and General Electric, wished Sullivan & Cromwell to remain active regardless of political conditions.

loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace Seligman, was equally disturbed. In October 1939, six weeks after the Nazi invasion of Poland, he took the extraordinary step of sending Foster a formal memorandum disavowing what his old friend was saying about Nazism
... snip ...

June1940, Germany had a victory celebration at the NYC Waldorf-Astoria with major industrialists. Lots of them were there to hear how to do business with the Nazis
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/

In a somewhat replay of the Nazi celebration, after the war, 5000 industrialists and corporations from across the US had a conference at the Waldorf-Astoria and, in part because they had gotten such a bad reputation for the depression and supporting the Nazis, as part of attempting to refurbish their horribly corrupt and venal image, they approved a major propaganda campaign to equate Capitalism with Christianity.
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/

... other trivia, from the law of unintended consequences: when the US 1943 Strategic Bombing program needed targets in Germany, they got plans and coordinates from Wall Street.

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

specific posts mentioning Fascism and/or Nazism
https://www.garlic.com/~lynn/2023c.html#80 Charlie Kirk's 'Turning Point' Pivots to Christian Nationalism
https://www.garlic.com/~lynn/2022g.html#39 New Ken Burns Documentary Explores the U.S. and the Holocaust
https://www.garlic.com/~lynn/2022g.html#28 New Ken Burns Documentary Explores the U.S. and the Holocaust
https://www.garlic.com/~lynn/2022g.html#19 no, Socialism and Fascism Are Not the Same
https://www.garlic.com/~lynn/2022e.html#62 Empire Burlesque. What comes after the American Century?
https://www.garlic.com/~lynn/2022e.html#38 Wall Street's Plot to Seize the White House
https://www.garlic.com/~lynn/2022.html#28 Capitol rioters' tears, remorse don't spare them from jail
https://www.garlic.com/~lynn/2021k.html#7 The COVID Supply Chain Breakdown Can Be Traced to Capitalist Globalization
https://www.garlic.com/~lynn/2021j.html#104 Who Knew ?
https://www.garlic.com/~lynn/2021j.html#80 "The Spoils of War": How Profits Rather Than Empire Define Success for the Pentagon
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021f.html#80 After WW2, US Antifa come home
https://www.garlic.com/~lynn/2021d.html#11 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2021c.html#96 How Ike Led
https://www.garlic.com/~lynn/2021c.html#23 When Nazis Took Manhattan
https://www.garlic.com/~lynn/2021c.html#18 When Nazis Took Manhattan
https://www.garlic.com/~lynn/2021b.html#91 American Nazis Rally in New York City
https://www.garlic.com/~lynn/2021.html#66 Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.garlic.com/~lynn/2021.html#51 Sacking the Capital and Honor
https://www.garlic.com/~lynn/2021.html#46 Barbarians Sacked The Capital
https://www.garlic.com/~lynn/2021.html#44 American Fascism
https://www.garlic.com/~lynn/2021.html#34 Fascism
https://www.garlic.com/~lynn/2021.html#32 Fascism
https://www.garlic.com/~lynn/2020.html#0 The modern education system was designed to teach future factory workers to be "punctual, docile, and sober"
https://www.garlic.com/~lynn/2019e.html#161 Fascists
https://www.garlic.com/~lynn/2019e.html#112 When The Bankers Plotted To Overthrow FDR
https://www.garlic.com/~lynn/2019e.html#107 The Great Scandal: Christianity's Role in the Rise of the Nazis
https://www.garlic.com/~lynn/2019e.html#63 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019e.html#43 Corporations Are People
https://www.garlic.com/~lynn/2019d.html#98 How Journalists Covered the Rise of Mussolini and Hitler
https://www.garlic.com/~lynn/2019d.html#94 The War Was Won Before Hiroshima--And the Generals Who Dropped the Bomb Knew It
https://www.garlic.com/~lynn/2019d.html#75 The Coming of American Fascism, 1920-1940
https://www.garlic.com/~lynn/2019c.html#36 Is America A Christian Nation?
https://www.garlic.com/~lynn/2019c.html#17 Family of Secrets
https://www.garlic.com/~lynn/2017e.html#23 Ironic old "fortune"

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage REXX

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage REXX
Date: 23 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#1 Vintage REXX

Some of the MIT 7094/CTSS folks went to the 5th flr to do MULTICS
https://en.wikipedia.org/wiki/Multics
... others went to the IBM Science Center on the 4th flr, did virtual machines, CP40/CMS, CP67/CMS, bunch of online apps, etc
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
https://en.wikipedia.org/wiki/History_of_CP/CMS

... and the IBM Boston Programming Center was on the 3rd flr, which did CPS
https://en.wikipedia.org/wiki/Conversational_Programming_System
... although a lot was subcontracted out to Allen-Babcock (including the CPS microcode assist for the 360/50)
https://www.bitsavers.org/pdf/allen-babcock/cps/
https://www.bitsavers.org/pdf/allen-babcock/cps/CPS_Progress_Report_may66.pdf

When the decision was made to add virtual memory to all 370s, we started on a modification to CP/67 (CP67H) to support 370 virtual machines (simulating the differences in architecture) and a version of CP/67 that ran on 370 (CP67I, which was in regular use a year before the 1st engineering 370 machine with virtual memory was operational; later "CP370" was in wide use on real 370s inside IBM). A decision was made to do an official product, morphing CP67->VM370 (dropping and/or simplifying lots of features) and some of the people moved to the 3rd flr to take over the IBM Boston Programming Center ... becoming the VM370 Development group ... when they outgrew the 3rd flr, they moved out to the empty IBM SBC bldg at Burlington Mall on rt128.

After the FS failure and the mad rush to get stuff back into the 370 product pipelines, the head of POK convinced corporate to kill the vm370 product, shut down the development group, and move all the people to POK for MVS/XA ... or supposedly MVS/XA wouldn't be able to ship on time (Endicott managed to save the VM370 product mission, but had to reconstitute a development group from scratch).

trivia: some of the former BPC people did get CPS running on CMS.

IBM science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some posts mentioning Boston Programming Center, CPS, and Allen-Babcock
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2016d.html#35 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#34 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2014f.html#4 Another Golden Anniversary - Dartmouth BASIC
https://www.garlic.com/~lynn/2014e.html#74 Another Golden Anniversary - Dartmouth BASIC
https://www.garlic.com/~lynn/2013l.html#28 World's worst programming environment?
https://www.garlic.com/~lynn/2013l.html#24 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013c.html#36 Lisp machines, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013c.html#8 OT: CPL on LCM systems [was Re: COBOL will outlive us all]
https://www.garlic.com/~lynn/2012n.html#26 Is there a correspondence between 64-bit IBM mainframes and PoOps editions levels?
https://www.garlic.com/~lynn/2012e.html#100 Indirect Bit
https://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2008s.html#71 Is SUN going to become x86'ed ??

--
virtualization experience starting Jan1968, online at home since Mar1970

"Under God" (1954), "In God We Trust" (1956)

From: Lynn Wheeler <lynn@garlic.com>
Subject: "Under God" (1954), "In God We Trust" (1956)
Date: 24 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#4 "Under God" (1954), "In God We Trust" (1956)

The Admissions Game
https://www.nakedcapitalism.com/2023/07/the-admissions-game.html
Back in 1813, Thomas Jefferson and John Adams exchanged a series of letters on what would later come to be called meritocracy. Jefferson argued for a robust system of public education so that a "natural aristocracy" based on virtue and talents could be empowered, rather than an "artificial aristocracy founded on wealth and birth." Adams was skeptical that one could so easily displace an entrenched elite:
... snip ...

... "fake news" dates back to at least founding of the country, both Jefferson and Burr biographies, Hamilton and Federalists are portrayed as masters of "fake news". Also portrayed that Hamilton believed himself to be an honorable man, but also that in political and other conflicts, he apparently believed that the ends justified the means. Jefferson constantly battling for separation of church & state and individual freedom, Thomas Jefferson: The Art of Power,
https://www.amazon.com/Thomas-Jefferson-Power-Jon-Meacham-ebook/dp/B0089EHKE8/
loc6457-59:
For Federalists, Jefferson was a dangerous infidel. The Gazette of the United States told voters to choose GOD AND A RELIGIOUS PRESIDENT or impiously declare for "JEFFERSON-AND NO GOD."
... snip ...

.... Jefferson was targeted as the prime mover behind the separation of church and state. Also, the Hamilton/Federalists wanted a supreme monarch (above the law), loc5584-88:
The battles seemed endless, victory elusive. James Monroe fed Jefferson's worries, saying he was concerned that America was being "torn to pieces as we are, by a malignant monarchy faction." 34 A rumor reached Jefferson that Alexander Hamilton and the Federalists Rufus King and William Smith "had secured an asylum to themselves in England" should the Jefferson faction prevail in the government.
... snip ...

In the 1880s, the Supreme Court was scammed (by the railroads) into giving corporations "person rights" under the 14th amendment.
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiii/loc45-50:
IN DECEMBER 1882, ROSCOE CONKLING, A FORMER SENATOR and close confidant of President Chester Arthur, appeared before the justices of the Supreme Court of the United States to argue that corporations like his client, the Southern Pacific Railroad Company, were entitled to equal rights under the Fourteenth Amendment. Although that provision of the Constitution said that no state shall "deprive any person of life, liberty, or property, without due process of law" or "deny to any person within its jurisdiction the equal protection of the laws," Conkling insisted the amendment's drafters intended to cover business corporations too.
... snip ...

... testimony falsely claiming authors of 14th amendment intended to include corporations pgxiv/loc74-78:
Between 1868, when the amendment was ratified, and 1912, when a scholar set out to identify every Fourteenth Amendment case heard by the Supreme Court, the justices decided 28 cases dealing with the rights of African Americans--and an astonishing 312 cases dealing with the rights of corporations.

pg36/loc726-28:
On this issue, Hamiltonians were corporationalists--proponents of corporate enterprise who advocated for expansive constitutional rights for business. Jeffersonians, meanwhile, were populists--opponents of corporate power who sought to limit corporate rights in the name of the people.

pg229/loc3667-68:
IN THE TWENTIETH CENTURY, CORPORATIONS WON LIBERTY RIGHTS, SUCH AS FREEDOM OF SPEECH AND RELIGION, WITH THE HELP OF ORGANIZATIONS LIKE THE CHAMBER OF COMMERCE.
... snip ...

False Profits: Reviving the Corporation's Public Purpose
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/
I Origins of the Corporation. Although the corporate structure dates back as far as the Greek and Roman Empires, characteristics of the modern corporation began to appear in England in the mid-thirteenth century.[4] "Merchant guilds" were loose organizations of merchants "governed through a council somewhat akin to a board of directors," and organized to "achieve a common purpose"[5] that was public in nature. Indeed, merchant guilds registered with the state and were approved only if they were "serving national purposes."[6]
... snip ...

"Why Nations Fail"
https://www.amazon.com/Why-Nations-Fail-Origins-Prosperity-ebook/dp/B0058Z4NR8/
original settlement, Jamestown ... the English planning on emulating the Spanish model, enslaving the local population to support the settlement. Unfortunately the North American natives weren't as cooperative and the settlement nearly starved. Then they switched to sending over some of the other populations from the British Isles essentially as slaves ... the English Crown charters had them as "leet-men" ... pg27:
The clauses of the Fundamental Constitutions laid out a rigid social structure. At the bottom were the leet-men, with clause 23 noting, "All the children of leet-men shall be leet-men, and so to all generations."
... snip ...

My wife's father was presented with a set of 1880 history books for some distinction at West Point by the Daughters Of the 17th Century
http://www.colonialdaughters17th.org/
which assert that if it hadn't been for the influence of the Scottish settlers from the mid-atlantic states, the northern/English states would have prevailed and the US would look much more like England with monarch ("above the law") and strict class hierarchy. His Scottish ancestors came over after their clan was "broken". A Blackadder WW1 episode had "what do the English do when they see a man in a skirt? they run him through and nick his land". Other history was that the Scots were so displaced that about the only thing left for the men was the military.

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Tapes

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Tapes
Date: 24 Feb, 2024
Blog: Facebook
Melinda ... and her history efforts:
http://www.leeandmelindavarian.com/Melinda#VMHist

in mid-80s, she sent me email asking if I had a copy of the original CMS multi-level source update implementation ... which was an exec implementation repeatedly/incrementally applying the updates ... first to the original source, creating a temporary file, and then repeatedly applying updates to a series of temporary files. I had nearly a dozen tapes in the IBM Almaden Research tape library with (replicated) archives of files from the 60s&70s ... and was able to pull off the original (multi-level source update) implementation. Melinda was fortunate since a few weeks later, Almaden had an operational problem where random tapes were being mounted as scratch ... and I lost all my 60s&70s archive files.
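
A minimal sketch of that multi-level idea (this is not the original EXEC; the "D n" / "I n text" delta format below is a simplified stand-in for the real CMS update control cards, and only the chaining of each level through a temporary file mirrors the description above):

import tempfile

def apply_one_level(src_lines, update_lines):
    # one update level: "D n" deletes line n, "I n text" inserts text after
    # line n (1-based line numbers relative to the incoming source)
    edits = []
    for rec in update_lines:
        if not rec.strip():
            continue
        op, rest = rec.split(None, 1)
        if op == "D":
            edits.append(("D", int(rest), None))
        elif op == "I":
            n, text = rest.split(None, 1)
            edits.append(("I", int(n), text))
    out = list(src_lines)
    # apply bottom-up so earlier line numbers stay valid
    for op, n, text in sorted(edits, key=lambda e: e[1], reverse=True):
        if op == "D":
            del out[n - 1]
        else:
            out.insert(n, text)
    return out

def multi_level_update(original_path, update_paths):
    # each level works from the result written by the previous level,
    # mirroring the repeated/incremental application through temporary files
    current = open(original_path).read().splitlines()
    for upd in update_paths:
        current = apply_one_level(current, open(upd).read().splitlines())
        tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".lvl")
        tmp.write("\n".join(current) + "\n")     # the intermediate temporary file
        tmp.close()
    return current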

trivia: late 70s, I had done CMSBACK (incremental backup/archive) for internal datacenters (starting with research and the internal US consolidated online sales&marketing support HONE systems up in Palo Alto) ... and emulated/used standard VOL1 labels. It went through a couple of internal releases and then a group did PC & workstation clients and it was released to customers as Workstation Datasave Facility (WDSF). Then GPD/AdStar took it over and renamed it ADSM ... and when the disk division was unloaded, it became TSM and now IBM Storage Protect.
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
CMSBACK, backup/archive, storage management posts
https://www.garlic.com/~lynn/submain.html#backup

posts mentioning Almaden operations mounting random tapes as scratch and losing my 60s&70s archive
https://www.garlic.com/~lynn/2024.html#39 Card Sequence Numbers
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022c.html#83 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#61 File Backup
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021g.html#89 Keeping old (IBM) stuff
https://www.garlic.com/~lynn/2021.html#22 Almaden Tape Library
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2014g.html#98 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2014b.html#92 write rings
https://www.garlic.com/~lynn/2013n.html#60 Bridgestone Sues IBM For $600 Million Over Allegedly 'Defective' System That Plunged The Company Into 'Chaos'
https://www.garlic.com/~lynn/2011m.html#12 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011c.html#4 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2010d.html#65 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2009n.html#66 Evolution of Floating Point

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM User Group SHARE

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM User Group SHARE
Date: 25 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#111 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE

Melinda's response to earlier email
https://www.garlic.com/~lynn/2011c.html#email860407
https://www.garlic.com/~lynn/2021k.html#email860407
Date: 04/07/86 10:06:34
From: wheeler

re: hsdt; I'm responsible for an advanced technology project called High Speed Data Transport. One of the things it supports is a 6mbit TDMA satellite link (and can be configured with up to 4-6 such links). Several satellite earth stations can share the same link using a pre-slot allocated scheme (i.e. TDMA). The design is fully meshed ... somewhat like a LAN but with 3000 mile diameter (i.e. satellite foot-print).

We have a new interface to the earth station internal bus that allows us to emulate a LAN interface to all other earth stations on the same trunk. Almost all other implementations support emulated point-to-point copper wires ... for 20 node network, each earth station requires 19 terrestrial interface ports ... i.e. (20*19)/2 links. We use a single interface that fits in an IBM/PC. A version is also available that supports standard terrestrial T1 copper wires.

It has been presented to a number of IBM customers, Berkeley, NCAR, NSF director and a couple of others. NSF finds it interesting since 6-36 megabits is 100* faster than the "fast" 56kbit links that a number of other people are talking about. Some other government agencies find it interesting since the programmable encryption interface allows crypto-key to be changed on a packet basis. We have a design that supports data-stream (or virtual circuit) specific encryption keys and multi-cast protocols.

We also have normal HYPERChannel support ... and in fact have done our own RSCS & PVM line drivers to directly support NSC's A220. Ed Hendricks is under contract to the project doing a lot of the software enhancements. We've also uncovered and are enhancing several spool file performance bottlenecks. NSF is asking that we interface to TCP/IP networks.

... snip ... top of post, old email index, NSFNET email
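
To make the link-count comparison in that old email concrete (nothing below beyond the 20-node example and the (20*19)/2 formula already in the email):

# emulated point-to-point circuits (full mesh) vs the shared TDMA "LAN" design
nodes = 20                                  # the 20-node example from the email
ptp_ports_per_station = nodes - 1           # 19 terrestrial interface ports per earth station
ptp_total_links = nodes * (nodes - 1) // 2  # (20*19)/2 = 190 point-to-point links overall
tdma_ports_per_station = 1                  # single shared-trunk interface per earth station
print(ptp_ports_per_station, ptp_total_links, tdma_ports_per_station)   # 19 190 1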

note: the communication group was fighting the release of mainframe TCP/IP, but then possibly some influential customers got that reversed ... then the communication group changed their tactics ... since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did the support for RFC1044 and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained nearly full 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
TCP RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

The Attack of the Killer Micros

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Attack of the Killer Micros
Newsgroups: comp.arch
Date: Mon, 26 Feb 2024 07:58:42 -1000
mitchalsup@aol.com (MitchAlsup1) writes:
I had the first 200 MHz Pentium Pro out of the Micron factory. It ran DOOM at 73 fps and Quake at 45+ fps both full screen. I would not call that a joke.

It was <essentially> the death knell for RISC workstations.


re:
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#81 Benchmarks

2003: max. configured 32-processor IBM mainframe Z990 benchmarked at an aggregate 9BIPS; 2003 Pentium4 processor benchmarked at 9.7BIPS

some other recent posts mentioning pentium4
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#113 Cobol

Also 1988, the ibm branch office asked if I could help LLNL standardize some serial stuff they were playing with, which quickly becomes the fibre channel standard (FCS, initial 1gbit, full-duplex, 200mbytes/sec aggregate). Then some IBM mainframe engineers become involved and define a heavy-weight protocol that significantly reduces the native throughput, which is released as FICON.

The most recent public benchmark I can find is the "PEAK I/O" benchmark for a max. configured z196 getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time a FCS was announced for E5-2600 blades claiming over a million IOPS (two having higher throughput than 104 FICON). Also IBM pubs recommend that System Assist Processors ("SAPs", that do the actual I/O) be kept to no more than 70% processor ... which would be about 1.5M IOPS.

FICON, FCS, posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Some NSFNET, Internet, and other networking background

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some NSFNET, Internet, and other networking background.
Date: 26 Feb, 2024
Blog: Facebook
Some NSFNET, Internet, and other networking background.

Overview: IBM CEO Learson trying (& failing) to block the bureaucrats, careerists, and MBAs from destroying Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Twenty years later, IBM had one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

Inventing the Internet
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
z/VM 50th - part 3
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/

Some NSFNET supercomputer interconnect background leading up to the RFP awarded 24Nov87 (as regional networks connect in, it morphs into the NSFNET "backbone", precursor to the current "Internet").
Date: 09/30/85 17:27:27
From: wheeler
To: CAMBRIDG xxxxx

re: channel attach box; fyi;

I'm meeting with NSF on weds. to negotiate joint project which will install HSDT as backbone network to tie together all super-computer centers ... and probably some number of others as well. Discussions are pretty well along ... they have signed confidentiality agreements and such.

For one piece of it, I would like to be able to use the cambridge channel attach box.

I'll be up in Milford a week from weds. to present the details of the NSF project to ACIS management.

... snip ... top of post, old email index, NSFNET email

Date: 11/14/85 09:33:21
From: wheeler
To: FSD

re: cp internals class;

I'm not sure about 3 days solid ... and/or how useful it might be all at once ... but I might be able to do a couple of half days here and there when I'm in washington for other reasons. I'm there (Alexandria) next tues, weds, & some of thursday.

I expect ... when the NSF joint study for the super computer center network gets signed ... i'll be down there more.

BTW, I'm looking for an IBM 370 processor in the wash. DC area running VM where I might be able to get a couple of userids and install some hardware to connect to a satellite earth station & drive PVM & RSCS networking. It would connect into the internal IBM pilot ... and possibly also the NSF supercomputer pilot.

... snip ... top of post, old email index, NSFNET email

In the early 80s, I also had the HSDT project (T1 and faster computer links) and was working with the NSF director and was supposed to get $20M for NSF supercomputer center interconnects. Then congress cuts the budget, some other things happen and eventually NSF releases an RFP (in part based on what we already had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.

Date: 04/07/86 10:06:34
From: wheeler
To: Princeton

re: hsdt; I'm responsible for an advanced technology project called High Speed Data Transport. One of the things it supports is a 6mbit TDMA satellite link (and can be configured with up to 4-6 such links). Several satellite earth stations can share the same link using a pre-slot allocated scheme (i.e. TDMA). The design is fully meshed ... somewhat like a LAN but with 3000 mile diameter (i.e. satellite foot-print).

We have a new interface to the earth station internal bus that allows us to emulate a LAN interface to all other earth stations on the same trunk. Almost all other implementations support emulated point-to-point copper wires ... for 20 node network, each earth station requires 19 terrestrial interface ports ... i.e. (20*19)/2 links. We use a single interface that fits in an IBM/PC. A version is also available that supports standard terrestrial T1 copper wires.

It has been presented to a number of IBM customers, Berkeley, NCAR, NSF director and a couple of others. NSF finds it interesting since 6-36 megabits is 100* faster than the "fast" 56kbit links that a number of other people are talking about. Some other government agencies find it interesting since the programmable encryption interface allows crypto-key to be changed on a packet basis. We have a design that supports data-stream (or virtual circuit) specific encryption keys and multi-cast protocols.

We also have normal HYPERChannel support ... and in fact have done our own RSCS & PVM line drivers to directly support NSC's A220. Ed Hendricks is under contract to the project doing a lot of the software enhancements. We've also uncovered and are enhancing several spool file performance bottlenecks. NSF is asking that we interface to TCP/IP networks.

... snip ... top of post, old email index, NSFNET email

Date: 05/05/86 07:19:20
From: wheeler

re: HSDT; For the past year, we have been working with Bob Shahan & NSF to define joint-study with NSF for backbone on the super-computers. There have been several meetings in Milford with ACIS general manager (xxxxx) and the director of development (xxxxx). We have also had a couple of meetings with the director of NSF.

Just recently we had a meeting with Ken King (from Cornell) and xxxxx (from ACIS) to go over the details of who writes the joint study. ACIS has also just brought on a new person to be assigned to this activity (xxxxx). After reviewing some of the project details, King asked for a meeting with 15-20 universities and labs. around the country to discuss various joint-studies and the application of the technology to several high-speed data transport related projects. That meeting is scheduled to be held in June to discuss numerous university &/or NSF related communication projects and the applicability of joint studies with the IBM HSDT project.

I'm a little afraid that the June meeting might turn into a 3-ring circus with so many people & different potential applications in one meeting (who are also possibly being exposed to the technology & concepts for the 1st time). I'm going to try and have some smaller meetings with specific universities (prior to the big get together in June) and attempt to iron out some details beforehand (to minimize the confusion in the June meeting).

... snip ... top of post, old email index, NSFNET email

... somebody in Yorktown Research then called up all the invitees and canceled the meeting

Date: 09/15/86 11:59:48
From: wheeler
To: somebody in paris

re: hsdt; another round of meetings with head of the national science foundation ... funding of $20m for HSDT as experimental communications (although it would be used to support inter-super-computer center links). NSF says they will go ahead and fund. They will attempt to work with DOE and turn this into federal government inter-agency network standard (and get the extra funding).

... snip ... top of post, old email index, NSFNET email

Somebody had collected executive email with lots of claims about how corporate SNA/VTAM could apply to NSFNET (RFP awarded 24Nov1987) and forwarded it to me ... previously posted to newsgroups, but heavily clipped and redacted to protect the guilty

Date: 01/09/87 16:11:26
From: ?????

TO ALL IT MAY CONCERN-

I REC'D THIS TODAY. THEY HAVE CERTAINLY BEEN BUSY. THERE IS A HOST OF MISINFORMATION IN THIS, INCLUDING THE ASSUMPTION THAT TCP/IP CAN RUN ON TOP OF VTAM, AND THAT WOULD BE ACCEPTABLE TO NSF, AND THAT THE UNIVERSITIES MENTIONED HAVE IBM HOSTS WITH VTAM INSTALLED.

Forwarded From: ***** To: ***** Date: 12/26/86 13:41

1. Your suggestions to start working with NSF immediately on high speed (T1) networks is very good. In addition to ACIS I think that it is important to have CPD Boca involved since they own the products you suggest installing. I would suggest that ***** discuss this and plan to have the kind of meeting with NSF that ***** proposes.

... snip ... top of post, old email index, NSFNET email

< ... great deal more of the same; several more appended emails from several participants in the MISINFORMATION ... >

RFP awarded 24nov87 and RFP kickoff meeting 7Jan1988
https://www.garlic.com/~lynn/2000e.html#email880104

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

3033

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3033
Date: 27 Feb, 2024
Blog: Facebook
trivia: 3033 was a quick&dirty remap of 168 logic to 20% faster chips. The 303x channel director (external channels) was a 158 engine with just the integrated channel microcode (and no 370 microcode). A 3031 was two 158 engines ... one with just the 370 microcode and one with just the integrated channel microcode. A 3032 was a 168-3 using the channel director for external channels.

when I transferred out to SJR I got to wander around IBM and non-IBM datacenters in silicon valley, including disk engineering (bldg14) and disk product test (bldg15) across the street. They were running 7x24, pre-scheduled, stand-alone testing and had mentioned that they had recently tried MVS ... but it had 15min mean-time-between-failure (in that environment) requiring manual re-ipl. I offered to rewrite the I/O supervisor making it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. Then bldg15 got the 1st engineering 3033 outside the POK processor engineering flr. Since testing took only a percent or two of the processor, we scrounged up a 3830 disk controller and a string of 3330 disks and set up our own private online service.

The channel director operation was still a little flaky and would sometimes hang, and somebody would have to go over and reset/re-impl it to bring it back. We figured out that if I did CLRCH in fast sequence to all six channels, it would provoke the channel director into doing its own re-impl (a 3033 could have three channel directors, so we could have the online service on a channel director separate from the ones used for testing).

posts mentioning getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk

... before transferring to SJR, I got roped into a 16-processor, tightly-coupled (shared memory) multiprocessor and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than the 168 logic remap). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite son operating system (MVS) had (effective) 16-way support (POK doesn't ship a 16-way machine until after the turn of the century). Then the head of POK invited some of us to never visit POK again and told the 3033 processor engineers to keep their heads down and focused only on 3033. Once the 3033 was out the door, they start on trout/3090.

SMP, multiprocessor, tightly-coupled, shared memory posts
https://www.garlic.com/~lynn/subtopic.html#smp

a few recent posts mentioning mainframe since the turn of century:
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#85 Vintage DASD
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2023d.html#47 AI Scale-up
https://www.garlic.com/~lynn/2022h.html#113 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#12 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2022b.html#57 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#45 Mainframe MIPS
https://www.garlic.com/~lynn/2022.html#96 370/195
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark

--
virtualization experience starting Jan1968, online at home since Mar1970

3033

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3033
Date: 28 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#11 3033

Note, the POK favorite son operating system (MVT->SVS->MVS->MVS/XA) was the motivation for doing or not doing lots of stuff, not just hardware, like not doing a 16-way multiprocessor until over two decades later (after the turn of the century with z900, dec2000).

A little over a decade ago, I was asked to track down the decision to add virtual memory to all 370s ... basically MVT storage management was so bad that it required specifying region sizes four times larger than used ... as a result, a typical 1mbyte 370/165 could only run four regions concurrently, insufficient to keep the 165 busy and justified. Going to a single 16mbyte virtual memory (SVS) would allow increasing the number of concurrently running regions by a factor of four with little or no paging (something like running MVT in a CP67 16mbyte virtual machine, precursor to VM370). However this still used the 4bit storage keys to provide region protection (zero for kernel, 1-15 for regions).

Moving to MVS, with a 16mbyte virtual memory for each application, further increased concurrently running programs, using different virtual address spaces for separation (protection/security). Reference here to customers not moving to MVS like IBM required ... so sales&marketing were provided bonuses to get customers to move
http://www.mxg.com/thebuttonman/boney.asp

However, OS/360 had a pointer-passing API heritage ... so they placed an 8mbyte image of the MVS kernel in every virtual address space ... so the kernel code could directly address calling API data (as if it was still MVT running in real storage). That just left 8mbytes available for each application. Subsystems were also moved into their own, separate address spaces. However, for a subsystem to access application calling API data they invented the "Common Segment Area" ... a one mbyte segment mapped into every address space ... where applications would place API calling data so that subsystems could access it using the passed pointer. This reduced the application 16mbytes by another mbyte, leaving just seven mbytes. However, the demand for "Common Segment Area" space turns out to be somewhat proportional to the number of subsystems and concurrently executing applications (machines were getting bigger and faster and needed an increasing number of concurrently executing "regions" to keep them busy) ... and the "Common Segment Area" quickly becomes the "Common System Area" ... by 3033 it was pushing 5-6mbytes (leaving only 2-3mbytes for the application) and threatening to become 8mbytes (plus the 8mbyte kernel area) leaving zero/nothing (out of each application 16mbyte address space) for an application.
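
The squeeze in the previous paragraph as simple arithmetic (just restating the figures above -- 8mbyte kernel image, CSA growing from 1mbyte toward 8mbytes):

# each MVS 16mbyte application virtual address space, carved up (figures from the text)
address_space_mb = 16
kernel_image_mb = 8                 # MVS kernel image mapped into every address space
for csa_mb in (1, 6, 8):            # CSA: initial 1mbyte, ~5-6mbytes by 3033, threatening 8mbytes
    left_for_app_mb = address_space_mb - kernel_image_mb - csa_mb
    print(csa_mb, left_for_app_mb)  # 1 -> 7mbytes, 6 -> 2mbytes, 8 -> 0 left for the application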

Lots of 370/XA and MVS/XA was defined (like 31bit addressing and access registers) to address structural/design issues inherited from OS/360. Even getting to MVS/XA was apparently the motivation to convince corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (Endicott eventually managed to obtain the VM370 charter for the midrange, but had to reconstitute a development group from scratch; senior POK executives were also going around internal datacenters bullying them to move from VM370 to MVS, since while there would be VM370 for the mid-range, there would no longer be VM370 for POK high-end machines).

However, even after MVS/XA was available in the early 80s, similar to customers not moving to MVS (boney fingers reference), POK was having problems getting customers to move to MVS/XA. Amdahl was having better success since they had HYPERVISOR (multiple domain facility, a virtual machine subset done in microcode) that could run MVS and MVS/XA on the same machine. Now there had been the VMTOOL virtual machine subset done in POK supporting MVS/XA development (with SIE; note SIE was necessary for virtual machine operation on 3081, but the 3081 lacked the necessary microcode space, so it had to do microcode "paging", seriously affecting performance) ... but w/o the features and performance for VM370-like production use. To compete with Amdahl, this is eventually shipped as VM/MA (migration aid) and VM/SF (system facility) for running MVS & MVS/XA concurrently for migration support.

recent posts mentioning CSA (common segment/system area)
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#29 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#93 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#69 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#64 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#49 IBM 3033 Personal Computing
https://www.garlic.com/~lynn/2022b.html#19 Channel I/O
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370

other posts mentioning vmtool, sie, vm/ma, vm/sf
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit

--
virtualization experience starting Jan1968, online at home since Mar1970

rusty iron why ``folklore''?

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: rusty iron why ``folklore''?
Newsgroups: alt.folklore.computers
Date: Wed, 28 Feb 2024 12:06:29 -1000
John Levine <johnl@taugh.com> writes:
This was on MVT which definitely let you specify your region size.

According to the manual I just read, each call to the SPIE macro returned the address of the previous PICA, program interruption control area, in the supervisor. The idea is that a later SPIE will use that PICA address to put the interrupts back they way they were before. The PICAs were in the supervisor so call SPIE enough and it runs out of space. Or so I've heard.


region size would otherwise default to something ... it was involved in the justification to add virtual memory to all 370s (I was asked to track down the decision a little over a decade ago and posted the result here, aka a.f.c. and bit.listserv.mainframe). Basically MVT storage management was so bad that region sizes had to be specified four times larger than used; as a result, a typical 1mbyte 370/165 would only run four regions concurrently, insufficient to keep the processor busy and justified.

Initially, going to SVS, a single 16mbyte virtual memory (something like running MVT in a CP67 16mbyte virtual machine; CP67 was the precursor to VM370), would allow the number of concurrently running regions to be increased by a factor of four with little or no paging.

The problem then was that region & kernel security/integrity was maintained with 4bit storage protection keys (zero for kernel, 1-15 for the concurrent regions), and as systems got larger/faster, they needed a further increase in concurrently running "regions" ... thus the SVS->MVS move, with each region getting its own 16mbyte virtual address space (which resulted in a number of additional problems).
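
A back-of-the-envelope sketch of the two limits just described (regions specified four times larger than used on a 1mbyte machine, and the 15-region ceiling from 4bit storage keys); the per-region working size is a made-up round number, everything else is from the text:

# illustrative arithmetic only -- the 1mbyte 370/165, the 4x over-specified
# regions, and the 4bit storage keys are from the text; the per-region
# working size (64kbytes) is a made-up round number
REAL_MEMORY_KB  = 1024    # typical 1mbyte 370/165
USED_PER_REGION = 64      # hypothetical storage a region actually uses
OVERSPECIFY     = 4       # MVT region sizes specified ~4x larger than used

mvt_regions = REAL_MEMORY_KB // (USED_PER_REGION * OVERSPECIFY)
svs_regions = REAL_MEMORY_KB // USED_PER_REGION   # only the "used" pages need be resident
print(mvt_regions, svs_regions)   # 4 -> 16, the factor-of-four increase with little or no paging

# but protection via 4bit storage keys (key 0 reserved for the kernel) caps
# the number of protected concurrent regions:
print(16 - 1)   # 15 ... hence SVS->MVS, a separate address space per region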

old archived post:
https://www.garlic.com/~lynn//2011d.html#73

a few other recent posts referencing adding virtual memory to all 370s
https://www.garlic.com/~lynn/2024b.html#3 Bypassing VM
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2024.html#27 HASP, ASP, JES2, JES3
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#21 1975: VM/370 and CMS Demo
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#81 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#96 Conferences
https://www.garlic.com/~lynn/2023f.html#90 Vintage IBM HASP
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023f.html#24 Video terminals
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#71 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2023b.html#44 IBM 370
https://www.garlic.com/~lynn/2023b.html#41 Sunset IBM JES3
https://www.garlic.com/~lynn/2023b.html#24 IBM HASP (& 2780 terminal)
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#50 370 Virtual Memory Decision
https://www.garlic.com/~lynn/2023.html#34 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive

--
virtualization experience starting Jan1968, online at home since Mar1970

Machine Room Access

From: Lynn Wheeler <lynn@garlic.com>
Subject: Machine Room Access
Date: 28 Feb, 2024
Blog: Facebook
similar but different ... people in bldg14 disk engineering were not allowed in bldg15 disk product test (and vice versa) to minimize complicity/collusion in product test certifying new hardware. My badge was enabled for just about every bldg and machine room (since most of them ran my enhanced production systems). Recent comments mentioning the subject:
https://www.garlic.com/~lynn/2024b.html#11 3033
https://www.garlic.com/~lynn/2024b.html#12 3033

One monday morning I got an irate call from bldg15 asking what I did over the weekend to destroy 3033 throughput. It went back and forth a couple of times with everybody denying having done anything ... until somebody in bldg15 admitted they had replaced the 3830 disk controller with a test 3880 controller (handling the 3330 string for the private online service). The 3830 had a super fast horizontal microcode processor for everything. The 3880 had a fast hardware path to handle 3380 3mbyte/sec transfers ... but everything else was handled by a really (really) slow vertical microprocessor.

The 3880 engineers, trying to mask how slow it really was, had a hack to present end-of-operation early ... hoping to be able to complete processing overlapped while the operating system was doing interrupt processing. It wasn't working out that way ... in my I/O supervisor rewrite ... besides making it bullet proof and never fail, I had radically cut the I/O interrupt processing pathlength. I had also added CP67 "CHFREE" back (in the morph of CP67->VM370, lots of stuff was dropped and/or greatly simplified; CP67 CHFREE was a macro that checked for channel I/O redrive as soon as it was "safe" ... as opposed to after completely finishing processing of the latest device I/O interrupt). In any case, my I/O supervisor was attempting to restart any queued I/O while the 3880 was still trying to finalize processing of the previous I/O ... which required it to present CU busy (SM+BUSY), which required the system to requeue the attempted I/O for retry later, when the controller presented the CUE interrupt (required after having presented SM+BUSY). The 3880's slow processing reduced the number of I/Os compared to the 3830, and the hack of presenting I/O complete early drove up my system processing overhead.
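
A schematic sketch of that interaction (a toy model only, not actual CP or channel code): the controller presents ending status early while still finishing internally, the I/O supervisor immediately redrives the next queued I/O, gets CU busy (SM+BUSY), requeues it, and retries after the CUE interrupt ... each of those round trips is pure extra overhead compared to a controller that only presents status when it is really done:

# toy model of the early end-of-operation / SM+BUSY / CUE sequence described above
from collections import deque

class ToyController:
    # presents ending status "early" while still finishing the previous op internally
    def __init__(self, early_status):
        self.early_status = early_status
        self.busy = False
    def interrupt(self):                 # device presents channel-end/device-end
        self.busy = self.early_status    # early status => controller still busy internally
    def start_io(self):
        if self.busy:
            self.busy = False            # will be free by the time CUE is presented
            return "SM+BUSY"             # control-unit busy: request must be requeued
        return "STARTED"

def run(controller, n_ios):
    queue, requeues = deque(range(n_ios)), 0
    while queue:
        queue.popleft()                        # redrive next queued I/O as soon as "safe"
        if controller.start_io() == "SM+BUSY":
            requeues += 1                      # requeue, retry when CUE interrupt arrives
            controller.start_io()              # retry after CUE succeeds
        controller.interrupt()                 # ending interrupt for this I/O
    return requeues

print(run(ToyController(early_status=False), 100))   # 3830-like: 0 extra round trips
print(run(ToyController(early_status=True), 100))    # 3880-like: ~99 extra SM+BUSY/CUE round trips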

They were increasingly blaming me for problems and I was increasingly having to play disk engineer diagnosing their problems.

playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 5100

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 28 Feb, 2024
Blog: Facebook
5100
https://en.wikipedia.org/wiki/IBM_5100
IBM PALM processor (Program All Logic in Microcode)
https://en.wikipedia.org/wiki/IBM_PALM_processor
SCAMP (precursor)
https://en.wikipedia.org/wiki/Portable_computer#SCAMP

I've disclaimed that almost every time I make the wiki reference (let it slip this time). Even tho I was in research ... I was spending a lot of time up at HONE (US consolidated online sales&marketing support, nearly all APL apps, across the back parking lot from PASC; PASC was helping a lot with APL consulting and optimization) ... as well as at LSG (which had let me have part of a wing with offices and labs)

As HONE clones were sprouting up all over the world ... I believe HONE became the largest use of APL anywhere.

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

A more interesting LSG ref (I find it interesting since after leaving IBM, I was on the financial industry standards committee).
https://en.wikipedia.org/wiki/Magnetic_stripe_card#Further_developments_and_encoding_standards
LSG was ASD from the 60s ... it seems that ASD evaporated with the Future System implosion and the mad rush to get stuff back into the 370 product pipelines ... and it appeared that much of ASD was thrown into the development breach (internal politics during the FS period had been shutting down and killing off 370 efforts).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

some posts mentioning 5100, PALM, SCAMP:
https://www.garlic.com/~lynn/2013o.html#82 One day, a computer will fit on a desk (1974) - YouTube
https://www.garlic.com/~lynn/2013o.html#11 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2012e.html#100 Indirect Bit
https://www.garlic.com/~lynn/2011i.html#55 Architecture / Instruction Set / Language co-design
https://www.garlic.com/~lynn/2011h.html#53 Happy 100th Birthday, IBM!
https://www.garlic.com/~lynn/2010i.html#33 System/3--IBM compilers (languages) available?
https://www.garlic.com/~lynn/2010c.html#54 Processes' memory
https://www.garlic.com/~lynn/2007d.html#64 Is computer history taugh now?
https://www.garlic.com/~lynn/2003i.html#84 IBM 5100

some posts mentioning IBM LSG and magnetic stripe
https://www.garlic.com/~lynn/2017g.html#43 The most important invention from every state
https://www.garlic.com/~lynn/2016.html#100 3270 based ATMs
https://www.garlic.com/~lynn/2013j.html#28 more on the 1975 Transaction Telephone set
https://www.garlic.com/~lynn/2013g.html#57 banking fraud; regulation,bridges,streams
https://www.garlic.com/~lynn/2012g.html#51 Telephones--private networks, Independent companies?
https://www.garlic.com/~lynn/2011b.html#54 Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster
https://www.garlic.com/~lynn/2010o.html#40 The Credit Card Criminals Are Getting Crafty
https://www.garlic.com/~lynn/2009q.html#78 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009i.html#75 IBM's 96 column punch card
https://www.garlic.com/~lynn/2009i.html#71 Barclays ATMs hit by computer fault
https://www.garlic.com/~lynn/2009h.html#55 Book on Poughkeepsie
https://www.garlic.com/~lynn/2009h.html#44 Book on Poughkeepsie
https://www.garlic.com/~lynn/2009f.html#39 PIN Crackers Nab Holy Grail of Bank Card Security
https://www.garlic.com/~lynn/2009e.html#51 Mainframe Hall of Fame: 17 New Members Added
https://www.garlic.com/~lynn/2008s.html#25 Web Security hasn't moved since 1995
https://www.garlic.com/~lynn/2006x.html#14 IBM ATM machines
https://www.garlic.com/~lynn/2004p.html#25 IBM 3614 and 3624 ATM's

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 5100

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 28 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100

also from 5100 wiki
https://en.wikipedia.org/wiki/IBM_5100#Emulator_in_microcode
IBM later used the same approach for its 1983 introduction of the XT/370 model of the IBM PC, which was a standard IBM PC XT with the addition of a System/370 emulator card.

Endicott sent an early engineering box ... I did a bunch of benchmarks and showed lots of things were page thrashing (aggravated by all I/O being done by requesting CP88 on the XT side to perform it, with paging mapped to the DOS 10mbyte hard disk that had ~100ms access) ... then they blamed me for a 6month slip in first customer ship, while they added another 128kbytes of memory.
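
Some illustrative arithmetic on why a ~100ms page-fault service time is so punishing (only the ~100ms figure comes from the above; the fault rates are made-up round numbers):

# only the ~100ms page-fault service time is from the text; the fault rates
# are made-up round numbers
FAULT_SERVICE_SEC = 0.100        # DOS 10mbyte hard disk, ~100ms access

def delivered_cpu_fraction(faults_per_cpu_second):
    wait = faults_per_cpu_second * FAULT_SERVICE_SEC   # time spent waiting on paging
    return 1.0 / (1.0 + wait)

for f in (1, 5, 10, 20):
    print(f"{f:>2} faults/cpu-sec -> {delivered_cpu_fraction(f):.0%} of the CPU actually delivered")
# at 10 faults per cpu-second, half the time goes to waiting on the 100ms disk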

I also did a super enhanced page replacement algorithm and a CMS page mapped filesystem ... both helping improve XT/370 throughput. The CMS page mapped filesystem I had originally done for CP67/CMS more than a decade earlier ... before moving it to VM370. FS had adopted a paged "single level store" somewhat from TSS/360; I continued to work on 360&370 all during the FS period, periodically ridiculing what they were doing ... including comments that for a paged-mapped filesystem, I learned some things not to do from TSS/360 (which nobody in FS appeared to understand).

My CMS page mapped filesystem would get 3-4 times the throughput of standard CMS on a somewhat moderate filesystem workload benchmark, and the difference improved as the filesystem workload got heavier ... I was able to do all sorts of optimizations that the standard CMS filesystem didn't do, a couple of examples: 1) contiguous allocation for multi-block transfers, 2) for one-record files, the FST pointed directly to the data block, 3) multi-user shared memory executables defined and loaded directly from the filesystem, 4) filesystem transfers could be overlapped with execution. The problem was that, while it was contained in my enhanced production operating system distributed and used internally ... the FS implosion created a bad reputation for anything that smacked, even a little, of (FS) single-level-store.
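
A toy sketch of two of those optimizations (contiguous allocation for multi-block transfers, and the FST pointing directly at the data block for one-record files); this is only a conceptual illustration, not the actual CMS FST or filesystem structures:

# conceptual sketch only -- not the real CMS FST/filesystem structures;
# illustrates (1) contiguous allocation enabling one multi-block transfer and
# (2) one-record files whose directory entry points straight at the data block
BLOCK = 4096

class ToyPageMappedFS:
    def __init__(self, nblocks):
        self.blocks = [None] * nblocks
        self.next_free = 0                       # simple contiguous allocator
        self.fst = {}                            # "file status table": name -> entry

    def write(self, name, data):
        nb = max(1, -(-len(data) // BLOCK))      # ceiling divide
        start = self.next_free
        for i in range(nb):                      # contiguous blocks
            self.blocks[start + i] = data[i*BLOCK:(i+1)*BLOCK]
        self.next_free += nb
        if nb == 1:
            self.fst[name] = ("direct", start)   # FST points right at the data block
        else:
            self.fst[name] = ("extent", start, nb)

    def read(self, name):
        entry = self.fst[name]
        if entry[0] == "direct":                 # one-record file: single block fetch
            return self.blocks[entry[1]]
        _, start, nb = entry                     # contiguous extent: one multi-block I/O
        return b"".join(self.blocks[start:start+nb])

fs = ToyPageMappedFS(64)
fs.write("PROFILE EXEC", b"small file")
fs.write("BIG DATA", b"x" * (3 * BLOCK))
print(fs.fst["PROFILE EXEC"], fs.fst["BIG DATA"])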

page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 5100

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 29 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100
https://www.garlic.com/~lynn/2024b.html#16 IBM 5100

... this mentions "first financial language" at IDC (60s cp67/cms spinoff from the IBM Cambridge Science Center)
https://www.computerhistory.org/collections/catalog/102658182

as an aside, a decade later, person doing FFL joins with another to form startup and does the original spreadsheet
https://en.wikipedia.org/wiki/VisiCalc

from 1969 interview I did with IDC

IDC First Financial Language

another 60s CP67 spinoff of the IBM Cambridge Science Center was NCSS ... which is later bought by Dun & Bradstreet
https://en.wikipedia.org/wiki/National_CSS

trivia: I was an undergraduate at a univ (which had gotten a 360/67 for tss/360 to replace its 709/1401, but ran it as a 360/65); when the 360/67 came in, I was hired fulltime responsible for os/360 (the univ. shutdown the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made monday classes hard). Then CSC came out to install cp/67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it on weekends. Over the next 6months, I rewrote a lot of CP67 code, mostly improving running OS/360 in a virtual machine; the OS/360 benchmark ran 322secs on the bare machine, initially 856secs virtually, with CP67 CPU 534secs. Six months later I had CP67 CPU down to 113secs (from 534), and CSC announced a week-long class at the Beverly Hills Hilton. I arrived for the class on Sunday and was asked to teach the CP67 class ... the CSC people that were to teach it had resigned on Friday for NCSS.
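
The arithmetic behind those numbers, just to make the CP67 overhead explicit (all figures are the ones quoted above):

# all figures are the ones quoted above
bare_machine     = 322     # secs: OS/360 benchmark stand-alone
virtual_initial  = 856     # secs: same benchmark in a CP67 virtual machine, before rework
cp67_cpu_initial = virtual_initial - bare_machine    # 534 secs of CP67 CPU overhead
cp67_cpu_later   = 113     # secs: after ~6 months of rewriting CP67 pathlengths
print(cp67_cpu_initial, cp67_cpu_later, f"{cp67_cpu_initial / cp67_cpu_later:.1f}x reduction")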

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

NCSS ref also mentions
https://en.wikipedia.org/wiki/Nomad_software

even before SQL (& RDBMS) ... the original SQL/relational, System/R, was done on VM370/CMS at IBM SJR, where I worked with Jim Gray and Vera Watson on it; later came the tech transfer to Endicott for SQL/DS, "under the radar" while the company was focused on the next great DBMS "EAGLE"; then when "EAGLE" imploded, there was a request for how fast System/R could be ported to MVS, eventually shipping as DB2, originally for decision-support only

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

... there were other "4th Generation Languages"; one of the original 4th generation languages, RAMIS from Mathematica, was made available exclusively through NCSS.
http://www.decosta.com/Nomad/tales/history.html
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a report that would have taken many hundreds of lines of Cobol to produce. The product grew in capability and in revenue, both to NCSS and to Mathematica, who enjoyed increasing royalty payments from the sizable customer base. FOCUS from Information Builders, Inc (IBI), did even better, with revenue approaching a reported $150M per year. RAMIS moved among several owners, ending at Computer Associates in 1990, and has had little limelight since. NOMAD's owners, Thomson, continue to market the language from Aonix, Inc. While the three continue to deliver 10-to-1 coding improvements over the 3GL alternatives of Fortran, Cobol, or PL/1, the movements to object orientation and outsourcing have stagnated acceptance.
... snip ...

other history
https://en.wikipedia.org/wiki/Ramis_software
When Mathematica (also) makes Ramis available to TYMSHARE for their VM370-based commercial online service, NCSS does their own version
https://en.wikipedia.org/wiki/Nomad_software
and then follow-on FOCUS from IBI
https://en.wikipedia.org/wiki/FOCUS
Information Builders's FOCUS product began as an alternate product to Mathematica's RAMIS, the first Fourth-generation programming language (4GL). Key developers/programmers of RAMIS, some stayed with Mathematica others left to form the company that became Information Builders, known for its FOCUS product
... snip ...

4th gen programming language
https://en.wikipedia.org/wiki/Fourth-generation_programming_language

commercial online service providers
https://www.garlic.com/~lynn/submain.html#online

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 5100

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 29 Feb, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100
https://www.garlic.com/~lynn/2024b.html#16 IBM 5100
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100

after the 23jun1969 unbundling (starting to charge for software, SE services, maintenance, etc). As part of that, US HONE cp/67 datacenters were created for SEs to login and practice with guest operating systems in virtual machines. CSC had also ported APL\360 to CMS for CMS\APL. APL\360 workspaces were 16kbytes (sometimes 32k), with the whole workspace swapped. Its storage management allocated new memory for every assignment ... quickly exhausting the workspace and requiring garbage collection. Mapping to CMS\APL, with demand-paged large virtual memory, resulted in page thrashing, and the storage management had to be redone. CSC also implemented an API for system services (like file i/o); the combination enabled a lot of real-world applications. HONE started using it for deploying sales&marketing support applications ... which quickly came to dominate all HONE use ... and became the largest use of APL in the world. The Armonk business planners loaded the highest security IBM business data on the CSC system for APL-based business applications (we had to demonstrate really strong security, in part because Boston area institution professors, staff, and students were also using the CSC system).
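
A tiny sketch of that storage-management behavior (fresh storage on every assignment, garbage collection when the workspace is exhausted) ... cheap in a 16kbyte swapped workspace, but a sweep-the-whole-workspace pattern once the workspace is a large demand-paged virtual memory; this is only an illustration, not the actual APL\360 allocator:

# illustrative only -- not the real APL\360 allocator; every assignment gets
# fresh storage at the top of the workspace, and when the workspace fills,
# garbage collection compacts it (touching all live storage, i.e. in a large
# demand-paged workspace, touching lots of pages)
class ToyWorkspace:
    def __init__(self, size):
        self.size = size
        self.top = 0                   # next free offset
        self.live = {}                 # name -> (offset, length)
        self.gc_count = 0
        self.cells_touched_by_gc = 0

    def assign(self, name, length):
        if self.top + length > self.size:
            self._garbage_collect()
        self.live[name] = (self.top, length)   # old value (if any) becomes garbage
        self.top += length

    def _garbage_collect(self):
        self.gc_count += 1
        self.cells_touched_by_gc += sum(l for _, l in self.live.values())
        offset = 0
        for name, (_, l) in sorted(self.live.items(), key=lambda kv: kv[1][0]):
            self.live[name] = (offset, l)      # compaction walks/touches live storage
            offset += l
        self.top = offset

ws = ToyWorkspace(size=16 * 1024)      # APL\360-style 16kbyte workspace
for i in range(10_000):
    ws.assign("X", 1024)               # the same variable reassigned over and over
print(ws.gc_count, ws.cells_touched_by_gc)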

Later, as part of the morph from CP67->VM370, PASC did APL\CMS for VM370/CMS and the US HONE datacenters were consolidated across the back parking lot from PASC ... and HONE VM370 was enhanced for the max number of 168s in shared-DASD, loosely-coupled, single-system-image operation with load-balancing and fall-over across the complex. After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and HONE was a long time customer back to CP67 days. In the morph of CP67->VM370, lots of features were dropped and/or simplified. During 1974, I migrated lots of CP67 stuff to VM370 Release 2 for HONE and other internal datacenters. Then in 1975, I migrated tightly-coupled multiprocessor support to VM370 Release 3, initially for US HONE so they could add a 2nd CPU to each system. Also HONE clones (with the APL-based sales&marketing support applications) were cropping up all over the world.

23jun1969 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

The APL\360 people had been complaining that the CMS\APL API for system services didn't conform to APL design ... and they eventually came out with APL\SV ... claiming "shared variables" API for using system services conformed to APL design ... followed by VS/APL.

A major HONE APL-application was the few-hundred-kilobyte "SEQUOIA" (PROFS-like) menu user interface for the branch office sales&marketing people. PASC managed to embed this in the shared memory APL module ... instead of one copy in every workspace, a single shared copy as part of the APL module. HONE was running my paged-mapped CMS filesystem, which supported being able to create shared-memory versions of CMS MODULEs (a very small subset of the shared module support was picked up for VM370 Release 3 as DCSS). PASC had also done the APL microcode assist for the 370/145 ... claiming a lot of APL throughput ran like 168 APL. The same person had also done the FORTRAN Q/HX optimization enhancement ... which was used to rewrite some of HONE's most compute-intensive APL applications starting in 1974 ... the paged-mapped shared CMS MODULE support also allowed dropping out of APL, running a Fortran application, and re-entering APL with the results (transparent to the branch office users).
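
Rough arithmetic on what that sharing buys (the "few hundred kilobyte" SEQUOIA size is from the text; the concurrent-user count is a made-up example):

# the "few hundred kilobyte" SEQUOIA size is from the text; the concurrent
# user count is a made-up example
sequoia_kb       = 300     # few-hundred-kilobyte SEQUOIA application
concurrent_users = 1000    # hypothetical concurrently logged-on branch-office users

per_workspace_kb = sequoia_kb * concurrent_users   # a private copy in every workspace
shared_kb        = sequoia_kb                      # one copy in the shared APL module
print(f"{per_workspace_kb / 1024:.0f}mbytes of copies vs a single {shared_kb}kbyte shared copy")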

paged-mapped CMS filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

Some HONE posts mentioning SEQUOIA
https://www.garlic.com/~lynn/2023g.html#71 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023f.html#93 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#42 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2022.html#103 Online Computer Conferencing
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2022.html#4 GML/SGML/HTML/Mosaic
https://www.garlic.com/~lynn/2021k.html#34 APL
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#33 HONE story/history
https://www.garlic.com/~lynn/2019b.html#26 This Paper Map Shows The Extent Of The Entire Internet In 1973
https://www.garlic.com/~lynn/2019b.html#14 Tandem Memo
https://www.garlic.com/~lynn/2012.html#14 HONE
https://www.garlic.com/~lynn/2011e.html#72 Collection of APL documents
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2011.html#28 Personal histories and IBM computing
https://www.garlic.com/~lynn/2010i.html#13 IBM 5100 First Portable Computer commercial 1977
https://www.garlic.com/~lynn/2009j.html#77 More named/shared systems
https://www.garlic.com/~lynn/2007h.html#62 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006o.html#53 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006o.html#52 The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006m.html#53 DCSS
https://www.garlic.com/~lynn/2005g.html#30 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005g.html#27 Moving assembler programs above the line
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2002j.html#5 HONE, xxx#, misc
https://www.garlic.com/~lynn/2002j.html#3 HONE, Aid, misc
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?

posts discussing CSC APL analytical system model, used for the HONE Performance Predictor as well as the HONE single-system-image, loosely-coupled load-balancing
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2019d.html#106 IBM HONE
https://www.garlic.com/~lynn/2019c.html#85 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2016b.html#54 CMS\APL
https://www.garlic.com/~lynn/2012.html#50 Can any one tell about what is APL language
https://www.garlic.com/~lynn/2011o.html#53 HONE
https://www.garlic.com/~lynn/2011m.html#63 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)

--
virtualization experience starting Jan1968, online at home since Mar1970

Machine Room Access

From: Lynn Wheeler <lynn@garlic.com>
Subject: Machine Room Access
Date: 01 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#14 Machine Room Access

One of their "fixes" trying to mask slow 3880 was something that involved violating channel architecture ... which I said they couldn't do. They argued and then demanded that I sit in on conference call with POK channel architects ... who confirmed it violated channel architecture. After that they insisted that I participate in all meetings that involved channel architecture. I asked why me. Their explanation was that all the senior engineers that really understood channel architecture had departed in the huge 1969 exit ... 200 following Shugart out the door.

My only excuse was that to do virtual machines, had to understand system architecture, not only processor architecture, but channel, controller, and device architecture (I got to wander around doing things that weren't my job, just fun hobby).

posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

DB2, Spooling, Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: DB2, Spooling, Virtual Memory
Date: 01 Mar, 2024
Blog: Facebook
A decade ago, I was asked to track down the decision to add virtual memory to all 370s (MVT storage management was so bad that regions were specified four times larger than used; as a result, a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the machine busy and justified; going to 16mbyte virtual memory would increase the number of concurrently executing regions by a factor of four with little or no paging ... something like running MVT in a CP67 16mbyte virtual machine). Pieces of old email in archived post
https://www.garlic.com/~lynn/2011d.html#73

HASP, ASP, JES2, JES3, NJI/NJE posts
https://www.garlic.com/~lynn/submain.html#hasp

... which also mentions some past spooling (moonlight, hasp, asp, jes2, jes3) history and Simpson/Crabtree (responsible for HASP) ... who had done RASP ... a prototype virtual memory MFT-II that included a page mapped filesystem. Also mentions MVS work overlapping with the Future System effort (FS was completely different from 370 and was going to completely replace it) ... PLS instead of assembler could mean pieces recompiled for FS (note: internal politics were killing off 370 efforts, and the lack of new 370s during FS is credited with giving the clone 370 makers their market foothold; IBM sales/marketing also had to fall back to a huge amount of FUD).
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

When FS finally implodes (one of the final nails was analysis by the IBM Houston Science Center that if applications from the 370/195 were redone for an FS machine made out of the fastest available technology, they would have the throughput of a 370/145, something like a 30 times slowdown), there was the mad rush to get stuff back into the 370 product pipelines.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

oh ... and DB2 trivia, after transfer to SJR, worked with Jim Gray and Vera Watson on the original SQL/relational System/R ... and was able to do tech transfer to Endicott for SQL/DS ("under the radar" while company was preoccupied with the next great DBMS, "EAGLE"). Then when "EAGLE" implodes, there was request for how fast could System/R be ported to MVS ... which is eventually released as DB2, originally for "decision support" only.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

other recent 709, MPIO, OS360, HASP, WATFOR refs:
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#90 Vintage IBM HASP
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#29 Univ. Maryland 7094
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards

--
virtualization experience starting Jan1968, online at home since Mar1970

HA/CMP

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HA/CMP
Date: 01 Mar, 2024
Blog: Facebook
A couple of things happened around the turn of the century. The killer micro (i86 processor) technology was redone with a hardware layer that translated instructions into RISC micro-ops for actual execution (largely negating the throughput difference with real RISC processor implementations). For the 2003 max-configured z990: 32 processors, aggregate 9BIPS (281MIPS/proc); 2003 Pentium4 processor: 9.7BIPS ("BIPS" based on the number of iterations of the same benchmark program compared to 370/158)
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#81 Benchmarks

Another was that major system and RDBMS (including IBM) vendors had been doing significant throughput optimization for (non-mainframe) parallelization/cluster operation. Some demo (non-COBOL) "straight through settlement" implementations, rewritten for SQL RDBMS (and relying on the cluster RDBMS parallelization rather than roll-your-own with public libraries), would show many times the throughput of any existing legacy operation. A cluster of six Pentium4 multiprocessors (with four processors each), an aggregate of 24 Pentium4 processors and 233BIPS, easily outperformed a max-configured z990 at 9BIPS.

Other trivia: in 1988 the IBM branch office asked if I could help LLNL (national lab) get some serial stuff they were playing with standardized; which quickly becomes the fibre-channel standard (FCS, initially 1gbit, full-duplex, 200mbyte/sec aggregate). IBM POK finally ships their serial stuff with ES/9000 as ESCON ... when it is already obsolete (17mbytes/sec). Then some POK engineers become involved in FCS and define a heavy-weight protocol that drastically reduces the native throughput, which eventually ships as "FICON". The latest public benchmark figures I can find are z196 "Peak I/O" getting 2M IOPS with 104 FICON (running over 104 FCS). At the same time there was an FCS announced for E5-2600 server blades claiming over a million IOPS (two such FCS with higher throughput than 104 FICON). IBM documentation also recommends keeping SAPs (system assist processors that do the actual I/O) to 70% CPU ... or about 1.5M IOPS. Further complicating things, no CKD DASD have been manufactured for decades, all being simulated on industry standard fixed-block disks.

At the time, IBM had a price of $30M for a max-configured z196, benchmarked at 50BIPS (625MIPS/processor), or $600,000/BIPS, and a base list price of $1815 for an E5-2600 server blade that benchmarked at 500BIPS (10 times a max-configured z196), or $3.63/BIPS. Note that large cloud operators claim they have been assembling their own server blades for a couple of decades at 1/3rd the cost of brand-name server blades ($1.21/BIPS). Not long later, there was industry news that server chip makers were shipping at least half their product directly to large cloud operations ... and IBM unloads its server business.
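
Spelling out the per-channel and per-BIPS arithmetic behind the comparisons in the last two paragraphs (all inputs are the figures quoted above):

# all inputs are figures quoted above; this just spells out the arithmetic
z196_peak_iops, z196_ficon = 2_000_000, 104
print(round(z196_peak_iops / z196_ficon))        # ~19,200 IOPS per FICON (running over FCS)
print(round(0.70 * z196_peak_iops))              # ~1.4M IOPS at the recommended 70% SAP cap
print(round(1_000_000 / (z196_peak_iops / z196_ficon)))  # one million-IOPS native FCS ~= 52 FICON

z196_price, z196_bips = 30_000_000, 50
e5_price, e5_bips     = 1_815, 500
print(z196_price / z196_bips)                    # $600,000/BIPS
print(round(e5_price / e5_bips, 2))              # $3.63/BIPS
print(round(e5_price / 3 / e5_bips, 2))          # ~$1.21/BIPS at the cloud 1/3rd build cost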

A large cloud operator will have a dozen or more megadatacenters around the world, each with half a million or more server blades and staffed with 70-80 people (enormous automation). In the 2010 era, each megadatacenter was something like an aggregate of 250,000,000BIPS, or the equivalent of 5million max-configured z196s.

earlier history
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-way cluster: 2016MIPS, 128-way cluster: 16,128MIPS

then Somerset/AIM reworked IBM power with multiprocessor capable bus and
1999: single IBM PowerPC 440 hits 1,000MIPS (>six times each 2000Dec z900 processor)
1999: single Pentium3 (redone with hardware layer that translates instructions into RISC micro-ops for execution) hits 2,054MIPS (twice PowerPC)

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
FICON, FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

recent posts mentioning Somerset/AIM (apple, ibm, motorola), power getting multiprocessor bus, etc
https://www.garlic.com/~lynn/2024.html#85 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

HA/CMP

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HA/CMP
Date: 01 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP

Jan1979, I was con'ed into doing a 60's CDC6600 benchmark on early engineering 4341 for national lab that was looking at getting 70 for compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).

Almost a decade later, Nick Donofrio came through and my wife showed him five hand-drawn charts for HA/6000 and he approves it. HA/6000 was originally for the NYTimes newspaper system (ATEX) to move off DEC VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scaleup with national labs and commercial cluster scaleup with RDBMS vendors (Oracle, Sybase, Informix, Ingres). Early Jan1992, in a meeting with the Oracle CEO, AWD/Hester says that we will have 16-processor clusters by mid-1992 and 128-processor clusters by ye-1992. Then end of Jan1992, cluster scaleup is transferred for announce as the IBM supercomputer (for technical/scientific *only*) and we are told that we can't work on anything with more than four processors (we leave a few months later). Complicating things was mainframe DB2 complaining that if we were allowed to go ahead, it would be at least five years ahead of them.

Note: Based on experience working on original SQL/relational System/R (initial tech transfer to Endicott for SQL/DS and then after the next, great IBM DBMS "EAGLE" implodes, ported to MVS and ships as DB2) and input from the major RDBMS vendors, I significantly improved the throughput of the distributed protocol and distributed lock manager (compared to VAXcluster), but did implement API with the VAXcluster semantics to ease the ports to HA/CMP (also see my other post in this thread about 80s work on FCS, 200mbytes/sec). Also, the IBM S/88 product administrator had started taking us around to their customers and also got me to write a section for the corporate continuous availability strategy document (but it got pulled when both Rochester and POK complained that they couldn't meet the requirements).

Then Computerworld news 17feb1992 (from the wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

From the early 80s, one of my other projects was HSDT, T1 and faster computer links (both satellite and terrestrial), early T1 was satellite link between IBM Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston that had a bunch of floating point system boxes (boxes had 40mbyte/sec disk arrays to keep up with the processing power, 3090 had 3mbyte/sec I/O channels).
https://en.wikipedia.org/wiki/Floating_Point_Systems

IBM AWD did their own 4mbit token-ring (PC/AT bus) card for the PC/RT. Then for the (microchannel) RS/6000, they were told they couldn't do their own microchannel cards, but had to use the PS2 microchannel cards (heavily performance-kneecapped by the communication group fighting off client/server & distributed computing). Turns out the PS2 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card. At the time, the new Almaden Research bldg was heavily provisioned with CAT4, presuming token-ring, but they found that 10mbit ethernet over CAT4 had lower latency and higher aggregate throughput than 16mbit token-ring. Also the $69 10mbit ethernet card had higher throughput than the $800 16mbit token-ring card.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

HA/CMP

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HA/CMP
Date: 01 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP

IBM 4300s competed with DEC VAX in the mid-range market and sold in about the same numbers in single and small-number orders. The big difference was big corporations ordering hundreds of 4300s at a time for deployment out in departmental areas (inside IBM, conference rooms became scarce, with so many converted to VM/4341 rooms) ... sort of the leading edge of the coming distributed computing tsunami.

Also they were at the knee of the price/performance curve (price/computation) and starting to see them being bought for compute farms ... sort of the leading edge of the coming cluster supercomputing tsunami.

some recent posts mentioning distributed and cluster tsunami
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341

--
virtualization experience starting Jan1968, online at home since Mar1970

HA/CMP

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HA/CMP
Date: 01 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP

The upthread reference went into a little more detail ... part of the issue is that from 2010 to now, mainframe benchmark numbers stopped appearing ... just press about the increase in throughput since the previous system ... so I had to do some extrapolation over the machine generations.
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#81 Benchmarks

2000-2010, i86 benchmarks increased much faster than POK mainframes. The 2010 e5-2600 had ten times the benchmark of a max-configured z196 with 1/5 the number of cores (each e5-2600 core 50 times a z196 core). z196->z16 (with some extrapolation): 4.4 times the aggregate system MIPS, from a combination of 2.5 times the number of cores and 1.8 times the individual core processing. The server blades have gone through several brand name changes and a much larger increase in variety at different performance levels ... but at the high end, approx. five times the number of cores and 2-4 times the performance of each core ... so up to a 20 times increase from 2010 to now, compared to a 4.4 times increase for z196->z16 (or i86 server blades increasing from 10 times faster than a z196 to now pushing 50 times faster than a z16).
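
The extrapolation arithmetic, spelled out (all ratios are the ones given above):

# all ratios are the ones given above; this just spells out the extrapolation
z_cores, z_per_core         = 2.5, 1.8     # z196 -> z16
blade_cores, blade_per_core = 5, 4         # 2010 blade -> current high-end blade

print(z_cores * z_per_core)                # ~4.4-4.5x aggregate z196 -> z16
print(blade_cores * blade_per_core)        # ~20x aggregate blade increase
print(round(10 * (blade_cores * blade_per_core) / (z_cores * z_per_core)))
# blades started ~10x a max-configured z196; ~44x ("pushing 50 times") a z16 now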

Another of the characteristics of the large cloud megadatacenters ... they had so drastically reduced the cost of systems and the cost/BIPS ... that power&cooling was becoming a major cost ... and they put enormous pressure on the chip makers to improve power/computation ... holding out the threat of moving to much more power-efficient chips originally designed for battery operation (so important that the standard industry benchmarks that had total system cost/computation and cost/transaction ... have added system power per computation and per transaction).

... a related characteristic was that megadatacenters, along with the radical reduction in system costs, were over-provisioned to meet peak on-demand use (possibly 4-5 times nominal use) ... but then, as part of increasing power costs and optimization, they demanded that idle-system power use drop to zero ... while still being "instant on" when needed. Then, along with the big overlap in technologies between megadatacenters and cluster supercomputing ... they would offer the ability to use a credit card to spin up a "supercomputer" on demand (that might rank within the top 40 in the world) for a few hrs.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

CTSS/7094, Multics, Unix, CP/67

From: Lynn Wheeler <lynn@garlic.com>
Subject: CTSS/7094, Multics, Unix, CP/67
Date: 02 Mar, 2024
Blog: Facebook
... CEO Learson tried (and failed) to block the bureaucrats, careerists and MBAs from destroying the Watson legacy/culture ... 20yrs later IBM has one of the largest losses in US corporate history and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Some of the MIT CTSS/7094 people went to the 5th flr to do Multics (which begat UNIX). Others went to the IBM Cambridge Science Center on the 4th flr and did virtual machines (1st CP/40-CMS, with virtual memory hardware modification on a 360/40, which morphs into CP/67-CMS when the 360/67, standard with virtual memory, became available) and loads of other stuff. Some amount of friendly rivalry between the 4th&5th flrs. I was an undergraduate at a univ that had a 709/1401 being replaced with a 360/67 for tss/360 ... however when it came in, it was used as a 360/65 and I was hired fulltime responsible for os/360 (the univ. shutdown the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made monday classes hard). Then some from CSC came out to install CP/67 (3rd installation after CSC itself and MIT Lincoln Labs) and I would mostly play with it on my weekends.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

a decade ago, I was asked to track down the decision to add virtual memory to all 370s (basically MVT storage management was so bad that regions had to be specified four times larger than used; as a result a typical one mbyte 370/165 only ran four concurrently executing regions, insufficient to keep the system busy and justified; going to 16mbyte virtual memory allowed the number of concurrently running regions to be increased by a factor of four with little or no paging, sort of like running MVT in a CP/67 16mbyte virtual machine).

archived post with pieces of email exchange with somebody that reported to IBM executive making the 370 virtual memory decision
https://www.garlic.com/~lynn/2011d.html#73

then before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67-CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

a couple recent posts mentioning CTSS/7094, Multics, UNIX, CSC, CP/40, CP/67, CP/M, MS/DOS, and Opel
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?

The complete history of the IBM PC, part one: The deal of the century
https://arstechnica.com/gadgets/2017/06/ibm-pc-history-part-1/
The complete history of the IBM PC, part two: The DOS empire strikes
https://arstechnica.com/gadgets/2017/07/ibm-pc-history-part-2/

Total share: 30 years of personal computer market share figures
http://arstechnica.com/features/2005/12/total-share/
https://arstechnica.com/features/2005/12/total-share/2/
http://arstechnica.com/features/2005/12/total-share/3/
http://arstechnica.com/features/2005/12/total-share/4/
http://arstechnica.com/features/2005/12/total-share/5
https://arstechnica.com/features/2005/12/total-share/6/
https://arstechnica.com/features/2005/12/total-share/7/
https://arstechnica.com/features/2005/12/total-share/8/
https://arstechnica.com/features/2005/12/total-share/9/
https://arstechnica.com/features/2005/12/total-share/10/

--
virtualization experience starting Jan1968, online at home since Mar1970

HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: HA/CMP
Date: 02 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP
https://www.garlic.com/~lynn/2024b.html#24 HA/CMP

In the late 70s, the POK 3033 was up against clusters of 4341s ... a small cluster of 4341s had lower power, smaller footprint, higher aggregate processing and I/O, and much lower cost (there was some internal performance kneecapping of cluster implementations). It was also a time when lots of datacenters were hitting resource limits and large corporations were able to place hundreds of distributed 4341s out in departmental areas (the 4341's much lower resource requirements also allowed non-datacenter deployments) ... they were doing scores of systems per support person (rather than scores of support people per system).

trivia: the US branch office, online sales&marketing support HONE had consolidated their datacenters up in palo alto (trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the former HONE datacenter) and in the mid-70s evolved the largest single-system-image, loosely-coupled, load-balancing and fall-over complex of 370/168s with a large disk farm ... and then I added CP67 multiprocessor support to VM/370, and they were able to add a 2nd 168 processor to each system. However, that level of sophisticated mainframe cluster support didn't ship for customers until possibly after the turn of the century.

4341s could run MVS (but not MVS/XA) ... but MVS required CKD DASD ... so it was locked out of the exploding distributed computing market. The only new CKD DASD was the (datacenter) 3380. The new mid-range disks that could be deployed outside the datacenter were 3370 FBA. Eventually MVS got CKD simulation as the 3375 for non-datacenter operation (even now, CKD hasn't been manufactured for decades, but is simulated on industry standard fixed-block disks) ... but it didn't do them much good. The distributed computing market was looking at scores of vm/4341 systems per support person ... while MVS was scores of support people per system (the follow-on 4381 did add XA/370 support).

I had been con'ed into helping Endicott do the VM microcode assist for the 138/148, which then carried over to the 4331/4341. In the early 80s, I got permission to give presentations at user group meetings on how ECPS was implemented. After the meetings, the Amdahl people would grill me for additional details. They said that they were in the process of implementing a (microcoded VM subset) "HYPERVISOR" (multiple domain facility) in "MACROCODE" (370-like instructions running in microcode mode). They explained that they had developed MACROCODE mode to simplify and greatly shorten the time to respond to the plethora of trivial 3033 microcode changes constantly being required for MVS to run.

a few recent related posts
https://www.garlic.com/~lynn/2024.html#63 VM Microcode Assist
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)

--
virtualization experience starting Jan1968, online at home since Mar1970

HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: HA/CMP
Date: 02 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP
https://www.garlic.com/~lynn/2024b.html#24 HA/CMP
https://www.garlic.com/~lynn/2024b.html#26 HA/CMP

There was a study effort by GPD San Jose bldg26 ... a large machine room of MVS systems bursting at the seams ... looking at offloading workload into departmental MVS 4341s. Two issues came up: 1) they had several heavy computing applications that made use of lots of MVS services not supported by the 64kbyte OS/360 simulation in CMS, and 2) they only used MVS "captured" CPU for projecting the 4341 CPU requirements for running MVS 4341 systems (when the accounting "capture ratio" was running around 50% or a little less ... in MVS, actual total CPU requires taking elapsed time minus total wait state; actual MVS CPU could frequently be twice that directly accounted for). What the IBM Los Gatos lab did was study all the additional MVS services required and found that with 12kbytes of additional simulation code in CMS, all the bldg26 applications could be moved to VM/4341s (and w/o the uncaptured MVS CPU overhead).
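
As a hedged illustration of the "capture ratio" arithmetic above (all interval and CPU figures here are invented for the example, not measured data):

# hedged sketch of the MVS "capture ratio" arithmetic described above;
# the numbers are illustrative only
elapsed    = 3600.0   # measurement interval, seconds
wait_state = 1200.0   # total CPU wait state during the interval, seconds
captured   = 1150.0   # CPU time directly accounted to address spaces, seconds

actual_cpu    = elapsed - wait_state      # 2400 seconds of CPU actually consumed
capture_ratio = captured / actual_cpu     # ~0.48, i.e. roughly 50%

# sizing 4341s from "captured" CPU alone understates real demand by ~1/capture_ratio
print(capture_ratio, actual_cpu / captured)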

The IBM Los Gatos lab then got involved with the IBM Burlington chip lab, which had a large VLSI Fortran application running on multiple large MVS systems. Burlington had built custom 1mbyte-CSA MVS systems so that the VLSI app could have 7mbytes of virtual memory ... but was constantly banging its head against the 7mbyte brick wall anytime changes, fixes, and/or enhancements were made. Los Gatos showed that running the app on VM370/CMS systems, it could get almost the full 16mbytes of virtual memory (XA & 31 bits was still some years away). Long-winded posts below: in the move from SVS to MVS, giving each application its own 16mbyte virtual address space required mapping an 8mbyte image of the MVS kernel into each virtual address space ... leaving only 8mbytes for the application. Then MVS needed a one-mbyte Common Segment Area (CSA) in every address space, reducing the application to 7mbytes. However, CSA requirements turned out to be somewhat proportional to the number of subsystems and concurrently running applications ... CSA quickly became the "Common System Area" and was frequently 5-6mbytes and threatening to become 8mbytes (leaving zero for applications).
https://www.garlic.com/~lynn/2024b.html#11 3033
https://www.garlic.com/~lynn/2024b.html#12 3033
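
A minimal sketch of the 24-bit address-space arithmetic in the paragraph above (Python, illustrative only):

ADDR_SPACE = 16      # mbytes, 24-bit virtual address space
MVS_KERNEL = 8       # mbytes, kernel image mapped into every address space

# Burlington's custom build held CSA to 1mbyte, leaving 7mbytes for the VLSI app;
# as CSA ("Common System Area") grew, the room left for the application shrank toward zero
for csa in (1, 3, 5, 6, 8):
    print(csa, "mbyte CSA ->", ADDR_SPACE - MVS_KERNEL - csa, "mbytes for application")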

--
virtualization experience starting Jan1968, online at home since Mar1970

DB2

From: Lynn Wheeler <lynn@garlic.com>
Subject: DB2
Date: 02 Mar, 2024
Blog: Facebook
The original SQL/relational was System/R, developed on VM370. When I transfer to SJR, I work with Jim Gray and Vera Watson on it. Then there was technology transfer to Endicott for SQL/DS (done under the "radar" while the company was preoccupied with the next great DBMS, "EAGLE"). Then when EAGLE implodes, there was a request for how fast System/R could be ported to MVS ... eventually released as DB2, originally for decision support only.

When Jim Gray departs SJR for Tandem, he tries foisting off a number of things on me: support for BofA, which had an early System/R joint study (and was getting 60 VM/4341s), and DBMS consulting with the IMS group.

some history
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-Vera.html

The above references some early (1975 research paper) enhancements to VM370 for System/R that were part of the tech transfer. Then Endicott decides they want to do SQL/DS w/o requiring any changes to VM370.

"EAGLE" super replacement for IMS with a lot more bells and whistles ... various discussions in sql reunion web pages

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

some recent posts specifically mentioning Jim Gray
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#79 Benchmarks
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023c.html#37 Global & Local Page Replacement
https://www.garlic.com/~lynn/2022c.html#13 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021k.html#129 Computer Performance
https://www.garlic.com/~lynn/2021k.html#54 System Availability
https://www.garlic.com/~lynn/2021k.html#15 Disk Failures
https://www.garlic.com/~lynn/2021i.html#19 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#92 Anti-virus
https://www.garlic.com/~lynn/2021c.html#39 WA State frets about Boeing brain drain, but it's already happening

--
virtualization experience starting Jan1968, online at home since Mar1970

DB2

From: Lynn Wheeler <lynn@garlic.com>
Subject: DB2
Date: 03 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#28 DB2

aka EAGLE (which imploded) ... was a follow-on/enhanced IMS, not SQL/relational

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

Note: starting (RS/6000) HA/CMP commercial work with RDBMS in the late 80s, we had to go with non-IBM. The IBM mainframe RDBMS was written in PLS and not portable ... and IBM Toronto was just starting a ("portable") SQL/RDBMS (in "C") for PS2 ... but it wasn't ready ... and had few features. Nick Donofrio had approved HA/6000, originally for NYTimes to port their newspaper system (ATEX) off DEC VAXCluster to RS/6000 (which required an RDBMS ... all four major vendors, Oracle, Sybase, Informix, and Ingres, had VAXCluster implementations in the same source base with their Unix versions). I rename it HA/CMP when I start doing technical/scientific cluster scaleup with national labs and commercial cluster scaleup with the major RDBMS vendors. I did a super-enhanced distributed lock manager and distributed operation protocol based on experience with System/R and lots of suggestions from the portable RDBMS vendors, and also implemented VAXCluster API semantics to simplify their port. Early Jan1992, we informed the vendors that we were predicting 16-way clusters by mid-92 and 128-way by year end; but by the end of Jan1992, cluster scaleup was transferred to Kingston for announce as the IBM Supercomputer (technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).
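
For flavor, a minimal sketch of the classic VAXCluster-style distributed lock manager lock modes and their compatibility rules; this is a generic illustration of the technique, not the actual HA/CMP lock manager code:

# classic DLM lock modes: null, concurrent read, concurrent write,
# protected read, protected write, exclusive
COMPAT = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def grantable(requested, held_modes):
    # a request is granted only if it is compatible with every lock already held
    return all(requested in COMPAT[h] for h in held_modes)

print(grantable("PR", ["CR", "PR"]))   # True: shared readers coexist
print(grantable("EX", ["PR"]))         # False: exclusive must wait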

Note the IBM S/88 product administrator had started taking us around to their customers and also got me to write a section for the corporate continuous availability strategy document ... however it got pulled when both Rochester and POK complained that they couldn't meet the objectives ... also the mainframe DB2 people had started complaining that if allowed to proceed, it would be at least 5yrs ahead of them.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

After leaving IBM, some of the largest IMS (hot-standby) customers liked to have us stop in and talk technology. My wife had served a short stint in POK responsible for loosely-coupled architecture; she didn't remain long because of 1) repeated battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (except for IMS hot-standby) until much later with Parallel SYSPLEX. She has a story about working with Vern Watts (IMS inventor)
http://www.vcwatts.org/ibm_story.html

the story is about asking him who he would ask for permission to do IMS hot-standby; he says nobody, he would just tell them when it was all done (an approach that could account for his reputation for being able to get so much done). Trivia: while IMS hot-standby could fall over in minutes, even max-configured 3090s had problems with SNA/VTAM and large terminal configurations taking an hour or two to re-establish the sessions.

loosely-coupled, peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

ACP/TPF

From: Lynn Wheeler <lynn@garlic.com>
Subject: ACP/TPF
Date: 03 Mar, 2024
Blog: Facebook
My wife did a short stint as chief architect for Amadeus (the EU airline reservation platform scaffolded off the old Eastern System One) ... she didn't remain long because she sided with the EU on X.25, and the IBM communication group got her replaced. It didn't do them much good though; Amadeus went with X.25 anyway and her replacement got replaced.

It wasn't her 1st run-in with the IBM communication group ... in the early 70s she was co-author of AWP39, "peer-to-peer networking" ... which came out about the same time SNA was published. SNA doesn't have a network layer ... but had co-opted "network" in the name ... so AWP39 had to use "peer-to-peer networking", which should have been redundant.

Then later in the 70s, she was brought into POK responsible for loosely-coupled architecture ... she didn't remain long because of 1) on-going battles with the IBM communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake until Parallel Sysplex ... except for IMS hot-standby.

loosely-coupled, peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

a few posts mentioning ACP/TPF, Amadeus, and AWP39
https://www.garlic.com/~lynn/2016d.html#48 PL/I advertising
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back

--
virtualization experience starting Jan1968, online at home since Mar1970

HONE, Performance Predictor, and Configurators

From: Lynn Wheeler <lynn@garlic.com>
Subject: HONE, Performance Predictor, and Configurators
Date: 03 Mar, 2024
Blog: Facebook
The IBM 23June1969 unbundling announcement included charging for (application) software, SE services, maint, etc. SE training had included being part of a large group at the customer premises ... but they couldn't figure out how not to charge for (trainee) SE time at customers ... so HONE was created: CP67 systems that branch-office SEs could log into and practice with guest operating systems in virtual machines. One of my hobbies after joining IBM was enhanced production operating systems, and HONE was a long-time customer (1st for CP67 and then for VM370). Besides CP67/CMS, the science center had also ported APL\360 to CMS as CMS\APL, and HONE started offering APL-based sales&marketing support applications ... which came to dominate all HONE activity (and SE use of guest operating systems just faded away).

The science center had also done a lot of performance monitoring, simulation, analytical modeling, workload profiling, precursors to capacity planning, etc. One co-worker had done an APL analytical system model that was made available on HONE as the Performance Predictor: customer workload profiles and configuration information could be entered and "what-if" questions asked about changes to the system, configuration, and workloads. Product groups also produced HONE APL-based "configurator" applications for their products (things like how many messages a 3275 controller could handle based on different options and features).
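
The Performance Predictor itself was a proprietary APL application; as a hedged illustration of the kind of analytic "what-if" question it answered, here is a toy single-queue model (not the actual model, and the workload numbers are invented):

def mm1_response(service_time, arrival_rate):
    # toy M/M/1 estimate: mean time in system grows sharply as utilization -> 1
    util = arrival_rate * service_time
    if util >= 1.0:
        raise ValueError("offered load saturates the processor")
    return service_time / (1.0 - util)

# "what-if": same transaction load on a processor twice as fast (numbers invented)
print(mm1_response(0.020, 30.0))   # ~0.050 sec on the current processor
print(mm1_response(0.010, 30.0))   # ~0.014 sec on the proposed upgrade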

US HONE moved to VM370/CMS and all the US datacenters were consolidated in Palo Alto, creating the largest IBM single-system-image, loosely-coupled processor complex and disk farm anywhere (trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the former HONE datacenter) ... with load-balancing and fall-over across the complex (a modified version of the Performance Predictor was used to make the load-balancing decisions). HONE cons me into going along for some of the initial non-US HONE deployments (Europe and Japan) ... then HONE clones started popping up all over the world.

trivia: With the Future System implosion and the mad rush to get stuff back into the 370 product pipelines, the head of POK managed to convince corporate to kill the VM370 product, shut down the development group, and transfer all the people to POK for MVS/XA (Endicott managed to save the VM370 product mission, but had to reconstitute a development group from scratch). Then some higher-ups in POK were going around internal datacenters (like HONE) trying to bully them into moving off VM370 to MVS.

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

trivia: I had left IBM and around the turn of the century was looking at a large financial datacenter with 40+ max-configured mainframes ... all running the same 450K-statement Cobol application ... and using some old science center technology was able to find a 14% improvement. A performance consultant from Europe was also brought in with an analytical model and was able to find another 7% improvement. Turns out during the IBM troubles of the early 90s, when IBM was unloading lots of stuff, he had acquired a descendant of the Performance Predictor and run it through an APL->C converter.

trivia: Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
20yrs later, IBM has one of the largest losses in the history of US corporations ... and was being reorganized into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

trivia: As an undergraduate in the 60s, I had rewritten lots of CP67 code: optimized pathlengths, new page replacement algorithms, new dispatch/scheduling (with dynamic adaptive resource management), etc. Then at the science center I created an automated benchmark system that could vary configuration and parameter settings, along with a synthetic benchmark program that could specify filesystem intensity, working-set sizes, paging intensity, and interactive profiles. We had over a decade of system activity data from dozens of different kinds and configurations of internal systems. For a customer release of some of my work (for VM370), we ran 2000 synthetic benchmarks that took three months elapsed time. For the first 1000 benchmarks we defined combinations of workload and configuration that uniformly covered the complete workload/configuration space from the decade of gathered data. A modified version of the Performance Predictor would predict each benchmark's results and then compare the prediction with the actuals (validating both my algorithms and the predictions). For the next 1000 benchmarks, the predictor was modified to define benchmark characteristics searching for possible anomalous combinations, as well as testing how gracefully the system degraded under extreme stress conditions.
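
A hedged sketch of that methodology (uniformly covering a workload/configuration space, then flagging combinations where prediction and measurement disagree); the axes and the predict/measure hooks below are hypothetical stand-ins, not the real benchmark parameters:

import itertools

# hypothetical benchmark axes standing in for the real workload/configuration space
users        = (20, 40, 80, 160)
working_set  = ("small", "medium", "large")
io_intensity = ("low", "medium", "high")
combinations = list(itertools.product(users, working_set, io_intensity))

def validate(predict, measure, tolerance=0.15):
    # run every combination, compare predicted vs measured, flag the anomalies
    flagged = []
    for combo in combinations:
        predicted, measured = predict(combo), measure(combo)
        if abs(predicted - measured) > tolerance * measured:
            flagged.append((combo, predicted, measured))
    return flagged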

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement, working set, thrashing posts
https://www.garlic.com/~lynn/subtopic.html#clock
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

There was a big deal about getting quarter-second interactive response. With 3277/3272, the 3272 channel-attached controller had .089sec hardware response, and I was able to get interactive system response down to .11sec ... letting users see .199sec response. The later 3278/3274 combo was more like .5sec hardware response ... making it impossible for users to see .25sec response. STL then cons me into doing channel-extender support. In 1980, STL (since renamed SVL) was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg. They had tried "remote 3270" ... but found the human factors unacceptable. I did channel-extender support to place 3270 channel-attached controllers in the offsite bldg, with no perceived human-factors difference between STL and offsite.
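
The response-time arithmetic above, spelled out (user-perceived response is system response plus terminal/controller hardware response; the .5sec 3278/3274 figure is the approximate number from the text):

system_response = 0.110   # tuned VM/370 interactive system response, seconds
hw_3272_3277    = 0.089   # 3272/3277 controller+terminal hardware response
hw_3274_3278    = 0.500   # approximate 3274/3278 hardware response

print(system_response + hw_3272_3277)   # 0.199 sec seen by 3277 users, under 0.25
print(system_response + hw_3274_3278)   # ~0.61 sec for 3278, no tuning gets to 0.25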

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

HA/CMP

From: Lynn Wheeler <lynn@garlic.com>
Subject: HA/CMP
Date: 03 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP
https://www.garlic.com/~lynn/2024b.html#24 HA/CMP
https://www.garlic.com/~lynn/2024b.html#26 HA/CMP
https://www.garlic.com/~lynn/2024b.html#27 HA/CMP

The Science Center, besides being responsible for virtual machines and the internal network, also did a lot of online apps; GML was invented there in 1969 ... from one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

A co-worker at the science center was responsible for the CP67 wide-area network, technology also used for the corporate internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s) ... and also used for the corporate-sponsored univ BITNET/EARN (also for a time larger than the internet).
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
sgml, gml, html, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Ed and I transfer out to SJR in 1977

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Early 80s, I got the HSDT project, T1 and faster computer links (both satellite and terrestrial), and was working with the NSF Director; we were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and finally an RFP is released (in part based on what we already had running); from the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn//2002k.html#12
https://www.garlic.com/~lynn//2018d.html#33
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

Date: 4 January 1988, 14:12:35 EST
To: distribution
Subject: NSFNET Technical Review Board Kickoff Meeting 1/7/88

On November 24th, 1987 the National Science Foundation announced that MERIT, supported by IBM and MCI was selected to develop and operate the evolving NSF Network and gateways integrating 12 regional networks. The Computing Systems Department at IBM Research will design and develop many of the key software components for this project including the Nodal Switching System, the Network Management applications for NETVIEW and some of the Information Services Tools.

I am asking you to participate on an IBM NSFNET Technical Review Board. The purpose of this Board is to both review the technical direction of the work undertaken by IBM in support of the NSF Network, and ensure that this work is proceeding in the right direction. Your participation will also ensure that the work complements our strategic products and provides benefits to your organization. The NSFNET project provides us with an opportunity to assume leadership in national networking, and your participation on this Board will help achieve this goal.

... snip ... top of post, old email index, NSFNET email

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
interenet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

trivia: at the 1996 MSDC at Moscone, all the banners said "Internet", but the constant refrain in all the sessions was "protect your investment" ... aka developers had implemented function with Visual Basic inside data files that automagically executed (like when mail files were read) ... used in small, isolated business networks ... but they were planning on turning it loose on the Internet (with no additional safeguards), creating wild anarchy (nearly all of the whole virus thing).

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 04 Mar, 2024
Blog: Facebook
The IBM Cambridge Science Center, besides being responsible for virtual machines and the internal network, also did a lot of online apps; GML was invented there in 1969 (a decade later it morphs into the ISO standard SGML, and after another decade morphs into HTML at CERN) ... from one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

A co-worker at the science center was responsible for the CP67 wide-area network, which morphs into the corporate internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s) ... also used for the corporate-sponsored univ BITNET/EARN (also for a time larger than the internet).
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET/EARN posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Ed and I transfer out to SJR in 1977

SJR was the 1st IBM location to get a CSNET connection ... some SJR email about CSNET:
https://www.garlic.com/~lynn/98.html#email821022
https://www.garlic.com/~lynn/2002p.html#email821122
CSNET (arpanet cutover) status email
https://www.garlic.com/~lynn/2002p.html#email830109
https://www.garlic.com/~lynn/2000e.html#email830202

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

In the first part of the 80s, Ed leaves IBM and I get the HSDT project, T1 and faster computer links (both satellite and terrestrial), and was working with the NSF Director; we were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and finally an RFP is released (in part based on what we already had running); from the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn//2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet. I didn't take the following offer:

Date: 4 January 1988, 14:12:35 EST
To: distribution
Subject: NSFNET Technical Review Board Kickoff Meeting 1/7/88

On November 24th, 1987 the National Science Foundation announced that MERIT, supported by IBM and MCI was selected to develop and operate the evolving NSF Network and gateways integrating 12 regional networks. The Computing Systems Department at IBM Research will design and develop many of the key software components for this project including the Nodal Switching System, the Network Management applications for NETVIEW and some of the Information Services Tools.

I am asking you to participate on an IBM NSFNET Technical Review Board. The purpose of this Board is to both review the technical direction of the work undertaken by IBM in support of the NSF Network, and ensure that this work is proceeding in the right direction. Your participation will also ensure that the work complements our strategic products and provides benefits to your organization. The NSFNET project provides us with an opportunity to assume leadership in national networking, and your participation on this Board will help achieve this goal.

... snip ... top of post, old email index, NSFNET email

Supposedly it called for T1 (which we had running), but they installed 440kbit/sec links and telco multiplexers with T1 trunks (I periodically ridiculed it, asking why they didn't call it a T5 "network" ... since possibly at some point the T1 trunks might in turn be multiplexed over T5 trunks).

Later, for the T3 phase, I was asked to be the red team, and several people from a half dozen IBM labs around the world were the blue team. At the final review, I presented 1st, then the blue team; a few minutes into their presentation, an executive pounds on the table and declares that he would lay down in front of a garbage truck before he let anything but the blue team proposal go forward. I (and some others) get up and walk out.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

After leaving IBM in early 90s, I was brought in as consultant to small client/server startup; two former Oracle people (that we had worked with on RDBMS cluster scale-up, before it was transferred for announce as IBM supercomputer and we were told we couldn't work on anything with more than four processors) were there responsible for something called "commerce server" and wanted to do payment transactions on the server, the startup had also invented something called "SSL" they wanted to use, the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and financial payment networks. Based on what I had to do for "electronic commerce" (configurations, software, documentation), I put together a talk on "why the internet wasn't business critical dataprocessing" that Postel sponsored at ISI/USC.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning internet and business critical dataprocessing
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017j.html#42 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017j.html#31 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2016h.html#4 OODA in IT Security
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2011d.html#65 IBM100 - Rise of the Internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 04 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#33 Internet

TYMSHARE ... online commercial service bureau
https://en.wikipedia.org/wiki/Tymshare
and its TYMNET with lots of local phone numbers around US and the world
https://en.wikipedia.org/wiki/Tymnet
In Aug1976, Tymshare started offering its CMS-based online computer conferencing free to (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on the internal network and internal systems (the biggest problem I had were lawyers concerned that internal employees would be contaminated by exposure to unfiltered customer information).

I was also getting blamed for online computer conferencing on the internal network ... which really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem ... only about 300 were participating, but claims were that upwards of 25,000 were reading. One of the outcomes was officially sanctioned software (supporting both distributed-server and mailing-list modes) and moderated conferences/forums. Folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me.

online commercial service bureau posts
https://www.garlic.com/~lynn/submain.html#online
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 04 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet

The IBM communication group was fighting off release of mainframe TCP/IP support (part of a fierce battle with client/server and distributed computing, protecting their dumb-terminal paradigm and install base), but apparently some influential customers got that reversed. They then changed their tactic: since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be shipped through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did support for RFC1044, and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor ... something like 500 times improvement in bytes moved per instruction executed.

At Interop '88, I had an IBM PC/RT in a (non-IBM) booth ... at immediate right angles to the SUN booth; Case was in the SUN booth with SNMP and I con'ed him into installing it on the PC/RT. The Sunday before the show opened, the floor nets were crashing with a packet flood ... eventually it got diagnosed ... a provision about it shows up in RFC1122 ... also from Interop '88:

Date: 16 December 1988, 16:40:09 PST
From: somebody
Subject: Class A Network number

As a welcomed gift from the Internet, my request for a Class A network number for IBM has been approved. Initially we decided to go with multiple class B numbers because it would allow us to have multiple connections to the Internet. However, as time passed, and IP envy increased, I found it necessary to re-evaluate our requirements for a Class A number. My main concern was still the issue of connectivity to the rest of the Internet and the technical constraints that a Class A address would present. At Interop 88 I discussed my concerns with Jon Postel and Len Bosak. Len indicated that although a Class A number would still restrict us to 1 entry point for all of IBM from the Internet, it would not preclude multiple exit points for packets. At that point it seemed as if Class A would be ok and I approached Jon Postel and the network number guru at SRI to see if my request would be reconsidered. It turns out that the decision to deny us in the past was due to the numbers I projected for the number of hosts on our IBM Internet in 5 years. Based on that number, they couldn't justify giving us a full Class A. Can't blame them. So after Interop, I sent in a new request and increased our projected needs above a threshold which would warrant a Class A. Although I doubt we will ever use the full address space in 20 years let alone 5, I did what was necessary to get the number. However, the application went in quite some time ago and I still hadn't received a response. Yesterday I found out that it was because I had put down an incorrect U.S. Mail address for our sponsor!!! These people are tough. Anyway, after Postel informed me about my error, I corrected it and sent in the updated application again. The result was the issuance today of a Class A network number for IBM. Being an old Beatles fan, I asked for Number 9. Cute huh? Whatever. Anyway, that's what we got. Consider it a Christmas present from the Internet.

As many of you know, I will be leaving IBM at the end of this year. Obtaining this number was the last thing I wanted to do for IBM and the IBM Internet project. The hard part lies ahead. We still have 10 class B numbers. A lot of engineering of the network remains to be done. I will leave that up to you folks. xxxxx will be assuming responsibility for the project after I leave. I wish you all the best. It's been fun working with you on this!! My only regret is that I didn't have more time for it.

... snip ... top of post, old email index, NSFNET email

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
Interop '88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 04 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet

... at the time of the arpanet 1Jan1983 cutover there were approx. 100 IMPs and 255 hosts ... while the internal network was rapidly approaching 1000 hosts. Old archived post with a list of corporate locations that added one or more new hosts to the internal network during 1983:
https://www.garlic.com/~lynn/2006k.html#8

arpanet growth had tended to be limited by the cost & availability of IMPs ... a corresponding factor for the internal network was government resistance to the corporate requirement for link encryptors ... especially when links crossed national boundaries.

I really hated what I had to pay for T1 link encryptors, and faster ones were hard to find ... so I got involved in doing link encryptors that could handle at least 3mbytes/sec and cost less than $100 to build. The corporate crypto office declared it was seriously weaker than the crypto standard. It took me three months to convince them that rather than weaker, it was much stronger. It was a hollow victory: I was then told that there was only one institution in the world that was allowed to use such crypto ... I could make as many as I wanted ... but they all had to be sent to them.

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some recent posts mentioning realizing there were three kinds of crypto
https://www.garlic.com/~lynn/2023f.html#79 Vintage Mainframe XT/370
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#5 IBM 370
https://www.garlic.com/~lynn/2022g.html#17 Early Internet
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022.html#125 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021e.html#75 WEB Security
https://www.garlic.com/~lynn/2021e.html#58 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#17 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2021b.html#57 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting


[images: desk ornament; 1000th node globe; small piece of 1977 internal network listing; 1977 network map]

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 06 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet

I ran into Greg Chesson (known for involvement in UUCP) when he was at SGI doing XTP ... the IBM communication group was trying to block me from being on the XTP TAB, but was ignored. There were some gov orgs involved, and this was when some agencies were advocating the elimination of TCP/IP & Internet, replacing them with GOSIP, so XTP was taken to the ISO-chartered US X3S3.3 (responsible for layers 3&4) for standardization as HSP. They eventually said that ISO required protocol standards to conform to OSI ... and XTP didn't because it 1) supported an internetworking layer, which doesn't exist in OSI, 2) skipped the layer 4/3 interface, and 3) went directly to the LAN MAC, which doesn't exist in OSI (sitting somewhere in the middle of layer 3). There was a saying that IETF required two interoperable implementations before proceeding in standardization, while ISO could standardize something that couldn't even be implemented.

XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet
Date: 06 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2024b.html#37 Internet

some more topic drift, slightly related ... in 1972, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Nearly 20yrs later (late 80s), a senior disk engineer gets a talk scheduled at an annual, internal, world-wide communication group conference, supposedly on 3174 performance, but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing-friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls). The GPD/Adstar VP of software, as a partial work-around, was investing in distributed computing startups that would use IBM disks and would periodically ask us to drop by his investments. He also funded the unix/posix implementation in MVS (since it didn't "directly" cross datacenter walls, the communication group couldn't directly veto it).

The communication group stranglehold on datacenters wasn't just disks, and a couple short years later (20yrs after Learson's efforts), IBM had one of the largest losses in the history of US corporations and was being reorged into 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Tonight's tradeoff

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tonight's tradeoff
Newsgroups: comp.arch
Date: Fri, 08 Mar 2024 14:19:17 -1000
mitchalsup@aol.com (MitchAlsup1) writes:
{{I consider 360/67 as the beginning of paging; although Multics may be the beginning.}}

The science center thought IBM would get the MIT MULTICS project ... but it went to GE. Then the IBM mission for virtual memory/paging went to the "new" TSS/360 group. The science center modified a 360/40 with virtual memory & paging, pending availability of the 360/67, standard with virtual memory (and CP40/CMS morphs into CP67/CMS).

Melinda's history web pages
http://www.leeandmelindavarian.com/Melinda#VMHist
from (lots of early history; some CTSS/7094 people went to the 5th flr and Multics, and others went to the IBM science center on the 4th flr)
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
footnote from Les Comeau:
Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.

What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work.

... snip ...

Atlas reference (gone, but lives free at the wayback machine):
https://web.archive.org/web/20121118232455/http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf
from above:
Paging can be credited to the designers of the ATLAS computer, who employed an associative memory for the address mapping [Kilburn, et al., 1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words per page), |p| = 11 (resulting in 2048 pages), and f = 5 (resulting in 32 page frames). Thus a 2^20-word virtual memory was provided for a 2^14-word machine. But the original ATLAS operating system employed paging solely as a means of implementing a large virtual memory; multiprogramming of user processes was not attempted initially, and thus no process id's had to be recorded in the associative memory. The search for a match was performed only on the page number p.
... snip ...
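
The Atlas field widths quoted above work out as follows (a worked restatement of the quoted numbers, nothing more):

w, p, f = 9, 11, 5                    # bit widths from the quote
words_per_page = 2 ** w               # 512 words per page
virtual_pages  = 2 ** p               # 2048 pages
page_frames    = 2 ** f               # 32 real page frames

print(words_per_page * virtual_pages) # 2**20-word virtual memory
print(words_per_page * page_frames)   # 2**14-word real memory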

I was an undergraduate at the univ and was hired fulltime responsible for OS/360. The univ had gotten a 360/67 for TSS/360 but was running it as a 360/65 (the univ shut down the datacenter on weekends and I had the whole place dedicated to myself ... although 48hrs w/o sleep did make monday classes hard).

Then CSC came out to install CP67 (the 3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it in my weekend time. This early release had a very rudimentary page replacement algorithm and no page-thrashing controls. I did a (global LRU) reference-bit scan replacement and dynamic adaptive page-thrashing controls.
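
A minimal sketch of global-LRU reference-bit-scan ("clock") victim selection, as a generic illustration of the technique, not the actual CP67 code:

def select_victim(frames, hand):
    # frames: list of dicts with 'page' and 'ref' (the hardware reference bit);
    # hand: index where the previous scan stopped
    while True:
        frame = frames[hand]
        if frame["ref"]:
            frame["ref"] = 0                  # referenced since last pass: reset and skip
            hand = (hand + 1) % len(frames)
        else:
            return hand                       # not referenced since last pass: steal it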

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
paging, page replacement algorithms, page thrashing control posts
https://www.garlic.com/~lynn/subtopic.html#clock

Nearly 15yrs later, at Dec81 ACM SIGOPS, Jim Gray asked if I could help a Tandem co-worker get his Stanford PhD ... it involved global LRU similar to the work I had done in the 60s, and there were "local LRU" forces from the 60s lobbying hard not to award a PhD for anything that wasn't "local LRU". I had real live data from a CP/67 with global LRU on a 768kbyte (104 pageable pages) 360/67 with 80 users that had better response and throughput than a CP/67 (with nearly identical type of workload but only 35 users) that implemented the 60s "local LRU" on a 1mbyte 360/67 (155 pageable pages after fixed storage) ... aka less than half the users and 50% more real paging storage.

a decade ago, I was asked to track down decision to add virtual memory to all 370s ... found somebody involved, archived posts with pieces of the email exchange:
https://www.garlic.com/~lynn//2011d.html#73

some posts mentioning Jim Gray asking if I could help Tandem co-worker
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2021j.html#18 Windows 11 is now available
https://www.garlic.com/~lynn/2018f.html#63 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#62 LRU ... "global" vs "local"
https://www.garlic.com/~lynn/2016c.html#0 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
https://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM 3380s

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM 3380s
Date: 08 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s

Early 70s, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Nearly two decades later (end of the 80s), a senior disk engineer gets a talk scheduled at an annual, internal, world-wide communication group conference, supposedly on 3174 performance, but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing-friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls). The GPD/Adstar VP of software, as a partial work-around, was investing in distributed computing startups that would use IBM disks and would periodically ask us to drop by his investments. He also funded the unix/posix implementation in MVS (since it didn't "directly" cross datacenter walls, the communication group couldn't directly veto it).

The communication group stranglehold on datacenters wasn't just disks, and a couple short years later (two decades after Learson's efforts), IBM had one of the largest losses in the history of US companies and was being reorged into 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

getting to play disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk
communication group stranglehold on datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 09 Mar, 2024
Blog: Facebook
Early 70s, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Nearly two decades later (end of the 80s), a senior disk engineer gets a talk scheduled at an annual, internal, world-wide communication group conference, supposedly on 3174 performance, but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing-friendly platforms. The disk division had come up with solutions that were constantly being vetoed by the communication group (with their corporate responsibility for everything that crossed datacenter walls, fiercely fighting off client/server and distributed computing). The GPD/Adstar VP of software, as a partial work-around, was investing in distributed computing startups that would use IBM disks and would periodically ask us to drop by his investments. He also funded the unix/posix implementation in MVS (since it didn't "directly" cross datacenter walls, the communication group couldn't directly veto it).

Communication group stranglehold on datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal

The communication group stranglehold on datacenters wasn't just disks, and a couple short years later (two decades after Learson's efforts), IBM had one of the largest losses in the history of US companies and was being reorged into 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Example: The communication group was fiercely fighting off releasing mainframe TCP/IP but got overridden. They then changed their strategy: since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be shipped through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then added RFC1044 support, and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of the 4341 processor (something like 500 times more bytes moved per instruction executed). In the early 90s, the communication group hired a silicon valley contractor to implement TCP/IP directly in VTAM. What he demo'ed had TCP/IP running much faster than LU6.2. He was then told that everybody knows a "proper" TCP/IP implementation runs much slower than LU6.2, and they would only be paying for a "proper" implementation. Note somebody had done an analysis of VTAM/LU6.2 and (Unix BSD) TCP pathlengths; LU6.2 was something like 160k instructions, compared to (Unix BSD) TCP's 5k.

Communication group stranglehold on datacenters posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

... also AWD had done their own (PC/AT bus) 4mbit token-ring cards for the PC/RT. Then for the microchannel RS/6000, they were told they couldn't do their own cards but had to use the PS/2 cards (that had been severely performance-kneecapped by the communication group). The PS/2 microchannel 16mbit token-ring cards had lower card throughput than the PC/RT 4mbit token-ring cards. The new Almaden Research bldg had been heavily provisioned with CAT4, presumably for 16mbit token-ring, but they found that 10mbit ethernet had higher aggregate throughput and lower latency than 16mbit token-ring, and $69 10mbit ethernet cards had much higher throughput than the $800 microchannel 16mbit token-ring cards (and it wasn't just the T/R cards that were kneecapped).

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Former co-worker at research had left SJR and was doing lots of work in silicon valley, including for the senior VP of engineering at a large chip shop. He had redone the AT&T 370 C-compiler, fixing lots of bugs and significantly improving code optimization for 370s. He then ported a lot of UCB C-language chip apps to the mainframe. One day the IBM marketing rep came through and asked him what he was doing. He said ethernet support so they could use SGI graphics workstations as front-ends. The IBM marketing rep then says he should be doing token-ring support instead or they might find their mainframe support not as timely as in the past. I then get an hour-long phone call having to listen to a constant stream of four-letter words. The next morning the senior engineering VP holds a press conference to say that they are moving everything off mainframes to SUN servers. IBM then had some taskforces to look at why silicon valley wasn't using mainframes (but they weren't allowed to look at marketing issues).

some posts mentioning silicon valley mainframes
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2021j.html#36 Programming Languages in IBM
https://www.garlic.com/~lynn/2021h.html#69 IBM Graphical Workstation
https://www.garlic.com/~lynn/2021d.html#42 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
https://www.garlic.com/~lynn/2017g.html#12 Mainframe Networking problems
https://www.garlic.com/~lynn/2017.html#60 The ICL 2900
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016g.html#53 IBM Sales & Marketing
https://www.garlic.com/~lynn/2014g.html#4 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013b.html#31 Ethernet at 40: Its daddy reveals its turbulent youth
https://www.garlic.com/~lynn/2011h.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 09 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe

Large cloud operations will have a dozen or more megadatacenters around the world, each with half a million or more blade servers and staffed with 70-80 people; enormous automation and >6000 servers per staff member. Around the turn of the century the i86 processor chip makers went to a hardware layer translating instructions into RISC micro-ops ... largely negating the throughput difference with RISC processors.

2003 max configured z990, 32 processors, aggregate 9BIPS (281MIPS/proc); 2003 Pentium4 processor 9.7BIPS ("BIPS" based on number of iterations of the same benchmark program compared to 370/158) ... aka a single Pentium4 with the processing of a max configured, 32-processor z990.

1988, the branch office asks me to help LLNL (national lab) get some serial stuff they are playing with standardized, which quickly becomes the fibre-channel standard (FCS, initially 1gbit, full-duplex, aggregate 200mbytes/sec). Then POK gets some of their serial stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved in FCS and define a heavy-weight protocol that drastically reduces the native throughput, which is eventually announced as FICON. The latest benchmark numbers I've found are the (2010) z196 "peak I/O" that got 2M IOPS using 104 FICON. About the same time a FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Note: IBM docs said to keep SAPs (system assist processors that do the actual I/O) to a max of 70% CPU, which would be more like 1.5M IOPS. Further complicating things, no CKD DASD have been made for decades, all being simulated on industry-standard fixed-block disks.
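
a rough back-of-envelope of the IOPS numbers above (python; the IOPS, link counts, and 70% SAP figure are the ones quoted in this post, the per-link figure is just the division):

# back-of-envelope for the z196 "peak I/O" vs FCS numbers quoted above
ficon_links = 104
z196_peak_iops = 2_000_000                     # 2M IOPS using 104 FICON
print(f"per FICON: {z196_peak_iops / ficon_links:,.0f} IOPS")   # ~19,231

# IBM docs: keep SAPs (that do the actual I/O) to a max of 70% CPU,
# i.e. usable rate is roughly 70% of the "peak I/O" number
print(f"SAP-capped: {z196_peak_iops * 0.70:,.0f} IOPS")         # ~1.4M ("more like 1.5M")

# a single FCS announced for E5-2600 blades claimed over a million IOPS
fcs_iops = 1_000_000
print("two FCS:", 2 * fcs_iops, "IOPS vs 104 FICON:", z196_peak_iops)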

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

2010 max configured z196 (80 processors), industry standard benchmark (aka number of program iterations compared to 370/158) at 50BIPS and went for $30M ($600,000/BIPS). IBM had a base list price for an E5-2600 blade of $1815 that could benchmark at 500BIPS (ten times a max configured z196 and only $3.63/BIPS). Major cloud operations have claimed for a couple of decades that they assemble their own server blades at 1/3rd the cost of brand-name server blades ($1.21/BIPS). A cloud megadatacenter could have had the processing equivalent of something like five million max configured z196s. Then industry press had articles that processor chip makers were shipping at least half their product directly to cloud operations; IBM sells off its server blade business.
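
the $/BIPS arithmetic above as a small python sketch (all the numbers are the ones in this post, nothing added):

# price/performance arithmetic from the numbers above
z196_price, z196_bips = 30_000_000, 50        # max configured z196
blade_price, blade_bips = 1815, 500           # IBM base list price E5-2600 blade
print(z196_price / z196_bips)                 # 600,000 $/BIPS
print(blade_price / blade_bips)               # 3.63 $/BIPS
# cloud operations claim they assemble their own blades at 1/3rd brand-name price
print(blade_price / 3 / blade_bips)           # ~1.21 $/BIPS
# a half-million-blade megadatacenter, in max-configured-z196 equivalents
print(500_000 * blade_bips / z196_bips)       # ~5,000,000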

Current processor spread between a server blade and max configured z16 is more like 20-40 times (rather than just ten times).

Note there has also been some amount of technology overlap between cloud megadatacenters and cluster supercomputing ... with claims that a credit card can be used with automated megadatacenter processes to spin up a supercomputer (ranking in the top 40 in the world) for a few hrs.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

some posts mentioning risc micro-ops and pentium
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2022g.html#82 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2015.html#44 z13 "new"(?) characteristics from RedBook
https://www.garlic.com/~lynn/2014m.html#170 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#166 Slushware
https://www.garlic.com/~lynn/2014m.html#106 [CM] How ENIAC was rescued from the scrap heap
https://www.garlic.com/~lynn/2012c.html#59 Memory versus processor speed

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 09 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe

Jan1979 I was con'ed into doing a 1960s CDC6600 benchmark on an engineering 4341 for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Claims were that the 4341 was right at the knee of the price/performance curve.

Then in the early 80s, 4300s and DEC VAX/VMS were selling into the mid-range market in about the same numbers for orders of single or small numbers of machines. The big difference was large companies with orders for hundreds of vm/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Again there were claims that the 4341 was at the knee of the price/performance curve, and 4341 processors and 3370 FBA disks could be placed out in non-datacenter environments (within IBM, conference rooms were in short supply because so many had been converted to vm/4341 rooms).

As things evolved there was also a large overlap between large cloud megadatacenter technologies and cluster supercomputing technologies (looking at the latest "knee" on the price/performance curve). They also tended towards using LINUX ... because they wanted full, unrestricted system source for adapting to the evolving cluster paradigm .... with proprietary source lagging far behind.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

posts mentioning 4341, compute farms, distributed & cluster supercomputing tsunami, and cloud megadatacenter posts
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2022d.html#86 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2018b.html#104 AW: mainframe distribution
https://www.garlic.com/~lynn/2017k.html#30 Converting programs to accommodate 8-character userids and prefixes
https://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016h.html#48 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016c.html#61 Can commodity hardware actually emulate the power of a mainframe?
https://www.garlic.com/~lynn/2016b.html#57 Introducing the New z13s: Tim's Hardware Highlights
https://www.garlic.com/~lynn/2015f.html#35 Moving to the Cloud
https://www.garlic.com/~lynn/2015.html#78 Is there an Inventory of the Inalled Mainframe Systems Worldwide
https://www.garlic.com/~lynn/2014g.html#65 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014g.html#14 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014.html#97 Santa has a Mainframe!

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Career

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Career
Date: 09 Mar, 2024
Blog: Facebook
In high school I worked for the local hardware store and was periodically loaned out to local contractors; cement (sidewalks, driveways, foundations), floor/ceiling/roof joists, framing, flooring, siding, roofing, plumbing, electrical, wallboard, finishing, etc. Saved enough to start college after high school. The summer after freshman year, got a job as foreman on a construction job ... three nine-person crews. Spring was really wet and the project was way behind schedule, so we quickly started 7-day weeks and 12hr days (it was quite some time after graduation before take-home/week was more than that summer).

Sophomore year, took a 2hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO for 360/30. The univ was replacing 709/1401 with a 360/67 for tss/360 ... temporarily the 1401 was replaced with a 360/30 (pending availability of the 360/67; the 360/30 had microcode 1401 emulation). The univ shut down the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard. They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc. and within a few weeks had a 2000 card assembly program. Then within a year of taking the intro class, the 360/67 comes in and I'm hired fulltime responsible for OS/360 (tss/360 never really came to production, so it ran as a 360/65).

Student Fortran jobs ran under a second on the 709, but over a minute on the 360/65 (OS MFT9.5); I install HASP which cuts the time in half. I then start redoing STAGE2 SYSGEN, placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs; it never got better than the 709 until I install Univ. of Waterloo WATFOR.

CSC comes out to install CP67/CMS (3rd installation after CSC and MIT Lincoln Labs, precursor to VM370/CMS) and I mostly play with it during my weekend dedicated time. The first six months were spent optimizing OS/360 running in a virtual machine, redoing a lot of pathlengths. The benchmark was 322secs on the bare machine, initially 856secs in a virtual machine (CP67 CPU 534secs); got CP67 CPU down to 113secs (from 534secs). Then started redoing I/O (disk ordered seek and drum paging, from about 70/sec to capable of 270/sec), page replacement, dynamic adaptive resource management and scheduling.
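
the CP67 overhead numbers above as simple arithmetic (python; the figures are the ones in this post, and the final elapsed-time estimate assumes the benchmark was purely CPU-bound, which is an assumption, not something stated above):

# OS/360 benchmark: 322 secs on the bare machine, 856 secs in a CP67 virtual machine
bare, initial_vm = 322, 856
initial_cp67_cpu = initial_vm - bare          # 534 secs of CP67 CPU overhead
tuned_cp67_cpu = 113                          # after the pathlength rework
print(initial_cp67_cpu)                       # 534
print(tuned_cp67_cpu / initial_cp67_cpu)      # ~0.21, i.e. ~79% of the overhead removed
print(bare + tuned_cp67_cpu)                  # ~435 secs total elapsed (if CPU-bound)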

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging related posts
https://www.garlic.com/~lynn/subtopic.html#clock

CP67 as installed had 1052 and 2741 terminal support with automagic terminal type identification. The univ had some ASCII/TTY, so I integrate TTY support with automatic identification. I then want to have a single "hunt group" (single dial-in number for all terminals) ... but while IBM allowed changing the port terminal-type scanner, it had hardwired port line speed. The univ starts a project to do a clone controller: build a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it could do automatic line-speed detection. It was then enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for port interfaces. Interdata (and then Perkin/Elmer) sold it as an IBM clone controller, and four of us were written up as responsible for (some part of) the clone controller business.

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the Boeing CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). When I graduate, I join IBM Science Center (instead of staying with the Boeing CFO).

The last product I did at IBM started out as HA/6000. Nick Donofrio came by and my wife presented five hand-drawn charts that he approves. Initially, it was for NYTimes to move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP when we start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). Early Jan1992 meeting, AWD/Hester tells the Oracle CEO that we would have 16-way mid92 and 128-way ye92. Then end Jan92, cluster scale-up is transferred for announce as IBM supercomputer (technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later); mainframe DB2 had also been complaining that it would be at least 5yrs ahead of them.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

for fun of it, recent internet posts ... references doing electronic commerce after leaving IBM
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2024b.html#37 Internet
https://www.garlic.com/~lynn/2024b.html#38 Internet
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some recent posts mentioning working for the Boeing CFO:
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally

--
virtualization experience starting Jan1968, online at home since Mar1970

Automated Operator

From: Lynn Wheeler <lynn@garlic.com>
Subject: Automated Operator
Date: 10 Mar, 2024
Blog: Facebook
Internal SPM was originally done for CP67 (precursor to vm370) ... sort of a superset combination of the much later vm370 VMCF, IUCV, and SMSG ... and used for CP67 service virtual machines and 7x24 dark-room operations with automated operator ... then (internally) ported to VM370 ... but never released to customers (the VNET/RSCS released to customers actually contained SPM support, but the underlying customer VM370 didn't have it). Trivia: the author of REXX had also done a client/server multi-user spacewar game using SPM ... and because RSCS/VNET supported it, clients could be on the same machine or any place in the internal network. Almost immediately "robot" clients appeared ... beating human players; eventually the server was modified to increase power use non-linearly as the interval between client actions dropped below normal human response (to somewhat level the playing field).
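
the post doesn't give the actual function the spacewar server used; a minimal sketch of that kind of non-linear "power cost" penalty (python; the shape, the constant, and the function name are purely hypothetical, only the idea of penalizing intervals below normal human response comes from the post):

# hypothetical sketch: charge extra "power" when the interval between a client's
# actions drops below a normal human response time, growing non-linearly
HUMAN_RESPONSE = 0.25   # seconds -- assumed value, not from the post

def action_cost(base_cost, interval):
    # at human speed or slower, pay the normal cost
    if interval >= HUMAN_RESPONSE:
        return base_cost
    # below human speed, cost grows with the square of how much faster you are
    speedup = HUMAN_RESPONSE / interval
    return base_cost * speedup ** 2

print(action_cost(1.0, 0.010))   # a robot firing every 10ms pays ~625x the human cost
print(action_cost(1.0, 0.5))     # a human-paced action pays the base cost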

old email with some SPM detail
https://www.garlic.com/~lynn/2006k.html#email851017

... other trivia: back in the 60s w/360, ibm rented/leased, with charges based on the system meter which ran whenever any processor or channel was operating ... and everything had to be totally idle for at least 400ms before the meter would stop. Somewhat as part of making CP67 available 7x24, possibly with low usage during offshift operation ... there was both automated operator (for dark-room operation) and channel programs that would allow the system meter to stop (both minimizing costs associated with 7x24 during possibly low-usage periods) ... but become immediately active when characters were arriving. Note: long after IBM had converted from rent/lease system meter charges to sales, MVS still had a timer event that woke up every 400ms, guaranteeing the system meter would never stop.

Then in the early 80s, when large companies were ordering hundreds of VM/4341s for deployment out in departmental areas (sort of the leading edge of the coming distributed computing tsunami) ... automated operator was again critical for being able to run with scores of vm/4341 systems per staff person (rather than scores of staff per system) ... within IBM, conference rooms became scarce with so many being converted to VM/4341 rooms.

One other thing as part of the CP67 fully automated operator was the autolog command ... while the system would auto-ipl ... even after a system crash ... it required manual intervention to get things like service virtual machines back up and running. I had originally done the CP67 autolog command for automated benchmarks where the system would re-ipl between each benchmark and bring up a specified number of simulated users running specified workload scripts. The autolog command was then almost immediately adopted for automated operations and service virtual machines. In the morph of CP67->VM370 lots of features were greatly simplified or dropped. I then started migrating CP67 stuff to VM370 release 2 ... and the 1st was the autolog command (for running automated benchmarks). However even moderately heavy workload benchmarks would consistently crash VM370. I then had to migrate the CP67 integrity and kernel serialization work to VM370 in order to get completed benchmark results ... before migrating lots of additional CP67 features to VM370. The development group did pick up the autolog command for VM370R3.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

SPM interface program manual in this post along with discussion about autolog command, automated operator, CMSBACK, service virtual machines
https://www.garlic.com/~lynn/2006w.html#16
other posts
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2017k.html#37 CMS style XMITMSG for Unix and other platforms
https://www.garlic.com/~lynn/2016c.html#1 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2013j.html#38 1969 networked word processor "Astrotype"

posts mentioning 360 rent/leased charges based "system meter" and lots of CP67 work for 7x24 availability
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#26 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022c.html#25 IBM Mainframe time-sharing
https://www.garlic.com/~lynn/2021k.html#53 IBM Mainframe
https://www.garlic.com/~lynn/2018f.html#16 IBM Z and cloud
https://www.garlic.com/~lynn/2018.html#4 upgrade
https://www.garlic.com/~lynn/2017g.html#46 Windows 10 Pro automatic update
https://www.garlic.com/~lynn/2016b.html#86 Cloud Computing
https://www.garlic.com/~lynn/2015b.html#18 What were the complaints of binary code programmers that not accept Assembly?

--
virtualization experience starting Jan1968, online at home since Mar1970

Companies paid top executives more than they paid in US taxes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Companies paid top executives more than they paid in US taxes
Date: 14 Mar, 2024
Blog: Facebook
Companies paid top executives more than they paid in US taxes - report. Compensation for senior bosses at firms from Tesla to T-Mobile US worth more than those companies' net tax payments, study finds
https://www.theguardian.com/business/2024/mar/13/top-us-executives-salaries-versus-tax-payments
Major US Companies Pay Executives More Than Uncle Sam
https://www.nakedcapitalism.com/2024/03/major-us-companies-pay-executives-more-than-uncle-sam.html
We Know We Need Change When Major US Companies Pay Execs More Than Uncle Sam. Until this self-reinforcing cycle is broken, we'll have a corporate tax and compensation system that works for top executives--and no one else.
https://www.commondreams.org/opinion/us-companies-execs-taxes

... since congress dropped the fiscal responsibility act in 2002 (spending can't exceed tax revenue, on its way to eliminating all federal debt), there has been an explosion in federal spending & debt and major tax cuts for special interests and large corporations ... playing a significant role in claims that congress is the most corrupt institution on earth (especially the house tax & budget committees)

A 2010 CBO report was that 2003-2009, spending increased $6T and taxes were cut $6T ... for a $12T gap compared to a fiscal-responsibility budget

... 2017-2020 further big corporate tax cuts (poster child corporation claiming it would go to employee bonuses ... their website claimed single-year bonuses of up to $1000 for employees; however if every employee got the full $1000 it would have been around 2% of a single year's tax cut, the rest going for executive compensation and stock buybacks).

fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

posts mentioning claims about tax cuts going to employee bonuses
https://www.garlic.com/~lynn/2018f.html#101 U.S. Cash Repatriation Plunges 50%, Defying Trump's Tax Forecast
https://www.garlic.com/~lynn/2018b.html#78 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018b.html#71 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018b.html#21 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018b.html#18 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2018.html#104 Tax Cut for Stock Buybacks

some recent posts referring to claims that congress most corrupt institution on earth
https://www.garlic.com/~lynn/2024.html#16 Billionaires Are Hoarding Trillions in Untaxed Wealth
https://www.garlic.com/~lynn/2024.html#0 Recent Private Equity News
https://www.garlic.com/~lynn/2023g.html#33 Interest Payments on the Ballooning Federal Debt vs. Tax Receipts & GDP: Not as Bad as in 1982-1997, but Getting There
https://www.garlic.com/~lynn/2023f.html#74 Why the GOP plan to cut IRS funds to pay for Israel aid would increase the deficit
https://www.garlic.com/~lynn/2022h.html#79 The GOP wants to cut funding to the IRS. We can't let that happen
https://www.garlic.com/~lynn/2022h.html#5 Elizabeth Warren to Jerome Powell: Just how many jobs do you plan to kill?
https://www.garlic.com/~lynn/2022g.html#78 Legal fights and loopholes could blunt Medicare's new power to control drug prices
https://www.garlic.com/~lynn/2022c.html#44 IRS, Computers, and Tax Code
https://www.garlic.com/~lynn/2021j.html#61 Tax Evasion and the Republican Party
https://www.garlic.com/~lynn/2021i.html#22 The top 1 percent are evading $163 billion a year in taxes, the Treasury finds
https://www.garlic.com/~lynn/2021i.html#13 Companies Lobbying Against Infrastructure Tax Increases Have Avoided Paying Billions in Taxes
https://www.garlic.com/~lynn/2021g.html#54 Republicans Have Taken a Brave Stand in Defense of Tax Cheats
https://www.garlic.com/~lynn/2021f.html#61 Private Inequity: How a Powerful Industry Conquered the Tax System
https://www.garlic.com/~lynn/2021f.html#49 The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax
https://www.garlic.com/~lynn/2021f.html#38 Microsoft's Irish subsidiary paid zero corporation tax on $315bn profit
https://www.garlic.com/~lynn/2021e.html#93 Treasury calls for doubling IRS staff to target tax evasion, crypto transfers
https://www.garlic.com/~lynn/2021e.html#29 US tax plan proposes massive overhaul to audit high earners and corporations for tax evasion
https://www.garlic.com/~lynn/2021e.html#1 Rich Americans Who Were Warned on Taxes Hunt for Ways Around Them

--
virtualization experience starting Jan1968, online at home since Mar1970

OS2

From: Lynn Wheeler <lynn@garlic.com>
Subject: OS2
Date: 14 Mar, 2024
Blog: Facebook
Date: Fri, 4 Dec 87 15:58:10 est
From: wheeler
Subject: os2 dispatching

fyi ... somebody in boca sent a message to endicott asking about how to do dispatch/scheduling (i.e. how does vm handle it) because os2 has several deficiencies that need fixing. VM Endicott forwarded it to VM Kingston and VM Kingston forwarded it to me. I still haven't seen a description of OS2 yet so don't yet know about how to go about solving any problems.

... snip ... top of post, old email index

Date: Fri, 4 Dec 87 15:58:10 est
From: wheeler
To: somebody at bcrvmpc1 (i.e. internal vm network node in boca)
Subject: os2 dispatching

I've sent you a couple things that I wrote recently that relate to the subject of scheduling, dispatching, system management, etc. If you are interested in more detailed description of the VM stuff, I can send you some descriptions of things that I've done to enhance/fix what went into the base VM system ... i.e. what is there now, what its limitations are, and what further additions should be added.

... snip ... top of post, old email index

.... some history of PC market
http://arstechnica.com/articles/culture/total-share.ars
http://arstechnica.com/articles/culture/total-share.ars/3
http://arstechnica.com/articles/culture/total-share.ars/4
http://arstechnica.com/articles/culture/total-share.ars/5

The IBM communication group was fiercely fighting off client/server and distributed computing and had heavily performance-kneecapped the PS2 microchannel cards. The IBM AWD workstation division had done their own (PC/AT bus) 4mbit token-ring cards for the PC/RT. However AWD was told that for the RS/6000 and microchannel, they couldn't do their own cards, but had to use PS2 cards. It turns out the IBM PS2 16mbit token-ring (microchannel) card had lower card throughput than the PC/RT 4mbit token-ring card (aka an RS/6000 16mbit T/R server would have lower throughput than a PC/RT 4mbit T/R server). The joke was that an RS/6000 forced to use PS2 microchannel cards could have the same performance as a PS2/486 for lots of things. Also a $69 10mbit Ethernet card had higher throughput than the $800 PS2/microchannel 16mbit T/R card.

communication group stranglehold on datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 3033

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 3033
Date: 14 Mar, 2024
Blog: Facebook
trivia: after the FS implosion, there was a mad rush to get stuff back into the 370 product pipelines, including the quick&dirty 3033 & 3081 efforts in parallel. The 3033 was a remap of 168 logic to 20% faster chips. The 303x channel director (external channels) was a 158 engine with the integrated channel microcode (and no 370 microcode). A 3031 was two 158 engines ... one with just the 370 microcode and one with just the integrated channel microcode. A 3032 was a 168-3 using the channel director for external channels.
http://www.jfsowa.com/computer/memo125.htm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

when I transferred out to SJR I got to wander around IBM and non-IBM machine rooms in silicon valley, including disk engineering (bldg14) and disk product test (bldg15) across the street. Bldg14 had multiple layers of physical security, including the machine room where each development box was inside a heavy mesh locked cage ("testcells"). They were running 7x24, pre-scheduled, stand-alone testing and mentioned that they had recently tried MVS ... but it had a 15min mean-time-between-failure (in that environment) requiring manual re-ipl. I offered to rewrite the I/O supervisor, making it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. Then bldg15 got the 1st engineering 3033 outside the POK processor engineering flr. Since testing only took a percent or two of the processor, we scrounged up a 3830 disk controller and string of 3330 disks and set up our own private online service.

One monday morning, I get a call from bldg15 asking me what I had done over the weekend to the 3033 system ... system response/throughput had significantly degraded ... we have several back&forths about nobody having done anything ... until we determine that somebody had replaced the scrounged 3830 with a development 3880. Now the 3880 has a special hardware path to handle 3380 3mbyte/sec transfer ... but everything else is being done by a really slow vertical microcode processor (rather than the 3830's super fast horizontal microcode). New products have a requirement that they are no more than 5% slower than the previous product ... but for lots of things the 3880 is much slower than the 3830 as well as having much higher channel busy. In this particular case, they were trying to mask the controller operations that happen between the end of data transfer and when the controller can finally signal the end-of-operation interrupt ... they were hoping that they could signal the end-of-operation interrupt as soon as data transfer had completed, overlap the rest with operating system interrupt processing, and be done before the system tried to redrive the device with the next I/O. Device redrive in VM370 was already faster than MVS ... and after I rewrote the I/O supervisor it was close to ten times faster than MVS (I would joke that part of XA/370 was moving redrive out into hardware to help mask how slow MVS redrive was) ... I was trying to redrive the 3880 while it was still busy, it had to respond SM+BUSY, forcing the redrive operation to be requeued; the 3880 then presented another interrupt when it was really done, and the redrive had to be tried again.

In any case it was back to the drawing board on further 3880 tweaks to mask the latency for the final interrupt. Note they still couldn't eliminate the actual extra channel overhead. Then the trout/3090 group discovered how bad it was; they had assumed the 3880 would be the same as the 3830 but with 3mbyte/sec transfer ... and they had configured the number of channels based on that assumption to achieve their target system throughput ... and now had to increase the number of channels (to offset the increase in channel busy); the increase in the number of channels required another TCM ... and they joked that they were going to bill the 3880 group for the increase in 3090 manufacturing costs. Later marketing respun the significant increase in channels as the 3090 being a wonderful I/O machine ... rather than it being required to offset the 3880's inflated channel busy.
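
a toy model of the redrive race described above, just to make the sequence explicit (python; the latency numbers are made up purely for illustration, only the SM+BUSY/requeue/extra-interrupt sequence comes from the post):

# toy timeline model of the 3880 "early end-of-operation interrupt" trick
# (all times are illustrative, not measurements)

def redrive(os_redrive_latency, controller_cleanup=2.0):
    # controller signals end-of-operation at t=0, but is really busy until cleanup ends
    if os_redrive_latency >= controller_cleanup:
        # slow redrive (e.g. MVS): the cleanup is hidden under interrupt processing
        return "redrive accepted, controller cleanup fully overlapped"
    # fast redrive (rewritten VM370): controller still busy -> SM+BUSY,
    # requeue the I/O, take a second interrupt when cleanup ends, redrive again
    return "SM+BUSY, requeue, extra interrupt, redrive retried at t=%.1f" % controller_cleanup

print("slow redrive:", redrive(os_redrive_latency=5.0))
print("fast redrive:", redrive(os_redrive_latency=0.5))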

The 3033 channel director operation was still a little flaky and would sometimes hang, and somebody would have to go over and reset/re-impl it to bring it back. We figured out that if I did CLRCH in fast sequence to all six channels, it would provoke the channel director into doing its own re-impl (a 3033 could have three channel directors, so we could have the online service on a channel director separate from the ones used for testing).

getting to play disk engineer in bldg 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

I did write an (internal-only) research report on the work for bldg14/15 and happened to mention the MVS 15min MTBF. I then got a call from the MVS group, which I thought was going to be about how to help fix the failures, but I was later told that they really were trying to get me separated from IBM ... "shooting the messenger" ... bureaucrats managing the information flowing up ... tome about Learson trying (and failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy ... 20yrs later, IBM has one of the largest losses in the history of US corporations and was being re-orged in preparation for breaking up the company
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

... before transferring to SJR, I got roped into a 16-processor, tightly-coupled (shared memory) multiprocessor effort and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than the 168 logic remap). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite-son operating system (MVS) had (effective) 16-way support (POK doesn't ship a 16-way machine until after the turn of the century). Then the head of POK invited some of us to never visit POK again and told the 3033 processor engineers to keep their heads down and focused only on 3033. Once the 3033 was out the door, they start on trout/3090.

3081D processors were supposedly about 10% faster than the 3033, but several benchmarks found them slower. Fairly quickly the 3081D was replaced with the 3081K with twice the processor cache size ... supposedly about 40% faster than the 3081D ... however an Amdahl single processor was about the same MIPS as the two-processor 3081K ... and had much higher MVS throughput (since MVS documents talked about a two-processor SMP having 1.2-1.5 times the throughput of a single processor ... aka multiprocessor "overhead"), and MVS was worse on the 3084 .... having interference from three other processors (rather than just one other processor as on the 3081).
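
the SMP throughput comparison above as arithmetic (python; the 1.2-1.5 factor is the MVS documentation figure quoted above, and the raw-MIPS equality between the Amdahl single processor and the two-processor 3081K is as stated in the post):

# normalize one 3081K processor's raw throughput to 1.0
k_proc = 1.0
amdahl_single = 2 * k_proc                    # ~same raw MIPS as the whole two-processor 3081K
mvs_3081k = [f * k_proc for f in (1.2, 1.5)]  # effective MVS throughput range on the 3081K
print(amdahl_single, mvs_3081k)               # 2.0 vs 1.2-1.5 -> "much higher MVS throughput"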

smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

a few posts mentioning bldg15 3830 disk controller being replaced with new (faster?) 3880 and throughput cratered ... 1st had to diagnose the problem then lots of work to try masking how slow the 3880 really was
https://www.garlic.com/~lynn/2024b.html#14 Machine Room Access
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017g.html#64 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2016h.html#50 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016b.html#79 Asynchronous Interrupts
https://www.garlic.com/~lynn/2015f.html#88 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2013n.html#56 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2011p.html#120 Start Interpretive Execution
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2009r.html#52 360 programs on a z/10
https://www.garlic.com/~lynn/2009q.html#74 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009o.html#17 Broken hardware was Re: Broken Brancher
https://www.garlic.com/~lynn/2008d.html#52 Throwaway cores
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage 2250

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage 2250
Date: 15 Mar, 2024
Blog: Facebook
Lots of customers got 2250s with 360/67s for tss/360 ... however most customers just used them as a 360/65 for os/360 (although some later moved to virtual machine CP/67). I was an undergraduate at one such univ. and had taken a two credit hr intro to fortran/computers; at the end of the semester I was hired to rewrite 1401 MPIO in assembler for the 360/30. The univ had a 709/1401 (the 1401 a unit record<->tape front end for the 709) and the 1401 was replaced with a 360/30 (getting 360/30 experience pending delivery of the 360/67). The univ shut down the datacenter on weekends and I would have the place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a bunch of hardware and software manuals and got to design my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc. ... and within a few weeks had a 2000 card program. Within a year of taking the intro class, the 360/67 (w/2250) arrived and I was hired fulltime responsible for OS/360 (and continued to have my weekend dedicated time).

Later IBM CSC came out to install (virtual machine) CP/67 (3rd installation after CSC itself and MIT Lincoln Labs), which I mostly played with during my weekend dedicated time. I did interface the CMS editor to the Lincoln Labs 2250 driver library (originally written for fortran programs). Later (before I graduate), I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I thought the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room for a 360/67 for me to play with when I wasn't doing other stuff).

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
recent posts mentioning univ work as undergraduate
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100

While I was at Boeing, they also bring up the two-processor 360/67 from Boeing Huntsville, which they had originally gotten for TSS/360 with several 2250s for CAD/CAM ... but ran as two 360/65s. They had run into the MVT storage management problems ... aggravated by long-running 2250 CAD/CAM apps, and had modified MVT R13 to run in virtual memory as a partial countermeasure (sort of a precursor to VS2/SVS, but w/o paging; it could re-org storage addressing).

Note a decade ago, I was asked to track down the decision to add virtual memory to all 370s and found a staff member who had reported to the executive making the decision. Basically MVT storage management was so bad that regions tended to be specified four times larger than used ... as a result a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. Going to MVT in a 16mbyte virtual memory allowed the number of concurrently running regions to be increased by a factor of four, with little or no paging (aka VS2/SVS, something like running MVT in a CP67 16mbyte virtual machine).
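
the region arithmetic above, sketched with an assumed region size (python; only the two factor-of-four relationships come from the post, the 256kbyte specified-region size is an illustrative assumption, and the sketch ignores the storage taken by the system itself):

# MVT storage management: regions specified ~4x larger than actually used
specified_region = 256 * 1024        # assumed specified region size (illustrative)
actual_use = specified_region // 4   # roughly what a region really touches
real_165 = 1 * 1024 * 1024           # typical 1mbyte 370/165
print(real_165 // specified_region)  # 4 concurrent regions -- not enough to keep it busy

# VS2/SVS: run the regions in a single 16mbyte virtual address space instead
regions = 4 * (real_165 // specified_region)      # 4x as many regions (16)
print(regions, regions * actual_use <= real_165)  # combined real usage still fits real
                                                  # storage -> "little or no paging"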

Note that regions were isolated with the four-bit storage keys, but as systems got larger and more powerful, they needed a further increase in the number of concurrently running regions, resulting in a separate address space for each region (for isolation) ... aka VS2/MVS (which spawned the common segment area, CSA, which exploded into multiple segments and became the common system area ... and by the 3033, along with the 8mbyte kernel image, was threatening to take over the whole 16mbyte address space, leaving nothing for programs).

Some posts mentioning Boeing Huntsville, 360/67, and 2250s
https://www.garlic.com/~lynn/2024b.html#3 Bypassing VM
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#81 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#52 IBM Vintage 1130
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021g.html#39 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2017k.html#13 Now Hear This-Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017d.html#75 Mainframe operating systems?
https://www.garlic.com/~lynn/2016c.html#9 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2014l.html#69 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2014j.html#33 Univac 90 series info posted on bitsavers
https://www.garlic.com/~lynn/2014i.html#67 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014d.html#32 [OT ] Mainframe memories
https://www.garlic.com/~lynn/2013e.html#63 The Atlas 2 and its Slave Store
https://www.garlic.com/~lynn/2012k.html#55 Simulated PDP-11 Blinkenlight front panel for SimH
https://www.garlic.com/~lynn/2012f.html#10 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012d.html#33 TINC?
https://www.garlic.com/~lynn/2011.html#73 Speed of Old Hard Disks - adcons

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 15 Mar, 2024
Blog: Facebook
The IBM Dallas E&S center produced an analysis showing 16mbit token-ring much better than ethernet ... the only thing I could think of was that they used the prototype 3mbit ethernet before "listen before transmit"

The communication group had also been fighting to block the release of mainframe TCP/IP support. When that got reversed, they claimed that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. I then do RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained IBM channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Former co-worker at research had left SJR and was doing lots of work in silicon valley, including for the senior VP of engineering at a large chip shop. He had redone the AT&T 370 C-compiler, fixing lots of bugs and significantly improving code optimization for 370s. He then ported a lot of UCB C-language chip apps to the mainframe. One day the IBM marketing rep came through and asked him what he was doing. He said ethernet support so they could use SGI graphics workstations as front-ends. The IBM marketing rep then says he should be doing token-ring support instead or they might find their mainframe support not as timely as in the past. I then get an hour-long phone call having to listen to a constant stream of four-letter words. The next morning the senior engineering VP holds a press conference to say that they are moving everything off mainframes to SUN servers. IBM then had some taskforces to look at why silicon valley wasn't using mainframes (but they weren't allowed to look at marketing issues).

some posts mentioning marketing threatening if ethernet was used instead of token-ring
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2017g.html#12 Mainframe Networking problems
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016g.html#53 IBM Sales & Marketing

New Almaden Research Center was heavily provisioned with CAT4 supposedly for 16mbit token-ring ... however they found 10mbit (CAT4) ethernet had higher aggregate LAN bandwidth and lower latency than 16mbit token-ring.

IBM AWD (workstation division) had done their own (AT-bus) 4mbit token-ring card for the PC/RT. However for the RS/6000 and microchannel, AWD was told that they could only use standard PS2 microchannel cards (that were heavily performance-kneecapped by the communication group); the typical example was that the 16mbit T/R microchannel card had lower card throughput than the PC/RT 4mbit token-ring card (and the $800 microchannel 16mbit token-ring card had significantly lower throughput than a $69 10mbit ethernet card that benchmarked at 8.5mbits/sec). I was told that CAT4 (& T/R) was originally developed to replace 3270 coax cables, because cable trays (carrying 3270 end-to-end coaxes) were starting to exceed bldg load limits.

My wife had been asked to co-author an IBM response to a gov. agency request for a high-security campus environment ... for which she included a 3-tier networking architecture. We were then out making customer executive marketing presentations on 3-tier, TCP/IP and ethernet ... and taking arrows in the back from the communication group and token-ring forces, who repeatedly claimed our comparison analysis was incorrect/flawed but never explained how; lots of FUD (fiercely fighting off client/server and distributed computing, and doing their best to limit PCs to 3270 emulation).

3tier posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

Late 80s, a senior disk engineer gets a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had a stranglehold on datacenters. The disk division was starting to see data fleeing the datacenter to more distributed-computing friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions to reverse the situation, but they were constantly being vetoed by the communication group. As a partial work-around the disk division executive was investing in distributed computing startups that would use IBM disks (and would periodically ask us to visit his investments). The communication group datacenter stranglehold wasn't just disks, and a couple of short years later, IBM has one of the largest losses in the history of US corporations and was being re-orged in preparation for breaking up the company. The board then brings in the former AMEX president as CEO, who somewhat reverses the breakup (although it wasn't long before the disk division was gone).

posts mentioning communication group fiercely fighting to preserve their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

trivia: early 80s, I had HSDT, T1 and faster computer (satellite and terrestrial) links and was working with the NSF director, supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happened, and an RFP was released (in part based on what we already had running) ... but internal IBM politics prevent us from bidding.

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 16 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring

HSDT was formed in the early 80s about the same time ACIS was formed, and I kept both ACIS and FSD informed on what was going on ... and the NSF director was having us pitch to various univs and supercomputer centers. I was also blamed for online computer conferencing on the internal network in the late 70s and early 80s ... which really took off spring of 81 when I distributed a trip report of a visit to Jim Gray at Tandem ... only about 300 participated but claims were that 25,000 were reading. Folklore is that when the corporate executive committee was told, 5of6 wanted to fire me. I'm transferred to research hdqrts in Yorktown ... continued to live in San Jose (with various offices in San Jose) ... but had to commute to ykt a couple times per month.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

Note when I 1st graduated and joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and lots of places ran it, including the world-wide, online, branch office, sales&marketing HONE systems (a long-time customer back to 360/67 days) ... which may have tempered getting fired.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

NSF gave UC $120m for a UCB supercomputer center and I was asked to pitch the NSF supercomputer interconnect to them. I was told that the UC Regents then said that the UC master bldg plan had UCSD getting the next new bldg, and it becomes the UCSD supercomputer center. The IBM Berkeley branch also asks me to do some work with the "Berkeley 10M" group .... who were also doing work converting from film to CCD for an observatory in Hawaii and wanted to do remote viewing. Had visits to Lick Observatory looking at some of the pilot work (at the time they had a 200x200 CCD, but rumors were that Spielberg was playing with 2048x2048). They then get large grants from a foundation and it becomes the Keck "10M" (observatory).

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

misc. old 10M related email
https://www.garlic.com/~lynn/2004h.html#email830804
https://www.garlic.com/~lynn/2004h.html#email830822
https://www.garlic.com/~lynn/2004h.html#email830830
https://www.garlic.com/~lynn/2004h.html#email841121
https://www.garlic.com/~lynn/2004h.html#email860519

some other recent posts mentioning Berkeley/Keck 10m observatory
https://www.garlic.com/~lynn/2023d.html#39 IBM Santa Teresa Lab
https://www.garlic.com/~lynn/2023b.html#9 Lick and Keck Observatories
https://www.garlic.com/~lynn/2022e.html#104 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022.html#67 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#0 Internet
https://www.garlic.com/~lynn/2021k.html#56 Lick Observatory
https://www.garlic.com/~lynn/2021g.html#61 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021c.html#60 IBM CEO
https://www.garlic.com/~lynn/2021c.html#25 Too much for one lifetime? :-)
https://www.garlic.com/~lynn/2021b.html#25 IBM Recruiting

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 16 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#51 IBM Token-Ring

the PC/RT 4mbit t/r card designed/built by AWD for the PC/RT AT-bus (16bit) .... had higher card throughput than the IBM PS2 16mbit t/r microchannel (32bit) card. AWD had been planning on doing their own (workstation performance) microchannel cards for the RS/6000 workstation ... but were then told they couldn't do their own microchannel cards ... but had to use standard PS2 cards ... that had much lower throughput .... not just 16mbit t/r, but graphics cards, scsi cards, etc. It is claimed that AWD did the RS/6000 730 graphics workstation with VMEbus ... just so they could claim that they couldn't use (communication group heavily performance-kneecapped) PS2 microchannel cards, but were "forced" to use real high-performance workstation (vmebus) cards.

The only other thing RS/6000 workstations had was the SLA (serial link adapter), which was sort of a tweaked mainframe ESCON (10% faster raw bandwidth, full-duplex, transmit and receive concurrently, etc) and totally incompatible with everybody else ... so it could only talk to other RS/6000s. We con a high-end router vendor into offering an SLA interface ... so SLA was useful for interfacing to high-end distributed environments (their boxes supported multiple different vendors' channel interfaces, HIPPI (standardized version of the Cray 100mbyte/sec channel), T1 & T3 telco, FDDI, up to 16 10mbit Ethernet, etc). Then AWD was getting to have a microchannel card done for the fibre-channel standard (initially 1gbit, full-duplex, 200mbyte/sec aggregate, not the heavily restricted FICON protocol that mainframes ran over fibre-channel), since there wasn't a PS2 FCS card to use.

FICON & fibre channel posts
https://www.garlic.com/~lynn/submisc.html#ficon
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

some posts mentioning RS/6000 serial link adapter
https://www.garlic.com/~lynn/2023c.html#92 TCP/IP, Internet, Ethernet, 3Tier
https://www.garlic.com/~lynn/2022b.html#66 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2017d.html#31 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2014h.html#87 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2012k.html#69 ESCON
https://www.garlic.com/~lynn/2011f.html#45 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2009s.html#32 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009q.html#32 Mainframe running 1,500 Linux servers?
https://www.garlic.com/~lynn/2009p.html#85 Anyone going to Supercomputers '09 in Portland?
https://www.garlic.com/~lynn/2009j.html#59 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2008q.html#60 Mainframe files under AIX etc
https://www.garlic.com/~lynn/2007o.html#54 mainframe performance, was Is a RISC chip more expensive?
https://www.garlic.com/~lynn/2006x.html#11 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006p.html#46 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006m.html#52 TCP/IP and connecting z to alternate platforms
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?
https://www.garlic.com/~lynn/2005l.html#26 ESCON to FICON conversion
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005h.html#7 IBM 360 channel assignments
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2000f.html#31 OT?

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 16 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe

... most of the MIPS&BIPS numbers are based on counting iterations of an industry standard benchmark program relative to the 370/158 (not actually counting instructions) ... more recent IBM mainframe numbers are from publications giving performance/throughput compared to the previous generation, carried forward.


z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022

Note original RS6000 didn't have bus for multiprocessor cache consistency.

some history


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-way cluster: 2016MIPS, 128-way cluster :
      16,128MIPS

Then Somerset/AIM (apple, ibm, motorola) reworked IBM power, including a multiprocessor capable bus (apparently from the Motorola RISC 88k). In the late 90s, the i86 processor vendors redid their processors with a hardware layer that translated instructions into RISC micro-ops for actual execution, negating the throughput differences between i86 and RISC.


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     z900 processor)
1999 single Pentium3 (redone with hardware layer that translates
     instructions into RISC micro-ops for execution) hits 2,054MIPS
     (twice PowerPC)

2003 max. configured z990, 32 processor aggregate 9BIPS (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configured z196, 80 processor aggregate 50BIPS (625MIPS/proc)
2010 E5-2600 server blade, 16 processor aggregate 500BIPS (31BIPS/proc)

The most recent IBM published numbers I have are for the 2010 z196 and E5-2600 server blades. An IBM max configured z196 went for $30M (or $600,000/BIPS) and the IBM base list price for an E5-2600 server blade was $1815 (or $3.63/BIPS). Around the turn of the century, large cloud operators were claiming they assembled their own server blades for 1/3rd the price of brand name blades ($1.21/BIPS) ... a large cloud operator will have a dozen or more megadatacenters around the world, each megadatacenter with half a million or more blades and enormous automation ... a megadatacenter getting by with 70-80 staff. Articles in the 2010 era reported that a credit card could be used at a large cloud operator to spin up a cluster supercomputer (that would rank in the top 40 in the world) for a few hrs (with special rates for off-peak use).
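
back-of-envelope check of the price/performance arithmetic above (a minimal sketch using just the numbers quoted here, nothing official):

/* check of the $/BIPS figures quoted above; prices and BIPS are the
   numbers from the text, not official IBM/Intel data */
#include <stdio.h>

int main(void)
{
    double z196_price  = 30000000.0;  /* max-configured z196, $30M     */
    double z196_bips   = 50.0;        /* 80 processors aggregate       */
    double blade_price = 1815.0;      /* E5-2600 blade, IBM base list  */
    double blade_bips  = 500.0;       /* 16 processors aggregate       */

    printf("z196:  $%.0f/BIPS\n", z196_price / z196_bips);    /* ~$600,000 */
    printf("blade: $%.2f/BIPS\n", blade_price / blade_bips);  /* ~$3.63    */
    printf("cloud self-assembled (1/3 price): $%.2f/BIPS\n",
           (blade_price / 3.0) / blade_bips);                 /* ~$1.21    */
    return 0;
}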

IBM also published a "Peak I/O" benchmark for z196 claiming 2M IOPS using 104 FICON (running over 104 FCS). About the same time an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over 104 FCS). Note in 1988, an IBM branch office asked if I could help LLNL get some serial stuff they were working with standardized, which quickly becomes the fibre channel standard ("FCS", initially 1gbit, full-duplex, aggregate 200mbytes/sec). Then POK announces some serial stuff (that they had been working with since at least 1980) with ES/9000 as ESCON, when it was already obsolete (17mbyte/sec). IBM publications also recommended that the SAPs (service assist processors that actually do I/O) be kept to 70% CPU (or about 1.5M IOPS, instead of 2M IOPS).

Early last decade, there was industry press that the server chip vendors were shipping at least half their products directly to cloud megadatacenters, and IBM unloads its server blade business. While E5-2600 server blades were ten times the BIPS of a max configured z196, current server blades are more like 20-40 times the most recent IBM mainframe. z16 is only 4.44 times the aggregate BIPS of z196 (222 vs 50), a combination of 2.5 times the number of processors (200 vs. 80) and 1.78 times the individual processor BIPS (1.1 vs. .625).

FICON & fibre-channel posts
https://www.garlic.com/~lynn/submisc.html#ficon
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

posts mentioning mainframes this century
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#85 Vintage DASD
https://www.garlic.com/~lynn/2023d.html#47 AI Scale-up
https://www.garlic.com/~lynn/2022h.html#113 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16

other posts mention pentiums and risc micro-ops
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024.html#113 Cobol
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2022g.html#82 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2015.html#44 z13 "new"(?) characteristics from RedBook
https://www.garlic.com/~lynn/2014m.html#106 [CM] How ENIAC was rescued from the scrap heap
https://www.garlic.com/~lynn/2012c.html#59 Memory versus processor speed

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 17 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe

Note: 1990 standard unix clocked 5k instructions (and 5 buffer copies) total pathlength for TCP (and was working hard to use scatter/gather to drive the buffer copies to zero) ... while VTAM LU6.2 clocked around 160k instruction pathlength (and 15 buffer copies, which could take more cache misses and processor cycles than the 160k instructions).

Over the IBM communication group's objections, I was on Greg Chesson's XTP technical advisory board ... one of the TCP/IP issues was the header CRC; XTP moved the CRC to a trailer field and designed an outboard FIFO that would calculate the CRC as the bytes were flowing out and append the CRC (reverse for incoming), eliminating it from the pathlength & latency. trivia: this was during the period that some gov. agencies were advocating eliminating TCP/IP and moving everything to OSI (GOSIP) ... and there were some military projects participating in XTP ... so XTP was taken to X3S3.3 (the ISO chartered US ANSI standards group for OSI layer 3&4) as HSP (high speed protocol) for standardization. We were eventually told that ISO required that only protocols conforming to OSI could be standardized. XTP/HSP failed because 1) it supported an internetworking protocol, which doesn't exist in the OSI model, sitting between layer 3&4, 2) it bypassed the layer 3/4 interface, and 3) it went directly to the LAN MAC interface, which doesn't exist in the OSI model (sitting somewhere in the middle of layer 3). There was also a joke that ISO could standardize stuff that couldn't be implemented, while (internet) IETF required two interoperable implementations to proceed in the standards process.
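
a minimal sketch of the trailer-CRC idea (illustrative only; CRC-32 and the function names here are stand-ins, not the actual XTP checksum definition): the check value is accumulated as each byte streams past and then appended at the end, so nothing has to be computed before the header goes out:

/* stream the payload while accumulating the CRC (what the outboard FIFO
   hardware would do), then emit the 4-byte trailer */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint32_t crc32_update(uint32_t crc, uint8_t byte)
{
    crc ^= byte;
    for (int i = 0; i < 8; i++)
        crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    return crc;
}

size_t send_with_trailer_crc(const uint8_t *payload, size_t len,
                             uint8_t *wire, size_t wire_max)
{
    if (len + 4 > wire_max)
        return 0;
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        wire[i] = payload[i];                 /* byte flows out ...          */
        crc = crc32_update(crc, payload[i]);  /* ... CRC updated on the fly  */
    }
    crc ^= 0xFFFFFFFFu;
    memcpy(wire + len, &crc, 4);  /* appended as trailer (host byte order
                                     here; a real protocol defines the order) */
    return len + 4;
}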

XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

Other trivia: I got the HSDT project in the early 80s (T1 and faster computer links, both terrestrial and satellite), and early on we went to dynamic rate-based pacing ... as opposed to window pacing algorithms ... so it could dynamically adapt to any transfer rate and any latency ... and I wrote it into the XTP specification.
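
a minimal sketch of the rate-based pacing idea (illustrative, not the HSDT/XTP code; the adjustment factors are arbitrary assumptions): the sender derives an inter-packet interval from a target rate and adjusts that rate from feedback, independent of window size or round-trip latency:

#include <stdio.h>

struct pacer {
    double rate_bps;   /* current target transmit rate */
    double pkt_bits;   /* packet size in bits          */
};

/* interval to wait between packet transmissions at the current rate */
double inter_packet_gap(const struct pacer *p)
{
    return p->pkt_bits / p->rate_bps;   /* seconds per packet */
}

/* feedback from receiver/intermediate nodes: probe gently, back off harder */
void adjust_rate(struct pacer *p, int congestion_seen)
{
    if (congestion_seen)
        p->rate_bps *= 0.75;   /* multiplicative backoff      */
    else
        p->rate_bps *= 1.05;   /* gentle probe for more rate  */
}

int main(void)
{
    struct pacer p = { 1.5e6, 1500 * 8 };  /* T1-class rate, 1500-byte packets */
    printf("gap at %.2f Mbit/sec: %.3f ms\n",
           p.rate_bps / 1e6, inter_packet_gap(&p) * 1e3);
    adjust_rate(&p, 1);
    printf("after backoff: %.2f Mbit/sec\n", p.rate_bps / 1e6);
    return 0;
}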

Window pacing shows up in VTAM being stuck at (terrestrial) 56kbit links, and in the mid-80s the communication group submitted an analysis to the corporate executive committee that customers weren't going to want T1 before the mid-90s ... showing how 37x5 "fat pipe" (parallel 56kbit links treated as a single logical link) installations dropped to zero by six 56kbit links. What they didn't know (or didn't want to tell the corporate executive committee) was that typical telco (terrestrial) tariffs for T1 were about the same as six 56kbit links (we found hundreds of customers that had gone to full T1 and switched to non-IBM hardware and software).

Eventually the communication group came out with the 3737 (which had a boat load of memory and Motorola 68k processors) that ran a mini-VTAM ... and would fake out the local host VTAM with immediate ACKs, before transmission ... trying to keep the packets flowing. Even then it maxed out at about 2/3rds of a full-duplex (US) T1 (and half an EU T1) on short haul terrestrial (relatively low latency) links.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

posts mentioning unix tcp and vtam lu6.2 buffer copies
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#7 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023c.html#95 TCP/IP, Internet, Ethernet, 3Tier
https://www.garlic.com/~lynn/2023c.html#70 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022h.html#115 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#94 IBM 360
https://www.garlic.com/~lynn/2022g.html#48 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022c.html#71 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#80 Channel I/O
https://www.garlic.com/~lynn/2022.html#21 Departmental/distributed 4300s
https://www.garlic.com/~lynn/2021i.html#71 IBM MYTE
https://www.garlic.com/~lynn/2016b.html#104 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")

a few older references to 3737
https://www.garlic.com/~lynn/2012o.html#47 PC/mainframe browser(s) was Re: 360/20, was 1132 printerhistory
https://www.garlic.com/~lynn/2012m.html#24 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012j.html#89 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012j.html#87 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012i.html#4 A joke seen in an online discussion about moving a box of tape backups
https://www.garlic.com/~lynn/2012g.html#57 VM Workshop 2012
https://www.garlic.com/~lynn/2012g.html#23 VM Workshop 2012
https://www.garlic.com/~lynn/2012f.html#92 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012e.html#19 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2012d.html#20 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2012c.html#41 Where are all the old tech workers?
https://www.garlic.com/~lynn/2011p.html#103 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011h.html#54 Did My Brother Invent E-Mail With Tom Van Vleck? (Part One)
https://www.garlic.com/~lynn/2011h.html#0 We list every company in the world that has a mainframe computer
https://www.garlic.com/~lynn/2011g.html#77 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Token-Ring

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Token-Ring
Date: 17 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#51 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#52 IBM Token-Ring

AWD? ... advanced workstation division IBU ... the ROMP chip was going to be used for a displaywriter followon. When that got canceled, they decided to retarget for the unix workstation market ... and got the company that did PC/IX (for the IBM/PC) to do at&t unix for ROMP ... which becomes AIX and the PC/RT. The followon was the RIOS chipset and RS/6000.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

The communication group had been fighting release of mainframe TCP/IP; when they lost, they changed tactics and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec using nearly a whole 3090 processor. Then it was ported from VM to MVS by implementing VM "diagnose" simulation for MVS. I then added RFC1044 support, and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed). Later in the 90s, the communication group hires a silicon valley contractor to do TCP/IP directly in VTAM. What he demoed had TCP running much faster than LU6.2. He was then told that everybody knows that LU6.2 is much faster than a "proper" TCP/IP implementation, and they would only be paying for a "proper" TCP/IP implementation.

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

The last product we did at IBM was HA/CMP. My wife had done five hand drawn charts for a presentation to Nick Donofrio, which he approved. Initially it was HA/6000, for the NYTimes to move their newspaper system (Oracle-based ATEX) from VAXcluster to HA/6000. I rename it HA/CMP when we start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). The RDBMS vendors had VAXcluster support in the same source base with Unix ... I do an enhanced (IP-based) distributed protocol and distributed lock manager supporting VAXcluster semantics to ease the conversion. The System/88 product administrator also starts taking us around to their customers and gets me to write a section for the corporate continuous availability strategy document (but it gets pulled when both Rochester/AS400 & POK/mainframe complain that they can't meet the requirements).

Early Jan92, we have a cluster scale-up meeting with Oracle where AWD/Hester tells the Oracle CEO that we would have 16-way clusters mid92 and 128-way clusters ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Aggravating it was that mainframe DB2 had been complaining that if we were allowed to go ahead, it would be at least 5yrs ahead of them.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivable, geographic survivable
https://www.garlic.com/~lynn/submain.html#available

Note original RS6000 didn't have bus for multiprocessor cache consistency ... so cluster was initial scale-up strategy


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-way cluster: 2016MIPS, 128-way
      cluster: 16,128MIPS

Then Somerset/AIM (apple, ibm, motorola) reworked IBM power, including a multiprocessor capable bus (apparently from the Motorola RISC 88k). In the late 90s, the i86 processor vendors redid their processors with a hardware layer that translated instructions into RISC micro-ops for actual execution, negating the throughput differences between i86 and RISC.


1999 single IBM PowerPC 440 hits 1,000MIPS
1999 single Pentium3 hits 2,054MIPS (twice PowerPC)

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Mainframe

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Mainframe
Date: 17 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe

more about window & rate congestion control. my wife was co-author of AWP39 about the same time as SNA appeared ... the joke was that SNA wasn't a System, wasn't a Network, and wasn't an Architecture ... and because SNA had co-opted "network", AWP39 had to be qualified as "peer-to-peer" networking. Window pacing was sort of a direct circuit to the receiver, sized by the number of receiver buffers ... however it didn't really account for a large network with intermediate routers.

At the summer 88 IETF meeting, Van Jacobson presented "slow start" ... slowly increasing the "window" size/number. Also, summer 88 ACM SIGCOMM had a paper showing that "slow start" was non-stable in a heavily loaded, heterogeneous large network; returning "ACKs" tended to bunch up at intermediate nodes, with multiple ACKs arriving in a group at the sender. The sender would then transmit multiple packets back-to-back ... increasing the likelihood that it would saturate some intermediate node (or the receiver).
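
a minimal sketch (not Van Jacobson's code) of why bunched ACKs turn window-based slow start into back-to-back bursts; the starting window of four segments and four bunched ACKs are arbitrary assumptions:

#include <stdio.h>

int main(void)
{
    int cwnd = 4;        /* congestion window, in segments */
    int in_flight = 4;   /* segments outstanding           */

    /* suppose an intermediate node delivers 4 ACKs in one bunch */
    int acks_bunched = 4, burst = 0;
    for (int i = 0; i < acks_bunched; i++) {
        in_flight--;     /* one segment acknowledged          */
        cwnd++;          /* slow start: +1 segment per ACK    */
        /* sender immediately refills the window */
        while (in_flight < cwnd) { in_flight++; burst++; }
    }
    /* prints: cwnd=8, back-to-back burst of 8 segments */
    printf("cwnd=%d, back-to-back burst of %d segments\n", cwnd, burst);
    return 0;
}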

Our (early 80s, HSDT) standard dynamic rate-based congestion countermeasure was to control the interval between packet transmissions ... allowing both intermediate nodes and the receiver time to process incoming packets. The communication group's 3737 (with all its memory, multiple Motorola 68k processors, and pseudo-VTAM) didn't even handle a full, short haul, terrestrial, single T1 (by faking ACKs immediately to the host VTAM, before transmission).

Van Jacobson Denies Averting Internet Meltdown in 1980s. All Van Jacobson wanted to do was upload a few documents to the internet. Unfortunately, it was 1985.
https://www.wired.com/2012/05/van-jacobson/
The solution was essentially to slow down the startup process -- i.e. not sent so many packets so quickly. "The problem was that we had no clock at startup. We had to build a clock," Jacobson says. "You couldn't just send one packet and wait. But we had to figure out what you could do. Could you send two and wait? We needed a slow start that let the connection get going."

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning 1988, ietf, slow-start, acm sigcomm and non-stable
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2019.html#74 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2012g.html#39 Van Jacobson Denies Averting 1980s Internet Meltdown
https://www.garlic.com/~lynn/2009m.html#80 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2005q.html#22 tcp-ip concept
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage RISC

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage RISC
Date: 17 Mar, 2024
Blog: Facebook
RISC is not only for blazing-fast execution ... but also for improving the throughput of instruction scheduling. Late last century, i86 chip makers added a hardware layer that translated i86 instructions into RISC micro-ops for actual execution ... largely negating the RISC processor throughput advantage (note MIPS/BIPS benchmarks are not actual instruction counts but iterations of a standard program compared to the 370/158).

Note original 801/RISC RS6000 didn't have bus for multiprocessor cache consistency.


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS

Then Somerset/AIM (apple, ibm, motorola) reworked IBM power, including multiprocessor capable bus (apparently from Motorola RISC 88k).

1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     z900 processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC)

2003 max. configured z990, 32 processor aggregate 9BIPS (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configured z196, 80 processor aggregate 50BIPS (625MIPS/proc)
2010 E5-2600 server blade, 16 processor aggregate 500BIPS (31BIPS/proc)

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

some recent posts mentioning z196:
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#21 HA/CMP
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage MVS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage MVS
Date: 18 Mar, 2024
Blog: Facebook
cross-memory ... access registers were part of 370/xa ("811" for the nov78 publication date of the documents) ... the problem was that the pointer-passing API was thoroughly ingrained in os/360 ... real memory ... applications calling kernel or subsystem services passed a pointer ... and the services could directly access the area using the pointer.

a decade ago, I was asked to track down the decision to make virtual memory standard for all 370s. I found a staff member to the executive making the decision. MVT storage management was so bad that region sizes had to be specified four times larger than used. As a result, a typical one mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. Going to 16mbyte virtual memory allowed the number of concurrently running regions to be increased by a factor of four times with little or no paging (effectively like running MVT in a CP67 16mbyte virtual machine) ... aka VS2/SVS (kernel and subsystems still access the application area pointed to by the passed pointer). archived post with some of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

However, as systems got larger, even more concurrent regions were needed ... but base OS/360 (and SVS) used 4bit storage protect keys to isolate/protect regions. Thus was born putting each application in its own (private) 16mbyte virtual address space; however, to allow the kernel to access the pointer-passed area, an 8mbyte image of the kernel was mapped into each application 16mbyte virtual address space (VS2/MVS). But subsystems were also moved to their own virtual address spaces, and to allow them to access the area pointed to by the application API pointer, a one mbyte common segment area (CSA) was created, mapped into every address space, for applications to allocate the areas pointed to by API calls (kernel 8mbyte, plus CSA 1mbyte, leaving 7mbytes for the application).

The requirement for CSA space was somewhat proportional to the number of running applications and the number of subsystems ... and the CSA explodes, morphing from "common segment area" into "common system area" ... by 3033 time it was pushing 5-6mbytes (with the kernel 8mbyte image, leaving only 2-3mbytes for the application) and threatening to become 8mbytes ... leaving zero for the application. So there was a huge race to get to 31bit architecture and MVS/XA. In the interim, somebody retrofitted part of access registers to the 3033 as "dual-address space" mode to try and alleviate some of the pressure on CSA. Trivia: the person that did that work left IBM shortly afterwards for HP to work on HP's RISC systems, and later was one of the Itanium architects.
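
illustrative arithmetic for the address space squeeze (a minimal sketch using only the numbers quoted above, with 5.5mbytes standing in for the 3033-era 5-6mbyte CSA):

/* how much of the 16mbyte (24-bit) MVS address space is left for the
   application as CSA grows; kernel image is the 8mbytes mapped everywhere */
#include <stdio.h>

int main(void)
{
    const double total_mb  = 16.0;
    const double kernel_mb = 8.0;
    double csa_mb[] = { 1.0, 5.5, 8.0 };  /* early MVS, 3033-era, worst case */

    for (int i = 0; i < 3; i++)
        printf("CSA %.1fMB -> %.1fMB left for the application\n",
               csa_mb[i], total_mb - kernel_mb - csa_mb[i]);
    /* prints 7.0, 2.5, and 0.0 mbytes respectively */
    return 0;
}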

note when Future System imploded (internal politics during FS were killing off 370 efforts), there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 in parallel ... some more info
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

at the same time, the head of POK convinced corporate to kill the VM370 product, shutdown the development group, and transfer all the people to POK for MVS/XA (claiming it was needed to meet MVS/XA delivery schedules) ... some POK upper management were also going around bullying internal VM370 datacenters to move to MVS, because VM370 would no longer be available (Endicott eventually managed to save the VM370 product mission for the mid-range, but had to reconstitute a development group from scratch).

The Burlington, Vt chip operation had a 7mbyte fortran chip design app, with multiple systems running a special MVS with a single mbyte CSA; however, any changes or fixes were constantly wrestling with the MVS 7mbyte brick wall. We offered them VM370 systems where they could get nearly the full 16mbytes (minus CMS 128k) ... but it would have been a major loss of face for the POK organization (having convinced corporate to kill VM370).

some recent posts mentioning VS2/MVS CSA and 3033 dual-address space mode
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2015h.html#116 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2014k.html#82 Do we really need 64-bit DP or is 48-bit enough?
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012o.html#30 Regarding Time Sharing
https://www.garlic.com/~lynn/2012n.html#21 8-bit bytes and byte-addressed machines
https://www.garlic.com/~lynn/2010p.html#21 Dataspaces or 64 bit storage
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage HSDT

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage HSDT
Date: 18 Mar, 2024
Blog: Facebook
IBM San Jose had T3 (collins digital radio) microwave between locations in south San Jose, and in the early 80s HSDT had some T1 subchannels on it. Then a group wanted a similar HSDT T1 installed in Boulder, Co. between two bldgs and got an infrared T1. The infrared signal had to be precisely aimed, and the uneven heating of one bldg by the sun during the course of the day was enough to shift the signal. Took some trial and error to position the transmitter/receiver to minimize the effect.

There were some detractors concerned about the effect of Boulder snow storms ... I had fireberd bit-error testers on 56kbit subchannels and did see some bit drops during a white-out snow storm when nobody was able to get into work (I wrote a Turbo Pascal program for PC/ATs that supported up to four ascii inputs from the bit error testers, for keeping machine readable logs).

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some recent posts mentioning collins digital radio, infrared modem, and/or bit error testers
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#93 IBM 4341
https://www.garlic.com/~lynn/2022f.html#117 IBM Downfall
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022c.html#52 IBM Personal Computing
https://www.garlic.com/~lynn/2022c.html#22 Telum & z16
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021i.html#67 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021e.html#28 IBM Cottle Plant Site
https://www.garlic.com/~lynn/2021b.html#73 IMS Stories
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
https://www.garlic.com/~lynn/2021.html#62 Mainframe IPL

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Selectric

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Selectric
Date: 18 Mar, 2024
Blog: Facebook
I had taken a two credit-hr intro to fortran/computers at the univ and at the end of the semester was hired to rewrite 1401 MPIO in assembler for the 360/30. The univ had a 709/1401 and was replacing them with a 360/67 for tss/360 ... but got a 360/30 replacing the 1401 temporarily until the 360/67 arrived (getting 360 experience). The univ would shutdown the datacenter on weekends and I got it dedicated (although 48hrs w/o sleep made monday classes hard). I was given a bunch of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc. and within a few weeks had a 2000 card assembler program. Within a year of the intro class, the 360/67 arrived and I was hired fulltime responsible for OS/360 (tss/360 didn't really come to production use).

The 709 ran (tape->tape) student fortran in less than a second; it originally ran over a minute with OS/360. I add HASP to MFT9.5 and cut the time in half. I then started redoing STAGE2 SYSGEN to place datasets and pds members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. I never got it better than the 709 until I install Univ. of Waterloo WATFOR.

CSC then came out to install (virtual machine) CP/67 (3rd installation after Cambridge itself and MIT Lincoln Labs) and I mostly got to play with it during my dedicated weekend window ... the 1st six months I mostly concentrated on rewriting CP67 pathlengths for running OS/360 in virtual machines. The OS/360 benchmark on the real machine was 322secs and initially under CP/67 ran 856secs (CP67 CPU 534secs). After six months I got CP67 CPU down to 113secs and then started on rewriting disk&drum I/O, paging algorithms, and dynamic adaptive resource management (scheduling) algorithms.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management (& scheduling) posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
various posts mentioning paging performance
https://www.garlic.com/~lynn/subtopic.html#clock

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, with 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 for payroll up at Boeing Field (although they enlarge the machine room for a 360/67 for me to play with when I'm not doing other stuff).

I have a 2741 selectric in my office at both the univ. and Boeing ... and when I graduate, I leave Boeing and join IBM CSC. There I got 2741s both in my office and at home. One of my hobbies after I joined CSC was enhanced production operating systems for internal datacenters ... and the branch office HONE systems were a long time customer.

US marketing established the HONE cp/67 datacenters for branch office SEs to login and practice with guest operating systems in virtual machines. CSC had also ported APL\360 to CMS for CMS\APL. APL\360 workspaces were 16kbytes (sometimes 32k) with the whole workspace swapped. Its storage management allocated new memory for every assignment ... quickly exhausting the workspace and requiring garbage collection. Mapping that to CMS\APL with demand-paged large virtual memory resulted in page thrashing, and the APL storage management had to be redone. CSC also implemented an API for system services (like file i/o); the combination enabled a lot of real-world applications. HONE started using it for deploying sales&marketing support applications ... which quickly came to dominate all HONE use ... becoming the largest use of APL in the world (especially after HONE clones started sprouting up all over the world). The IBM hdqtrs Armonk business planners also loaded the highest security IBM business data on the CSC system for APL-based business applications (we had to demonstrate really strong security, in part because there were Boston area institution professors, staff, and students also using the CSC system).
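
a minimal sketch of the storage management issue (purely illustrative, not the APL\360 or CMS\APL code; value size and assignment count are arbitrary assumptions): allocating fresh storage on every assignment touches an ever-growing set of pages, fine when a 16k workspace is swapped as a whole but page-thrashing under a large demand-paged virtual memory, while updating in place keeps the working set to a page or so:

#include <stdio.h>

#define PAGE 4096UL

int main(void)
{
    unsigned long assignments = 100000, value_bytes = 64;

    /* strategy 1: allocate a new copy of the value on every assignment */
    unsigned long bytes_consumed = assignments * value_bytes;
    unsigned long pages_touched  = (bytes_consumed + PAGE - 1) / PAGE;

    /* strategy 2: update the value in place */
    unsigned long pages_in_place = (value_bytes + PAGE - 1) / PAGE;

    printf("allocate-new touches ~%lu pages; in-place touches %lu page(s)\n",
           pages_touched, pages_in_place);
    return 0;
}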

hone posts
https://www.garlic.com/~lynn/subtopic.html#hone

trivia: the initial CP/67 had 1052&2741 support with dynamic terminal identification ... resetting the terminal port scanner type for each line. The univ. had some ASCII/TTY terminals and I added ASCII/TTY support integrated with the dynamic terminal type support. I then wanted to have a "hunt group" with a single dial-in phone number for all terminal types. It didn't quite work: while the terminal type scanner could be switched, a hardware short-cut had hardwired the port line speed. The univ. then starts a clone controller project: build a channel attach board for an Interdata/3, programmed to emulate an IBM controller with the addition of being able to do automatic line speed. It then gets upgraded to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin/Elmer) market it as an IBM clone controller, and four of us get written up for (some part of) the IBM clone controller business.

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

did keep an apl golfball:

2741 apl typeball

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage MVS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage MVS
Date: 19 Mar, 2024
Blog: Facebook
charlie invented compare&swap (the name chosen because Charlie's initials are "CAS") at the cambridge science center when he was working on fine-grain multiprocessing locking for CP/67. Then in meetings with the 370 architecture owners, we tried to get it added to 370. However, they told us that the POK favorite son operating system people (MVT->SVS->MVS) said that test&set was sufficient (at the time they were just using a test&set spinlock for entering the kernel) ... and that to get it justified, we would have to come up with uses that weren't multiprocessor specific. Thus were born the example uses for multithreaded applications (like large DBMS) doing atomic serialized updates of various values/fields (that appeared in the POO appendix).
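
a minimal sketch of the kind of non-multiprocessor-specific use cited to justify compare&swap (C11 atomics standing in for the 370 CS instruction loop in the POO appendix, not the actual appendix example): a multithreaded application atomically updating a shared field without taking a lock:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long counter = 0;

void add_to_counter(unsigned long delta)
{
    unsigned long old = atomic_load(&counter);
    unsigned long desired;
    do {
        desired = old + delta;   /* compute the update from the fetched value */
        /* on failure, 'old' is refreshed with the current value and we retry */
    } while (!atomic_compare_exchange_weak(&counter, &old, desired));
}

int main(void)
{
    add_to_counter(5);
    add_to_counter(7);
    printf("counter = %lu\n", (unsigned long)atomic_load(&counter));
    return 0;
}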

... note that one of the responses about benchmarking (two processor) 3033MP was that MVS spin-lock overhead was (still) significant ... likely contributing to opinion that it could be decades before POK favorite son operating system (MVS) would have effective 16-processor support.

... aka after Future System imploded, I got con'ed into helping with a project to do a 16 CPU 370 MP and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite son operating system (MVS) had effective 16-way support (POK doesn't ship a 16 processor machine until after the turn of the century). Then the head of POK invites some of us to never visit POK again (and tells the 3033 processor engineers not to get distracted ... once the 3033 was out the door, they then start on trout/3090).

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

some recent posts mentioning effort to do 16 processor 370
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024b.html#11 3033
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#91 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#31 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023d.html#12 Ingenious librarians
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#58 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#104 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2023b.html#82 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#92 IBM 4341
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#40 IBM AIX
https://www.garlic.com/~lynn/2022h.html#49 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022h.html#35 360/85
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022h.html#2 360/91
https://www.garlic.com/~lynn/2022f.html#110 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022d.html#11 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021b.html#23 IBM Recruiting

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Series/1

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Series/1
Date: 19 Mar, 2024
Blog: Facebook
trivia: the Series/1 folklore is that RPS was done when a few people from Kingston went to Boca to reinvent 360MFT on the Series/1 ... while EDX was done by a physics summer intern in the basement of San Jose Research.

I got HSDT (T1 and faster computer links, both terrestrial and satellite) in the early 80s ... however some of the early funding came with strings that I had to show some IBM content (the 60s 360 2701s supporting T1 were long gone, replaced by SNA, VTAM, and 37xx boxes capped at 56kbit). I eventually find the FSD Series/1 Zirpel T1 card (done for the gov. market) and try to order a few Series/1s. I was told that there was a year delivery schedule ... apparently with the IBM purchase of ROLM (a primarily Data General shop), the only IBM box they found was the Series/1, and their order drove Series/1 deliveries to a year lead time. It turned out that I knew the ROLM datacenter director, who had left IBM some time previously and was now an IBMer again. In return for helping ROLM with some issues, they would let me have some of their early Series/1 deliveries.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Later in the mid-80s I got contacted by some people in an IBM branch office about turning a "baby bell" PU4/PU5 emulation done on Series/1 into a Type1 IBM product. Several people that got involved were familiar with how the communication group operated and did their best to deploy countermeasures to whatever the communication group would likely try in order to torpedo the effort. Fall 1986, I got scheduled to give a presentation at the SNA ARB meeting in Raleigh ... part of the presentation in archived post
https://www.garlic.com/~lynn/99.html#67
part of a presentation given by a "baby bell" employee at the spring 1986 COMMON user group meeting (session 43U, Series/1 As A Front End Processor)
https://www.garlic.com/~lynn/99.html#70

The IMS group also really wanted it for IMS hot-standby; while IMS could "fall-over" in minutes, VTAM session establishment was really heavyweight and tended to grow non-linearly; a large configuration, even on a 3090, could take well over an hour to be fully back up and operational. The Series/1 implementation could keep shadow VTAM sessions on the hot-standby machine in sync ... so it was nearly instantaneous.

The communication group then was constantly claiming that my comparison was invalid. The Series/1 data was from a "baby bell" production operation. The VTAM/3725 configuration was generated by inputting the "baby bell" production operation into the communication group's HONE configurator ... so they couldn't really say that their HONE configurator was invalid ... so it was a lot of vague FUD references. Note in the same era, the communication group had prepared a report for the corporate executive committee that customers wouldn't want T1 links before sometime well into the 90s. They presented "fat pipe" (parallel 56kbit links treated as a single logical link) installations dropping to zero by seven links. What they didn't know (or didn't want to tell the executive committee) was that typical telco T1 tariffs were about the same as five or six 56kbit links. In a simple survey, we found hundreds of customers that had just gone to full T1 with non-IBM hardware and software.

In any case, what the communication group did to torpedo the Series/1 PU4/PU5 Type1 effort can only be described as truth is stranger than fiction.

hone posts
https://www.garlic.com/~lynn/subtopic.html#hone

recent posts mentioning series/1 pu4/pu5
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2022c.html#79 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2021b.html#74 IMS Stories

later in the 80s, the communication group did come out with the 3737 for T1, but it only sustained about 2/3rds of a US T1, and 1/2 of an EU T1 ... old email mentioning the 3737 in old archived posts
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2011g.html#email881005

recent posts mentioning 3737
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023d.html#120 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#77 IBM HSDT Technology
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#103 IBM ROLM
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose

some other recent posts mentioning the communication group "fat pipe" study for corporate executive committee
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2023f.html#82 Vintage Mainframe OSI
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023.html#43 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#5 What is IBM SNA?
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021f.html#54 Switch over to Internetworking Protocol
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)

--
virtualization experience starting Jan1968, online at home since Mar1970

Computers and Boyd

From: Lynn Wheeler <lynn@garlic.com>
Subject: Computers and Boyd
Date: 20  Mar, 2024
Blog: Facebook
I took a 2 credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO (unit record<->tape front end for 709 tape->tape) in assembler for the 360/30. The univ was replacing the 709/1401 with a 360/67 for tss/360 ... temporarily the 1401 was replaced with a 360/30 (pending availability of the 360/67; the 360/30 had microcode 1401 emulation). The univ shutdown the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard. They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc. and within a few weeks had a 2000 card assembler program. Then within a year of taking the intro class, the 360/67 comes in and I'm hired fulltime responsible for OS/360 (tss/360 never really came to production, so it ran as a 360/65).

Student Fortran jobs ran under a second on the 709, but over a minute on the 360/65; I install HASP (on MFT9.5) which cuts the time in half. I then start redoing STAGE2 SYSGEN, placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs; it never got better than the 709 until I install Univ. of Waterloo WATFOR. CSC comes out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs, precursor to VM370/CMS) and I mostly play with it during my weekend dedicated time. The first six months were spent optimizing OS/360 running in a virtual machine, redoing a lot of pathlengths. The benchmark was 322secs on the bare machine, initially 856secs in a virtual machine (CP67 CPU 534secs); I got CP67 CPU down to 113secs (from 534secs). Then I started redoing I/O (disk ordered seek, and drum paging from about 70/sec to capable of 270/sec), page replacement, and dynamic adaptive resource management and scheduling.

CP67 as installed had 1052 and 2741 terminal support with automagic terminal type identification. The univ had some ASCII/TTY, so I integrate TTY support with the automatic identification. I then want to have a single "hunt group" (single dial-in number for all terminals) ... but while IBM allowed changing the port terminal type scanner, it had hardwired the port line speed. The univ starts a project to do a clone controller: build a channel board for an Interdata/3 programmed to emulate an IBM controller, with the addition that it could do automatic line speed. It was then enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and then Perkin/Elmer) sold it as an IBM clone controller, and four of us were written up as responsible for (some part of) the clone controller business.

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the Boeing CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). When I graduate, I join IBM Science Center (instead of staying with the Boeing CFO).

Not long after joining CSC, IBM got a new CSO (formerly in gov. service, head of the presidential detail), and I'm asked to travel some with him talking about computer security (and a little bit of physical security rubs off). There was a little rivalry between the (IBM science center) 4th & (multics) 5th flrs ... one of their customers was USAFDC in the pentagon ...
https://www.multicians.org/sites.html
https://www.multicians.org/mga.html#AFDSC
https://www.multicians.org/site-afdsc.html

In spring 1979, some USAFDC people wanted to come by to talk about getting 20 4341 VM370 systems. When they finally came by six months later, the planned order had grown to 210 4341 VM370 systems. Earlier, in jan1979, I had been con'ed into doing a 6600 benchmark on an internal engineering 4341 (processor clock not running quite full-speed, before shipping to customers) for a national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). The national lab benchmark had run 35.77secs on the 6600 and 36.21secs on the engineering 4341.

IBM science posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Early 1980s, I am introduced to John Boyd and would sponsor his briefings at IBM. 1989/1990, the commandant of the marine corps leverages Boyd for a make-over of the corps ... at a time when IBM was desperately in need of a make-over. After Boyd passes, the former commandant (who passed early this morning) continued to sponsor Boyd conferences for us at Marine Corps Univ ... and somehow I was added to the online weekly Quantico study group ... but they expect me to keep up on the reading.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

One of the Boyd stories is that he was very vocal that the electronics across the trail wouldn't work; possibly as punishment, he is put in command of "spook base" (about the same time I'm at Boeing), which was claimed to have the largest air conditioned building in that part of the world. His biography claims that "spook base" was a (60s) $2.5B windfall for IBM (ten times Renton) ... the following ref, besides computers, has a ref about "drones" at spook base.
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html

some recent posts mentioning computer intro, student fortran, boeing CFO renton, boeing computer services
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s

--
virtualization experience starting Jan1968, online at home since Mar1970

Private Equity Buying Up Accounting Practices

From: Lynn Wheeler <lynn@garlic.com>
Subject: Private Equity Buying Up Accounting Practices
Date: 20  Mar, 2024
Blog: Facebook
Private Equity Buying Up Accounting Practices. What Could Go Wrong? The Health Industry Gives Some Ideas
https://www.nakedcapitalism.com/2024/03/private-equity-buying-up-accounting-practices-what-could-go-wrong-the-health-industry-gives-some-ideas.html
This trend reflects the fact that private equity has money to burn and believes it can successfully "roll up" small accounting firms as well as wring more revenue and profits from the big ones. As we'll explain, the lack of much in the way of a pre-existing consolidation/corporatization trend in accounting gives reason to believe. And the bright ideas that private equity has for improving performance look, not unsurprisingly, to have the potential to be to client disadvantage and not so hot for one-time independent owners.
... snip ...

private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some recent specific private-equity and health care posts
https://www.garlic.com/~lynn/2024.html#99 A Look at Private Equity's Medicare Advantage Grifting
https://www.garlic.com/~lynn/2024.html#45 Hospitals owned by private equity are harming patients
https://www.garlic.com/~lynn/2024.html#0 Recent Private Equity News
https://www.garlic.com/~lynn/2023f.html#1 How U.S. Hospitals Undercut Public Health
https://www.garlic.com/~lynn/2023d.html#70 Who Employs Your Doctor? Increasingly, a Private Equity Firm
https://www.garlic.com/~lynn/2023d.html#68 Tax Avoidance
https://www.garlic.com/~lynn/2023b.html#93 Corporate Greed Is a Root Cause of Rail Disasters Around the World
https://www.garlic.com/~lynn/2023b.html#22 The Big Con: How the Consulting Industry Weakens Our Businesses, Infantilizes Our Governments, and Warps Our Economies
https://www.garlic.com/~lynn/2023.html#23 Health Care in Crisis: Warning! US Capitalism is Lethal
https://www.garlic.com/~lynn/2023.html#8 Ponzi Hospitals and Counterfeit Capitalism
https://www.garlic.com/~lynn/2022h.html#119 Patients for Profit: How Private Equity Hijacked Health Care
https://www.garlic.com/~lynn/2022h.html#76 Parasitic Private Equity is Consuming U.S. Health Care from the Inside Out
https://www.garlic.com/~lynn/2022h.html#53 US Is Focused on Regulating Private Equity Like Never Before
https://www.garlic.com/~lynn/2022g.html#50 US Debt Vultures Prey on Countries in Economic Distress
https://www.garlic.com/~lynn/2022g.html#25 Another Private Equity-Style Hospital Raid Kills a Busy Urban Hospital
https://www.garlic.com/~lynn/2022f.html#100 When Private Equity Takes Over a Nursing Home
https://www.garlic.com/~lynn/2022e.html#30 The "Animal Spirits of Capitalism" Are Devouring Us
https://www.garlic.com/~lynn/2022d.html#42 Your Money and Your Life: Private Equity Blasts Ethical Boundaries of American Medicine
https://www.garlic.com/~lynn/2022d.html#41 Your Money and Your Life: Private Equity Blasts Ethical Boundaries of American Medicine
https://www.garlic.com/~lynn/2022d.html#26 How Private Equity Looted America
https://www.garlic.com/~lynn/2022c.html#103 The Private Equity Giant KKR Bought Hundreds Of Homes For People With Disabilities
https://www.garlic.com/~lynn/2021k.html#82 Is Private Equity Overrated?
https://www.garlic.com/~lynn/2021h.html#20 Hospitals Face A Shortage Of Nurses As COVID Cases Soar
https://www.garlic.com/~lynn/2021g.html#64 Private Equity Now Buying Up Primary Care Practices
https://www.garlic.com/~lynn/2021g.html#40 Why do people hate universal health care? It turns out -- they don't
https://www.garlic.com/~lynn/2021f.html#7 The Rise of Private Equity
https://www.garlic.com/~lynn/2021e.html#48 'Our Lives Don't Matter.' India's Female Community Health Workers Say the Government Is Failing to Protect Them From COVID-19
https://www.garlic.com/~lynn/2021c.html#44 More Evidence That Private Equity Kills: Estimated >20,000 Increase in Nursing Home Deaths
https://www.garlic.com/~lynn/2021c.html#7 More Evidence That Private Equity Kills: Estimated >20,000 Increase in Nursing Home Deaths

--
virtualization experience starting Jan1968, online at home since Mar1970

MVT/SVS/MVS/MVS.XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVT/SVS/MVS/MVS.XA
Date: 21 Mar, 2024
Blog: Facebook
Remember that TSO started out with MVT on 360s, when there wasn't virtual memory ... just single real memory ... and it was really slow. A decade ago, I was asked to track down the decision to add virtual memory to all 370s and found the staff member to the executive that made the decision; basically MVT storage management was so bad that regions had to be specified four times larger than actually used ... so a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. Mapping MVT into a single 16mbyte virtual address space allowed increasing the number of concurrently running regions by a factor of four (something like running MVT in a CP67 16mbyte virtual machine), begetting VS2/SVS and later VS2/MVS with multiple virtual address spaces. CERN reported to SHARE on a comparison of MVS/TSO and VM370/CMS (VM370 being the morph of CP67 from 360/67 to 370, although some number of features were simplified or dropped, like multiprocessor support). The CERN MVS/TSO - VM370/CMS SHARE report was freely available, except inside IBM, where copies were stamped "IBM Confidential - Restricted" (2nd highest security classification, available on a need-to-know basis only).
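
A rough way to see the arithmetic (a sketch only; the four-regions and four-times figures are from the account above, while the per-region sizes are just derived from them and ignore the nucleus and other overhead):

# hypothetical arithmetic behind the "factor of four" (derived figures, not measured data)
real_storage    = 1024                            # KB, typical 370/165
regions_real    = 4                               # concurrent regions MVT managed in real storage
specified       = real_storage / regions_real     # ~256 KB specified per region
actually_used   = specified / 4                   # regions specified 4x larger than used, ~64 KB

# in a single 16 MB virtual address space, four times as many regions can be specified
regions_virtual = regions_real * 4                # 16 regions
print("real storage actually touched:", regions_virtual * actually_used, "KB")
# ~1024 KB, i.e. roughly the same 1 MB of real storage now backs ~16 concurrently specified regions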

The IBM Science Center had originally done virtual machines on a 360/40 with hardware modifications for virtual memory, as CP40/CMS. Then when the 360/67 became available, it morphed into CP67/CMS. Lots of customers got 360/67s for TSS/360, which never really came to production, so they ran them as 360/65s with OS/360. Two universities implemented their own virtual memory operating systems for the 360/67: Michigan (MTS, later ported to 370 as MTS/370) and Stanford (Orvyl/Wylbur; Wylbur was later ported to MVS). Later, some number of 360/67 customers brought up CP/67. There were two CP/67 online commercial service bureau spin-offs of the science center, IDC and NCSS, that specialized in financial services. IDC offered "First Financial Language"; one of the people responsible for FFL joined with another person a decade later to do Visicalc, originally for the Apple2.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Commercial online CP/67 spinoffs of science center
https://www.garlic.com/~lynn/submain.html#online

In the 1st half of the 70s, IBM went through the "Future System" period; FS was completely different from 370 and was going to completely replace it (internal politics during the period was shutting down 370 efforts, and the lack of new 370s during the period is credited with giving the clone 370 makers their market foothold). When FS implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel. Some FS references:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

The head of POK (high-end 370s) also managed to convince corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (claiming it was needed to ship MVS/XA on time, possibly also in retaliation for the CERN SHARE report). Endicott (mid-range) eventually managed to save the VM370 product mission, but had to recreate a development group from scratch.

trivia: no CKD DASD has been made for decades, all being simulated on industry-standard fixed-block disks ... the transition had even started with the 3380, where the records/track calculation rounds the record size up to a fixed "cell" size.

tome about mad rush to get MVS/XA out
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
references customers not converting to MVS/XA as fast as planned
https://www.garlic.com/~lynn/2024b.html#12 3033

misc. other posts mentioning Amdahl HYPERVISOR/MACROCODE and/or MVS CSA
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

--
virtualization experience starting Jan1968, online at home since Mar1970

New data shows IRS's 10-year struggle to investigate tax crimes

From: Lynn Wheeler <lynn@garlic.com>
Subject: New data shows IRS's 10-year struggle to investigate tax crimes
Date: 21 Mar, 2024
Blog: Facebook
New data shows IRS's 10-year struggle to investigate tax crimes. The number of cases the wider U.S. tax agency refers to its criminal unit has plummeted.
https://www.icij.org/inside-icij/2024/03/new-data-shows-irss-10-year-struggle-to-investigate-tax-crimes/
The IRS's civil divisions, which comprise the vast majority of the agency's workforce, are supposed to flag egregious tax cases for potential prosecution from the volumes of returns they process and audit. These referrals are often associated with large dollar figures and wealthier taxpayers, who in recent years have seen weak enforcement from the depleted tax agency.
... snip ...

... early last decade, the new Republican speaker of the house publicly said he was cutting the budget for the agency responsible for recovering $400B in taxes on funds illegally stashed in overseas tax havens by 52,000 wealthy Americans. Later there was news of a few billion in fines for the banks responsible for facilitating the illegal tax evasion ... but nothing on recovering the owed taxes or on associated fines or jail sentences.

Early the previous decade (shortly after the turn of the century), congress had dropped the fiscal responsibility act (spending couldn't exceed tax revenue, on its way to eliminating all federal debt). A 2010 CBO report found that in 2003-2009, spending increased $6T and taxes were cut $6T, a $12T gap compared to a fiscally responsible budget (the 1st time taxes were cut rather than raised to pay for war, in this case two wars) ... sort of a confluence of special interests wanting a huge tax cut, the military-industrial complex wanting a huge spending increase, and Too-Big-To-Fail wanting a huge debt increase (since then the debt has more than doubled).

fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
Too Big To Fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

--
virtualization experience starting Jan1968, online at home since Mar1970

The Communication Group Datacenter Stranglehold

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Communication Group Datacenter Stranglehold
Date: 21 Mar, 2024
Blog: Facebook
Late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had a stranglehold on datacenters with their corporate responsibility for everything that crossed datacenter walls and were fiercely fighting off client/server, distributed computing, etc., trying to preserve their dumb terminal paradigm. The disk division was starting to see data fleeing the datacenter to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions to reverse the situation, but they were constantly being vetoed by the communication group. As a partial work-around, the disk division executive was investing in distributed computing startups that would use IBM disks (and would periodically ask us to visit his investments). The communication group datacenter stranglehold wasn't just disks, and a couple of short years later IBM had one of the largest losses in the history of US corporations and was being re-orged into the 13 "baby blues" (a take-off on the AT&T "baby bell" breakup) in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone)

communication group stranglehold on datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal

longer-winded account: two decades prior to IBM having one of the largest losses in the history of US corporations, Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Hardware Stories

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Hardware Stories
Date: 21 Mar, 2024
Blog: Facebook
Programming horizontal microcode (used in high-end 370s) is much more difficult than vertical microcode or sequential 370 assembler. After FS implodes, I was (also) con'ed into helping Endicott with the ECPS microcode assist (vertical microcode) for the 138/148 ... later also used for the 4331/4341. In the early 80s, I got approval for presenting how ECPS was done at user group meetings. After some meetings, Amdahl grilled me for more details. They said they had developed MACROCODE (370-like instructions that ran in microcode mode) to quickly respond to the series of trivial 3033 (horizontal) microcode changes required by the latest versions of MVS to run. They were then working on implementing HYPERVISOR ("multiple domain facility"), allowing MVS/XA and MVS to be run concurrently on Amdahl machines (MACROCODE being significantly faster and simpler to program than native horizontal microcode). IBM wasn't able to respond with PR/SM and LPAR until almost a decade later on the 3090.

After FS implodes (FS was completely different from 370 and was going to replace all 370s), some references:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
360/370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

I get con'ed into helping with a 16-processor 370 and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great, until somebody told the head of POK that it could be decades before the POK favorite-son operating system (MVS) could have effective 16-way support (POK doesn't ship a 16-way until after the turn of the century). The head of POK then invites some of us to never visit POK again (and tells the 3033 processor engineers not to get distracted again; trivia: after 3033 was out the door, they start on trout/3090).

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

I transfer to SJR and wander around a lot of IBM and non-IBM places in silicon valley, including disk engineering (bldg14) and disk product test (bldg15) across the street. They were doing 7x24, prescheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had a 15min mean-time-between-failure (in that environment), requiring manual re-ipl. I offer to redo the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing and greatly improving productivity (downside: they would blame me for problems and I had to spend increasing amounts of time shooting their hardware problems).

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

1980, STL (since renamed SVL) was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg, with dataprocessing back in STL. They had tried "remote 3270s" but found the human factors totally unacceptable. I get con'ed into doing channel-extender support so they can place channel-attached 3270 controllers at the offsite bldg, with no difference in human factors between offsite and in STL. STL had spread their 3270 controllers across all the channels with disks, and it turned out the 3270 controller channel busy was interfering with DASD I/O. The channel-extender significantly cut the 3270 channel busy, increasing system throughput by 10-15% (and they considered using channel-extenders with 3270s for all systems). The hardware vendor then wanted IBM to allow the release of my support; however, there was a group in POK playing with some serial stuff and they got it vetoed because they were afraid it might impact being able to release their stuff.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

Roll forward to 1988, the IBM branch office asks if I could help LLNL (national lab) standardize some serial stuff they had been playing with ... which quickly becomes the fibre-channel standard (FCS, initially 1gbit full-duplex, 200mbyte/sec aggregate, including some stuff I had done in 1980). Then in 1990, POK gets their serial stuff released with ES/9000 as ESCON (when it is already obsolete). Then some POK engineers become involved with FCS and define a heavy-weight protocol that drastically reduces the throughput, eventually released as FICON. The most recent public numbers I can find are the "Peak I/O" benchmark for z196, which got 2M IOPS using 104 FICON (running over 104 FCS). About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Note also IBM documents recommend holding SAPs (system assist processors that do the actual I/O) to no more than 70% CPU (or about 1.5M IOPS).
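
Some rough arithmetic on those published numbers (a sketch only, using just the figures quoted above):

# back-of-the-envelope check of the quoted "Peak I/O" figures (no other data assumed)
z196_peak_iops = 2_000_000      # z196 "Peak I/O" benchmark result
z196_ficon     = 104            # FICON channels used in that benchmark
fcs_iops       = 1_000_000      # claimed IOPS for a single E5-2600 era FCS

per_ficon = z196_peak_iops / z196_ficon
print(f"~{per_ficon:,.0f} IOPS per FICON")                  # ~19,200
print(f"one FCS ~= {fcs_iops / per_ficon:.0f} FICON")       # ~52, so two FCS exceed 104 FICON
print(f"70% SAP cap ~= {0.70 * z196_peak_iops:,.0f} IOPS")  # ~1.4M, i.e. roughly the 1.5M ceiling cited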

FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

There was a recent thread about the compare&swap instruction where somebody mentioned having benchmarked MVS on a two-processor 3033 ... and finding excessive spinlock overhead ... which would have increased non-linearly as the number of processors increased (at the time, IBM docs claimed MVS two-processor throughput was about 1.2-1.5 times that of a single processor)
https://www.garlic.com/~lynn/2024b.html#61 Vintage MVS

MIPS benchmark (not actual instruction count, but number of program iterations compared to a 370/158); for the most recent mainframes, IBM has just given the increase in throughput compared to the previous generation, so those figures are run forward from the most recent known benchmark:


z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022

The z196-era E5-2600 server blade was 500BIPS (ten times a max-configured z196's 50BIPS); current server blades are more in the range of 20-40 times a max-configured z16.
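
The per-processor figures in the table are (roughly) the aggregate BIPS divided by the processor count; a quick sketch that reproduces them from the numbers above:

# derive MIPS/processor from the aggregate figures listed in the table above
systems = [
    ("z900", 16, 2.5), ("z990", 32, 9), ("z9", 54, 18), ("z10", 64, 30),
    ("z196", 80, 50), ("EC12", 101, 75), ("z13", 140, 100),
    ("z14", 170, 150), ("z15", 190, 190), ("z16", 200, 222),
]
for name, procs, bips in systems:
    # close to the table's per-processor column (a couple of entries, z13 and z14, round differently)
    print(f"{name}: {bips * 1000 / procs:.0f} MIPS/processor")

# and the z196-era comparison: 500 BIPS E5-2600 blade vs 50 BIPS max-configured z196
print("E5-2600 blade vs max z196:", 500 / 50, "times")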

... note: FS was planned to completely replace 370, and internal politics during the FS period was killing off 370 efforts (the lack of new 370s is credited with giving the clone 370 makers their market foothold); when FS implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 in parallel

Initially the 3081D processor was supposedly faster than the 3033 ... but various benchmarks had it slower ... and the 3081K was fairly quickly released with twice the processor cache size, claiming to be 40% faster than the 3081D. However, the Amdahl single-processor MIPS rate was about the same as the aggregate of the two-processor 3081K ... and with much higher single-processor MVS throughput on the Amdahl (because of the significant MVS multiprocessor overhead on the 3081).

--
virtualization experience starting Jan1968, online at home since Mar1970

3270s For Management

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3270s For Management
Date: 21 Mar, 2024
Blog: Facebook
70s, Fridays after work, one of the hot topics was how to get the largely computer-illiterate workforce (especially management) using computers. Also at the time, 3270 terminals were part of the annual budget process, requiring justification and VP-level sign-off. Then at one point there was a rapidly spreading rumor that members of the corporate executive committee were using email to communicate ... and all of a sudden nearly every manager/director/executive in the company was diverting 3270 terminal deliveries to their desks (as part of demonstrating how computer savvy they were). 3270s would be powered on in the morning and logged in, with an unused PROFS menu spending the day being burned into the screen (while some admin assistant actually handled any email).

trivia: the PROFS group had been collecting internal apps to wrap menus around ... and acquired a very early version of the VMSG source for the email client. Later the VMSG author tried to offer them a much enhanced version ... and they tried to get him fired (having possibly taken credit for the PROFS apps). The whole thing quieted down when the VMSG author demonstrated his initials in every PROFS email (in a non-displayed field). After that, the VMSG author only shared his source with me and one other person.

Trivia: When I first joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters. Since MULTICS was on the 5th flr and had done a single-level-store filesystem, and the science center was on the 4th flr, I decided I also needed to do a page-mapped filesystem for CP67/CMS (although I would comment I learned what not to do from observing TSS/360) ... including being able to provide (R/O) shared segment operation for executables ("MODULES") in the CMS filesystem ... so I would collect application code to rework for shared segment operation. Archived posts with some email about "VMSG":
https://www.garlic.com/~lynn/2006n.html#email790312
https://www.garlic.com/~lynn/2006n.html#email790403
https://www.garlic.com/~lynn/2012e.html#email791206

With the decision to add virtual memory to all 370s, there had also been a decision to do VM370, and in the morph of CP67->VM370 lots of features were simplified or dropped (like multiprocessor support). Then in 1974, I started migrating a bunch of stuff from CP67 to a VM370 release 2 base for internal CSC/VM (including the CMS page-mapped filesystem, shared executables, and the kernel re-org necessary for multiprocessor support, but not multiprocessor support itself). Then in 1975, I initially added multiprocessor support to what was by then a VM370 release 3 based CSC/VM ... originally for the world-wide, branch office, online, sales&marketing support HONE systems, so they could add a 2nd processor to their systems.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

some archived posts mentioning PROFS, VMSG and 3270 justification
https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#32 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2021j.html#83 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021e.html#30 Departure Email
https://www.garlic.com/~lynn/2021d.html#48 Cloud Computing
https://www.garlic.com/~lynn/2021c.html#65 IBM Computer Literacy
https://www.garlic.com/~lynn/2021b.html#37 HA/CMP Marketing
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2019d.html#96 PROFS and Internal Network
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018f.html#25 LikeWar: The Weaponization of Social Media
https://www.garlic.com/~lynn/2018c.html#15 Old word processors
https://www.garlic.com/~lynn/2017k.html#27 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017b.html#74 The ICL 2900
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2015g.html#98 PROFS & GML
https://www.garlic.com/~lynn/2015d.html#9 PROFS
https://www.garlic.com/~lynn/2015c.html#94 VM370 Logo Screen
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013d.html#66 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
https://www.garlic.com/~lynn/2012e.html#55 Just for a laugh... How to spot an old IBMer
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2010b.html#44 sysout using machine control instead of ANSI control
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2008k.html#59 Happy 20th Birthday, AS/400
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That

--
virtualization experience starting Jan1968, online at home since Mar1970

HSDT, HA/CMP, NSFNET, Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: HSDT, HA/CMP, NSFNET, Internet
Date: 23 Mar, 2024
Blog: Facebook
periodically reposted NSFNET/Internet post

The last product we did at IBM was HA/CMP. It started out as HA/6000, which Nick Donofrio approved, originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXcluster support in the same source base with Unix (I do an enhanced distributed protocol and a distributed lock manager with VAXcluster API semantics to ease the port). Early Jan1992 we have a meeting with Oracle; AWD/Hester tells the Oracle CEO that we would have a 16-processor cluster by mid92 and a 128-processor cluster by ye92. Then end of Jan1992, cluster scale-up is transferred for announce as the IBM supercomputer (technical/scientific *only*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Not long after leaving IBM, we are brought in as consultants to a small client/server company. Two of the former Oracle people (that we had worked with on HA/CMP) are there, responsible for something called "commerce server", and they want to do payment transactions on the server; the startup had also done this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between the servers and the financial industry payment networks. The company was originally called Mosaic, but NCSA complained and they renamed it "NETSCAPE" (a name they got from a local internet company). Based on all the stuff I had done for interfacing to the payment networks (configurations, software, documentation), I put together a talk, "why the internet wasn't business critical dataprocessing", that Postel sponsored at ISI/USC.

electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance

Note that at the 1996 MSDC conference at Moscone, all the banners said "Internet", but the constant refrain in all the sessions was "protect your investment" ... i.e. the automatic execution of "Visual Basic" apps embedded in data files would continue in the transition from small, safe business networks to the wild anarchy of the Internet (with little or no additional safety measures).

Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

trivia: In the early 80s, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite), and was also working with the NSF director; we were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and finally an RFP is released (in part based on what we already had running). From the 28Mar1986 Preliminary Announcement:
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Internet and Vintage APL

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Internet and Vintage APL
Date: 24 Mar, 2024
Blog: Facebook
The 23june1969 unbundling announcement included starting to charge for SE services. SE training had included being part of a large SE group at the customer site, but they couldn't figure out how not to charge for that SE training time, so US marketing established the HONE CP/67 (virtual machine) datacenters for branch office SEs to login and practice with guest operating systems in virtual machines. One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and HONE was a long-time customer.

CSC had also ported APL\360 to CMS for CMS\APL. APL\360 workspaces were 16kbytes (sometimes 32k), with the whole workspace swapped. Its storage management allocated new memory for every assignment ... quickly exhausting the workspace and requiring frequent garbage collection. Mapping that to CMS\APL's demand-paged large virtual memory resulted in page thrashing, and the APL storage management had to be redone. CSC also implemented an API for system services (like file I/O); the combination enabled a lot of real-world applications. HONE started using it for deploying sales&marketing support applications ... which quickly came to dominate all HONE use ... making HONE the largest use of APL in the world (especially after HONE clones started sprouting up all over the world).
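
A loose illustration of the storage-management issue (hypothetical Python, not APL\360 or CMS\APL internals): allocating fresh storage on every assignment keeps spreading the footprint across new pages until garbage collection, while reusing storage in place keeps the working set small, which is what matters once the workspace lives in a large demand-paged virtual memory.

# hypothetical sketch of the two assignment styles described above (names invented)
class AllocateOnAssign:
    """Every assignment gets brand-new storage; old copies linger until GC."""
    def __init__(self):
        self.heap = []                  # grows until a garbage collection
    def assign(self, value):
        cell = [value]                  # fresh allocation on each assignment
        self.heap.append(cell)          # footprint keeps spreading across new pages
        return cell

class UpdateInPlace:
    """Reuse the same storage; the working set stays small and stable."""
    def __init__(self):
        self.cell = [None]
    def assign(self, value):
        self.cell[0] = value            # the same pages are touched every time
        return self.cell

ws = AllocateOnAssign()
for i in range(100_000):                # many assignments -> ever-growing footprint
    ws.assign(i)
print("live allocations awaiting GC:", len(ws.heap))

ws2 = UpdateInPlace()
for i in range(100_000):
    ws2.assign(i)                       # footprint stays one cell regardless of count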

The IBM hdqtrs Armonk business planners also loaded the highest-security IBM business data on the CSC system for APL-based business applications (we had to demonstrate really strong security, in part because there were professors, staff, and students from Boston-area institutions also using the CSC system).

... actual recent internet related post
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2024b.html#37 Internet
https://www.garlic.com/~lynn/2024b.html#38 Internet

Selectric 2741 APL typeball

23june1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage Internet and Vintage APL

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage Internet and Vintage APL
Date: 24 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#71 Vintage Internet and Vintage APL

The science center had also done a lot of performance monitoring, simulation, analytical modeling, and workload profiling work (a precursor to capacity planning). One co-worker had done an APL analytical system model that was made available on HONE as the Performance Predictor ... customer system, configuration, and workload profiles could be entered and "what-if" questions asked about changes to system, configuration, and workload.

US HONE moved to VM370/CMS and all the US HONE datacenters were consolidated in Palo Alto, creating the largest IBM single-system-image, loosely-coupled processor complex and shared disk farm anywhere (trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the former HONE datacenter) ... with load-balancing and fall-over across the complex (a modified version of the APL Performance Predictor was used to make the load balancing decisions).

trivia: as an undergraduate in the 60s, I had rewritten lots of CP67 code: optimized pathlengths, new page replacement algorithms, dynamic adaptive scheduling and resource management, etc. Then at the science center I created an automated benchmark system that could vary configuration and parameter settings, along with a synthetic benchmark program that could specify filesystem intensity, working-set sizes, paging intensity, interactive profiles, etc. CSC had years of system and workload activity data from scores/hundreds of different internal systems. I had done the autolog command originally for automated benchmarking (although it was almost immediately picked up for production operation); it would update a script and then reboot the system between each benchmark.

The 23june1969 unbundling also started to charge for software, but they managed to make the case that kernel software should still be free. In the early 70s, during the FS period (FS was completely different and was to completely replace 360/370), internal politics was killing off 370 efforts (the lack of new 370s during the period is credited with giving the clone 370 makers their market foothold). Then when FS implodes there was a mad rush to get stuff back into the 370 product pipeline ... and with the rise of the clone 370 makers, there was a decision to transition to charging for kernel software ... starting with incremental add-ons, and some of the stuff that I had been doing for internal datacenters was selected as the guinea pig (and I got to spend time with lawyers and business people on kernel software charging policies).

Preparing for release, we ran 2000 synthetic benchmarks that took three months elapsed time. We defined an N-space of system, configuration and workload from the archives of collected activity data, extended out to extreme values, and then selected 1000 benchmark points uniformly distributed through that space. The APL Performance Predictor was modified to predict each benchmark's result and then compare the prediction with the actual result. Then for the 2nd 1000 benchmarks, the Performance Predictor was further modified to choose configuration & workload based on all previous results, searching for possible anomalous combinations.
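
A minimal sketch of that style of benchmark-point selection (the dimensions, ranges and counts below are invented for illustration; the real N-space came from CSC's archived activity data, extended to extreme values):

import random

# hypothetical benchmark N-space; each (low, high) range would come from archived
# system/workload activity data, extended out to extreme values
space = {
    "real_storage_pages":   (256, 16384),
    "working_set_pages":    (16, 4096),
    "paging_intensity":     (0.0, 1.0),
    "filesystem_intensity": (0.0, 1.0),
    "interactive_users":    (1, 400),
}

def uniform_points(space, n, seed=1):
    """Pick n benchmark points uniformly distributed through the N-space."""
    rng = random.Random(seed)
    return [{dim: rng.uniform(lo, hi) for dim, (lo, hi) in space.items()}
            for _ in range(n)]

first_thousand = uniform_points(space, 1000)
# a model (like the Performance Predictor) would forecast each run; prediction
# errors and anomalous measured results then steer the choice of the 2nd 1000 points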

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
hone (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
dynamic adaptive scheduling & resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
23june1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage IBM, RISC, Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage IBM, RISC, Internet
Date: 26 Mar, 2024
Blog: Facebook
I would claim that John went to the opposite extreme in hardware complexity from the (failed) Future System project (which was completely different from, and was going to replace, all 360s & 370s)
http://www.jfsowa.com/computer/memo125.htm

it had no protection domains ... it ran the CP/r operating system and programs compiled with PL.8 ... the claim was that it didn't need protection domains (or kernel calls; everything could be done "inline") because CP/r would only load & execute correct, "valid" PL.8 programs. The 801/RISC ROMP chip was going to be used for the follow-on to the Displaywriter. When that got canceled, they decided to pivot to the unix workstation market and hired the company that had done the (AT&T Unix) port for the IBM/PC PC/IX to do one for ROMP ... however, ROMP then needed traditional hardware protection domains with separation between supervisor/kernel and applications ... released as AIX on the PC/RT. The follow-on was the RIOS (POWER) chipset for the RS/6000 ... however, the hardware still lacked some traditional features.

This resulted in Somerset/AIM (Apple, IBM, Motorola), which added some more traditional hardware features borrowed from the Motorola 88k RISC ... for Power/PC and later versions of Power (including being able to implement multiprocessor configurations).

Note the UC was a really slow processor, used for things like the low-end 8100 systems. Early on at the science center we had a battle trying to get IBM to use the Series/1 Peachtree processor, because it was significantly better than the UCs. Later, an IBM senior executive had my wife audit the 8100/UC, and a short time later the 8100 was canceled.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Note that in the later part of the last century, the i86 chip makers went to a hardware layer that translated i86 instructions into RISC micro-ops for actual execution, largely negating the throughput difference between i86 and RISC (note: the benchmarks aren't actual instruction counts but the number of program iterations compared to a reference machine).


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS

Then Somerset/AIM (Apple, IBM, Motorola) reworks IBM Power, including a multiprocessor-capable bus

1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     z900 processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC)

2003 max. configured z990, 32 processor aggregate 9BIPS (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configured z196, 80 processor aggregate 50BIPS (625MIPS/proc)
2010 E5-2600 server blade, 16 processor aggregate 500BIPS (31BIPS/proc)

The last project we did at IBM was HA/CMP ... my wife did five hand-drawn charts that got approved; it started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAXcluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up (RIOS/Power didn't have a cache consistency bus, so couldn't do tightly-coupled shared-memory multiprocessors; the fall-back was loosely-coupled, non-shared-memory clusters) with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres; they had VAXcluster support in the same source base with unix, making the port easier). Early Jan1992, in a meeting with the Oracle CEO, AWD/Hester tells them that there would be a 16-processor cluster by mid92 and a 128-processor cluster by ye92. Then late Jan1992, cluster scale-up is transferred for announce as the IBM supercomputer (for technical/scientific only) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

Not long after (warning: some internet content), I'm brought into a small client/server startup as a consultant ... two of the former Oracle people (who were in the Oracle CEO meeting and whom we had worked with on commercial cluster scale-up) are there, responsible for something called "commerce server", and want to do payment transactions on the server; the startup had also done something they call "SSL" that they want to use; it is now sometimes called "electronic commerce". I have responsibility for everything between the webservers and the financial industry payment networks. Later I do a talk on "why the internet wasn't business critical dataprocessing" (based on the software, documentation and procedures I had to do) that Postel (IETF/Internet RFC standards editor) sponsors at ISI/USC.

electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

recent post/thread with other internet content
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2024b.html#37 Internet
https://www.garlic.com/~lynn/2024b.html#38 Internet

other "internet" content ... I did have a PC/RT (w/megapel screen) in non-IBM booth at Interop88 ... booth was at immediate right angles next to the Sun booth where Case was doing SNMP ... and con Case into installing SNMP on the RT.

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
Interop88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

some old posts mentioning CSC trying to get CPD to use S/1 Peachtree processor rather than the UC stuff
https://www.garlic.com/~lynn/2023c.html#60 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2022e.html#37 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#32 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2021f.html#89 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2019.html#52 Series/1 NCP/VTAM
https://www.garlic.com/~lynn/2017i.html#52 IBM Branch Offices: What They Were, How They Worked, 1920s-1980s
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016b.html#82 Qbasic - lies about Medicare
https://www.garlic.com/~lynn/2015e.html#86 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2015e.html#84 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2013d.html#57 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012l.html#82 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2005m.html#8 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2004p.html#27 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2003c.html#76 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003b.html#16 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003b.html#5 Card Columns
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2002q.html#53 MVS History
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002h.html#65 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#66 oddly portable machines
https://www.garlic.com/~lynn/99.html#239 IBM UC info
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#63 System/1 ?

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet DNS Trivia

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet DNS Trivia
Date: 26 Mar, 2024
Blog: Facebook
DNS inventor trivia:
https://en.wikipedia.org/wiki/Paul_Mockapetris

... a decade earlier he was an MIT co-op student at the IBM Cambridge Science Center ... he did some work with CMS multi-level source update as well as GML. GML trivia: GML had been invented at the science center in 1969, and GML tag processing support was then added to CMS SCRIPT (CTSS RUNOFF had been rewritten for CMS as SCRIPT); after another decade it morphs into the ISO standard SGML, and after another decade it morphs into HTML at CERN ...

The 1st webserver in the US was on the SLAC VM370 system (virtual machine CP/40 had been developed at the science center in the mid-60s on a 360/40 with hardware mods for virtual memory; it then morphs into CP/67 when the 360/67, standard with virtual memory, becomes available, and CP/67 morphs into VM/370 when it was decided to add virtual memory to all 370s).
https://www.slac.stanford.edu/history/earlyweb/history.shtml
https://www.slac.stanford.edu/history/earlyweb/firstpages.shtml

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

past posts mentioning MIT co-op student later inventing DNS
https://www.garlic.com/~lynn/2019c.html#90 DNS & other trivia
https://www.garlic.com/~lynn/2011p.html#49 z/OS's basis for TCP/IP
https://www.garlic.com/~lynn/2008r.html#42 Online Bill Payment Website Hijacked - Users were redirected to a page serving malware
https://www.garlic.com/~lynn/2008q.html#13 Web Security hasn't moved since 1995
https://www.garlic.com/~lynn/2007u.html#45 Folklore references to CP67 at Lincoln Labs
https://www.garlic.com/~lynn/2007r.html#48 Half a Century of Crappy Computing
https://www.garlic.com/~lynn/2007k.html#33 Even worse than UNIX
https://www.garlic.com/~lynn/2004n.html#42 Longest Thread Ever
https://www.garlic.com/~lynn/2003.html#49 InfiniBand Group Sharply, Evenly Divided
https://www.garlic.com/~lynn/aadsm15.htm#11 Resolving an identifier into a meaning
https://www.garlic.com/~lynn/aadsm13.htm#10 X.500, LDAP Considered harmful Was: OCSP/LDAP
https://www.garlic.com/~lynn/aepay12.htm#36 DNS, yet again
https://www.garlic.com/~lynn/aepay12.htm#18 DNS inventor says cure to net identity problems is right under our nose
https://www.garlic.com/~lynn/aepay11.htm#43 Mockapetris agrees w/Lynn on DNS security - (April Fool's day??)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Financial Engineering

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Financial Engineering
Date: 27 Mar, 2024
Blog: Facebook
Speaking of RJR: AMEX was in competition with KKR for the private equity (LBO) take-over of RJR, and KKR wins. Then KKR was having trouble with RJR and hires away the AMEX president to help.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

Then IBM has one of the largest losses in the history of US corporations and was being reorganized into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone) ... and uses some of the techniques used at RJR (ref gone 404, but lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

post referencing how Learson tried (and failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

some posts referencing IBM becoming financial engineering company
https://www.garlic.com/~lynn/2024.html#120 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023c.html#13 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#118 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#105 IBM 360

--
virtualization experience starting Jan1968, online at home since Mar1970

Software vendors dump open source, go for the cash grab

From: Lynn Wheeler <lynn@garlic.com>
Subject: Software vendors dump open source, go for the cash grab
Date: 28 Mar, 2024
Blog: Facebook
Software vendors dump open source, go for the cash grab. First, they build programs with open source. Then they build their business with open source. Then they abandon it and cash out.
https://www.computerworld.com/article/3714821/software-vendors-dump-open-source-go-for-the-cash-grab.html
Essentially, all software is built using open source. By Synopsys' count, 96% of all codebases contain open-source software.
...
Finally, hidden behind the financial curtain, venture capitalists don't want hugely successful companies; they want unicorns. If a business isn't worth a billion dollars before its initial public offering (IPO), it's not a winner in their books.
... snip ...

note: Jan1999 I was asked to help prevent the coming economic mess (we failed). I was told that some of the investment bankers that had walked away "clean" from the (80s) "S&L Crisis" were then running Internet IPO "mills" (invest a few million, hype for a year or so, IPO for a couple billion; the companies needed to then fail to clear the field for the next round) ... and were predicted to next get into securitized loans/mortgages.

Jan2009, a decade later, I was asked to HTML'ize the Pecora Hearings (the 30s senate hearings into the '29 crash, which had been scanned the previous fall at the Boston Public Library) with lots of internal HREFs and URLs between what happened this time and what happened then (the comment being that the new congress might have an appetite to do something). I work on it awhile and then get a call saying it won't be needed after all (with the comment that capitol hill is totally buried under enormous mountains of wall street cash).

S&L Crises posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
Pecora/'29crash hearings and/or Glass-Steagall posts
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Mexican cartel sending people across border with cash to buy these weapons

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mexican cartel sending people across border with cash to buy these weapons
Date: 28 Mar, 2024
Blog: Facebook
Mexican cartel sending people across border with cash to buy these weapons
https://warisboring.com/mexican-cartel-sending-people-across-border-with-cash-to-buy-these-weapons/
Federal authorities say a Mexican drug cartel is sending buyers with cash into North Texas to acquire as many high-powered rifles as possible for use in its ongoing wars with rivals and Mexican security forces. Here are two primary weapons they have sought.
... snip ...

... more than a decade ago, Too Big To Fail banks were found to be money laundering for terrorists and drug cartels ... and were being blamed for the cartels being able to acquire large stockpiles of military-grade equipment, as well as for the upswing in violence on both sides of the border. There was some conjecture about the gov. leaning over backwards to keep the TBTF banks in business; the gov. just repeatedly handed out "deferred prosecution" agreements (supposedly each time with a promise not to repeat, any repetition to result in jail sentences and shutdown).

money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering
Too Big To Fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
regulatory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture

How a big US bank laundered billions from Mexico's murderous drug gangs. As the violence spread, billions of dollars of cartel cash began to seep into the global financial system
https://www.theguardian.com/world/2011/apr/03/us-bank-mexico-drug-gangs
Banks Financing Mexico Drug Gangs Admitted in Wells Fargo Deal
https://www.bloomberg.com/news/articles/2010-06-29/banks-financing-mexico-s-drug-cartels-admitted-in-wells-fargo-s-u-s-deal
How A Major U.S. Bank Laundered Billions In Mexican Drug Money
https://www.businessinsider.com/how-wachovia-laundered-billions-in-mexican-drug-money-2011-4
U.S. banks fail to monitor Mexican drug-gang money
https://www.seattletimes.com/nation-world/us-banks-fail-to-monitor-mexican-drug-gang-money/
Drugs, Elites and Impunity: The Paradoxes of Money Laundering and the Too-Big-To-Fail Concept
https://smallwarsjournal.com/jrnl/art/drugs-elites-and-impunity-paradoxes-money-laundering-and-too-big-fail-concept
Gangster Bankers: Too Big to Jail. How HSBC hooked up with drug traffickers and terrorists. And got away with it
https://www.rollingstone.com/politics/politics-news/gangster-bankers-too-big-to-jail-102004/
Merkley Blasts Too Big to Jail Policy for Lawbreaking Banks
https://www.merkley.senate.gov/merkley-blasts-too-big-to-jail-policy-for-lawbreaking-banks/
Laundromat Banks Too Big To Fail
https://www.financialsense.com/contributors/richard-mills/laundromat-banks-too-big-to-fail
Bankers in the drug trade: too big to fail, too big to jail?
https://roarmag.org/essays/banker-impunity-drug-money-laundering-fraud/
Too big to jail? Big-bank execs avoid laundering charges
https://www.houmatoday.com/story/news/2012/12/18/too-big-to-jail-big-bank-execs-avoid-laundering-charges/27036201007/
HSBC Money Laundering Case: Too Big To Fail does not mean Too Big to Jail
https://sevenpillarsinstitute.org/hsbc-money-laundering-case-too-big-to-fail-does-not-mean-too-big-to-jail/
The bank that grew fat on cocaine. Bespoke wealth management services were made available to some of the world's nastiest criminals
https://thecritic.co.uk/issues/february-2023/the-bank-that-grew-fat-on-cocaine/
Grassley: Justice Department's Failure to Prosecute Criminal Behavior in HSBC Scandal is Inexcusable
https://www.grassley.senate.gov/news/news-releases/grassley-justice-departments-failure-prosecute-criminal-behavior-hsbc-scandal
HSBC Critic: Too Big To Indict May Mean Too Big To Exist
https://www.npr.org/2012/12/13/167174208/hsbc-critic-too-big-to-indict-may-mean-too-big-to-exist
Matt Taibbi on Big Banks' Lack of Accountability
https://billmoyers.com/segment/matt-taibbi-on-big-banks-lack-of-accountability/
Wall Street Is Laundering Drug Money And Getting Away With It. Wachovia Bank is accused of laundering $380 billion in Mexican drug cartel money, and is expected to emerge with a slap on the wrist thanks to a government policy which protects megabanks from criminal charges.
https://www.huffpost.com/entry/megabanks-are-laundering_b_645885

--
virtualization experience starting Jan1968, online at home since Mar1970

ARPANET Directory 1982

From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET Directory 1982
Date: 28 Mar, 2024
Blog: Facebook
ARPANET Directory 1982
https://www.google.com/books/edition/_/M6opAQAAIAAJ?hl=en&gbpv=0

I had been estimating that arpanet had approx. 100 IMPs and 255 hosts around the time of the 1jan1983 cutover to tcp/ip ... the '82 directory has 236 "host" addresses (but 25 are labeled "TIP" ... which may or may not mean terminal interface processor ... leaving possibly 211 "hosts"?). This was at a time when the internal world-wide corporate network was rapidly approaching 1000 hosts (it was larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) ... recent posts/thread:
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2024b.html#37 Internet
https://www.garlic.com/~lynn/2024b.html#38 Internet

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

other posts mentioning estimating ARPANET 100 IMPs and 255 hosts at the 1jan1983 cutover to internet/tcpip
https://www.garlic.com/~lynn/2023f.html#5 Internet
https://www.garlic.com/~lynn/2023e.html#63 Early Internet
https://www.garlic.com/~lynn/2023d.html#57 How the Net Was Won
https://www.garlic.com/~lynn/2022g.html#17 Early Internet
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#5 What is IBM SNA?
https://www.garlic.com/~lynn/2022.html#125 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2021h.html#66 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2018d.html#72 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017f.html#105 The IBM 7094 and CTSS
https://www.garlic.com/~lynn/2016g.html#6 INTERNET
https://www.garlic.com/~lynn/2015h.html#73 Miniskirts and mainframes
https://www.garlic.com/~lynn/2015g.html#96 TCP joke
https://www.garlic.com/~lynn/2015e.html#57 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2015d.html#43 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015b.html#45 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2014j.html#77 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2013g.html#22 What Makes code storage management so cool?

--
virtualization experience starting Jan1968, online at home since Mar1970

Difference between NCP and TCP/IP protocols

From: Lynn Wheeler <lynn@garlic.com>
Subject: Difference between NCP and TCP/IP protocols
Date: 29 Mar, 2024
Blog: Facebook
Feb2000 thread "Difference between NCP and TCP/IP protocols" that ran in "alt.folklore.computers,comp.protocols.tcp-ip,alt.culture.internet" .... I made a number of posts .... including a list of some related RFCs:

rfc60 ... A simplified NCP Protocol
rfc215 .. NCP, ICP, and TELNET:
rfc381 .. TWO PROPOSED CHANGES TO THE IMP-HOST PROTOCOL
rfc394 .. TWO PROPOSED CHANGES TO THE IMP-HOST PROTOCOL
rfc550 .. NIC NCP Experiment
rfc618 .. A Few Observations on NCP Statistics
rfc660 .. SOME CHANGES TO THE IMP AND THE IMP/HOST INTERFACE
rfc687 .. IMP/Host and Host/IMP Protocol Change
rfc704 .. IMP/Host and Host/IMP Protocol Change
rfc773 .. COMMENTS ON NCP/TCP MAIL SERVICE TRANSITION STRATEGY
rfc801 .. NCP/TCP TRANSITION PLAN

..... also (aka TCP implementation for NCP)

also ... rfc721 Out-of-Band Control Signals in a Host-to-Host Protocol

This note addresses the problem of implementing a reliable out-of-band signal for use in a host-to-host protocol. It is motivated by the fact that such a satisfactory mechanism does not exist in the Transmission Control Protocol (TCP) of Cerf et. al. [reference 4, 6] In addition to discussing some requirements for such an out-of-band signal (interrupts) and the implications for the implementation of the requirements, a discussion of the problem for the TCP case will be presented.

While the ARPANET host-to-host protocol does not support reliable transmission of either data or controls, it does meet the other requirements we have for an out-of-band control signal and will be drawn upon to provide a solution for the TCP case.

The TCP currently handles all data and controls on the same logical channel. To achieve reliable transmission, it provides positive acknowledgement and retransmission of all data and most controls. Since interrupts are on the same channel as data, the TCP must flush data whenever an interrupt is sent so as not to be subject to flow control.

... snip ...

archived:
https://www.garlic.com/~lynn/2000.html#67 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000.html#72 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000.html#73 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000.html#74 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000.html#85 Difference between NCP and TCP/IP protocols

recent post/thread with other internet content that references some CSNET before and after the cutover that was "more trouble than the ARPANET people had anticipated"
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2024b.html#37 Internet
https://www.garlic.com/~lynn/2024b.html#38 Internet

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM DBMS/RDBMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DBMS/RDBMS
Date: 29 Mar, 2024
Blog: Facebook
The Tale of Vern Watts. The long, inspired career of an IBM Distinguished Engineer and IMS inventor
http://www.vcwatts.org/ibm_story.html
http://vcwatts.org/IDUGSolJ_Vern_Watts_2009April13d.pdf

San Jose Research was doing the original SQL/RDBMS System/R for VM370/CMS on a 370/145 in the 70s; after I transferred to SJR in 1977, I would do some work with Jim Gray and Vera Watson on it ... and System/R was taking criticism from the IMS & EAGLE (next great new DBMS followon) people in STL (now SVL). Was able to do System/R tech transfer to Endicott for SQL/DS "under the radar" (while the company was pre-occupied with EAGLE). Then when EAGLE implodes there is a request for how fast System/R could be ported to MVS (which is eventually released as DB2, originally for "decision support" only).
https://en.wikipedia.org/wiki/Jim_Gray_(computer_scientist)
https://en.wikipedia.org/wiki/Vera_Watson
https://en.wikipedia.org/wiki/IBM_System_R

My wife had been in the gburg JES group and was one of the ASP/JES3 "catchers" ... and then was con'ed into going to POK responsible for "loosely-coupled" (mainframe for cluster) architecture. She didn't remain long because of 1) sporadic, periodic battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (except for IMS "hot standby") until much later with SYSPLEX and Parallel Sysplex. She has a story about asking Vern who he would ask permission of to do "hot-standby"; he replies "nobody, I will just tell them when it is all done".

When Jim Gray leaves IBM for Tandem, he tries to palm off on me DBMS consulting with the IMS group and support for BofA that was getting 60 VM/4341s for System/R joint study with SJR.

Late 80s, she did five hand-drawn charts for Nick Donofrio for HA/6000, which he approves; it was originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, and Ingres) who have VAXCluster in the same source base with Unix. I do enhanced distributed cluster protocol and distributed lock manager with VAXCluster API semantics (to simplify the ports). Early Jan1992 in an Oracle meeting, AWD/Hester tells the Oracle CEO we would have a 16-processor cluster by mid92 and a 128-processor cluster by ye92. Then late Jan1992, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific only) and we are told we can't work on anything with more than four processors (we leave IBM a few months later). Contributing were complaints from mainframe DB2 that if we were allowed to continue, it would be at least 5yrs ahead of them.
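
As an aside on "distributed lock manager with VAXCluster API semantics": a minimal sketch (Python, from memory of the generic VMS/VAXCluster-style DLM lock modes; purely illustrative, not the HA/CMP implementation) of the classic six modes and the compatibility test used to decide whether a lock request can be granted:

# six lock modes and compatibility, as generally described for
# VMS/VAXCluster-style distributed lock managers (illustrative sketch)
COMPATIBLE = {          # held mode -> request modes that may coexist with it
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},   # null
    "CR": {"NL", "CR", "CW", "PR", "PW"},         # concurrent read
    "CW": {"NL", "CR", "CW"},                     # concurrent write
    "PR": {"NL", "CR", "PR"},                     # protected read
    "PW": {"NL", "CR"},                           # protected write
    "EX": {"NL"},                                 # exclusive
}

def can_grant(requested, held_modes):
    # grant only if the requested mode is compatible with every mode
    # currently held on the resource; otherwise the request queues
    return all(requested in COMPATIBLE[held] for held in held_modes)

print(can_grant("PR", ["CR", "PR"]))   # True  -- readers can share
print(can_grant("EX", ["CR"]))         # False -- exclusive must wait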

a little topic drift, referencing transfer to SJR in 1977 (public group)
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2024b.html#37 Internet
https://www.garlic.com/~lynn/2024b.html#38 Internet

another about RISC and non-RISC throughput
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata
ASP/HASP, JES2/JES3, NGE/NGI posts
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

rusty iron why ``folklore''?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: rusty iron why ``folklore''?
Newsgroups: alt.folklore.computers
Date: Sat, 30 Mar 2024 21:55:19 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
That doesn't sound so bad, until you discover that the overall "control program" (the "CP" part) would also blindly execute this same privileged code as well. And so a single nonprivileged user could subvert the entire system.

re:
https://www.garlic.com/~lynn/2024b.html#13 rusty iron why ``folklore''?

some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
https://multicians.org/thvv/7094.html
people went to the 5th flr for Multics
https://en.wikipedia.org/wiki/Multics

others went to the IBM scientific center on the 4th flr
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
and did virtual machines, internal network, numerous interactive apps, monitoring&performance work, invented GML in 1969 (decade later morphs into ISO standard SGML, and after another decade morphs into HTML at CERN).

Initially, CP40/CMS was done on a 360/40 with virtual memory hardware mods ... then morphs into CP67/CMS when the 360/67 (standard with virtual memory) becomes available. It would simulate a virtual machine's privileged instructions ... but isolated within that virtual machine's domain ... not the real machine's domain.
https://en.wikipedia.org/wiki/CP/CMS
https://en.wikipedia.org/wiki/History_of_CP/CMS
More history/details at Melinda's website
http://www.leeandmelindavarian.com/Melinda#VMHist

Naturally, there was some amount of friendly rivalry between the two efforts on the 4th & 5th flrs. CTSS "RUNOFF" had been redone for CMS as "SCRIPT" ... and after GML was invented in 1969, GML tag processing was added to SCRIPT. There was a CP67/CMS MIT Urban lab in the bldg across the quad. The CP67 ASCII support had been for TTY devices with lines(/transmissions) shorter than 255 chars (1-byte length field). Somebody down at Harvard got a new ASCII device (plotter?) and they needed to patch CP67 to handle up to 1200(?) chars ... a flaw in the patch crashed the system 27 times in one day
https://www.multicians.org/thvv/360-67.html
"Multics was also crashing quite often at that time, but each crash took an hour to recover because we salvaged the entire file system. This unfavorable comparison was one reason that the Multics team began development of the New Storage System.)"
... snip ...
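
Back to that 1-byte length field: a minimal sketch (Python, purely illustrative, nothing to do with the actual CP67 assembler) of why 255 chars is the hard limit and how a careless patch for longer transmissions can silently store the wrong length:

# a 1-byte length field can only represent 0..255; anything larger either
# has to be rejected or wraps around when truncated to 8 bits
def pack_length_1byte(nbytes):
    if nbytes > 255:
        raise ValueError("doesn't fit in a 1-byte length field")
    return nbytes & 0xFF

print(pack_length_1byte(80))    # 80  -- typical TTY line, fine
print(1200 & 0xFF)              # 176 -- what naive truncation would store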

In the 60s, there were two commercial online service bureau spinoffs of the science center ... specializing in services for the financial industry; they had to demonstrate strong security because (in part) competing financial companies were making use of the services.

there were also various gov. agencies installing CP67/CMS because of its strong security.

Another commercial online service bureau was TYMSHARE in the 70s
https://en.wikipedia.org/wiki/Tymshare
and in Aug1976 they provided their CMS-based online computer conferencing system (precursor to modern social media), "free" to the IBM user group SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE, archives here
http://vm.marist.edu/~vmshare

a 3-letter gov. agency was a large customer (and required gov. level security) and very active in the VM group at SHARE and on vmshare.

The science center had also ported APL\360 to CP67/CMS as CMS\APL, opening workspaces up from "swapped 16kbytes" to demand-paged large virtual memory ... and also implemented an API for system services like file I/O ... enabling many real world applications. Then the business planners in Armonk corporate hdqtrs loaded the most valuable corporate data on the Cambridge system for doing CMS\APL-based business applications. Strong security had to also be demonstrated since professors, staff, and students from Boston area educational institutions were also using the Cambridge CP67/CMS system.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online commercial service bureau posts
https://www.garlic.com/~lynn/submain.html#online
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

--
virtualization experience starting Jan1968, online at home since Mar1970

rusty iron why ``folklore''?

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: rusty iron why ``folklore''?
Newsgroups: alt.folklore.computers
Date: Sun, 31 Mar 2024 07:03:34 -1000
John Levine <johnl@taugh.com> writes:
Yes, I used it back in the day to develop and run some economic simulations. CP actually was a multi-user OS, by the way, with the communication between users via shared disks and what we called virtual card chutes, connecting the simulated punch on one virtual machine to the reader on another.

re:
https://www.garlic.com/~lynn/2024b.html#13 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#81 rusty iron why ``folklore''?

and RSCS/VNET also forwarded the (virtual unit record) spool files to users on different machines ... was used for several different "email" implementations.

GML website seems to have gone 404, but lives on at wayback machine
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
"Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge."
... snip ...

CSC member responsible for CP67 wide-area network (morphs into the corporate internal network larger than arpanet/internet from just about beginning until sometime mid/late 80s ... technology also used for the corporate sponsored univ bitnet/earn, also for a time larger than internet)
https://en.wikipedia.org/wiki/Edson_Hendricks
"In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today."
... snip ...

SJMerc article about Edson (passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed INTERNET) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

corporate sponsored univ bitnet/earn
https://en.wikipedia.org/wiki/BITNET

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET/EARN posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

Note, the Pisa Science Center had done "SPM" for CP/67 (inter virtual machine communication) ... which never shipped to customers ... although the VM/370 RSCS/VNET that shipped to customers included support for using "SPM". "SPM" was a superset combination of the later VM/370 "VMCF", "IUCV", and "SMSG" functions.

trivia: internally within IBM, a VM370/CMS 3270 client/server "space war" game was implemented using "SPM" ... and since RSCS/VNET supported "SPM", clients could be almost anywhere on the internal network. aside: almost immediately robotic clients appeared, beating all humans (with their faster response time) ... the server then was modified to increase energy use non-linearly as command intervals dropped below nominal human response ... somewhat leveling the playing field.

other trivia: 1st webserver in the US was on the Stanford SLAC VM370 system
https://www.slac.stanford.edu/history/earlyweb/history.shtml
https://www.slac.stanford.edu/history/earlyweb/firstpages.shtml

SLAC also sponsored the monthly VM370 user group meetings ("BAYBUNCH")

--
virtualization experience starting Jan1968, online at home since Mar1970

rusty iron why ``folklore''?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: rusty iron why ``folklore''?
Newsgroups: alt.folklore.computers
Date: Sun, 31 Mar 2024 08:07:00 -1000
re:
https://www.garlic.com/~lynn/2024b.html#13 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#81 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?

there was an early case where an MIT student "hung" the IBM cambridge system by writing a channel program that looped, locking up the i/o channel. The system had to be re-IPLed and the student contacted to not do that again; he did it again and his user login was disabled. He then complained that IBM wasn't allowed to block his user (apparently didn't realize that it wasn't an MIT system).

the commercial online CP67 services dealt with it early on by adding a new security class that wouldn't simulate the SIO instruction that invoked channel programs ... restricting CMS I/O to just the "diagnose" instruction (which had already started being used for CMS I/O) ... some vmshare discussion about sandbox'ing users.

mid-90s there was a small conference of cal. univ computer science graduate school professors ... who complained that graduate students were getting more "peer points" by demonstrating how to crash systems as opposed to building systems that were crash-proof.

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM DBMS/RDBMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DBMS/RDBMS
Date: 31 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#80 IBM DBMS/RDBMS

mid-90s there was a small conference of cal. univ computer science graduate school professors ... who complained that graduate students were getting more "peer points" by demonstrating how to crash systems as opposed to building systems that were crash-proof.

Original System/R (before and after porting to MVS for DB2) was mainframe only. When we started on HA/6000 (renamed HA/CMP), IBM Toronto was just starting on an (initially very simple) C-language RDBMS for OS2 (later ported to AIX and rebranded DB2) ... which was a long way from real business critical production use and even much further from cluster (loosely-coupled) operation, resulting in having to work with the RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had both VAXcluster and Unix support.

Fairly early in HA/CMP, the IBM S/88 product administrator started taking us around to their customers and also had me write a section for the corporate continuous availability strategy document .... however it got pulled when both Rochester (AS/400) and POK (mainframe) complained that they couldn't (then) meet the requirements.

System/R posts
https://www.garlic.com/~lynn/submisc.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/submisc.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

past posts mentioning s/88
https://www.garlic.com/~lynn/2024b.html#55 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#29 DB2
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#115 IBM RAS
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#38 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2022b.html#55 IBM History
https://www.garlic.com/~lynn/2021d.html#53 IMS Stories
https://www.garlic.com/~lynn/2021.html#3 How an obscure British PC maker invented ARM and changed the world
https://www.garlic.com/~lynn/2012.html#85 IPLs and system maintenance was Re: PDSE
https://www.garlic.com/~lynn/2010f.html#68 But... that's *impossible*
https://www.garlic.com/~lynn/2009q.html#26 Check out Computer glitch to cause flight delays across U.S. - MarketWatch
https://www.garlic.com/~lynn/2008j.html#16 We're losing the battle
https://www.garlic.com/~lynn/2007q.html#67 does memory still have parity?
https://www.garlic.com/~lynn/2007f.html#56 Is computer history taught now?
https://www.garlic.com/~lynn/2003d.html#10 Low-end processors (again)
https://www.garlic.com/~lynn/2001k.html#11 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#10 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#9 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001i.html#49 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 31 Mar, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#39 IBM AIX
https://www.garlic.com/~lynn/2023.html#40 IBM AIX
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#72 IBM AIX
https://www.garlic.com/~lynn/2024.html#95 IBM AIX

re: UNIX EREP;

"SSUP" was stripped down TSS/370 kernel for the required hardware support, with AT&T UNIX interface layered on top. had lots of discussions with Amdahl people after monthly SLAC BAYBUNCH meetings .... about 370 hardware layer needed for unix .... "native" would have required effort several times the straight unix port to 370; SSUP for AT&T UNIX port, and VM370 for AIX/370 and (amdahl's) UTS (claim was both IBM and Amdahl FEs/CEs wouldn't provide field support for mainframes w/o the necessary EREP ... aka hardware error recovery and reporting).

re: MVS POSIX support;

late 80s, a senior disk engineer got a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance ... however he opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division ... they were seeing data fleeing mainframes to more distributed-computing friendly platforms, with a drop in disk sales. The problem was the communication group had corporate responsibility for everything that crossed the datacenter walls and were fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm). The disk division had come up with several solutions to the distributed computing problem, but they were constantly vetoed by the communication group.

As a partial work-around, the disk division software executive was investing in distributed computing startups that would use IBM disks and would periodically ask us to drop by his investments. He also paid for the MVS POSIX support (since it didn't directly cross the datacenter walls, the communication group couldn't veto it, but it was an enabler for non-IBM MVS distributed computing products). However the communication group datacenter stranglehold wasn't just disks, and a couple years later IBM had one of the largest losses in the history of US corporations and was being reorganized into the 13 "baby blues".
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone) ... and uses some of the techniques used at RJR (ref gone 404, but lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Vintage BITNET

From: Lynn Wheeler <lynn@garlic.com>
Subject: Vintage BITNET
Date: 31 Mar, 2024
Blog: Facebook
some wiki
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

IBM Cambridge Science Center CP67/CMS wide-area network morphs into the corporate internal network and then technology used for corporate sponsored univ bitnet (& bitnet in europe, "EARN")
https://www.garlic.com/~lynn/2023g.html#24 Vintage ARPANET/Internet
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
EARN (bitnet in europe) post
https://www.garlic.com/~lynn/2024.html#55 EARN 40th Anniversary Conference

Science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Along the way, JES2 NJE (carry over from the univ HASP implementation that had "TUCC" in cols 68-71 of the cards) systems wanted to hook into the internal network. The first problem was that NJE would trash traffic where the origin or destination hosts weren't in the local configuration table. The implementation used spare entries in the JES2/HASP 255-entry pseudo device table ... typically some 60-80 pseudo device entries, leaving some 170-200 entries ... however this was at a time when the internal network was way past 255 entries. As a result, the JES NJE hosts were relegated to edge nodes (behind VM370 RSCS/VNET) to minimize the amount of traffic that they would trash. In order to include JES2/NJE, there had to be an RSCS/VNET NJE simulation driver done.
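
A rough sketch (Python, hypothetical node names, not actual JES2 or RSCS code) of the gist of the difference described above: NJE discarding traffic when either end isn't in its limited local table, versus RSCS/VNET store-and-forward passing traffic for other destinations along toward the next hop:

NJE_LOCAL_TABLE = {"SJRLVM1", "HURSLY1"}       # hypothetical, limited definitions

def jes2_nje_handle(origin, destination, spool_file):
    # NJE trashed traffic when origin or destination wasn't locally defined
    if origin not in NJE_LOCAL_TABLE or destination not in NJE_LOCAL_TABLE:
        return None                            # file discarded
    return ("deliver", destination, spool_file)

def rscs_vnet_handle(destination, spool_file, local_node, next_hop):
    # RSCS/VNET store-and-forward: traffic for other nodes gets forwarded
    # toward a configured next hop rather than discarded
    if destination == local_node:
        return ("deliver", destination, spool_file)
    return ("forward", next_hop, spool_file)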

However, JES2/NJE had a 2nd problem ... network information was intermixed in the headers with job control information ... and a problem appeared where traffic between JES2 systems at different release levels would crash the host MVS systems (because of slight changes in headers). As a result, JES2/NJE systems were also being hidden behind special RSCS/VNET systems with enhanced NJE drivers that understood the different release formats and how to reformat headers before sending over a link to directly connected JES2 systems.

In the early 80s, there was an infamous case of a San Jose MVS/JES2 system being slightly modified, resulting in crashing MVS systems in Hursley, England ... and Hursley management blaming the Hursley VM370/RSCS group because they hadn't installed new RSCS NJE drivers (to keep MVS from crashing, even though they had no knowledge of the San Jose changes).

Eventually JES2/NJE was upgraded to handle 999 host definitions, but it was well after the internal network had passed 1000 hosts (and it also left the problem of incompatible headers between different JES2 releases). Then marketing addressed the difference between RSCS native drivers and JES2 NJE drivers by eliminating the native drivers in the RSCS product (leaving just NJE simulation drivers) ... although the internal network backbone kept using the native RSCS drivers, in part because they had higher throughput.

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HASP/ASP, JES2/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

some posts mentioning bitnet, rscs/vnet, jes nje/nji, MVS crashing
https://www.garlic.com/~lynn/2023d.html#118 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2022h.html#73 The CHRISTMA EXEC network worm - 35 years and counting!
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#38 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022c.html#27 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022.html#78 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2021b.html#75 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2019b.html#30 This Paper Map Shows The Extent Of The Entire Internet In 1973
https://www.garlic.com/~lynn/2014d.html#75 NJE Clarifications
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2013g.html#65 JES History
https://www.garlic.com/~lynn/2011p.html#81 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011e.html#78 Internet pioneer Paul Baran
https://www.garlic.com/~lynn/2010b.html#44 sysout using machine control instead of ANSI control
https://www.garlic.com/~lynn/2006x.html#8 vmshare
https://www.garlic.com/~lynn/2006k.html#10 Arpa address
https://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2003g.html#51 vnet 1000th node anniversary 6/10
https://www.garlic.com/~lynn/2003f.html#0 early vnet & exploit
https://www.garlic.com/~lynn/2002q.html#35 HASP:

--
virtualization experience starting Jan1968, online at home since Mar1970

Dialed in - a history of BBSing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Dialed in - a history of BBSing
Newsgroups: alt.folklore.computers
Date: Mon, 01 Apr 2024 11:50:30 -1000
TYMSHARE ... online commercial service bureau
https://en.wikipedia.org/wiki/Tymshare
and its TYMNET with lots of local phone numbers around US and the world
https://en.wikipedia.org/wiki/Tymnet
In Aug1976, Tymshare started offering its CMS-based online computer conferencing free to (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with tymshare to get a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on the internal corporate network and systems (biggest problem was lawyers who were concerned about internal employees being contaminated by exposure to unfiltered customer information).

In the late 70s and early 80s, I was also blamed for online computer conferencing on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s); it really took off the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (although only about 300 participated, claims were that upwards of 25,000 were reading). Also, supposedly when the corporate executive committee was told, 5of6 wanted to fire me.

From IBMJARGON ... copy here
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.
... snip ...

linkedin post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

early 70s, IBM CEO Learson tries (and fails) to block bureaucrats, careerists, and MBAs from destroying Watson culture/legacy. Then a decade later, the "tandem memos" reference some of the problems with non-technical management. And then two decades after Learson's failed effort, IBM has one of the largest losses in the history of US corporations and was being re-orged into the 13 "baby blues" in preparation for breaking up the company
http://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
http://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
had already left IBM, but get a call from the bowels of (IBM corporate hdqtrs) Armonk asking if we could help with the breakup of the company. However, before getting started, the board brings in the former AMEX president as new CEO, who (somewhat) reverses the breakup.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online commercial service bureau posts
https://www.garlic.com/~lynn/submain.html#online
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Dialed in - a history of BBSing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Dialed in - a history of BBSing
Newsgroups: alt.folklore.computers
Date: Tue, 02 Apr 2024 16:39:05 -1000
re:
https://www.garlic.com/~lynn/2024b.html#87 Dialed in - a history of BBSing

early 80s, the scientific center wide-area network and corporate network technology was used for the corporate sponsored univ "BITNET" (which was also larger than internet for a period)
https://en.wikipedia.org/wiki/BITNET
and EARN (bitnet in europe)
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

which spawned something that had some of the features of the internal network online computer conferencing
https://en.wikipedia.org/wiki/LISTSERV

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM User Group Share

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM User Group Share
Date: 02 Apr, 2024
Blog: Facebook
MVS trivia (MVS song sung at SHARE HASP sing-along) ... I was there for 1st performance
http://www.mxg.com/thebuttonman/boney.asp
from above:


Words to follow along with... (glossary at bottom)

If it IPL's then JES won't start,
And if it gets up then it falls apart,
MVS is breaking my heart,
Maybe things will get a little better in the morning,
Maybe things will get a little better.
The system is crashing, I'm having a fit,
and DSS doesn't help a bit,
the shovel came with the debugging kit,
Maybe things will get a little better in the morning,
Maybe things will get a little better.
Work Your Fingers to the Bone and what do you get?
Boney Fingers, Boney Fingers!

from glossary:
$4K - MVS was the first operating system for which the IBM Salesman got a $4000 bonus if he/she could convince their customer to install VS 2.2 circa 1975. IBM was really pissed off that this fact became known thru this
... snip ...

HASP/ASP, JES2/JES3, and/or NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

a few recent posts mentioning "boney fingers"
https://www.garlic.com/~lynn/2023e.html#66 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2022h.html#39 IBM Teddy Bear
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2021.html#25 IBM Acronyms

this was also about the time that CERN presented a study comparing MVS/TSO and VM370/CMS ... copies freely available at SHARE (outside IBM) ... inside IBM, copies were stamped "IBM Confidential - Restricted" (2nd highest security classification, available on a need-to-know basis only), limiting the IBMers that saw it, possibly because it conflicted with the corporate party line ... part of the joke that inside IBM, employees were treated like mushrooms: kept in the dark and fed "...."

a few posts mentioning CERN MVS/TSO-VM370/CMS comparison SHARE report
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2020.html#28 50 years online at home
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives

Slightly before, Learson was trying (& failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... two decades later, IBM has one of the largest losses in history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Not long after Boney Fingers and CERN MVS/TSO-VM370/CMS comparison, Tymshare
https://en.wikipedia.org/wiki/Tymshare
and its TYMNET with lots of local phone numbers around US and the world
https://en.wikipedia.org/wiki/Tymnet
in Aug1976, Tymshare started offering its CMS-based online computer conferencing free to (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on internal networks and systems ... biggest problem was lawyers concerned about internal employees being exposed to unfiltered customer information.

(virtual machine) commercial online service bureau posts
https://www.garlic.com/~lynn/submain.html#online
ibm internal online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

Later there was the SHARE LSRAD report; I scanned my copy for putting up in the SHARE directory at bitsavers .... problem was that it was published just after congress extended the copyright period, so I had to track down some SHARE person to allow it to be put up.

bitsavers lsrad report
http://www.bitsavers.org/pdf/ibm/share/The_LSRAD_Report_Dec79.pdf

posts mentioning 1979 LSRAD report
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2015f.html#82 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#53 Amdahl UTS manual
https://www.garlic.com/~lynn/2013h.html#85 Before the PC: IBM invents virtualisation
https://www.garlic.com/~lynn/2013h.html#82 Vintage IBM Manuals
https://www.garlic.com/~lynn/2013e.html#52 32760?
https://www.garlic.com/~lynn/2012p.html#58 What is holding back cloud adoption?
https://www.garlic.com/~lynn/2012o.html#36 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking
https://www.garlic.com/~lynn/2012f.html#58 Making the Mainframe more Accessible - What is Your Vision?
https://www.garlic.com/~lynn/2011p.html#146 IBM Manuals
https://www.garlic.com/~lynn/2011p.html#11 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#10 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#70 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#62 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011.html#89 Make the mainframe work environment fun and intuitive
https://www.garlic.com/~lynn/2011.html#88 digitize old hardcopy manuals
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2009n.html#0 Wanted: SHARE Volume I proceedings
https://www.garlic.com/~lynn/2009.html#70 A New Role for Old Geeks
https://www.garlic.com/~lynn/2009.html#47 repeat after me: RAID != backup
https://www.garlic.com/~lynn/2007d.html#40 old tapes
https://www.garlic.com/~lynn/2006d.html#38 Fw: Tax chooses dead language - Austalia
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual

(images: SHARE song books, 360/67 blue card, and VMSHARE cards)

--
virtualization experience starting Jan1968, online at home since Mar1970

7Apr1964 - 360 Announce

From: Lynn Wheeler <lynn@garlic.com>
Subject: 7Apr1964 - 360 Announce
Date: 02 Apr, 2024
Blog: Facebook
Note Amdahl wins the battle to make ACS 360-compatible ... folklore is that executives then shut down the operation because they were afraid that it would advance the state of the art too fast and IBM would lose control of the market ... Amdahl leaves IBM shortly after. The following lists some ACS/360 features that show up more than 20yrs later in the 90s with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html

ACS
https://people.computing.clemson.edu/~mark/acs.html
ACS Legacy
https://people.computing.clemson.edu/~mark/acs_legacy.html

recent posts mentioning shutdown of ACS/360
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#98 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM User Group Share

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM User Group Share
Date: 03 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share

... also 1st half of the 70s, there was the "Future System" project (totally different from 370 and going to completely replace it); internal politics was killing off 370 efforts, and the lack of new 370 products during the period is credited with giving the clone 370 makers (like Amdahl) their market foothold.
http://www.jfsowa.com/computer/memo125.htm

Then when FS implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 in parallel. Possibly in retaliation for the CERN analysis, the head of POK convinces corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (claiming otherwise MVS/XA wouldn't ship on time) ... Endicott eventually manages to save the VM370 product mission for the entry&mid-range ... but had to reconstitute a development group from scratch. However there were POK executives going around internal datacenters bullying them that they had to convert to MVS (because VM370 wasn't going to be available on the next generation of high-end POK machines).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

recent posts mentioning shutdown of vm370
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024b.html#5 Vintage REXX
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2024.html#106 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#94 MVS SRM
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#86 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#61 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2024.html#21 1975: VM/370 and CMS Demo

--
virtualization experience starting Jan1968, online at home since Mar1970

PC370

From: Lynn Wheeler <lynn@garlic.com>
Subject: PC370
Date: 03 Apr, 2024
Blog: Facebook
XT/370 (& AT/370) got an early evaluation release ... benchmarking showed lots of page thrashing; they blamed me for a six month slip in the schedule while they added another 128kbytes to the 370 memory. I also provided them with a much more efficient page replacement algorithm and the CMS page-mapped filesystem (from my internal enhanced production releases, but neither otherwise ever shipped to customers).

The CMS page-mapped filesystem, for moderate filesystem use by a single user, had about three times the throughput with 3380 disks ... increasing the number of users increased the I/O optimization ... but the throughput increase was even better for the single-user XT/370 because all I/O was messages between VM/370 and CP/88 running on the PC (in the XT case using the 100ms/access disk).

As an undergraduate in the 60s, the univ had hired me responsible for os/360 (running on a 360/67, originally acquired for tss/360); the univ shutdown the datacenter on the weekend, and I would have the datacenter dedicated (although 48hrs w/o sleep impacted monday classes). Then the science center came out to install CP67 (3rd installation after CSC itself and MIT Lincoln Lab), which I mostly played with during my dedicated window. Over the next six months, I mostly optimized CP67 pathlengths for running OS/360 in a virtual machine; the OS/360 standalone benchmark was 322secs, initially 858secs in a virtual machine (CP67 CPU 534secs) ... after 6 months got CP67 CPU down to 113secs. Then I started on I/O optimization ... ordered seek for movable arm DASD and chained page requests maximizing transfers/rotation when no arm motion was required (got the fixed-head 2301 drum from about 70/sec to a peak of 270/sec).
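
A minimal sketch (Python, assumed/illustrative, nothing like the actual CP67 assembler) of the "ordered seek" idea: keep pending DASD requests sorted by cylinder and service the nearest one at or above the arm, wrapping to the lowest cylinder when the sweep runs out, so the arm makes orderly passes instead of FIFO thrashing back and forth:

import bisect, itertools

# illustrative "ordered seek" queue: pending requests kept sorted by cylinder
class OrderedSeekQueue:
    def __init__(self):
        self.pending = []                 # sorted list of (cylinder, seq, request)
        self.seq = itertools.count()      # tie-breaker for equal cylinders

    def add(self, cylinder, request):
        bisect.insort(self.pending, (cylinder, next(self.seq), request))

    def next_request(self, arm_cylinder):
        if not self.pending:
            return None
        idx = bisect.bisect_left(self.pending, (arm_cylinder,))
        if idx == len(self.pending):      # nothing at/above the arm position:
            idx = 0                       # wrap around to the lowest cylinder
        return self.pending.pop(idx)      # -> (cylinder, seq, request)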

After graduation I joined IBM, and one of my hobbies was enhanced production operating systems (including the CMS page-mapped filesystem, 1st for CP67/CMS and then ported to VM370) for internal datacenters (although some stuff managed to leak out in CP67 and VM370).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CMS paged-map filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock

Later did some VM/370 stuff for A74 ... old email of A74 announce
https://www.garlic.com/~lynn/2000e.html#email880622
... including the mods for 4k storage protect keys (rather than 2k).

Note these were PC fixed-block disks before CKD "software" emulation, requiring mainframe software that supported FBA (all disks were in the process of moving to fixed-block, which can be seen with the 3380, where record/track calculations required rounding up to cell size) ... by now even CKD hardware hasn't been made for decades ... all CKD is software simulation on industry standard fixed-block disks.
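
To illustrate the "rounding up to cell size" point: a small Python sketch using the commonly published 3380 track-capacity formula (constants recalled from reference material, treat as approximate); each record's data and key are rounded up to 32-byte cells plus a fixed per-record overhead, out of 1499 cells per track:

import math

# illustrative 3380 record/track calculation: data (and key, if any) rounded
# up to 32-byte cells plus a fixed 15-cell per-record overhead, 1499 cells/track
def records_per_3380_track(data_len, key_len=0):
    data_cells = math.ceil((data_len + 12) / 32)
    key_cells = math.ceil((key_len + 12) / 32) if key_len else 0
    return 1499 // (15 + key_cells + data_cells)

print(records_per_3380_track(4096))     # 10 -- ten 4K blocks per track
print(records_per_3380_track(47476))    # 1  -- one maximum-size record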

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

... trivia: low&mid range ("vertical" microcode) 370s averaged 10 native instructions per 370 instruction ... and the various PC 370 emulators have been similar. Trivia: after leaving IBM in the early 90s, I was brought into the largest airline reservation systems to look at the ten impossible things they couldn't do. They gave me a complete softcopy of the OAG (all commercial airline schedules in the world) to start with "ROUTES". I went away to implement it on RS/6000. I first redid the existing implementation, which ran 20 times faster, then did processor cache-line organization of the data structures, which got it up to 100 times faster ... then did the ten impossible things, which dropped it down to ten times faster. The projection was that ten RS6000/990s could handle all ROUTE requests for all airlines in the world (benchmarks aren't actual instruction count but number of program iterations compared to a benchmark reference program).


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS
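
(Illustrative only: a tiny sketch of what "iterations compared to a benchmark reference program" means in practice; the reference rate and rating below are made-up numbers, not any particular machine.)

REFERENCE_ITERS_PER_SEC = 1_000.0   # assumed rate on the reference machine
REFERENCE_RATING_MIPS   = 1.0       # rating assigned to the reference machine

def relative_mips(iters_per_sec):
    # "MIPS" as relative throughput: how many times faster the benchmark
    # program iterates versus the reference, scaled by the reference rating
    return REFERENCE_RATING_MIPS * iters_per_sec / REFERENCE_ITERS_PER_SEC

print(relative_mips(126_000.0))     # -> 126.0 "MIPS" relative rating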

The last product we had done at IBM was HA/CMP ... and the executive we reported to went over to head-up Somerset/AIM (apple, ibm, motorola) for single chip power/pc ... including some features from motorola risc 88K like cache consistency protocol for multiprocessor.

For HA/CMP, the IBM S/88 product administrator started taking us around to their customers and also got me to write a section for the corporate continuous availability strategy document; however it gets pulled when both Rochester (AS/400) and POK (mainframe) complain that they couldn't meet the requirements.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Then in the latter half of the 90s, the i86 makers added a hardware layer for translating i86 instructions into risc micro-ops for actual execution (largely negating the difference between i86 and risc in throughput).


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     z900 processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC)

2003 max. configured z990, 32 processor aggregate 9BIPS (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configured z196, 80 processor aggregate 50BIPS (625MIPS/proc)
2010 E5-2600 server blade, 16 processor aggregate 500BIPS (31BIPS/proc)

... 1999 Pentium3 processor thirteen times 2001 z900 processor, 2003 Pentium4 processor 34 times 2003 z990 processor, 2010 XEON processor 50 times 2010 z196 processor.

--
virtualization experience starting Jan1968, online at home since Mar1970

How investment firms shield the ultrawealthy from the IRS

From: Lynn Wheeler <lynn@garlic.com>
Subject: How investment firms shield the ultrawealthy from the IRS
Date: 03 Apr, 2024
Blog: Facebook
How investment firms shield the ultrawealthy from the IRS. An ICIJ investigation examining hundreds of leaked tax forms offers a glimpse into the huge challenges the U.S. agency faces in tackling the favorite new global investment vehicles of the world's most wealthy.
https://www.icij.org/inside-icij/2024/04/how-investment-firms-shield-the-ultra-wealthy-from-the-irs/

tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion

some recent posts mentioning tax fraud
https://www.garlic.com/~lynn/2024b.html#66 New data shows IRS's 10-year struggle to investigate tax crimes
https://www.garlic.com/~lynn/2023g.html#33 Interest Payments on the Ballooning Federal Debt vs. Tax Receipts & GDP: Not as Bad as in 1982-1997, but Getting There
https://www.garlic.com/~lynn/2023f.html#74 Why the GOP plan to cut IRS funds to pay for Israel aid would increase the deficit
https://www.garlic.com/~lynn/2023e.html#101 Mobbed Up
https://www.garlic.com/~lynn/2023e.html#67 Wonking Out: Is the Fiscal Picture Getting Better or Worse? Yes
https://www.garlic.com/~lynn/2023e.html#60 Since 9/11, US Has Spent $21 Trillion on Militarism at Home and Abroad
https://www.garlic.com/~lynn/2023e.html#18 A U.N. Plan to Stop Corporate Tax Abuse

--
virtualization experience starting Jan1968, online at home since Mar1970

Ferranti Atlas and Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ferranti Atlas and Virtual Memory
Date: 04 Apr, 2024
Blog: Facebook
Melinda Varian's history
http://www.leeandmelindavarian.com/Melinda#VMHist
http://www.leeandmelindavarian.com/Melinda/neuvm.pdf
from above, Les Comeau has written (about TSS/360):
Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.

What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work.

... snip ...

Atlas reference (gone 403?, but lives free at wayback):
https://web.archive.org/web/20121118232455/http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf
from above:
Paging can be credited to the designers of the ATLAS computer, who employed an associative memory for the address mapping [Kilburn, et al., 1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words per page), |p| = 11 (resulting in 2048 pages), and f = 5 (resulting in 32 page frames). Thus a 2^20-word virtual memory was provided for a 2^14-word machine. But the original ATLAS operating system employed paging solely as a means of implementing a large virtual memory; multiprogramming of user processes was not attempted initially, and thus no process id's had to be recorded in the associative memory. The search for a match was performed only on the page number p.
... snip ....

... referencing that ATLAS used paging for a large virtual memory ... but not multiprogramming (multiple concurrent address spaces). Cambridge had a modified 360/40 with virtual memory and an associative lookup that included both process-id and page number.
http://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

CP40 morphs into CP67 when the 360/67 becomes available, standard with virtual memory. As an undergraduate in the 60s, I had been hired fulltime for OS/360 running on the 360/67 (used as a 360/65; it was originally supposed to be for TSS/360). The univ shutdown the datacenter on weekends and I would have it dedicated (although 48hrs w/o sleep made Monday classes difficult). CSC then came out to install CP/67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it during my dedicated time ... spent the 1st six months or so redoing pathlengths for running OS/360 in a virtual machine. The OS/360 benchmark was 322secs on the bare machine, initially 856secs in a virtual machine (CP67 CPU 534secs); got CP67 CPU down to 113secs (from 534secs).

I redid scheduling&paging algorithms and added ordered seek for disk i/o and chained page requests to maximize transfers/revolution (2301 fixed-head drum from peak 70/sec to peak 270/sec). I redid CP67 page replacement as global LRU (at a time when academic literature was all about "local LRU"), which I also deployed at Cambridge after graduating and joining IBM. IBM Grenoble Scientific Center modified CP67 to implement a "local" LRU algorithm for their 1mbyte 360/67 (155 page'able pages after fixed memory requirements). Grenoble had a very similar workload to Cambridge, but their throughput for 35 users (local LRU) was about the same as the Cambridge 768kbyte 360/67 (104 page'able pages) with 80 users (and global LRU) ... aka global LRU outperformed "local LRU" with more than twice the number of users and only 2/3rds the available memory.
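
A minimal sketch (Python, illustrative only; not the CP67 implementation) of a "clock"-style global LRU approximation in the spirit of the global replacement discussed above: one reference bit per real-page frame and a single hand sweeping all frames regardless of which user/virtual machine owns the page, versus "local LRU" which partitions frames per user and replaces only within the owning partition:

# illustrative "clock" global LRU approximation: every frame competes in one
# global pool; recently referenced pages get a second chance
class ClockReplacement:
    def __init__(self, nframes):
        self.refbit = [False] * nframes
        self.hand = 0

    def reference(self, frame):            # called when a page is touched
        self.refbit[frame] = True

    def select_victim(self):
        while True:
            if not self.refbit[self.hand]:
                victim = self.hand
                self.hand = (self.hand + 1) % len(self.refbit)
                return victim              # replace the page in this frame
            self.refbit[self.hand] = False # second chance: clear bit, move on
            self.hand = (self.hand + 1) % len(self.refbit)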

Jim Gray had departed IBM SJR for Tandem in fall of 1980. A year later, at the Dec81 ACM SIGOPS meeting, he asked me to help a Tandem co-worker get his Stanford PHD that heavily involved global LRU (the "local LRU" forces from the 60s academic work were heavily lobbying Stanford not to award a PHD for anything involving global LRU). Jim knew I had detailed stats on the Cambridge/Grenoble global/local LRU comparison (showing global LRU significantly outperformed "local LRU"). IBM executives stepped in and blocked me from sending a response for nearly a year (I hoped it was part of the punishment for being blamed for online computer conferencing in the late 70s through the early 80s on the company internal network ... and not that they were meddling in the academic dispute).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
scheduling/dispatching, dynamic adaptive resource management post
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging algorithms
https://www.garlic.com/~lynn/subtopic.html#clock
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

posts specifically mentioning problems for Stanford PHD involving Global LRU page replacement
https://www.garlic.com/~lynn/2024b.html#39 Tonight's tradeoff
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021j.html#18 Windows 11 is now available
https://www.garlic.com/~lynn/2021i.html#82 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#62 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021e.html#3 IBM Internal Network
https://www.garlic.com/~lynn/2019b.html#5 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2018f.html#62 LRU ... "global" vs "local"
https://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#66 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
https://www.garlic.com/~lynn/2016g.html#40 Floating point registers or general purpose registers
https://www.garlic.com/~lynn/2016e.html#2 S/360 stacks, was self-modifying code, Is it a lost cause?
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015c.html#66 Messing Up the System/360
https://www.garlic.com/~lynn/2015c.html#48 The Stack Depth
https://www.garlic.com/~lynn/2014m.html#138 How hyper threading works? (Intel)
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2014e.html#14 23Jun1969 Unbundling Announcement
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
https://www.garlic.com/~lynn/2012m.html#18 interactive, dispatching, etc
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
https://www.garlic.com/~lynn/2011p.html#53 Odd variant on clock replacement algorithm

--
virtualization experience starting Jan1968, online at home since Mar1970

Ferranti Atlas and Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ferranti Atlas and Virtual Memory
Date: 04 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#94 Ferranti Atlas and Virtual Memory

... after the decision to add virtual memory to all 370s, the decision was made to also do VM370; however the morph from CP67->VM370 simplified and/or eliminated lots of features (including multiprocessor support). In 1974, I then decided to migrate lots of the missing CP67 features to VM370 release2 ... including the kernel re-org for multiprocessor operation (but not the actual multiprocessor support) ... aka one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (and the online worldwide sales&marketing support systems were long time customers) ... and this would be my 1st CSC/VM. Then in 1975, I migrate multiprocessor support to a Release3-based CSC/VM, initially for the US HONE datacenter (the US HONE datacenters had recently been consolidated in Palo Alto ... trivia: when facebook 1st moves into Silicon Valley, it is into a new bldg built next door to the old consolidated HONE datacenter)

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 05 Apr, 2024
Blog: Facebook
IBM 360 Announce 7Apr1964

... both Boeing and IBMers told the story about 360 announce day: Boeing walks into the IBM rep's office and places an order that made the rep the highest paid IBMer that year (for somebody that they claimed didn't know what a 360 was, back in the days of straight commission). The next year IBM converts to "quota" and Boeing places another order in Jan, making the rep's quota for the year ... which was then "adjusted", and the rep leaves IBM.

Took two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30. The univ was replacing 709/1401 with a 360/67 for tss/360 ... temporarily the 1401 was replaced with a 360/30 (pending availability of the 360/67; the 360/30 was for starting to get familiar with 360 and also had microcode 1401 emulation). The univ shut down the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard. They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc. and within a few weeks had a 2000 card assembler program. Then within a year of the intro class, the 360/67 comes in and I'm hired fulltime responsible for OS/360 (tss/360 never really came to production, so it ran as a 360/65; I continue to have my 48hr dedicated datacenter time on weekends).

Student Fortran jobs ran under second on 709, but over a minute on 360/65 (OS MFT9.5), I install HASP which cuts the time in half. I then start redoing STAGE2 SYSGEN placing/ordering datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Trivia: a few months of heavy PTF activity could impact careful PDS member ordering with time creeping up towards 20secs, and I would have to rebuild system. It never got better than 709 until I install Univ. of Waterloo WATFOR.

CSC comes out to install CP67/CMS (3rd after CSC and MIT Lincoln Labs, precursor to VM370/CMS) and I mostly play with it during my weekend dedicated time. The first six months were spent working on optimizing OS/360 running in virtual machine, redoing a lot of CP/67 pathlengths. OS/360 benchmark was 322secs on bare machine, initially 856secs in virtual machine (CP67 CPU 534secs); got CP67 CPU down to 113secs (from 534secs). Then the Science Center had a one week CP67/CMS class at the Beverly Hills Hilton Hotel. I arrive Sunday and am asked to teach the CP67 class; the people that were supposed to teach it had resigned on Friday to go with one of the online commercial spinoffs of the science center.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

About same time started redoing I/O (disk ordered seek and drum paging from about 70/sec to capable of 270/sec), page replacement, dynamic adaptive resource management and scheduling.
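
... a minimal sketch (in C; the queue layout is an illustrative assumption, not the actual CP67 code, which also chained same-position page requests into a single channel program) of keeping the pending request queue ordered by cylinder so the arm services requests in a sweep rather than FIFO arrival order:

/* ordered-seek sketch: pending requests per device are kept sorted by
   cylinder so the arm services them in a sweep instead of FIFO arrival
   order, cutting total seek distance. */
struct io_req {
    int            cylinder;       /* target arm position */
    struct io_req *next;
};

void enqueue_ordered(struct io_req **queue, struct io_req *req)
{
    struct io_req **pp = queue;
    while (*pp && (*pp)->cylinder <= req->cylinder)
        pp = &(*pp)->next;         /* find slot that keeps queue sorted */
    req->next = *pp;
    *pp = req;
}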

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#clock

The CP67 install had 1052 and 2741 terminal support with automagic terminal type identification. The univ had some ASCII/TTY terminals, so I integrate TTY support with automatic identification. I then want to have a single "hunt group" (single dial-in number for all terminals) ... but while IBM allowed changing a port's terminal type scanner, it had hardwired port line speed. The univ starts a project to do a clone controller: build a channel board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it could do automatic line speed. It was then enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and then Perkin/Elmer) sold it as an IBM clone controller and four of us were written up as responsible for (some part of) the clone controller business.

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (somebody jokes that Boeing was bringing in 360/65s like other companies bring in keypunch machines). Lots of politics between the Renton director and the Boeing CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). When I graduate, I join the IBM Science Center (instead of staying with the Boeing CFO).

a couple recent posts mentioning the Univ and Boeing
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles

In the early 80s, I'm introduced to John Boyd and would sponsor his briefings at IBM. Boyd would tell the story about being very vocal that the electronics across the trail wouldn't work and, possibly as punishment, he is put in command of "spook base" (about the same time I'm at Boeing). One of his biographies has "spook base" as a $2.5B windfall for IBM (ten times Renton datacenter).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Boyd posts and web URL
https://www.garlic.com/~lynn/subboyd.html

... early 70s, CEO Learson trying (and failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy. In the late 70s and early 80s I was blamed for online computer conferencing that included discussion about issues with non-technical executives. Two decades after Learson, IBM has one of the largest losses in the history of US companies and was being re-orged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone) ... and uses some of the techniques used at RJR (ref gone 404, but lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 05 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964

Late 80s, a senior disk engineer gets a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had a stranglehold on datacenters with their corporate responsibility for everything that crossed datacenter walls and were fiercely fighting off client/server, distributed computing, etc ... trying to preserve their dumb terminal paradigm. The disk division was starting to see data fleeing the datacenter to more distributed-computing friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions to reverse the situation, but they were constantly being vetoed by the communication group. As a partial work-around, the disk division executive was investing in distributed computing startups that would use IBM disks (and would periodically ask us to visit his investments). The communication group datacenter stranglehold wasn't just disks, and a couple short years later IBM has one of the largest losses in the history of US corporations.

communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Turn of the century IBM financials: mainframe hardware was a few percent of IBM revenue and dropping. 2012 IBM financials: mainframe hardware was a couple percent of IBM revenue and still dropping. However the mainframe group was 25% of IBM revenue (mainly software and services) and 40% of profit ... compared to the early 80s when mainframe hardware was over half of IBM revenue. The following benchmark numbers are iterations of benchmark software compared to a reference platform (stopped seeing them published for the most recent mainframes, so had to take pubs about increase in throughput compared to previous models).


z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS (1000MIPS/proc), Sep2019
z16, 200 processors, 222BIPS (1111MIPS/proc), Sep2022

... trivia: low&mid range (vertical microcode) 370s averaged 10 native instructions per 370 instruction ... and the various PC 370 emulators have been similar. Trivia: after leaving IBM in the early 90s, was brought into the largest airline reservation system to look at the ten impossible things they couldn't do. They gave me a complete softcopy of the OAG (all commercial airline schedules in the world) to start with "ROUTES". I went away to implement on RS/6000. I first did the existing implementation, which ran 20 times faster; then reorganized the data structures around processor cache lines, which got it up to 100 times faster ... then did the ten impossible things, which dropped it back down to ten times faster. Projection was ten RS6000/990s could handle all ROUTE requests for all airlines in the world (benchmarks aren't actual instruction count but number of program iterations compared to benchmark reference program).


1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS
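
... regarding the cache-line reorganization mentioned above, a minimal sketch (in C, gcc-style alignment; 64-byte lines and all field names are made-up illustrations, not the actual ROUTES data structures) of packing the fields the inner loop touches into a single aligned line:

/* cache-line packing sketch: everything the inner search loop touches
   fits in one line, so scanning an array of these costs one line fill
   per candidate instead of several. */
#include <stdint.h>

#define CACHE_LINE 64

struct route_hot {
    uint16_t from_airport;         /* airport codes as small integers   */
    uint16_t to_airport;
    uint16_t depart_min;           /* minutes past midnight             */
    uint16_t arrive_min;
    uint32_t carrier_flight;       /* packed carrier + flight number    */
    uint32_t cold_index;           /* index into rarely-touched details */
    uint8_t  pad[CACHE_LINE - 16]; /* fill out to a full line           */
} __attribute__((aligned(CACHE_LINE)));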

The last product we had done at IBM was HA/CMP ... and the executive we reported to went over to head-up Somerset/AIM (apple, ibm, motorola) for single chip power/pc ... including some features from motorola risc 88K, like cache consistency protocol for multiprocessor.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Then in the latter half of the 90s, the i86 makers added a hardware layer for translating i86 instructions into RISC micro-ops for actual execution (largely negating the difference between i86 and RISC in throughput). For HA/CMP, the IBM S/88 product administrator started taking us around to their customers and also got me to write a section for the corporate continuous availability strategy document; however it gets pulled when both Rochester (AS/400) and POK (mainframe) complain that they couldn't meet the requirements.


1999 single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
     z900 processor)
1999 single Pentium3 (translation to RISC micro-ops for execution)
     hits 2,054MIPS (twice PowerPC)

2003 max. configured z990, 32 processor aggregate 9BIPS (281MIPS/proc)
2003 single Pentium4 processor 9.7BIPS (>max configured z990)

2010 max configured z196, 80 processor aggregate 50BIPS (625MIPS/proc)
2010 E5-2600 server blade, 16 processor aggregate 500BIPS (31BIPS/proc)

... the 1999 Pentium3 processor was thirteen times the Dec2000 z900 processor, the 2003 Pentium4 processor 34 times the 2003 z990 processor, and the 2010 XEON processor 50 times the 2010 z196 processor.

Note Amdahl wins the battle to make ACS 360-compatible ... folklore is that executives then shut down the operation because they were afraid that it would advance the state of the art too fast and IBM would lose control of the market ... shortly later Amdahl leaves IBM. The following lists some ACS/360 features that show up more than 20yrs later in the 90s with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html
ACS
https://people.computing.clemson.edu/~mark/acs.html
ACS Legacy
https://people.computing.clemson.edu/~mark/acs_legacy.html

trivia: ... a little over a decade ago I was asked to track down the 370 virtual memory decision; found the staff to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used; as a result a standard 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Going to 16mbyte virtual memory (sort of like running MVT in a CP67 16mbyte virtual machine) allowed increasing the number of concurrently running regions by a factor of four times with little or no paging (i.e. VS2/SVS). As machines got bigger, even that wasn't sufficient ... but with a single virtual memory, protection was still limited by 4bit storage protection keys and they had to give each region its own (16mbyte) virtual address space (VS2/MVS ... which then runs into a different limitation, MVS system software facilities threatening to consume all 16mbytes in each virtual address space, leaving nothing for applications).

archived post w/pieces of email exchange on MVT->SVS->MVS
https://www.garlic.com/~lynn//2011d.html#73

... as part of virtual memory for all 370s ... which was sort of kicked off by the 370/165 ... they (370/165 engineers) were complaining that if they had to implement the full 370 virtual memory architecture, it would slip the planned announce by six months (the POK favorite operating system people, MVT->SVS, claimed that they couldn't see the need), and the additional features were removed. Then the models that had already implemented the full architecture had to remove the dropped parts, and any software that would use the dropped features had to be redone.

trivia: 1st part of the 70s, IBM had the "Future System" effort, completely different than 370 and was going to replace all 370s; internal politics during the period was killing off 370 efforts ... the lack of new 370s during the period is credited with giving the clone 370 makers (like Amdahl) their market foothold
http://www.jfsowa.com/computer/memo125.htm
then when "FS" implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel .... then a mad rush to get to 370/XA and MVS/XA before MVS system completely takes over every 16mbyte virtual address, leaving nothing for applications.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

... after FS imploded, I got roped into working on a 16 processor 370 tightly-coupled, shared memory system and we con the 3033 processor engineers into working on it in their spare time, a lot more interesting than remapping 168 logic to 20% faster chips. Everybody thought it was really great until somebody told the head of POK that it could be decades before the POK favorite son operating system (MVS) had effective 16-way support (at the time MVS documentation was that 2-way SMP had only 1.2-1.5 times the throughput of a single processor ... excessive tightly-coupled MVS software overhead, which also increases non-linearly as the number of processors increases). POK doesn't ship a 16-way SMP until after the turn of the century.

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

OSI: The Internet That Wasn't

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: OSI: The Internet That Wasn't
Date: 05 Apr, 2024
Blog: Facebook
OSI: The Internet That Wasn't ... How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ....

reference to the communication group effect on IBM mainframe business, so might conjecture something similar for OSI.
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964

above mentions HSDT, T1 and faster computer links (both terrestrial and satellite) ... and IBM's communication group's SNA mainframe VTAM implementation stuck at 56kbits (sometimes less) for numerous reasons ... so there were periodic battles.

misc. past post
https://www.garlic.com/~lynn/2024b.html#33 Internet

... also in the 80s, I was on Chesson's XTP TAB .... there were some military types involved and the gov. was starting to push GOSIP ... so took XTP to ANSI X3S3.3 (the ISO chartered US standards body for level 3&4 standards) as HSP for standardization. Eventually we were told that ISO required that standards work had to conform to the OSI Model; XTP/HSP didn't because 1) XTP/HSP supported internetworking, a non-existent layer between levels 3&4, 2) XTP/HSP skipped the interface between layers 3&4, and 3) XTP/HSP went directly to the LAN MAC layer (a non-existent layer somewhere in the middle of level 3). Also had the joke that ISO could standardize stuff that wasn't even implementable, while IETF required two interoperable implementations before proceeding in the standards process.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internalnet posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
virtualization experience starting Jan1968, online at home since Mar1970

OSI: The Internet That Wasn't

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: OSI: The Internet That Wasn't
Date: 05 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't

an IBM reference from the article:
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

in contrast to Ed's references
https://www.garlic.com/~lynn/2024b.html#33 Internet
and long winded here
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

OSI: The Internet That Wasn't

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: OSI: The Internet That Wasn't
Date: 06 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#100 OSI: The Internet That Wasn't

this mentions some work on IBM mainframe tcp/ip in 80s
https://www.garlic.com/~lynn/2024b.html#55 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet

posts mentioning adding RFC1044 to IBM mainframe tcp/ip
https://www.garlic.com/~lynn/

GML also invented at the science center in 1969, which morphs into ISO standard SGML a decade later, and after another decade morphs into HTML at CERN. First webserver in the states is at Stanford SLAC VM370 system (CP67 done at science center then morphs into VM370 when decided to add virtual memory to all 370s).
https://www.slac.stanford.edu/history/earlyweb/history.shtml
https://www.slac.stanford.edu/history/earlyweb/firstpages.shtml

also references working with the NSF director on the NSF Supercomputer center interconnect (NSFnet, precursor to the modern internet):
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

... funding new software at NCSA begat MOSAIC & HTTP
http://www.ncsa.illinois.edu/enabling/mosaic

last product we did at IBM was HA/CMP ... and after the cluster scaleup work was transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors ... a few months later we leave IBM. Some of this is also mentioned in
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subnetwork.html#hacmp

Not long after leaving IBM, was brought into a small client/server startup (when their name was still "MOSAIC", before the change to "NETSCAPE") for doing "electronic commerce".

At MOSAIC/NETSCAPE, there was a problem with HTTP/HTTPS being built on TCP sessions. HTTP/HTTPS were mostly "atomic" transaction requests: open session, do transaction, shutdown session. Session shutdown included a FINWAIT list of pending session shutdowns ... dangling packets that might still arrive (via a different route) ... and the implementation was a linear search of the list for every arriving packet (it assumed very small lists). As load ramped up, HTTP/HTTPS drove the list to thousands of entries ... and systems were spending 95% of CPU scanning the FINWAIT list. It took about six months before vendors started shipping FINWAIT performance fixes (trivia: NETSCAPE found that Sequent had fixed it in DYNIX some time before).
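
... a minimal sketch (in C; the connection 4-tuple, hash, and bucket count are illustrative assumptions, real TCP stacks keep much more state) of the kind of fix the vendors shipped, replacing the per-packet linear scan of the FINWAIT list with a hashed lookup:

/* FINWAIT lookup sketch: a per-packet linear scan of one long list is
   O(n) and dominates CPU once the list holds thousands of entries;
   hashing the connection 4-tuple into buckets makes the same lookup
   roughly O(1). */
#include <stdint.h>
#include <stddef.h>

#define NBUCKETS 1024

struct conn {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    struct conn *next;             /* chain within one hash bucket */
};

static struct conn *finwait_hash[NBUCKETS];

static unsigned bucket(uint32_t sip, uint32_t dip, uint16_t sp, uint16_t dp)
{
    return (sip ^ dip ^ ((uint32_t)sp << 16) ^ dp) % NBUCKETS;
}

struct conn *finwait_lookup(uint32_t sip, uint32_t dip,
                            uint16_t sp, uint16_t dp)
{
    struct conn *c = finwait_hash[bucket(sip, dip, sp, dp)];
    for (; c != NULL; c = c->next) /* walk one short chain, not the world */
        if (c->src_ip == sip && c->dst_ip == dip &&
            c->src_port == sp && c->dst_port == dp)
            return c;
    return NULL;
}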

electronic commerce gateway implementation
https://www.garlic.com/~lynn/subnetwork.html#gateway

My wife was co-author of IBM AWP39 ("Peer-To-Peer Networking") in the same time frame that SNA appeared ... the joke was that SNA wasn't a System, wasn't a Network, and wasn't an Architecture ... but since SNA had co-opted "Network", AWP39 had to be qualified as "Peer-To-Peer" Networking. Upthread is the reference in the article that IBM had cleverly manipulated the "OSI" work to keep it in line with the communication group's "SNA" business interests

... similarly to IBM disk division claims that the communication group was going to be responsible for the demise of the disk division (aka disk division seeing data fleeing datacenter to more distributed computing friendly platforms while the communication group was fiercely fighting off client/server and distributed computing)
https://www.garlic.com/~lynn/subnetwork.html#terminal

some posts mentioning AWP39, peer-to-peer networking and peer-coupled shared data
https://www.garlic.com/~lynn/2024b.html#30 ACP/TPF
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2021h.html#90 IBM Internal network
https://www.garlic.com/~lynn/2019d.html#119 IBM Acronyms
https://www.garlic.com/~lynn/2018e.html#2 Frank Heart Dies at 89
https://www.garlic.com/~lynn/2018b.html#13 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017e.html#62 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#55 The ICL 2900
https://www.garlic.com/~lynn/2016d.html#48 PL/I advertising
https://www.garlic.com/~lynn/2015g.html#96 TCP joke
https://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012o.html#52 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012m.html#24 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012k.html#23 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011m.html#6 What is IBM culture?
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2009q.html#83 Small Server Mob Advantage
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009e.html#56 When did "client server" become part of the language?
https://www.garlic.com/~lynn/2008i.html#97 We're losing the battle
https://www.garlic.com/~lynn/2008e.html#73 Convergent Technologies vs Sun
https://www.garlic.com/~lynn/2008d.html#71 Interesting ibm about the myths of the Mainframe
https://www.garlic.com/~lynn/2007r.html#10 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by WLM's rules
https://www.garlic.com/~lynn/2007p.html#12 JES2 or JES3, Which one is older?
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
https://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
https://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. "Server" (Was Just another example of mainframe
https://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
https://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006l.html#4 Google Architecture
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006j.html#31 virtual memory
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
https://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 06 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964

this goes into some detail about IBM CEO Learson trying (and failing) to block the bureaucrats, careerists and MBAs from destroying the Watson culture&legacy; two decades later, IBM has one of the worst losses in the history of US corporations.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

above includes Learson's: "Number 1-72: January 18,1972", which also appears in
http://www.bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.
... snip ...

... then before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67-CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 06 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964

I mention in the post that I had worked on a clone controller as an undergraduate in the 60s and four of us get written up as responsible for (some part of) the clone controller business. This article
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
mentions that a major motivation for the Future System effort was as a countermeasure to clone controllers ... making things so complex that the clone makers couldn't keep up ... but as it turns out neither could IBM. After graduating and joining IBM in the 70s, I continued to work on 360/370 all during FS, even periodically ridiculing the FS activities (which wasn't exactly career enhancing). Then (& Learson unable to block the destruction of the Watson culture&legacy):
https://www.amazon.com/Computer-Wars-Future-Global-Technology/dp/0812923006/
"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."
... snip ...

Note that FS was completely different from 360/370 and was going to completely replace 360/370; internal politics was killing off 370 efforts, and the lack of new 370s during FS is credited with giving the clone system makers their market foothold ... aka the countermeasure to clone controllers ... along with the shutdown of ACS/360
https://people.computing.clemson.edu/~mark/acs_end.html
gave rise to clone 370s. Then mad rush to get stuff back into the 370 product pipelines, including kicking off 3033&3081 quick&dirty efforts in parallel
http://www.jfsowa.com/computer/memo125.htm

.... and the FS implosion continued to cast shadow over IBM through the 80s.

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

OSI: The Internet That Wasn't

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: OSI: The Internet That Wasn't
Date: 06 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#100 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't

In the 60s at the science center, Ed was responsible for the Cambridge CP67 wide-area network ... referenced by one of the GML inventors (GML was invented at the science center in 1969):
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...

Ed's wide-area network evolves into the corporate internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s) ... also used for corporate sponsored univ BITNET/EARN (also for a time larger than internet).
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.
... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

aka Ed was working towards morphing both the corporate internal network and bitnet into TCP/IP ... while the company communication group was working on forcing them into SNA/VTAM (part of fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm). Could claim that the internet passing the internal network in number of nodes was at least partially because workstation/PC-based TCP/IP was increasingly appearing while the company communication group had locked the host mainframes into SNA/VTAM (and most everything else was being limited to terminal emulation) ... at the same time (from the article) the communication group and OSI:
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 06 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#103 IBM 360 Announce 7Apr1964

There has been lots from RISC about being able to schedule instruction execution for better throughput (but little about it from IBM mainframe). Part of RISC was also doing out-of-order execution (to keep cache misses from stalling execution), branch prediction (to keep conditional branches from stalling out-of-order execution) ... misc. other details. More recent articles note that memory latency (on cache misses), when measured in count of processor cycles, is comparable to 60s disk latency when measured in count of 60s processor cycles (aka memory is the new disk) ... and (hardware) out-of-order execution (and other techniques) for compensating for memory access stalls is comparable to 60s software multiprogramming/multitasking.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

The 60s software multiprogramming in fact shows up in the motivation to add virtual memory to all 370s ... aka MVT storage management was so bad that regions had to be specified four times larger than normally used. As a result a 1mbyte 370/165 would typically run only four concurrent regions, insufficient to keep the system busy and justified. Going to 16mbyte virtual memory (aka MVT->VS2/SVS, similar to running MVT in a CP67 16mbyte virtual machine) allowed the number of regions to be increased by a factor of four times (multiprogramming/multitasking overlapping with I/O) with little or no paging. Later as systems got larger, had to go from VS2/SVS to VS2/MVS to further increase the number of concurrently running programs (aka MVT/SVS relied on 4bit storage protect keys to separate different programs, so MVS gave each program its own 16mbyte virtual address space to separate/protect programs ... but that introduced another set of problems limiting concurrent programs ... forcing the need for MVS/XA).

So the introduction of hardware translation of i86 instructions to RISC micro-ops for execution went along with a big increase in the number of things that could be executed concurrently (along with advanced out-of-order execution compensating for cache misses and execution stalls).

The one mainframe note that I've run across for using such techniques was a statement that half of the per-processor throughput increase going from z10 to z196 was from the introduction of some out-of-order execution (but the i86 effort appears to have stayed way ahead of the IBM mainframe effort).

trivia: Shortly after graduating and joining IBM, I got asked to help with multi-threading (simulating two-processor operation) for the 370/195. It had a 64 instruction pipeline with (370) out-of-order execution ... but no "branch prediction", so conditional branches drained the pipeline ... typical codes limited the 370/195 to half its throughput. Going to two instruction streams, multi-threading (simulating two processors), each running at half throughput, could keep the 370/195 at full throughput. The multi-threading patent in the ACS/360 effort is described here:
https://people.computing.clemson.edu/~mark/acs_end.html

Then with the decision to add virtual memory to all 370s, all new work on the 370/195 was dropped because it was felt it would take too much effort to retrofit virtual memory to the 370/195 (as it was, the 370/165 started complaining that implementing the full virtual memory architecture would slip the virtual memory announcement by six months, and those features were dropped to stay on schedule). Note that the 370/195 two instruction stream, multi-threaded effort (simulating two processors) wouldn't have been that much of a benefit anyway, since MVT/SVS/MVS documentation of the period claimed two-CPU multiprocessor operation only had 1.2-1.5 times the throughput of a single processor (because of significant multiprocessor system overhead).

SMP, multiprocessor, tightly-coupled, shared memory
https://www.garlic.com/~lynn/subtopic.html#smp

I/O throughput topic drift: In 1980, I got con'ed into doing channel-extender support for STL (since renamed SVL), moving 300 people from the IMS group to an offsite bldg with dataprocessing service back to the STL datacenter. They had tried "remote" 3270 and found the human factors unacceptable. Channel-extender support allowed placing channel-attached 3270 controllers at the offsite bldg with no perceived difference in human factors. As an aside: STL had spread all the 3270 channel controllers across all available channels shared with DASD. The channel-extender boxes now directly interfaced to the processor channels were much faster than the 3270 channel controllers, significantly reducing channel busy for 3270 operations, resulting in lower interference with DASD I/O and a 10-15% improvement in system throughput (there was some consideration of using channel-extender boxes for all 3270 controllers ... even those still in the same datacenter).

Then in 1988, the IBM branch office asked if I could help LLNL get some serial stuff they were playing with standardized ... which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec full-duplex, aggregate 200mbyte/sec. Then in the 90s, POK announces some serial stuff they had been working on for over a decade, for ES/9000 as ESCON (when it was already obsolete, 17mbyte/sec).

Then some POK engineers become involved in "FCS" and define a heavy-weight protocol that radically reduces the native throughput which is eventually announced as "FICON". The latest numbers I've found is z196 "PEAK I/O" benchmarks that got 2M IOPS using 104 FICON. About the same time, a FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also note IBM documentation recommends keeping SAPs (system assist processors that actually handle I/O) to 70% CPU, which would be around 1.5M IOPS. There is also the issue that no new CKD DASD have been made for decades, all being simulation on industry standard fixed-block disks.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

some posts mentioning 370/195 two i-stream/hyper-thread
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2017.html#3 Is multiprocessing better then multithreading?
https://www.garlic.com/~lynn/2016c.html#3 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015h.html#82 IBM Automatic (COBOL) Binary Optimizer Now Availabile
https://www.garlic.com/~lynn/2014m.html#105 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014c.html#64 Optimization, CPU time, and related issues
https://www.garlic.com/~lynn/2014.html#62 Imprecise Interrupts and the 360/195
https://www.garlic.com/~lynn/2013o.html#73 "Death of the mainframe"
https://www.garlic.com/~lynn/2013i.html#33 DRAM is the new Bulk Core
https://www.garlic.com/~lynn/2013c.html#67 relative speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#33 390 vector instruction set reuse, was 8-bit bytes
https://www.garlic.com/~lynn/2012n.html#32 390 vector instruction set reuse, was 8-bit bytes
https://www.garlic.com/~lynn/2012e.html#96 Indirect Bit
https://www.garlic.com/~lynn/2009k.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009e.html#5 registers vs cache
https://www.garlic.com/~lynn/2009d.html#54 mainframe performance
https://www.garlic.com/~lynn/2008c.html#92 CPU time differences for the same job
https://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's?
https://www.garlic.com/~lynn/2006r.html#2 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2003f.html#33 PDP10 and RISC

--
virtualization experience starting Jan1968, online at home since Mar1970

OSI: The Internet That Wasn't

From: Lynn Wheeler <lynn@garlic.com>
Subject: OSI: The Internet That Wasn't
Date: 07 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#100 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#104 OSI: The Internet That Wasn't

Last product done at IBM was HA/CMP (originally HA/6000, but I renamed it HA/CMP when we started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors: Oracle, Sybase, Informix, Ingres) ... early jan92 meeting with Oracle, AWD/Hester tells the Oracle CEO that there would be 16processor clusters mid92 and 128processor clusters ye92; then late jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told that we couldn't work with anything that had more than four processors; we leave IBM a few months later.

Not long later was brought in as consultant to a small client/server startup ... two former Oracle people (who we had worked with on cluster scale-up and who were in the Hester/Ellison meeting) were there responsible for something called "commerce server" and wanted to do payment transactions; the startup had also done something they called "SSL" that they wanted to use, and the result is now frequently called "electronic commerce". I had responsibility for everything between the "commerce servers", the payment network gateways, and the payment networks.

I made some effort to get them to use XTP instead of TCP; TCP had a minimum seven packet exchange (while XTP had a minimum three packet exchange for a reliable "transaction", by piggy-backing info) ... besides the TCP session close overhead (as web server use ramped up, there was a six month period where servers were spending 95% of CPU running the FINWAIT list ... before vendors started shipping performance fixes). trivia: later I did a talk on "Why Internet Isn't Business Critical Dataprocessing" based on the software, documentation, and procedures I did for electronic commerce (which Postel sponsored at ISI/USC).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
ecommerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some specific posts mentioning business critical talk:
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017j.html#42 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017j.html#31 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 08 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#103 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964

A little over a decade ago I was asked to track down the 370 virtual memory decision; found the staff to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used; as a result a standard 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Going to 16mbyte virtual memory (sort of like running MVT in a CP67 16mbyte virtual machine) allowed increasing the number of concurrently running regions by a factor of four times with little or no paging (i.e. VS2/SVS). As machines got bigger, even that wasn't sufficient ... but with a single virtual memory, concurrent region protection was limited by 4bit storage protection keys, which resulted in each region getting its own (16mbyte) virtual address space ... VS2/MVS ... which has a different limitation (associated with the MVS kernel image and CSA totally consuming every 16mbyte address space) ... and the mad rush to get to MVS/XA (similar to getting from SVS to MVS). Note dual-address space support was retrofitted from 370/XA to 3033 trying to alleviate some of the CSA explosion.

Note CERN had done an analysis comparing VM370/CMS and MVS/TSO, presented at SHARE (copies were freely available outside IBM, but inside IBM copies were stamped "IBM Confidential - Restricted", the 2nd highest security classification, available on a "need to know" basis only).

During the FS period (which was supposed to completely replace 370), internal politics was killing off 370 efforts (the lack of new 370 products during the period is credited with giving 370 clone makers their market foothold). Then with the FS "implosion", there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts. The head of POK also convinces corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (presumably because otherwise MVS/XA wouldn't ship on time, but possibly also in retaliation for the CERN report; POK executives were also bullying internal datacenters to convert from VM370 to MVS, claiming that VM370 at least would no longer be available on high-end POK machines). Endicott eventually manages to save the VM370 product mission (for the mid-range), but has to recreate a development group from scratch.

Also, after the FS implosion, I had gotten roped into working on a 16-processor, tightly-coupled, shared-memory multiprocessor and we con'ed the 3033 processor engineers into working on it in their spare time. Everybody thought it was great until somebody tells the head of POK it could be decades before the POK favorite son operating system (MVS) had effective 16-way support. The head of POK then invites some of us to never visit POK again, and tells the 3033 processor engineers to be heads down on *only* 3033. Note POK doesn't ship a 16-way system until after the dawn of the new century.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

a couple posts mentioning 16 processor work
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 08 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#103 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964

A little over a decade ago I was asked to track down the 370 virtual memory decision, and found a staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used; as a result a standard 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Going to 16mbyte virtual memory (sort of like running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrently running regions to be increased by a factor of four with little or no paging (i.e. VS2/SVS). As machines got bigger, even that wasn't sufficient ... but with a single virtual memory, concurrent region protection was limited by the 4bit storage protection keys, resulting in each region getting its own (16mbyte) virtual address space ... VS2/MVS ... which has a different limitation (the MVS kernel image and CSA threatening to totally consume every 16mbyte address space) ... and the mad rush to get to MVS/XA (similar to getting from SVS to MVS). Note dual-address space support was retrofitted from 370/XA to the 3033 trying to alleviate some of the CSA explosion.

Note CERN had done an analysis comparing VM370/CMS and MVS/TSO, presented at SHARE (copies were freely available outside IBM, but inside IBM copies were stamped "IBM Confidential - Restricted", the 2nd highest security classification, available on a "need to know" basis only). During the FS period (which was supposed to completely replace 370), internal politics was killing off 370 efforts (the lack of new 370 products during the period is credited with giving 370 clone makers their market foothold). Then with the FS "implosion", there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts. The head of POK also convinces corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (presumably because otherwise MVS/XA wouldn't ship on time, but possibly also in retaliation for the CERN report; POK executives were also bullying internal datacenters to convert from VM370 to MVS, claiming that VM370 at least would no longer be available on high-end POK machines). Endicott eventually manages to save the VM370 product mission (for the mid-range), but has to recreate a development group from scratch.

Also, after the FS implosion, I had gotten roped into working on a 16-processor, tightly-coupled, shared-memory multiprocessor and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK it could be decades before the POK favorite son operating system (MVS) had effective 16-way support. The head of POK then invites some of us to never visit POK again, and tells the 3033 processor engineers to be heads down on *only* 3033. Note POK doesn't ship a 16-way system until after the dawn of the new century.

The FS failure cast a dark shadow over IBM all through the 80s, and by the early 90s IBM had one of the largest losses in the history of US corporations and was being re-orged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
... we had already left IBM, but get a call from the bowels of Armonk asking if we could help with the company breakup ... before we get started, the board brings in the former president of AMEX as CEO, who somewhat reverses the breakup.

By the turn of the century, mainframe hardware was a few percent of revenue and dropping. By the z12 time-frame, mainframe hardware was a couple percent of revenue and still dropping, but the mainframe group was 25% of revenue (and 40% of profit), nearly all software and services (in the 80s, mainframe hardware had been at least half of revenue).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled, shared memory posts
https://www.garlic.com/~lynn/subtopic.html#smp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts mentioning current cache miss memory latency when measured in count of processor cycles is similar to 60s disk latency when measured in count of 60s processor cycles
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#62 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#85 Vintage DASD
https://www.garlic.com/~lynn/2022g.html#82 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#64 Mainframes
https://www.garlic.com/~lynn/2022b.html#45 Mainframe MIPS
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021e.html#33 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2016h.html#98 A Christmassy PL/I tale
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013m.html#51 50,000 x86 operating system on single mainframe
https://www.garlic.com/~lynn/2013c.html#67 relative speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012f.html#56 Hard Disk Drive Construction
https://www.garlic.com/~lynn/2009s.html#20 Larrabee delayed: anyone know what's happening?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM->SMTP/822 conversion

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM->SMTP/822 conversion
Date: 10 Apr, 2024
Blog: Facebook
for internal "vmtools" application/software repository i.e. global internal network resource) ... handle most of the internal email formats ... including PROFS, VMSG, RMSG, etc.

trivia: the PROFS group had been collecting internal apps for wrapping 3270 menus around ... and picked up a very early version of VMSG for the PROFS email client. When the VMSG author tried to offer them a much enhanced version, the PROFS group tried to get him separated from the company (apparently having taken credit for VMSG); the whole thing quieted down when he demonstrated that all PROFS mail carried his initials in a non-displayed field. After that, the VMSG author only shared his source with me and one other person.


**********************  XXXX Internal Use Only  ************************
:nick.REMAIL
:sec.XXXX Internal Use Only
:title.REMAIL - trivial exec for VM/822 mail forwarding
:version.1
:date.87/09/28
:scp.VM/SP.3 ONWARDS
:oname.Lynn Wheeler
:onode.
:ouser.WHEELER
:aname.Lynn Wheeler
:anode.
:auser.WHEELER
:lang.REXX
:abs.REMAIL will process all spooled reader mail, convert it to
822 mail format and forward it to the specified TCP/SMTP mail
gateway for sending to the specified tcp/ip node. If the VM/TCP/IP
SMTP mail gateway is installed, REMAIL can be used to forward
all VM mail to your <unix workstation>.
:kwd.MAIL TCP/IP SMTP 822
:sw.
:doc.REMAIL MEMO
:support.N
*********************************************************************
&1 &2 REMAIL   EXEC         * Process spool reader files
&1 &2 REMAIL   XEDIT        * Reformat cms mail to 822 format
                            *
&1 &2 DISCRDR  EXEC         * Toy exec that activates REMAIL
&1 &2 XROSSCAL EXEC         * Toy exec for generating PROFS cal. req.
                            *
&1 &2 REMAIL   MEMO         * Brief documentation

... snip ...
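
Not the actual REMAIL EXEC (that was REXX, and handled PROFS, VMSG, RMSG, etc); just a minimal Python sketch of the general idea of reformatting a simple internal note into RFC 822 and handing it to an SMTP gateway. The note field layout, addresses and gateway name here are hypothetical.

# Minimal sketch of the general idea behind REMAIL: take a simple internal
# note, reformat it as RFC 822, and hand it to an SMTP gateway for forwarding.
# NOT the actual REMAIL EXEC (that was REXX); the note field layout, the
# addresses and the gateway node name below are hypothetical.
from email.message import EmailMessage

def note_to_822(note_text: str, gateway_domain: str) -> EmailMessage:
    """Convert a 'From:/To:/Subject:' style note into an 822 message."""
    header, _, body = note_text.partition("\n\n")
    fields = dict(line.split(":", 1) for line in header.splitlines() if ":" in line)
    msg = EmailMessage()
    msg["From"] = f'{fields.get("From", "").strip()}@{gateway_domain}'
    msg["To"] = fields.get("To", "").strip()
    msg["Subject"] = fields.get("Subject", "").strip()
    msg.set_content(body)
    return msg

if __name__ == "__main__":
    note = "From: WHEELER\nTo: someone@example.edu\nSubject: test\n\nhello"
    msg = note_to_822(note, "vm.example.com")   # hypothetical node name
    print(msg.as_string())
    # forwarding step would then hand msg to the SMTP gateway, e.g. via
    # smtplib.SMTP("smtp-gw.example.com").send_message(msg)  (hypothetical host)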

posts mentioning "remail"
https://www.garlic.com/~lynn/2021i.html#86 IBM EMAIL
https://www.garlic.com/~lynn/2018.html#22 IBM Profs
https://www.garlic.com/~lynn/2016f.html#8 IBM email
https://www.garlic.com/~lynn/2012b.html#85 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2011c.html#82 A History of VM Performance
https://www.garlic.com/~lynn/2007j.html#50 Using rexx to send an email

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... recent posts about the person responsible for the Cambridge CP67 wide-area network, which evolves into the internal corporate network (the technology was also used for the corporate-sponsored univ BITNET) ... attempts to get it all moved to TCP/IP were thwarted by the communication group, which would force it all to SNA.
https://www.garlic.com/~lynn/2024b.html#104 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#86 Vintage BITNET
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#65 IBM Mainframes and Education Infrastructure

some recent posts mentioning VMSG and PROFS
https://www.garlic.com/~lynn/2024b.html#69 3270s For Management
https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#32 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022b.html#29 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2021j.html#83 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021i.html#86 IBM EMAIL
https://www.garlic.com/~lynn/2021i.html#68 IBM ITPS
https://www.garlic.com/~lynn/2021h.html#50 PROFS
https://www.garlic.com/~lynn/2021e.html#30 Departure Email
https://www.garlic.com/~lynn/2021c.html#65 IBM Computer Literacy
https://www.garlic.com/~lynn/2019d.html#96 PROFS and Internal Network
https://www.garlic.com/~lynn/2019b.html#20 Internal Telephone Message System

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 12 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#103 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964

didn't mean to imply "commission *only*" (i.e. no salary) .... but rather a percent of the order ... as opposed to a percent of "quota" (example: an 80% salary base of $100k; quota could be set at $10M, and if they sold $10M they would be at 100% of quota and get $20k; if they sold $20M they would be at 200% of quota and might get $40K ... assuming the quota wasn't reset to $20M; compare that to a straight 5% commission on $20M, or $1M), and quotas might be adjusted over the course of the year.
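
A small sketch of the arithmetic in that example (the figures are the illustrative ones above, not actual compensation terms):

# Sketch of the quota-vs-commission arithmetic in the example above.
# All figures are the illustrative ones from the example, not actual plans.
base_salary  = 100_000      # salary portion (the "80%" base in the example)
at_quota_pay = 20_000       # incentive paid at 100% of quota
quota        = 10_000_000   # annual quota
commission   = 0.05         # straight 5%-of-order alternative

def quota_plan(sales):
    """Incentive scales with percent-of-quota (until the quota is reset)."""
    return base_salary + at_quota_pay * (sales / quota)

def commission_plan(sales):
    """Straight percent-of-order, no quota."""
    return commission * sales

for sales in (10_000_000, 20_000_000):
    print(f"sales ${sales:,}: quota plan ${quota_plan(sales):,.0f}, "
          f"straight commission ${commission_plan(sales):,.0f}")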

Did spend some time trying to help in the 70s&80s (including 3880; engineers complaining that non-technical managers & accountants, rather than engineers, were making technical decisions) ... one of the things discussed in the "Tandem Memos" during 1981 (see entry in IBM Jargon).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Mid-80s, the IBM "father of risc" cons me into helping him with a "wide" disk head. The original 3380 had 20 track spacing between data tracks. That was then cut in half, resulting in twice the number of tracks (cylinders) and capacity, then the spacing was cut again to triple capacity. The "wide head" handled 18 closely spaced tracks, transferring 16 data tracks in parallel (50mbytes/sec) plus servo tracks on each side (format: 16 data tracks plus a servo track). The problem was that mainframe channels were still stuck at 3mbytes/sec.

Also note that the transition from CKD to FBA had started in the 70s ... but the POK favorite son operating systems (OS360, MVT, SVS, MVS, etc) never made it ... which could even be seen in 3380, where the formulas for records/track required rounding record length up to a fixed "cell size". Early 80s, I had offered MVS FBA support and was told that even if it was fully integrated and tested ... I would still have to come up with an incremental $26M to cover the cost of documentation and training (the business case equivalent of a couple hundred million in sales), and since IBM was selling every disk it made, FBA support would just translate into the same amount of disk sales (and I wasn't allowed to use long term life cycle savings in the business case). For the past couple of decades, CKD has all been simulation on industry standard fixed-block disks.
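
A small sketch of the records-per-track style arithmetic being referred to; the cell size, per-record overhead and track capacity below are placeholders for illustration, not actual 3380 geometry.

# Sketch of the records-per-track arithmetic mentioned above: on 3380, record
# length had to be rounded up to a fixed "cell size" (a hint the underlying
# device was really fixed-block). Constants are placeholders, NOT actual
# 3380 geometry.
import math

TRACK_CELLS = 1_500   # usable cells per track (placeholder)
CELL_SIZE   = 32      # bytes per cell (placeholder)
RECORD_OVHD = 15      # per-record overhead, in cells (placeholder)

def records_per_track(data_len: int, key_len: int = 0) -> int:
    cells = RECORD_OVHD + math.ceil((key_len + data_len) / CELL_SIZE)
    return TRACK_CELLS // cells

for blksize in (80, 4096, 8192):
    print(f"blksize {blksize:>5}: {records_per_track(blksize)} records/track")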

1988, the IBM branch office asks if I can help LLNL (national lab) get some serial technology they were playing with "standardized" (including some stuff I had done in 1980) ... which quickly becomes the fibre-channel standard ("FCS", initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec). IBM POK eventually announces some serial stuff they had been playing with for over a decade, with ES/9000, as ESCON (when it was already obsolete, 17mbytes/sec). Later some POK engineers become involved with FCS and define a heavy-weight protocol that radically reduces the native throughput ... which is eventually released as FICON. The newest public numbers I've found are the "Peak I/O" benchmark for z196 .... getting 2M IOPS using 104 FICON. About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON). Also, IBM docs recommend that SAPs (system assist processors that do the actual I/O) be held to 70% CPU ... which would make throughput around 1.5M IOPS.
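
Just the per-link arithmetic for the public numbers quoted above; the only assumption is spreading the aggregate IOPS evenly across links.

# Per-link arithmetic for the public numbers quoted above; the only assumption
# is dividing aggregate IOPS evenly across links.
z196_peak_iops   = 2_000_000    # z196 "Peak I/O" benchmark
z196_ficon_links = 104
fcs_iops         = 1_000_000    # single FCS claimed for an E5-2600 server blade
sap_cap          = 0.70         # recommended SAP (I/O processor) utilization cap

per_ficon = z196_peak_iops / z196_ficon_links
print(f"IOPS per FICON link:       {per_ficon:,.0f}")
print(f"IOPS per native FCS:       {fcs_iops:,.0f}  "
      f"(~{fcs_iops/per_ficon:,.0f}x a FICON link)")
print(f"2M IOPS capped at 70% SAP: {z196_peak_iops*sap_cap:,.0f}")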

IBM had tried to do seastar (but it was going to be late, 1995 slipping to 1997), which was meant to compete with STK ICEBERG (which IBM eventually logo'ed); from an internal jun1992 forum (just before leaving IBM):
The Seastar project presents a classic case of an early start receiving very little support from the corresponding development division followed by massive switch to an excess of attention. Among the items that have been transferred are religion, bubble gum and bailing wire, and a few algorithms. Transfer successes came from persistence, modeling, small prototypes, and lots of help from competitors.
... snip ...

... and an archived post with email from 30Dec1991 about the last (emulated) CKD DASD being canceled and all future being fixed-block DASD with simulated CKD.
https://www.garlic.com/~lynn/2019b.html#email911230
also mentions that we were working on making (national lab) LLNL's filesystem (branded "Unitree") available on HA/CMP (HA/CMP had started out as HA/6000, originally for NYTimes to move their newspaper system "ATEX" off VAXCluster to RS/6000; I renamed it HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors: Oracle, Sybase, Ingres, Informix). Then cluster scale-up is transferred for announce as an IBM "supercomputer" and we were told we couldn't work with anything that had more than four processors (we leave IBM a few months later).

getting to play disk engineer in bldg14&5 posts
https://www.garlic.com/~lynn/subtopic.html#disk
online computer conferencing posts on internal network
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

some posts mentioning disk "wide-head"
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2021f.html#44 IBM Mainframe
https://www.garlic.com/~lynn/2021.html#56 IBM Quota
https://www.garlic.com/~lynn/2019b.html#75 IBM downturn
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2019.html#58 Bureaucracy and Agile
https://www.garlic.com/~lynn/2018f.html#33 IBM Disks
https://www.garlic.com/~lynn/2018d.html#17 3390 teardown
https://www.garlic.com/~lynn/2018d.html#12 3390 teardown
https://www.garlic.com/~lynn/2018b.html#111 Didn't we have this some time ago on some SLED disks? Multi-actuator
https://www.garlic.com/~lynn/2017d.html#60 Optimizing the Hard Disk Directly
https://www.garlic.com/~lynn/2017d.html#54 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2012e.html#103 Hard Disk Drive Construction

some posts mentioning incremental $26M needed for MVS FBA support to cover documentation and training
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#32 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2022f.html#85 IBM CKD DASD
https://www.garlic.com/~lynn/2021b.html#78 CKD Disks
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#34 The rise and fall of IBM
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2016c.html#12 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014b.html#18 Quixotically on-topic post, still on topic
https://www.garlic.com/~lynn/2014.html#94 Santa has a Mainframe!
https://www.garlic.com/~lynn/2013n.html#54 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013f.html#80 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013d.html#2 Query for Destination z article -- mainframes back to the future
https://www.garlic.com/~lynn/2013c.html#68 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013.html#40 Searching for storage (DASD) alternatives
https://www.garlic.com/~lynn/2012p.html#32 Search Google, 1960:s-style
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2011j.html#57 Graph of total world disk space over time?
https://www.garlic.com/~lynn/2011e.html#44 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011b.html#47 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010o.html#12 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010n.html#65 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
https://www.garlic.com/~lynn/2010k.html#10 Documenting the underlying FBA design of 3375, 3380 and 3390?
https://www.garlic.com/~lynn/2010f.html#18 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2009j.html#73 DCSS ... when shared segments were implemented in VM
https://www.garlic.com/~lynn/2008o.html#55 Virtual
https://www.garlic.com/~lynn/2008j.html#49 Another difference between platforms
https://www.garlic.com/~lynn/2006f.html#4 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
https://www.garlic.com/~lynn/2005m.html#40 capacity of largest drive

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360 Announce 7Apr1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360 Announce 7Apr1964
Date: 12 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#103 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#110 IBM 360 Announce 7Apr1964

When we start HA/CMP ... we knew from numerous studies that service outages had been moving from hardware failures (as commodity hardware got increasingly more reliable) to mostly environmental (earthquakes, floods, fires, power outages, etc) ... and that to get better than 5-nines, you had to go to replicated systems at multiple geographically separated sites (which would also handle the increasingly rare hardware outages/failures) ... out marketing, I coined the terms disaster survivability and geographic survivability (to differentiate from disaster/recovery). Then the IBM S/88 (rebranded fault tolerant hardware) product administrator started taking us around to their customers ... and also got me to write a section for the corporate continuous availability strategy document (but it got pulled when both Rochester/as400 and POK/mainframe complained they couldn't meet the requirements).
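
A minimal sketch of the replication arithmetic, assuming independent site outages (the single-site availability figure is illustrative): with N geographically separated replicas, combined availability is 1 - (1-a)^N.

# Sketch of the geographic-replication availability argument, assuming
# independent site outages. The single-site availability is illustrative.
def combined_availability(single_site: float, sites: int) -> float:
    """Service is up as long as at least one replica site is up."""
    return 1.0 - (1.0 - single_site) ** sites

single = 0.999   # illustrative: a site with roughly 9 hours downtime/year
for n in (1, 2, 3):
    a = combined_availability(single, n)
    downtime_min = (1 - a) * 365 * 24 * 60
    print(f"{n} site(s): {a:.7f} availability, ~{downtime_min:,.1f} min/yr downtime")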

high availability, HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available

in the first post in this thread, I mention as an undergraduate in the 60s being hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services. I thought the Renton datacenter was possibly the largest in the world (360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room; somebody joked that Boeing was bringing in 360/65s like other companies brought in keypunches). There was a disaster plan to replicate Renton up at the new 747 plant in Everett (mt rainier heats up and the resulting mud slide takes out the Renton datacenter).

some (recent) posts mentioning replicating the Renton datacenter as a counter to mt rainier heating up and a mud slide taking out Renton
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#31 Mainframe Datacenter
https://www.garlic.com/~lynn/2023f.html#105 360/67 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023.html#66 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#54 Learning PDP-11 in 2021
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021b.html#5 Availability
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"

--
virtualization experience starting Jan1968, online at home since Mar1970

OSI: The Internet That Wasn't

From: Lynn Wheeler <lynn@garlic.com>
Subject: OSI: The Internet That Wasn't
Date: 13 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#100 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#104 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't

... from the article (IBM reps making sure that OSI was kept in line with IBM SNA):
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

I ran afoul of the communication group several times in the 80s ... starting with my HSDT project (T1 and faster computer links, both terrestrial and satellite, while SNA was cap'ed at 56kbit). The communication group prepared an analysis for the corporate executive committee on why customers wouldn't need T1 until sometime into the 90s. They surveyed customers using 37x5 "fat pipes" ... multiple parallel 56kbit links treated as a single logical link ... with the number of installations dropping to zero by 6 or 7 parallel links. What they didn't know (or didn't want to tell the executive committee) was that telco tariffs for T1 were about the same as five or six 56kbit links ... a trivial survey found 200 customers that had moved to full T1, using non-IBM gear.
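
The tariff arithmetic behind that survey, as a small sketch; the monthly prices are placeholders, the point is the ratio (a T1 priced like five or six 56kbit links, but with many times the bandwidth).

# Sketch of the "fat pipe" tariff arithmetic: a T1 was priced about the same
# as 5-6 parallel 56kbit links but carried far more bandwidth. The price
# figure is a placeholder; only the ratios matter.
link_56k_bps = 56_000
t1_bps       = 1_544_000
price_56k    = 1.0                  # relative monthly tariff per 56kbit link
price_t1     = 5.5 * price_56k      # T1 tariff roughly five or six 56kbit links

for n in (1, 2, 6):
    fat_pipe_bps = n * link_56k_bps
    print(f"{n}x 56kbit fat pipe: {fat_pipe_bps/1e3:>6.0f} kbit/s "
          f"for ~{n*price_56k:.1f} units")
print(f"single T1:           {t1_bps/1e3:>6.0f} kbit/s for ~{price_t1:.1f} units "
      f"({t1_bps/(6*link_56k_bps):.1f}x the bandwidth of a 6x56kbit fat pipe)")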

I was scheduled to present to the corporate backbone meeting about upgrading it to T1 ... when I got email that the communication group, fiercely fighting to get the internal network converted to SNA, had managed to get the meetings restricted to management only, and my appearance was canceled (I joked that they didn't want facts conflicting with their fantasy).

At the start of HSDT, I was also working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputer centers; then congress cuts the budget, some other things happen, and finally an RFP was released (in part based on what we already had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet. I didn't take the following offer:

Date: 4 January 1988, 14:12:35 EST
To: distribution
Subject: NSFNET Technical Review Board Kickoff Meeting 1/7/88

On November 24th, 1987 the National Science Foundation announced that MERIT, supported by IBM and MCI was selected to develop and operate the evolving NSF Network and gateways integrating 12 regional networks. The Computing Systems Department at IBM Research will design and develop many of the key software components for this project including the Nodal Switching System, the Network Management applications for NETVIEW and some of the Information Services Tools.

I am asking you to participate on an IBM NSFNET Technical Review Board. The purpose of this Board is to both review the technical direction of the work undertaken by IBM in support of the NSF Network, and ensure that this work is proceeding in the right direction. Your participation will also ensure that the work complements our strategic products and provides benefits to your organization. The NSFNET project provides us with an opportunity to assume leadership in national networking, and your participation on this Board will help achieve this goal.

... snip ... top of post, old email index, NSFNET email

... somebody had been collecting executive misinformation email (not only forcing the internal network to SNA, but claiming that SNA could be used for NSFnet) and it was forwarded to us ... old post with email heavily clipped and redacted (to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
https://www.garlic.com/~lynn/2024b.html#email870109

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

a few recent posts mentioning communication group "fat pipe" analysis for the corporate executive committee
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2023f.html#82 Vintage Mainframe OSI
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023b.html#77 IBM HSDT Technology
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#103 IBM ROLM
https://www.garlic.com/~lynn/2023.html#43 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#5 What is IBM SNA?
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021f.html#54 Switch over to Internetworking Protocol
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)

--
virtualization experience starting Jan1968, online at home since Mar1970

EBCDIC

From: Lynn Wheeler <lynn@garlic.com>
Subject: EBCDIC
Date: 13 Apr, 2024
Blog: Facebook
360s were supposed to be ASCII machines, but the ASCII unit record gear wasn't ready ... so they were (supposedly) going to temporarily use the (old) BCD unit record gear with EBCDIC ... "the biggest computer goof ever"
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
Unfortunately, the software for the 360 was constructed by thousands of programmers, with great and unexpected difficulties, and with considerable lack of controls. As a result, the nearly $300 million worth of software (at first delivery!) was filled with coding that depended upon the EBCDIC representation to work, and would not work with any other! Dr. Frederick Brooks, one of the chief designers of the IBM 360, informed me that IBM indeed made an estimate of how much it would cost to provide a reworked set of software to run under ASCII. The figure was $5 million, actually negligible compared to the base cost. However, IBM (present-day note: Read "Learson") made the decision not to take that action, and from this time the worldwide position of IBM hardened to "any code as long as it is ours".
... snip ...

https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

the above attributes it to Learson ... however, it was also Learson who was trying (and failing) to block the bureaucrats, careerists (and MBAs) from destroying the Watson Legacy/Culture.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
So by the early 90s, it was looking like it was nearly over; in 1992 IBM had one of the largest losses in the history of US corporations and was being re-orged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of Amex, who (mostly) reverses the breakup (although it wasn't long before the disk division is gone).

not the only case ("any code as long as it is ours") ... from the article (IBM reps making sure that OSI was kept in line with IBM SNA):
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't

OSI: The Internet That Wasn't. How TCP/IP eclipsed the Open Systems Interconnection standards to become the global protocol for computer networking
https://spectrum.ieee.org/osi-the-internet-that-wasnt
Meanwhile, IBM representatives, led by the company's capable director of standards, Joseph De Blasi, masterfully steered the discussion, keeping OSI's development in line with IBM's own business interests. Computer scientist John Day, who designed protocols for the ARPANET, was a key member of the U.S. delegation. In his 2008 book Patterns in Network Architecture(Prentice Hall), Day recalled that IBM representatives expertly intervened in disputes between delegates "fighting over who would get a piece of the pie.... IBM played them like a violin. It was truly magical to watch."
... snip ...

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning Bob Bemer (and the biggest computer goof)
https://www.garlic.com/~lynn/2024.html#102 EBCDIC Card Punch Format
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#53 IBM Vintage ASCII 360
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#94 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023e.html#24 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023.html#80 ASCII/TTY Terminal Support
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#100 IBM 360
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2022d.html#24 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#116 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#51 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#91 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021d.html#92 EBCDIC Trivia
https://www.garlic.com/~lynn/2020.html#7 IBM timesharing terminal--offline preparation?

--
virtualization experience starting Jan1968, online at home since Mar1970

EBCDIC

From: Lynn Wheeler <lynn@garlic.com>
Subject: EBCDIC
Date: 14 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#113 EBCDIC

I took a two credit hr intro to fortran/computers class; at the end of the semester I was hired to rewrite 1401 MPIO in 360 assembler for the 360/30. The univ. was getting a 360/67 (replacing 709/1401) for tss/360, and got a 360/30 temporarily (replacing the 1401; the 360/30 had 1401 microcode emulation, but the univ was supposed to start gaining 360 experience) pending arrival of the 360/67. The univ. shut down the datacenter on weekends and I had the place dedicated (although 48hrs w/o sleep made monday classes hard). They gave me a bunch of hardware&software manuals and I got to design and implement monitor, device drivers, interrupt handlers, error recovery, storage management, etc, and within a few weeks had a 2000 card 360 assembler program. Within a year of taking the intro class, the 360/67 came in and I was hired fulltime responsible for os/360 (it ran as a 360/65, since tss/360 never came to production level). Student fortran had run under a second on the 709; initially on os/360 it ran over a minute. I install HASP and it cuts the time in half. I then start redoing STAGE2 SYSGEN, carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Never got better than the 709 until I install Univ. of Waterloo WATFOR.

At one point, the Univ library got an ONR grant to do an online catalog and used part of the money to get a 2321 datacell. The online catalog was also selected as one of the betatest sites for the original CICS product, and CICS support was added to my tasks. The first problem was that bringing up CICS was failing ... eventually diagnosed that CICS had some hard coded (undocumented) BDAM options and the library had built the datasets with a different set of options. ... The Yelavich website is gone, but still lives on at the wayback machine
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

CSC had come out to install CP67/CMS (precursor to vm370; 3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it in my weekend dedicated time. Initially I mostly worked on rewriting pathlengths for running os/360 in a virtual machine. The OS/360 test ran 322 secs on the "bare machine", initially 856secs in a virtual machine (CP67 CPU 534secs); after a few months, I got CP67 CPU down to 113secs (from 534secs). CP67 came with 2741&1052 terminal support with automagic terminal type identification (using the SAD CCW to switch the port terminal type scanner). The univ. had some number of TTY/ASCII terminals, so I integrated ASCII terminal support with the automagic terminal type identification (trivia: the ASCII terminal type support had come in a "HEATHKIT" box for install in the IBM telecommunication controller).
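
The overhead accounting implied by those numbers (all taken from the text above):

# Overhead accounting for the CP67 pathlength numbers in the text.
bare_machine     = 322    # OS/360 test, seconds on the bare machine
initial_virtual  = 856    # same test under CP67, before the rewrites
cp67_cpu_initial = initial_virtual - bare_machine   # 534 seconds of CP67 CPU
cp67_cpu_final   = 113    # CP67 CPU after a few months of pathlength work

print(f"initial CP67 overhead: {cp67_cpu_initial} CPU seconds")
print(f"final   CP67 overhead: {cp67_cpu_final} CPU seconds "
      f"({cp67_cpu_initial/cp67_cpu_final:.1f}x reduction)")
# assuming the problem-state time stays at the bare-machine 322 seconds:
print(f"virtual-machine elapsed after rewrites: ~{bare_machine + cp67_cpu_final} seconds")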

I then wanted a single dialup telephone number ("hunt group") for all terminals. That didn't quite work: while the terminal type scanner could be changed dynamically, IBM had taken a short cut and hardwired the port line speed. This kicks off a univ project to do a clone controller: build a channel interface board for an Interdata/3 programmed to simulate the IBM telecommunication controller (with the addition that it could do dynamic line speed). It was later upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin-Elmer) sold it as a clone controller, and four of us are written up for (some part of) the clone controller business. Around the turn of the century, I ran into a descendant at a large datacenter that was handling the majority of point-of-sale dialup credit card machines east of the Mississippi.

360 plug-compatable controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

trivia: this claims that a major motivation for "Future System" (in the early 70s) was as a countermeasure to clone controllers ... an interface so complex that the clone makers couldn't keep up.
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

note: FS was completely different and was going to completely replace 370, and internal politics was killing off 370 efforts during FS (the claim is that the lack of new 370 products during the period gave the clone 370 makers their market foothold, i.e. the counter to clone controllers gave rise to the clone system makers). Apparently even IBM couldn't handle the complexity, and when FS implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. some more detail:
http://www.jfsowa.com/computer/memo125.htm

... after graduation I had joined IBM CSC and continued to work on 360&370 all during the FS period (including periodically ridiculing what they were doing, which wasn't exactly a career enhancing activity). Learson had been trying (and failing) to block the bureaucrats, careerists and MBAs from destroying the Watson culture&legacy.
https://www.amazon.com/Computer-Wars-Future-Global-Technology/dp/0812923006/
"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."
... snip ...

csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

The Learson failure to block the bureaucrats/careerists/MBAs and the FS shadow of defeat continued all through the 80s, and in the early 90s IBM had one of the largest losses in the history of US corporations and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Other recent ref:
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#103 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#110 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk & TCP/IP I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk & TCP/IP I/O
Date: 15 Apr, 2024
Blog: Facebook
I transferred out to San Jose Research in the 2nd half of the 70s and got to wander around datacenters (both IBM and non-IBM), including disk engineering & product test (bldg14&15 across the street). They were running prescheduled, around the clock, stand-alone mainframe testing; they mentioned that they had recently tried MVS ... but it had a 15min mean-time between failure (in that environment). I offer to rewrite the I/O supervisor making it bullet proof and never fail, so they could do any amount of on-demand, concurrent testing ... greatly improving productivity ... the downside is they got in the habit of blaming me any time there was a problem, and I had to spend increasing amounts of time diagnosing their hardware problems. One particularly bad one was the 3880 disk controller; the engineers bitterly complained that accountants had forced the move to a slow microprocessor for the 3880 (to save a few cents) ... it had a special hardware path to handle the 3380 3mbyte/sec transfer ... but everything else was much slower and otherwise significantly drove up channel busy. They were trying to mask how bad it was by signaling the end-of-operation interrupt as soon as data transfer was done ... but before the controller had finished with some of its operation ... making elapsed time & channel busy look closer to the 3830 controller ... hoping that the operation clean-up could be overlapped with software interrupt processing .... but if software tried redriving too soon with a queued operation, the controller would have to signal busy (SM+BUSY) ... driving up software overhead.

Bldg 15 (disk product test) also would get brand spanking new engineering models (usually no. three or four) for disk testing ... 3033 testing only took a couple percent of CPU, so we scrounged up a 3830 and a string of 3330s for our private online service (and they ran a 3270 coax under the street to my office). Some branch office found that bldg 15 also had an early engineering 4341 and in jan1979 cons me into doing a benchmark for a national lab that was looking at getting 70 of them for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami) ... it turns out a (very) small cluster of 4341s was much cheaper, much less floor space, much less power/cooling, and higher throughput than a 3033 (4341 benchmark throughput was about the same as the decade-earlier 6600).

posts getting to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk

In 1980, STL (now SVL) was bursting at the seams and they were transferring 300 people from the IMS group to an offsite bldg, with dataprocessing service back to the STL datacenter. They had tried "remote" 3270 but found the human factors unacceptable (compared to what they were used to inside STL). I get con'ed into doing channel-extender support, allowing channel-attached 3270 controllers to be placed in the offsite bldg with no difference in human factors. They had been spreading the 3270 controllers across the same channels as dasd. The channel-extenders were much faster (and presented much less channel busy) than the 3270 controllers (for the same terminal traffic), allowing an increase in disk I/O and a 10-15% increase in overall throughput. They were then considering placing even the in-house controllers on channel-extenders to get the 10-15% increase in throughput for all systems.
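
A toy model of the channel-busy argument; all the per-operation figures here are hypothetical. The point is only that terminal traffic through the slow, directly attached 3270 controller held the channel far longer than the same traffic through the channel-extender interface, and whatever channel busy was freed up became available for disk I/O.

# Toy model of the channel-extender effect described above: the slow local
# 3270 controller held the channel much longer per terminal operation than the
# channel-extender interface did, and the channel busy freed up that way was
# available for disk I/O. All per-op figures are hypothetical.
terminal_ops_per_sec = 50       # 3270 traffic sharing a DASD channel (hypothetical)
busy_local_3270_ms   = 3.0      # channel busy per op, local 3270 controller (hypothetical)
busy_extender_ms     = 0.5      # channel busy per op via channel extender (hypothetical)

def channel_free_fraction(per_op_busy_ms: float) -> float:
    return 1.0 - (terminal_ops_per_sec * per_op_busy_ms / 1000.0)

local    = channel_free_fraction(busy_local_3270_ms)
extended = channel_free_fraction(busy_extender_ms)
print(f"channel fraction left for disk I/O, local 3270 controller: {local:.0%}")
print(f"channel fraction left for disk I/O, channel extender:      {extended:.0%}")
print(f"potential disk I/O increase: ~{(extended/local - 1):.0%}")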

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

The 3090 people had configured the number of channels assuming the 3880 was like the 3830 but with 3mbyte/sec support ... when they found out how bad the channel busy really was, they realized they would have to significantly increase the number of channels (to compensate for the 3880 channel busy), which required an additional TCM (the 3090 group facetiously claimed they were going to bill the 3880 group for the increase in 3090 manufacturing costs). Marketing eventually respun the big increase in channels as the 3090 being a fantastic I/O machine ... rather than necessary to offset the significant 3880 increase in channel busy.

In the mid-80s, the communication group was fighting off the release of mainframe TCP/IP support ... when they lost that battle, they changed strategy and claimed that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped would get an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then did the support for RFC1044 and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of the 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).
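
A rough sketch of the bytes-moved-per-instruction comparison; the 44kbytes/sec figure is from the text, while the MIPS ratings, the sustained throughput, and the fraction of the 4341 used are assumptions picked purely for illustration.

# Sketch of the bytes-per-instruction comparison behind the "~500x" figure.
# The 44KB/sec number is from the text; the MIPS ratings, the sustained
# throughput and the 4341 CPU fraction are rough assumptions for illustration.
base_bytes_per_sec    = 44 * 1024       # base product: 44KB/sec aggregate
base_insts_per_sec    = 10.0e6          # "nearly whole 3090 processor" (assumed MIPS)

rfc1044_bytes_per_sec = 1.0e6           # "sustained channel throughput" (assumed)
rfc1044_insts_per_sec = 0.35 * 1.2e6    # "modest amount" of a ~1.2 MIPS 4341 (assumed)

base    = base_bytes_per_sec / base_insts_per_sec
rfc1044 = rfc1044_bytes_per_sec / rfc1044_insts_per_sec
print(f"base product:  {base:.4f} bytes per instruction")
print(f"RFC1044 path:  {rfc1044:.2f} bytes per instruction")
print(f"improvement:   ~{rfc1044/base:,.0f}x (ratio depends on the assumed figures)")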

RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

In 1988, the IBM branch office asked if I could help LLNL standardize some serial stuff they were playing with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980 for the STL channel extender), initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate. Then IBM POK mainframe releases some serial stuff that they had been playing with for over a decade, with ES/9000, as ESCON (when it is already obsolete, 17mbyte/sec). Then some POK engineers become involved with FCS, defining a heavy-weight protocol that significantly reduces throughput ... which eventually ships as "FICON". The most recent public benchmark is IBM's (2010) max-configured z196 "Peak I/O" that gets 2M IOPS using 104 FICON. About the same time, a "native" FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON running over FCS). Also, the IBM recommendation was holding SAPs (system assist processors, dedicated processors that do the actual I/O) to 70% CPU ... which would mean more like 1.5M IOPS.

FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some posts mentioning national lab 4341 benchmark for compute farm
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#48 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#42 Mainframes and Supercomputers, From the Beginning Till Today
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2017c.html#87 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2015f.html#35 Moving to the Cloud
https://www.garlic.com/~lynn/2015.html#78 Is there an Inventory of the Inalled Mainframe Systems Worldwide
https://www.garlic.com/~lynn/2014j.html#37 History--computer performance comparison chart
https://www.garlic.com/~lynn/2014g.html#83 Costs of core

some posts mentioning 3090 channel busy problems with 3880 disk controller
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023f.html#62 Why Do Mainframes Still Exist
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#14 Mainframe I/O
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019.html#79 How many years ago?
https://www.garlic.com/~lynn/2018.html#0 Intrigued by IBM
https://www.garlic.com/~lynn/2017k.html#25 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2015.html#36 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2014k.html#17 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013m.html#78 'Free Unix!': The world-changing proclamation made 30 years ago today
https://www.garlic.com/~lynn/2012p.html#5 What is a Mainframe?
https://www.garlic.com/~lynn/2012o.html#27 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012o.html#22 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012c.html#23 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2011f.html#0 coax (3174) throughput
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk & TCP/IP I/O

From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk & TCP/IP I/O
Date: 15 Apr, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O

trivia: SLAC (CERN sister institution) used to sponsor monthly meetings ... CERN & SLAC did the 168E & 3081E (processors implementing enough of 370 to run Fortran for initial data reduction from the detector sensors)
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf

The last product we did at IBM: in the late 80s, my wife presented five hand-drawn charts to an IBM executive and was approved to do HA/6000, initially for the NYTimes newspaper system (ATEX) to move off VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). Early Jan1992, in a meeting with Oracle, IBM AWD/Hester presents to the Oracle CEO that we would have 16-processor clusters by mid92 and 128-processor clusters by ye92.

During Jan1992, IBM FSD tells the IBM Kingston supercomputer group that FSD is going with our cluster scale-up for national lab supercomputers ... then at the end of Jan1992 we are told that our cluster scale-up is being transferred to Kingston for announce as the IBM Supercomputer (technical/scientific *ONLY*) and that we aren't allowed to work with anything that has more than four processors. We leave IBM a few months later. Besides the national lab supercomputer work, we had also been working with LLNL ("Unitree") and NCAR (MESA Archival) to move their filesystems to HA/CMP.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

posts mentioning "Unitree" and "MESA Archival":
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
https://www.garlic.com/~lynn/2023e.html#106 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023c.html#19 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2021h.html#93 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021g.html#2 IBM ESCON Experience
https://www.garlic.com/~lynn/2019e.html#116 Next Generation Global Prediction System
https://www.garlic.com/~lynn/2018d.html#41 The Rise and Fall of IBM
https://www.garlic.com/~lynn/2017b.html#67 Zero-copy write on modern motherboards
https://www.garlic.com/~lynn/2015c.html#68 30 yr old email
https://www.garlic.com/~lynn/2012p.html#9 3270s & other stuff
https://www.garlic.com/~lynn/2012k.html#46 Slackware
https://www.garlic.com/~lynn/2012i.html#47 IBM, Lawrence Livermore aim to meld supercomputing, industries
https://www.garlic.com/~lynn/2011n.html#34 Last Word on Dennis Ritchie
https://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone
https://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?
https://www.garlic.com/~lynn/2009s.html#42 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2008p.html#51 Barbless
https://www.garlic.com/~lynn/2007j.html#47 IBM Unionization
https://www.garlic.com/~lynn/2006n.html#29 CRAM, DataCell, and 3850
https://www.garlic.com/~lynn/2005e.html#16 Device and channel
https://www.garlic.com/~lynn/2005e.html#15 Device and channel
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003b.html#31 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers

--
virtualization experience starting Jan1968, online at home since Mar1970

